- At least 6 years of experience and strong knowledge of Java and related frameworks.
- Expertise in building robust, scalable and maintainable applications with Java and Spark.
- Able to write clean, maintainable and efficient Java code following best practices.
- Deep understanding of core Java and Java EE concepts, and of JVM internals such as class loading and memory management.
- At least 5 years of experience in designing and developing large-scale, distributed data processing pipelines using Apache Spark and related technologies.
- Expertise in Spark Core, Spark SQL, batch processing and Spark Streaming using the Spark Java API (see the batch sketch after this list).
- Expertise in Kafka messaging: producer and consumer operations and offset management for real-time data streaming with Apache Kafka / Confluent Kafka (a consumer sketch follows this list).
- Experience with Hadoop, HDFS, Hive and other big data technologies.
- Familiarity with data warehousing and ETL concepts and techniques.
- Expertise in database concepts and SQL / NoSQL operations.
- UNIX shell scripting for scheduling / running application jobs will be an added advantage.
- At least 5 years of experience in Project development life cycle activities and maintenance / support projects.
- Work in an Agile environment and participate in daily scrum standups, sprint planning, reviews and retrospectives.
- Understand project requirements and translate them into technical solutions that meet the project quality standards.
- Ability to work in a team in a diverse, multi-stakeholder environment and to collaborate with upstream / downstream functional teams to identify, troubleshoot and resolve data issues.
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication skills.
- Experience and desire to work in a global delivery environment.
- Stay up to date with new technologies and industry trends in development.
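As a reference point for the Spark expectations above, here is a minimal batch / Spark SQL sketch using the Spark Java API. The input path, column names and view name are hypothetical placeholders, and a real job would be packaged and submitted to a cluster via spark-submit rather than run against a local master.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class OrdersBatchJob {
    public static void main(String[] args) {
        // Local session for illustration only; production jobs set the master via spark-submit.
        SparkSession spark = SparkSession.builder()
                .appName("OrdersBatchJob")
                .master("local[*]")
                .getOrCreate();

        // Hypothetical HDFS path and schema; substitute the actual input location.
        Dataset<Row> orders = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("hdfs:///data/orders.csv");

        // Register the dataset as a temporary view so it can be queried with Spark SQL.
        orders.createOrReplaceTempView("orders");
        Dataset<Row> totals = spark.sql(
                "SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id");

        totals.show();
        spark.stop();
    }
}
```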
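And for the Kafka bullet, a minimal consumer sketch showing manual offset management with the standard Apache Kafka client API; the broker address, group id and topic name are hypothetical. Disabling auto-commit and calling commitSync() after each batch is one common way to avoid advancing offsets past unprocessed records.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrdersConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker and group id; substitute the actual cluster settings.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Disable auto-commit so offsets are committed explicitly after processing.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
                // Synchronous commit: offsets advance only after the batch is processed.
                consumer.commitSync();
            }
        }
    }
}
```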
Skills Required
Data, Production, Spark Core, Scalability, SQL, Spark Streaming, Streaming