Job Description :
Roles and Responsibilities :
- Collaborate closely with Product Management and Engineering leadership to devise and build the right solution.
- Participate in design discussions and brainstorming sessions to select, integrate, and maintain Big Data tools and frameworks required to solve Big Data problems at scale.
- Design and implement systems to cleanse, process, and analyze large data sets using distributed processing tools like Akka and Spark (a minimal Spark sketch follows this list).
- Understand and critically review existing data pipelines, and collaborate with Technical Leaders and Architects on ideas to remove current bottlenecks.
- Take initiative, proactively pick up new technologies, and work as a senior individual contributor across our multiple products and features.
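
To make the pipeline responsibility above concrete, here is a minimal Spark batch sketch in Scala. The input location, column names, and cleansing rules are illustrative assumptions, not details of this role:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch of a cleanse-and-aggregate batch job.
// Paths, column names, and schema below are hypothetical.
object EventCleanseJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-cleanse")
      .getOrCreate()

    val raw = spark.read
      .option("header", "true")
      .csv("s3a://example-bucket/events/*.csv") // hypothetical input location

    val cleansed = raw
      .dropDuplicates("event_id")                        // remove duplicate events
      .filter(col("user_id").isNotNull)                  // drop rows missing a key field
      .withColumn("ts", to_timestamp(col("event_time"))) // normalize timestamps

    // Simple per-user daily counts as an example analysis step.
    val dailyCounts = cleansed
      .groupBy(col("user_id"), to_date(col("ts")).as("day"))
      .count()

    dailyCounts.write.mode("overwrite").parquet("s3a://example-bucket/daily_counts/")
    spark.stop()
  }
}
```

An equivalent job can be written in Java; Scala is used here since the role accepts either language.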
Requirements :
- In-depth understanding of the Big Data ecosystem, including processing frameworks like Spark, Akka, Storm, and Hadoop, and the file formats they work with.
- Experience with ETL and data pipeline tools such as Apache NiFi and Airflow.
- Excellent coding skills in Java or Scala, including the judgment to apply appropriate design patterns when required (see the sketch after this list).
- Experience with Git and build tools like Gradle / Maven / SBT.
- Write elegant, readable, maintainable, and extensible code.
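
As one illustration of the design-pattern expectation above, here is a short Scala sketch of the Strategy pattern applied to composable cleansing steps; the trait, step names, and record shape are hypothetical, not part of this posting:

```scala
// Strategy pattern: each cleansing rule is a small, swappable object.
trait CleansingStep {
  def apply(record: Map[String, String]): Option[Map[String, String]]
}

// Drop records that lack a non-empty user_id field.
object DropMissingUserId extends CleansingStep {
  def apply(record: Map[String, String]): Option[Map[String, String]] =
    record.get("user_id").filter(_.nonEmpty).map(_ => record)
}

// Trim surrounding whitespace from every value.
object TrimValues extends CleansingStep {
  def apply(record: Map[String, String]): Option[Map[String, String]] =
    Some(record.map { case (k, v) => k -> v.trim })
}

// Steps compose into a pipeline; a step returning None filters the record out.
class CleansingPipeline(steps: Seq[CleansingStep]) {
  def run(record: Map[String, String]): Option[Map[String, String]] =
    steps.foldLeft(Option(record)) { (acc, step) => acc.flatMap(step.apply) }
}

// Usage:
//   val pipeline = new CleansingPipeline(Seq(TrimValues, DropMissingUserId))
//   pipeline.run(Map("user_id" -> " 42 ", "event" -> "click"))
```

Composing small strategy objects keeps each rule independently testable and lets new rules be added without modifying the pipeline itself.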
You are someone who would easily be able to :
- Work closely with the US and India engineering teams to help build the Java / Scala based data pipelines.
- Lead the India engineering team in technical excellence and ownership of critical modules; own the development of new modules and features.
- Troubleshoot live production server issues.
- Handle client coordination, work as part of a team, contribute independently, and drive the team to exceptional contributions with minimal supervision.
- Follow Agile methodology, using JIRA for work planning and issue management / tracking.