Description:
Key Responsibilities:
- Design, build, and optimize robust data streaming and batch processing pipelines using Apache Flink.
- Develop scalable and efficient ETL processes and data models to support analytical and operational needs.
- Work closely with cross-functional teams to integrate data engineering solutions with other platforms and applications.
- Implement and maintain data pipelines on streaming platforms such as Apache Kafka or Apache Pulsar.
- Ensure high availability, performance, and security of data processing systems.
- Participate in code reviews and design discussions, and mentor junior team members.
- Apply CI/CD best practices to automate the integration and deployment of data engineering workflows.
- Collaborate within Agile teams to deliver high-quality software on time.
Mandatory Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in data engineering roles with a strong focus on Apache Flink.
- Proficiency in Java or Scala; Python is a plus.
- Hands-on experience with streaming platforms such as Apache Kafka or Apache Pulsar.
- Deep understanding of stream processing and batch processing paradigms.
- Experience in building and maintaining ETL pipelines, data modeling, and working with distributed systems.
- Familiarity with CI/CD pipelines, version control systems such as Git, and Agile methodologies.
- Strong analytical, problem-solving, and communication skills.