About the Role:
We are hiring an experienced Data Engineer to design and maintain scalable data pipelines and architectures. You'll play a key role in building robust data infrastructure to support analytics, ML models, and business insights.
Key Responsibilities:
- Build, optimize, and maintain ETL pipelines for large-scale data ingestion and transformation.
- Design scalable data architectures using AWS (S3, Glue, Redshift, Lambda, EMR).
- Develop automation scripts using Python/PySpark.
- Collaborate with data scientists and analysts to ensure high-quality data availability.
- Implement data governance, monitoring, and quality checks.
- Tune SQL queries and manage large datasets efficiently.
Technical Skills Required:
- Strong experience with Python, SQL, and ETL tools.
- Hands-on experience with the AWS data stack: S3, Redshift, Glue, Lambda, EMR, Athena.
- Experience with Apache Airflow, Spark, or Kafka preferred.
- Familiarity with data warehousing and data modeling concepts.
- Strong understanding of version control (Git) and CI/CD pipelines.
Qualifications:
- B.Tech/M.Tech/MCA in Computer Science, IT, or a related discipline.
- 4-8 years of relevant experience in data engineering.
What We Offer:
- Opportunity to work on large-scale distributed data systems.
- Collaborative environment with cross-functional exposure.
- Strong learning opportunities and market-leading compensation.