This role is for one of Weekday’s clients
Salary range: INR 15-30 LPA
Min Experience: 5 years
Location: Chennai
Job Type: full-time
Requirements
We are seeking a highly skilled and motivated Data Engineer with 5–8 years of experience in building and managing data pipelines, optimizing data workflows, and supporting large-scale data platforms. The ideal candidate will have strong expertise in AWS cloud technologies, Python, PySpark, SQL, and Snowflake, along with a proven background in working with both structured and unstructured data. This role requires an individual who can collaborate with cross-functional teams to design, implement, and maintain scalable and reliable data solutions that empower analytics, business intelligence, and decision-making across the organization.
As a Data Engineer, you will be responsible for designing and optimizing ETL pipelines, ensuring data quality, and enabling real-time as well as batch data processing capabilities. You will also leverage modern data warehousing solutions and cloud-native tools to enhance our data ecosystem.
Key Responsibilities
- Design, develop, and maintain robust, scalable, and efficient ETL/ELT pipelines to ingest, transform, and deliver data across multiple systems and applications.
- Build and manage data workflows using AWS services such as S3, Glue, Athena, Redshift, Lambda, and EMR.
- Develop and optimize data models within Snowflake to support business intelligence, analytics, and reporting needs.
- Leverage PySpark and Python to process and analyze large datasets efficiently in distributed environments (a minimal batch sketch follows this list).
- Work with relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Cassandra) to integrate diverse data sources.
- Utilize Kafka for building and maintaining real-time streaming data pipelines and event-driven architectures (see the streaming sketch after this list).
- Ensure data quality, integrity, and consistency by implementing monitoring, validation, and governance best practices.
- Collaborate closely with data scientists, analysts, and business stakeholders to deliver reliable and accessible data solutions.
- Troubleshoot, optimize, and fine-tune pipelines for performance and cost-efficiency.
- Document technical processes, workflows, and system configurations to support knowledge sharing and operational readiness.
- Stay up to date with emerging tools, frameworks, and cloud-native technologies to continuously improve data engineering practices.
Skills & Experience
- 5–8 years of proven experience in data engineering, data pipeline design, and ETL development.
- Strong hands-on expertise in AWS services (S3, Glue, Athena, Lambda, Redshift, EMR).
- Proficiency in Python and PySpark for data processing and analytics.
- Advanced skills in SQL for querying, optimization, and relational database management.
- Solid experience with Snowflake for data warehousing and analytics.
- Familiarity with both relational (MySQL, PostgreSQL) and NoSQL (MongoDB, Cassandra) databases.
- Hands-on experience with Kafka or other streaming platforms for real-time data ingestion and processing.
- Strong understanding of ETL/ELT processes, data modeling, and data integration best practices.
- Knowledge of big data ecosystems (Hadoop, Spark) is a plus.
- Exposure to cloud platforms such as AWS, Azure, or GCP, with an emphasis on scalable data infrastructure.
- Excellent problem-solving skills, an analytical mindset, and attention to detail.
- Strong interpersonal and collaboration skills, with experience working in Agile/Scrum environments.
Education & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Engineering, or a related field.
- Relevant certifications in AWS, Snowflake, or Big Data technologies will be an added advantage.