Job Summary:
We are looking for a skilled and motivated Software Engineer with strong experience in data engineering and ETL processes. The ideal candidate should be comfortable working with any object-oriented programming language, possess strong SQL skills, and have hands-on experience with AWS services like S3 and Redshift. Experience in Ruby and working knowledge of Linux are a plus.
Key Responsibilities:
- Design, build, and maintain robust ETL pipelines to handle large volumes of data.
- Work closely with cross-functional teams to gather data requirements and deliver scalable solutions.
- Write clean, maintainable, and efficient code using object-oriented programming and SOLID principles.
- Optimize SQL queries and data models for performance and reliability.
- Use AWS services (S3, Redshift, etc.) to develop and deploy data solutions.
- Troubleshoot issues in data pipelines and perform root cause analysis.
- Collaborate with DevOps/infrastructure teams on deployment, monitoring, and scaling of data jobs.
Required Skills:
- 6+ years of experience in Data Engineering.
- Programming: Proficiency in any object-oriented language (e.g., Java, Python).
- Bonus: Experience in Ruby is a big plus.
- SQL: Moderate to advanced skills in writing complex queries and handling data transformations.
- AWS: Hands-on experience with services such as S3 and Redshift.
- Linux: Familiarity with Linux-based systems is good to have.
Preferred Qualifications:
- Experience working in a data/ETL-focused role.
- Familiarity with version control systems such as Git.
- Understanding of data warehouse concepts and performance tuning.