Key to Success: Scalable Data Engineering
About the Role
This is a Data Engineer position that demands exceptional skills in designing, building, and optimizing data pipelines.
Mandatory Requirements:
- At least three years of professional experience in Data Engineering with end-to-end ownership of ETL pipelines.
- Hands-on experience with AWS services including EC2, Athena, Lambda, and Step Functions.
- Strong proficiency in MySQL and Docker for setup, deployment, and troubleshooting.
Highly Preferred Skills:
- Expertise with an orchestration tool such as Airflow.
- Hands-on PySpark experience.
- Familiarity with the Python data ecosystem, including SQLAlchemy, DuckDB, PyArrow, Pandas, and NumPy.
- DLT exposure.
Ideal Candidate Profile:
The ideal candidate possesses a builder's mindset and an independent thought process, communicates clearly, and thrives in fast-paced startup environments. They should be self-driven and motivated by impact rather than lines of code.
Responsibilities:
- Architect scalable data pipelines and workflows.
- Manage AWS resources from configuration to optimization and debugging.
- Work closely with product and engineering teams to deliver high-velocity business impact.
- Automate and scale data processes, eliminating manual workflows.
- Build foundational data systems that drive critical business decisions.
Salary Range: ₹8.4–12 LPA (excluding equity, performance bonus, and revenue share components).