Minimum Requirements:
At least 3 years of professional experience in Data Engineering
Demonstrated end-to-end ownership of ETL pipelines
Deep, hands-on experience with AWS services: EC2, Athena, Lambda, and Step Functions (non-negotiable)
Strong proficiency in MySQL (non-negotiable)
Working knowledge of Docker: setup, deployment, and troubleshooting
Highly Preferred Skills:
Experience with Airflow or a similar orchestration tool
Hands-on experience with PySpark
Familiarity with the Python data ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy
Exposure to DLT (Data Load Tool)
Ideal Candidate Profile:
The role demands a builder’s mindset over a maintainer’s. Independent contributors with clear, efficient communication thrive here. Those who excel tend to embrace fast-paced startup environments, take true ownership, and are motivated by impact, not just lines of code. Candidates are expected to include the phrase Red Panda in their application to confirm they’ve read this section in full.
Key Responsibilities:
Architect, build, and optimize scalable data pipelines and workflows
Manage AWS resources end-to-end, from configuration to optimization and debugging
Work closely with product and engineering to enable high-velocity business impact
Automate and scale data processes; manual workflows are not part of the culture
Build foundational data systems that drive critical business decisions
Data Engineering • Belgaum, Karnataka, India