Minimum Requirements:
- At least 3 years of professional experience in Data Engineering
- Demonstrated end-to-end ownership of ETL pipelines
- Deep, hands-on experience with AWS services: EC2, Athena, Lambda, and Step Functions (non-negotiable)
- Strong proficiency in MySQL (non-negotiable)
- Working knowledge of Docker: setup, deployment, and troubleshooting
Highly Preferred Skills:
- Experience with orchestration tools such as Airflow or similar
- Hands-on experience with PySpark
- Familiarity with the Python data ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy
- Exposure to DLT (Data Load Tool)
Ideal Candidate Profile:
The role demands a builder’s mindset over a maintainer’s. Independent contributors with clear, efficient communication thrive here. Those who excel tend to embrace fast-paced startup environments, take true ownership, and are motivated by impact, not just lines of code. Candidates are expected to include the phrase “Red Panda” in their application to confirm they’ve read this section in full.
Key Responsibilities:
- Architect, build, and optimize scalable data pipelines and workflows
- Manage AWS resources end-to-end, from configuration to optimization and debugging
- Work closely with product and engineering to enable high-velocity business impact
- Automate and scale data processes; manual workflows are not part of the culture
- Build foundational data systems that drive critical business decisions