Minimum Requirements:
- At least 3 years of professional experience in Data Engineering
- Demonstrated end-to-end ownership of ETL pipelines
- Deep, hands-on experience with AWS services: EC2, Athena, Lambda, and Step Functions (non-negotiable)
- Strong proficiency in MySQL (non-negotiable)
- Working knowledge of Docker: setup, deployment, and troubleshooting

Highly Preferred Skills:
- Experience with orchestration tools such as Airflow or similar
- Hands-on experience with PySpark
- Familiarity with the Python data ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy
- Exposure to DLT (Data Load Tool)

Ideal Candidate Profile:
The role demands a builder's mindset over a maintainer's. Independent contributors with clear, efficient communication thrive here. Those who excel tend to embrace fast-paced startup environments, take true ownership, and are motivated by impact, not just lines of code. Candidates are expected to include the phrase Red Panda in their application to confirm they've read this section in full.

Key Responsibilities:
- Architect, build, and optimize scalable data pipelines and workflows
- Manage AWS resources end-to-end: from configuration to optimization and debugging
- Work closely with product and engineering to enable high-velocity business impact
- Automate and scale data processes; manual workflows are not part of the culture
- Build foundational data systems that drive critical business decisions
Data Engineering • Hyderabad, Telangana, India