Mastering Data Engineering
We are seeking a highly skilled Data Engineer to lead the development of scalable data pipelines and systems.
Key Responsibilities:
- Design, build, and optimize complex data workflows and infrastructure
- Manage end-to-end data operations: from configuration to optimization and debugging
- Collaborate closely with product and engineering teams to drive high-velocity business impact
- Automate and scale data processes—manual workflows are not acceptable
- Build foundational data systems that inform critical business decisions
The ideal candidate will be independent, communicate clearly, and focus on delivering tangible results rather than lines of code. We are looking for contributors who thrive in fast-paced startup environments and take ownership of their work.
Requirements:
- At least 3 years of professional experience in Data Engineering
- Demonstrated end-to-end ownership of ETL pipelines
- Deep, hands-on experience with AWS services: EC2, Athena, Lambda, and Step Functions (non-negotiable)
- Strong proficiency in MySQL (non-negotiable)
- Working knowledge of Docker: setup, deployment, and troubleshooting
Preferred Skills:
- Experience with orchestration tools such as Airflow or similar
- Hands-on experience with PySpark
- Familiarity with the Python data ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy
- Exposure to DLT (Data Load Tool)
Why This Role Matters:
This position is central to shaping our organization's data strategy and driving business growth. As a key member of our team, you will have the opportunity to make a meaningful impact and contribute directly to the success of our company.