Experience: 3+ years
About the Role
We are looking for a skilled Data Engineer with strong Python programming experience to design, build, and maintain scalable data pipelines and infrastructure. You will work closely with data scientists, analysts, and engineering teams to ensure efficient data flow and availability for analytics and business intelligence.
Key Responsibilities
- Design, develop, and maintain robust ETL/ELT pipelines using Python
- Build and optimize data models and data warehouses for performance and scalability
- Integrate data from various sources including APIs, databases, and third-party platforms
- Collaborate with cross-functional teams to understand data requirements and deliver solutions
- Ensure data quality, consistency, and governance across systems
- Monitor and troubleshoot data workflows and resolve issues proactively
- Implement best practices in data engineering, including version control, testing, and documentation
Required Skills & Qualifications
- Strong proficiency in Python for data engineering tasks
- Experience with SQL and relational databases (e.g., PostgreSQL, MySQL, SQL Server)
- Hands-on experience with data pipeline tools (e.g., Airflow, Luigi, Prefect)
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and cloud-native data services
- Experience with big data technologies (e.g., Spark, Hadoop) is a plus
- Understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery)
- Knowledge of data security, privacy, and compliance best practices
- Strong problem-solving skills and ability to work independently
Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Experience working in Agile environments
- Exposure to containerization (Docker, Kubernetes) and CI/CD pipelines
Why Join Us
- Work on high-impact data projects with modern tech stacks
- Collaborative and growth-oriented team culture
- Flexible work arrangements and competitive compensation