We are looking for a highly skilled Data Engineer with strong expertise in AWS, Databricks, PySpark, and Airflow to join our growing Data Engineering team. The ideal candidate will be responsible for designing, building, and optimizing scalable data pipelines and solutions that enable advanced analytics, machine learning, and business intelligence across the organization.
Key Responsibilities
Design, develop, and maintain scalable ETL/ELT pipelines using Databricks, PySpark, and Airflow.
Build and optimize data models and data lakes/warehouses on AWS.
Implement best practices for data quality, data governance, and performance optimization.
Collaborate with cross-functional teams (data scientists, analysts, product, and business teams) to deliver data-driven solutions.
Ensure reliability, scalability, and efficiency of data workflows through automation and monitoring.
Troubleshoot complex data engineering issues and optimize processing performance.
Required Skills & Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
4+ years of hands-on experience in Data Engineering.
Strong expertise in PySpark, Databricks, and Airflow for large-scale data processing and orchestration.
Solid experience with AWS services such as S3, Glue, Redshift, EMR, Lambda, and IAM.
Strong knowledge of SQL and performance tuning.
Experience with CI/CD pipelines, Git, and containerization (Docker/Kubernetes) is a plus.
Strong problem-solving and communication skills, and the ability to work in a fast-paced environment.
AWS Data Engineer • Bengaluru, India