Role: Data Engineer
Location: Pune (On-site)
Job Type: Full-Time
Experience: 4–7 Years
Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL workflows.
Work with large datasets using PySpark, Python, and SQL to ensure efficient data transformation and integration.
Implement data solutions on AWS, leveraging services like S3, Glue, Lambda, and Redshift.
Collaborate with cross-functional teams to define data requirements and optimize data flows.
Develop and deploy data engineering solutions on the Databricks platform.
Ensure data quality, reliability, and security across all environments.
Monitor and optimize data processes for performance and cost efficiency.
Required Skills
Strong experience in Python, PySpark, and SQL.
Hands-on experience with AWS data services (Glue, Redshift, Lambda, S3, etc.).
Expertise in the Databricks platform and data pipeline orchestration.
Solid understanding of data warehousing, data modeling, and ETL design principles.
Excellent problem-solving skills and the ability to work in a fast-paced environment.
Good to Have
Exposure to the Fintech domain and understanding of financial data structures.
Experience with CI/CD practices and workflow orchestration tools such as Airflow.
Familiarity with data governance and compliance best practices.