Company Profile
Our client is a global IT services company with offices in India and the United States. It helps businesses accelerate digital transformation, provides IT collaboration, and uses technology, innovation, and enterprise to make a positive impact on the world of business.
Its expertise spans Data, IoT, AI, Cloud Infrastructure, and SAP, and it accelerates digital transformation through key practice areas: on-demand IT staffing, and innovation and growth driven by cost focus and problem solving.
Job Profile: Data Engineer
Location: Pune
Employment Type: Full-time, WFO, Regular shift
Preferred experience: 6+ years
The Role
Responsible for day-to-day tasks related to data engineering, data modeling, ETL (Extract, Transform, Load), data warehousing, and data analytics. Work with AWS and Databricks to design, develop, and maintain data pipelines and data platforms. Build operational and automated dashboards and reports with the help of Power BI and other reporting tools.
Responsibilities
- Work extensively on Databricks and its modules, using PySpark for data processing
- Design, develop, and optimize scalable ETL pipelines using the Databricks platform and cloud services to transform raw data into actionable insights
- Design and implement data storage solutions on AWS
- Develop and maintain data models and schemas optimized for specific use cases
- Build and maintain data pipelines for data integration and processing
- Optimize data processing performance through tuning and monitoring
- Ensure data security and privacy requirements are followed
- Design and develop BI reports and dashboards
- Develop operational and functional reports
- Build automated reports and dashboards with the help of Power BI and other reporting tools
- Understand business requirements to set functional specifications for reporting applications
Must-Have Qualifications:
- 6+ years of proven experience in data engineering roles
- Strong expertise in Databricks and PySpark for building scalable data solutions
- Advanced proficiency in SQL (optimizations, complex transformations, performance tuning)
- Solid hands-on experience with the AWS cloud data stack (S3, Glue, Lambda, Redshift, Step Functions, EMR, etc.)
- Strong understanding of data modeling, warehousing, and ETL/ELT best practices
- Pharma/healthcare domain knowledge, with the ability to work with clinical, commercial, and regulatory datasets
- Experience with data governance, security, and compliance in a regulated industry
Preferred Qualifications:
- Excellent communication, problem-solving, and leadership skills
Application Method
Apply on LinkedIn or email your resume to: careers@speedmart.co.in