Role: Data Engineer – Databricks & Integration Pipelines
Experience Range: 6 to 12 years
Location: Chennai
Responsibilities
Design and implement data pipelines in Azure Databricks for ingesting and transforming data from upstream systems (SAP, PIPS, POS) into WFM and UKG.
Optimize ETL/ELT workflows for performance and scalability.
Collaborate with Java/API developers to integrate event-driven triggers into data pipelines.
Implement data quality checks, schema validation, and error handling.
Support batch and near-real-time data flows for operational and analytics use cases.
Work with Boomi and WFM teams to ensure data contracts and canonical models are enforced.
Required Skills
Strong experience with Azure Databricks (PySpark, Delta Lake).
Proficiency in SQL and data modeling.
Experience with Azure Data Lake and Azure Event Hubs.
Familiarity with data governance and security best practices.
Ability to work with large datasets and optimize for performance.
Familiarity with CI/CD (Jenkins and Terraform preferred).
Knowledge of JSON and XML.
Nice-to-Have
Experience with UKG Data Hub or similar HR/payroll data platforms.
Knowledge of API integration and microservices.
Familiarity with Power BI or other reporting tools.