- Be part of a large conglomerate's central team
- Stable, large-scale operations
About Our Client
Our client is a digital services company, part of a large Indian conglomerate, focused on building digital products and services for consumers and businesses.
Job Description
- Architect and build scalable data ingestion, transformation, and processing pipelines on Azure Data Lake and Databricks (a brief illustrative sketch follows this list).
- Lead migration from legacy systems to modern, cloud-native data platforms.
- Implement data governance and cataloging using Unity Catalog.
- Ensure strict compliance with data privacy regulations (GDPR, DPDP) and work with InfoSec to embed security best practices.
- Drive cloud cost optimization and performance tuning across Azure services.
- Collaborate with cross-functional teams including Product, Analytics, DevOps, and InfoSec.
- Mentor and guide engineers and analysts to deliver high-impact solutions.
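To give candidates a concrete sense of the pipeline work in the first bullet, here is a minimal sketch of a batch ingestion job, assuming a Databricks workspace with Delta Lake available; the storage path and table name are hypothetical placeholders, not details from this role.

```python
# Minimal ingest-transform-load sketch on Databricks / Delta Lake.
# Assumes PySpark with Delta Lake support; the ADLS path and table
# name below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-ingestion").getOrCreate()

# Ingest: read raw JSON events landed in the data lake (hypothetical path).
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")

# Transform: normalize timestamps and drop obviously malformed rows.
events = (
    raw
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
)

# Load: append into a governed Delta table (hypothetical name).
events.write.format("delta").mode("append").saveAsTable("analytics.events")
```

In practice a pipeline like this would be orchestrated via Data Factory or Databricks workflows and registered under Unity Catalog for governance, per the bullets above.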
The Successful Applicant
- Proven expertise in Azure Cloud (Data Lake, Data Factory, Event Hubs, Key Vault).
- Hands-on experience with Apache Spark (PySpark / Scala), Kafka, NiFi, Delta Lake, and Databricks.
- Deep understanding of PII data encryption, tokenization, and access controls (see the tokenization sketch after this list).
- Familiarity with Unity Catalog or similar data governance tools.
- Skilled in CI / CD, infrastructure as code (Terraform / ARM), and containerization (Docker / Kubernetes).
- Strong analytical, communication, and leadership skills.
- Experience delivering complex projects in Agile / Scrum environments.
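As a hedged illustration of the PII tokenization skill listed above: one common approach is to pseudonymize identifier columns with a keyed HMAC so records remain joinable without exposing raw values. The sketch below assumes PySpark; the key would come from Azure Key Vault in practice, and all names here are hypothetical.

```python
# Minimal sketch of column-level PII tokenization in PySpark.
# Assumption: a keyed HMAC-SHA256 yields deterministic tokens, so joins
# still work without exposing raw identifiers. In practice the key is
# fetched from Azure Key Vault, never hard-coded; all names are hypothetical.
import hashlib
import hmac

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("pii-tokenization").getOrCreate()

SECRET_KEY = b"replace-with-key-vault-secret"  # hypothetical placeholder

@F.udf(returnType=StringType())
def tokenize(value: str) -> str:
    # Deterministic token: same input -> same token, enabling joins.
    if value is None:
        return None
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

df = spark.createDataFrame(
    [("u1", "alice@example.com"), ("u2", "bob@example.com")],
    ["user_id", "email"],
)

# Replace the raw email with its token before data leaves the raw zone.
masked = df.withColumn("email_token", tokenize(F.col("email"))).drop("email")
masked.show(truncate=False)
```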