Experience Required: 8+ years
Mode of Work: Remote
Skills Required: Azure Databricks, Event Hubs, Kafka, Architecture, Azure Data Factory, PySpark, Python, SQL, Spark
Notice Period: Immediate joiners / permanent or contract role (must be able to join by September 29, 2025)
Responsibilities
- Translate business rules into technical specifications and implement scalable data solutions.
- Manage a team of Data Engineers and oversee deliverables across multiple markets.
- Apply performance optimization techniques in Databricks to handle large-scale datasets.
- Collaborate with the Data Science team to prepare datasets for AI/ML model training.
- Partner with the BI team to understand reporting expectations and deliver high-quality datasets.
- Perform hands-on data modeling, including schema changes and accommodating new data attributes.
- Implement data quality checks before and after data transformations to ensure reliability.
- Troubleshoot and debug data issues, collaborating with source system and data teams for resolution.
- Contribute across project phases: requirement analysis, development, code review, SIT, UAT, and production deployment.
- Utilize Git for version control and manage CI/CD pipelines for seamless deployment across environments.
- Adapt to dynamic business requirements and ensure timely delivery of solutions.
Requirements
- Strong expertise in Azure Databricks, PySpark, and SQL.
- Proven experience in data engineering leadership and handling cross-market deliverables.
- Solid understanding of data modeling and ETL/ELT pipelines.
- Hands-on experience with performance optimization in big data processing.
- Proficiency in Git, CI/CD pipelines, and cloud-based deployment practices.
- Strong problem-solving and debugging skills with large, complex datasets.
- Excellent communication skills and ability to collaborate with cross-functional teams.