- Provide line management for a high-performing, cross-functional data engineering team.
- Drive skill development, mentorship, and performance management.
- Foster a culture of accountability and trust.
- Own timely delivery of data and analytics assets, from data acquisition to semantic layers.
- Align work with business priorities and architectural standards.
- Ensure quality gates and documentation.
- Act as primary escalation and coordination point across business domains.
- Bridge decisions across infrastructure, functional IT, cybersecurity, and platform teams.
- Advocate for the team in global forums.
- Guide adoption of engineering best practices (TDD, CI/CD, IaC) and the building of all technical artifacts as code, including scalable batch and streaming pipelines in Azure Databricks using PySpark and/or Scala.
- Lead the design and operation of scalable batch and streaming pipelines in Databricks, including ingestion from structured and semi-structured sources and implementation of bronze/silver/gold layers under lakehouse governance (see the pipeline sketch after this list).
- Oversee dimensional modeling and curated data marts for analytics use cases, ensuring semantic-layer compatibility and collaborating on enterprise 3NF warehouse integration.
- Ensure high-quality engineering practices across data validation, CI/CD-enabled TDD, performance tuning, and metadata governance, and collaborate with stakeholders via agile methods.
- Build an inclusive, high-performance team culture in Bengaluru.
- Champion DevSecOps, reuse, automation, and reliability; commit all artifacts to version control with peer review and CI/CD integration.
- Ensure documentation, knowledge sharing, and continuous improvement.
- Lead the design and operation of scalable, secure ingestion services, including CDC, delta, and full-load patterns as well as SAP extractions via tools such as Theobald Xtract Universal (see the CDC merge sketch after this list).
- Oversee integration with APIs, legacy systems, Salesforce, and file-based sources, aligning all interfaces with cybersecurity standards and compliance protocols.
- Drive development of the enterprise data catalog application, supporting dataset discoverability, metadata quality, and Unity Catalog-aligned access workflows.
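For illustration of the pipeline responsibilities above, a minimal bronze-to-silver step on Databricks might look like the following PySpark sketch; every path, table name, and column here is a hypothetical placeholder rather than part of this role description.

```python
# Minimal bronze -> silver sketch for a Databricks lakehouse pipeline.
# All paths, table names, and columns are hypothetical illustrations.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Bronze: incrementally ingest raw semi-structured files with Auto Loader.
bronze = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/tmp/schemas/orders")  # hypothetical
    .load("/mnt/raw/orders")                                     # hypothetical
    .withColumn("_ingested_at", F.current_timestamp())
)
query = (
    bronze.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/orders_bronze")
    .trigger(availableNow=True)           # process available files, then stop
    .toTable("lakehouse.bronze.orders")   # hypothetical Unity Catalog table
)
query.awaitTermination()

# Silver: apply basic quality gates and typing into a curated table.
silver = (
    spark.read.table("lakehouse.bronze.orders")
    .filter(F.col("order_id").isNotNull())    # simple data-quality gate
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
)
silver.write.mode("overwrite").saveAsTable("lakehouse.silver.orders")
```

The same structure extends to gold-layer marts, which would typically aggregate silver tables into the dimensional models behind the semantic layer.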
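Likewise, for the CDC ingestion mentioned above, applying change records to a curated Delta table is commonly done with a MERGE. This sketch continues the example above, assumes a hypothetical `_op` change flag (`I`/`U`/`D`) and key column in the feed, and uses the delta-spark Python API.

```python
# CDC-apply sketch using Delta Lake MERGE; all names are hypothetical.
from delta.tables import DeltaTable

changes = spark.read.table("lakehouse.bronze.customer_changes")  # CDC feed
target = DeltaTable.forName(spark, "lakehouse.silver.customers")

(
    target.alias("t")
    .merge(changes.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedDelete(condition="s._op = 'D'")        # source-flagged deletes
    .whenMatchedUpdateAll(condition="s._op = 'U'")     # apply updates
    .whenNotMatchedInsertAll(condition="s._op = 'I'")  # insert new keys
    .execute()
)
```

In production, such a feed would first be deduplicated to one change per key, and the merge would run inside the version-controlled, CI/CD-tested pipeline code described above.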
Qualifications
- Degree in Computer Science, Data Engineering, Information Systems, or a related discipline.
- Certifications in software development and data engineering (e.g., Databricks Data Engineer Associate, Azure Data Engineer, or relevant DevOps certifications).
- Minimum 8 years in enterprise data engineering, including data ingestion and pipeline design; experience across structured and semi-structured source systems is required, as is demonstrated experience building production-grade codebases in IDEs with test coverage and version control.
- Hands-on experience with secure SAP/API ingestion, lakehouse development in Databricks, and metadata-driven data platforms, with a record of delivering high-impact enterprise data products in cross-functional environments.
- At least 3 years of team leadership or technical lead experience, including hiring, mentoring, and representing team interests in enterprise-wide planning forums.
- Demonstrated success leading globally distributed teams and collaborating with stakeholders across multiple time zones and cultures.