- Line management of a high-performing, cross-functional data engineering team.
- Drive skill development, mentorship, and performance management.
- Foster a culture of accountability and trust.
- Own timely delivery of data & analytics assets from data acquisition to semantic layers.
- Align work with business priorities and architectural standards.
- Ensure quality gates and documentation.
- Act as the primary escalation and coordination point across business domains.
- Bridge infrastructure, functional IT, cybersecurity, and platform decisions.
- Advocate for team in global forums.
- Guide adoption of engineering best practices (TDD, CI/CD, IaC), building all technical artefacts as code and creating scalable batch and streaming pipelines in Azure Databricks using PySpark and/or Scala.
- Lead the design and operation of scalable batch/stream pipelines in Databricks, including ingestion from structured and semi-structured sources and implementation of bronze/silver/gold layers under lakehouse governance (see the sketch after this list).
- Oversee dimensional modeling and curated data marts for analytics use cases, ensuring semantic-layer compatibility and collaborating on enterprise 3NF warehouse integration.
- Ensure high-quality engineering practices across data validation, CI/CD-enabled TDD, performance tuning, metadata governance, and stakeholder collaboration via agile methods.
- Build an inclusive, high-performance team culture in Bengaluru.
- Champion DevSecOps, reuse, automation, and reliability; commit all artifacts to version control with peer review and CI/CD integration.
- Ensure documentation, knowledge sharing, and continuous improvement.
- Lead the design and operation of scalable, secure ingestion services, including CDC, delta, and full-load patterns as well as SAP extractions via tools like Theobald Xtract Universal.
- Oversee integration with APIs, legacy systems, Salesforce, and file-based sources, aligning all interfaces with cybersecurity standards and compliance protocols.
- Drive the development of the enterprise data catalog application, supporting dataset discoverability, metadata quality, and Unity Catalog-aligned access workflows.
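For illustration only, a minimal PySpark sketch of the bronze-layer streaming ingestion described above, assuming a Databricks runtime (where a `spark` session is provided) with Auto Loader and Unity Catalog; all paths and table names are hypothetical placeholders, not part of this role's actual codebase:

```python
# Minimal sketch: bronze-layer streaming ingestion in Databricks.
# Assumes a Databricks runtime (spark session provided) with Unity Catalog;
# paths and table names below are illustrative placeholders.
from pyspark.sql import functions as F

RAW_PATH = "abfss://landing@example.dfs.core.windows.net/sales/"          # hypothetical landing zone
BRONZE_TABLE = "main.bronze.sales_raw"                                    # hypothetical UC table
CHECKPOINT = "abfss://checkpoints@example.dfs.core.windows.net/sales_bronze/"

# Auto Loader incrementally discovers new files in the landing zone.
bronze_stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", CHECKPOINT + "schema/")
    .load(RAW_PATH)
    # Capture lineage metadata alongside the raw payload.
    .withColumn("_ingested_at", F.current_timestamp())
    .withColumn("_source_file", F.col("_metadata.file_path"))
)

# Append raw records to the bronze Delta table; checkpointing gives
# exactly-once processing across incremental runs.
(
    bronze_stream.writeStream
    .option("checkpointLocation", CHECKPOINT)
    .trigger(availableNow=True)  # process all available files, then stop
    .toTable(BRONZE_TABLE)
)
```

In this role, such pipelines would sit behind version control, peer review, and CI/CD, per the practices listed above.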
Qualifications:
Degree in Computer Science, Data Engineering, Information Systems, or a related discipline.
Certifications in software development and data engineering (e.g. Databricks Data Engineer Associate, Azure Data Engineer, or relevant DevOps certifications).
Minimum 8 years in enterprise data engineering, including data ingestion and pipeline design. Experience across structured and semi-structured source systems is required, as is demonstrated experience building production-grade codebases in IDEs with test coverage and version control.
Hands-on experience with secure SAP/API ingestion, lakehouse development in Databricks, and metadata-driven data platforms, with a record of delivering high-impact enterprise data products in cross-functional environments.
At least 3 years of team leadership or technical lead experience, including hiring, mentoring, and representing team interests in enterprise-wide planning forums.
Demonstrated success leading globally distributed teams and collaborating with stakeholders across multiple time zones and cultures.
Additional Information:
The well-being of our employees is important to us. That's why we offer exciting career prospects and support you in achieving a good work-life balance, with additional benefits such as:
- Training opportunities
- Mobile and flexible working models
- Sabbaticals
and much more...
Diversity, Inclusion & Belonging are important to us and make our company strong and successful. We offer equal opportunities to everyone, regardless of age, gender, nationality, cultural background, disability, religion, ideology, or sexual orientation.
Ready to drive with Continental? Take the first step and fill in the online application.
Remote Work: No
Employment Type: Full-time
Experience: 8+ years
Vacancy: 1