Job Title
IT Engineer, Data & Analytics DevOps
Company: ContiTech
Company Description
Continental develops pioneering technologies and services for sustainable and connected mobility of people and their goods. Founded in 1871, the technology company offers safe, efficient, intelligent, and affordable solutions for vehicles, machines, traffic and transportation. In 2023, Continental generated sales of €41.4 billion and currently employs around 200,000 people in 56 countries and markets.
Guided by the vision of being the customer's first choice for material-driven solutions, the ContiTech group sector focuses on development competence and material expertise for products and systems made of rubber, plastics, metal, and fabrics. These can also be equipped with electronic components in order to optimize them functionally for individual services. ContiTech's industrial growth areas are primarily in the areas of energy, agriculture, construction, and surfaces. In addition, ContiTech serves the automotive and transportation industries as well as rail transport.
The IT Digital and Data Services Competence Center of ContiTech caters to all Business Areas in ContiTech and is responsible, among other things, for Data & Analytics, Web and Mobile Software Development, and AI.
The Data Services team specializes in all platforms, business applications, and products in the domain of data and analytics, covering the entire spectrum: AI, machine learning, data science, data analysis, reporting, and dashboarding.
Job Description
- Ensure stable, scalable, and secure operation of the Azure-based Data & Analytics platform, including Databricks, Azure-native components, Power BI, and CI/CD infrastructure
- Offload operational workload from platform architects by taking ownership of infrastructure, deployment automation, and pipeline reliability
- Enable smooth execution and troubleshooting of data pipelines written in Scala and PySpark, including hybrid integration scenarios such as Power BI with gateway infrastructure
- Reports to: Head of Data & Analytics IT Competence Center
- Collaborates with: Platform Architects, Data Engineers, ML Engineers, Power BI Developers
- Geography: Global (stakeholders in Germany, India, Manila)
- Operational Scope: Azure services, Databricks workspaces, CI/CD toolchains, Power BI service (incl. gateways), and Spark-based data pipelines
Main Tasks
- Operate and optimize Azure resources (ADF, Key Vault, Monitor, Event Hub)
- Administer Databricks workspace access and cluster configs
- Apply Infrastructure-as-Code (Terraform/Bicep)
- Manage CI/CD pipelines for Scala and PySpark-based pipelines
- Integrate build steps (e.g., Maven/SBT, Python wheels) into automated deployments
- Enforce DevSecOps and IaC standards
- Monitor Spark job execution; analyze failures and stage-level issues using the Spark UI and logs
- Configure alerts, metrics, and dashboards for pipelines and infrastructure
- Lead post-incident reviews and reliability improvements
- Administer Power BI tenant configuration, workspace access, and usage monitoring
- Operate and monitor on-premises or VM-hosted enterprise gateways
- Troubleshoot dataset refreshes and hybrid data integration
- Support runtime execution of production pipelines and ensure SLA adherence
- Collaborate with engineers to resolve Spark performance issues or deployment errors
- Participate in schema evolution and environment transitions
- Enforce platform policies (tagging, RBAC, audit logging)
- Maintain credential and secrets security using Key Vault and managed identities
- Conduct audits across Azure, Databricks, and Power BI environments
Qualifications
Education / Certification:
- Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field
- Preferred: Azure DevOps Engineer Expert, Power BI Admin, or Databricks Admin certifications
Professional Experience:
- Minimum 5 years in cloud platform engineering, DevOps, or SRE roles within data or analytics platforms
- Hands-on experience with Spark (Databricks), PySpark, and CI/CD for JVM-based data applications
Project or Process Experience:
- Proven ability to deploy and operate complex data pipeline ecosystems using Scala and PySpark
- Experience managing the Power BI service in enterprise setups, including hybrid gateway environments
Leadership Experience:
- No formal people leadership required; expected to lead through technical authority and cross-team collaboration
Intercultural / International Experience:
- Experience working in distributed teams across time zones and cultures; strong communication skills and resilience