Position Summary
We are looking for a highly skilled and experienced Data Engineering Manager to lead our data engineering team. The ideal candidate will possess a strong technical background, solid project management abilities, and excellent client handling / stakeholder management skills. This role requires a strategic thinker who can drive the design, development, and implementation of data solutions that meet our clients' needs while ensuring the highest standards of quality and efficiency.
Job Responsibilities
- Technology Leadership – Lead and guide the team, independently or with little support, to design, implement, and deliver complex cloud-based data engineering / data warehousing project assignments
- Solution Architecture & Review – Expertise in conceptualizing solution architecture and low-level design in a range of data engineering (Matillion, Informatica, Talend, Python, dbt, Airflow, Apache Spark, Databricks, Redshift) and cloud hosting (AWS, Azure) technologies
- Manage projects in a fast-paced Agile ecosystem, ensuring quality deliverables within stringent timelines
- Responsible for risk management, including maintaining risk documentation and mitigation plans.
- Drive continuous improvement in a Lean / Agile environment, implementing DevOps delivery approaches encompassing CI / CD, build automation and deployments.
- Communication & Logical Thinking – Demonstrates strong analytical skills, employing a systematic and logical approach to data analysis, problem-solving, and situational assessment. Capable of effectively presenting and defending team viewpoints, while securing buy-in from both technical and client stakeholders.
- Client Relationship Management – Independently manage client relationships and client expectations, and deliver results back to the client. Excellent communication skills are essential.
Education
BE / B.Tech
Master of Computer Applications
Work Experience
- Should have expertise and 6+ years of working experience in at least two ETL tools among Matillion, dbt, PySpark, Informatica, and Talend
- Should have expertise and working experience in at least two databases among Databricks, Redshift, Snowflake, SQL Server, and Oracle
- Should have strong Data Warehousing, Data Integration, and Data Modeling fundamentals such as Star Schema, Snowflake Schema, Dimension Tables, and Fact Tables
- Strong experience with SQL building blocks, including creating complex SQL queries and procedures
- Experience in AWS or Azure cloud and their service offerings
- Awareness of techniques such as data modelling, performance tuning, and regression testing
- Willingness to learn and take ownership of tasks
- Excellent written / verbal communication and problem-solving skills
- Understanding of and working experience with pharma commercial data sets such as IQVIA, Veeva, Symphony, Liquid Hub, Cegedim, etc. would be an advantage
- Hands-on experience with Scrum methodology (sprint planning, execution, and retrospectives)
Behavioural Competencies
Teamwork & Leadership
Motivation to Learn and Grow
Ownership
Cultural Fit
Talent Management
Technical Competencies
Problem Solving
Life Sciences Knowledge
Communication
Project Management
Capability Building / Thought Leadership
Databricks
PySpark
Data Modelling
AWS EMR
Skills Required
Airflow, Apache Spark, Informatica, Redshift, SQL, dbt, Databricks, Azure, Talend, Python, AWS