Description:
Key Responsibilities:
- Lead the design and implementation of modern data platforms across Azure, AWS, and Snowflake.
- Translate business requirements into robust technical solutions covering ingestion, transformation, integration, warehousing, and validation.
- Architect, build, and maintain data pipelines for analytics, reporting, and machine learning use cases.
- Develop and maintain ETL processes to move data from multiple sources into cloud data lakes and warehouses (a minimal pipeline sketch follows this list).
- Design and implement data models, lineage, and metadata management to ensure consistency and traceability.
- Optimize pipelines and workflows for performance, scalability, and cost efficiency.
- Enforce data quality, security, and governance standards across all environments.
- Support migration of legacy/on-premises ETL solutions to cloud-native platforms.
- Develop and tune SQL queries, database objects, and distributed processing workflows.
- Drive adoption of CI/CD, test automation, and DevOps practices in data engineering.
- Collaborate with architects, analysts, and data scientists to deliver end-to-end data solutions.
- Provide technical leadership, mentorship, and training to junior engineers.
- Produce and maintain comprehensive technical documentation.
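To illustrate the ingest-transform-load pattern these responsibilities describe, here is a minimal PySpark batch sketch. It is illustrative only: the bucket paths, file name, and column names (order_ts, order_date) are hypothetical, and a real pipeline would depend on the chosen platform and orchestration.

```python
# Minimal PySpark batch ETL sketch: ingest raw CSV, apply a light
# transformation, and land the result as partitioned Parquet.
# All paths and column names here are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest: read raw CSV from a landing zone (schema inference kept simple;
# a production pipeline would pin an explicit schema).
raw = spark.read.option("header", True).csv("s3://landing/orders.csv")

# Transform: drop exact duplicates and derive a date column for partitioning.
cleaned = (
    raw.dropDuplicates()
       .withColumn("order_date", F.to_date(F.col("order_ts")))
)

# Load: write partitioned Parquet into the curated zone of the lake.
(cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://curated/orders/"))

spark.stop()
```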
Requirements & Skills:
- Strong experience designing and developing ETL/data pipelines on Azure, AWS, and Snowflake.
- Proficiency in SQL, Python, and distributed processing (e.g., Spark, Databricks, EMR).
- Hands-on expertise with:
  1. Azure: Data Factory, Synapse, Databricks, Azure SQL
  2. AWS: Glue, Redshift, S3, Lambda, EMR
  3. Snowflake: Data warehousing, performance optimization, security features
- Solid understanding of data modeling, lineage, metadata management, and governance.
- Experience with CI/CD, infrastructure-as-code, and automation frameworks (a minimal test sketch follows this list).
- Strong problem-solving and communication skills with the ability to work across teams.
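As a minimal sketch of the test-automation side of CI/CD for data pipelines, the checks below could run with pytest on each build. The table path, column names, and rules are hypothetical; real suites often use frameworks such as Great Expectations or dbt tests instead.

```python
# Minimal data-quality checks runnable under pytest in a CI pipeline.
# The path and columns (order_id, amount) are hypothetical placeholders.
import pandas as pd

def load_orders() -> pd.DataFrame:
    # Stand-in loader; a real test would read from the warehouse or a fixture.
    return pd.read_parquet("curated/orders/")

def test_primary_key_is_unique():
    df = load_orders()
    assert df["order_id"].is_unique

def test_no_null_amounts():
    df = load_orders()
    assert df["amount"].notna().all()

def test_amounts_are_non_negative():
    df = load_orders()
    assert (df["amount"] >= 0).all()
```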
Desired Profile:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
- 6-10 years of progressive data engineering experience, with at least 5 years in cloud-based data platforms.
- Strong expertise in data modeling, database design, and warehousing concepts.
- Proficiency in Python (including Pandas, API integrations, and automation); a minimal Pandas sketch follows this list.
- Familiarity with varied data formats and sources (CSV, Parquet, JSON, APIs, relational and NoSQL databases).
- Exposure to modern orchestration and workflow tools, with a strong understanding of CI/CD practices.
- Experience with Databricks and Microsoft Fabric is a plus.
- Excellent analytical, problem-solving, and communication skills.
- Ability to evaluate new technologies and adopt them where appropriate.
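As a minimal sketch of handling varied formats and an API source with Pandas (file names, the endpoint URL, and the join key customer_id are hypothetical):

```python
# Minimal multi-format ingestion sketch with Pandas; all file names,
# the API URL, and column names are hypothetical placeholders.
import pandas as pd
import requests

# Flat files in common lake formats.
customers = pd.read_csv("customers.csv")
orders = pd.read_parquet("orders.parquet")
events = pd.read_json("events.jsonl", lines=True)  # newline-delimited JSON

# API integration: fetch a JSON endpoint and flatten nested records.
resp = requests.get("https://api.example.com/v1/products", timeout=30)
resp.raise_for_status()
products = pd.json_normalize(resp.json())

# A simple join across sources, standing in for real integration logic.
enriched = orders.merge(customers, on="customer_id", how="left")
print(enriched.shape, products.shape, events.shape)
```
(ref : hirist.tech)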