Responsibilities:
- Architect and implement modular, test-driven ELT pipelines using dbt on Snowflake.
- Design layered data models (e.g., staging, intermediate, and mart layers, or a medallion architecture) aligned with dbt best practices (see sketch 1 after this list).
- Lead ingestion of structured and semi-structured data from APIs, flat files, cloud storage (Azure Data Lake, AWS S3), and databases into Snowflake (sketch 2 below).
- Optimize Snowflake for performance and cost: warehouse sizing, clustering, materializations, query profiling, and credit monitoring (sketch 3 below).
- Apply advanced dbt capabilities, including macros, packages, custom tests, sources, exposures, and documentation using dbt docs (sketch 4 below).
- Orchestrate workflows using dbt Cloud, Airflow, or Azure Data Factory, integrated with CI/CD pipelines.
- Define and enforce data governance and compliance practices using Snowflake RBAC, secure data sharing, and encryption strategies (sketch 5 below).
- Collaborate with analysts, data scientists, architects, and business stakeholders to deliver validated, business-ready data assets.
- Mentor junior engineers, lead architectural and code reviews, and help establish reusable frameworks and standards.
- Engage with clients to gather requirements, present solutions, and manage end-to-end project delivery in a consulting environment.
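
The sketches below illustrate several of the responsibilities above. Sketch 1 shows the layered dbt pattern: a staging model does light cleanup over a declared source, and a mart model builds on it only through ref(). All model, source, and column names (stg_orders, fct_daily_revenue, the shop source) are hypothetical, and the source is assumed to be declared in a sources YAML file.

    -- models/staging/stg_orders.sql: light cleanup over a declared source
    with source as (
        select * from {{ source('shop', 'orders') }}
    )
    select
        order_id,
        customer_id,
        cast(order_ts as timestamp_ntz) as ordered_at,
        amount
    from source

    -- models/marts/fct_daily_revenue.sql: marts depend on staging only via ref()
    select
        date_trunc('day', ordered_at) as order_date,
        sum(amount)                   as revenue
    from {{ ref('stg_orders') }}
    group by 1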
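
Sketch 2 is a minimal ingestion path from cloud storage into a raw landing table. The bucket path, storage integration (s3_int), and table names are placeholders, and the landing table is assumed to already exist with matching column names.

    -- External stage over S3 (bucket and integration are placeholders)
    create stage if not exists raw.ext_orders_stage
      url = 's3://example-bucket/orders/'
      storage_integration = s3_int
      file_format = (type = parquet);

    -- Load semi-structured Parquet into a raw landing table,
    -- mapping Parquet fields to columns by name
    copy into raw.orders_landing
      from @raw.ext_orders_stage
      match_by_column_name = case_insensitive;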
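
Sketch 3 shows typical cost and performance levers: right-sizing a warehouse with auto-suspend, capping spend with a resource monitor, and pulling the most expensive recent queries for profiling. Warehouse and monitor names are placeholders; note that ACCOUNT_USAGE views can lag real time by up to about 45 minutes, and resource monitors normally require ACCOUNTADMIN.

    -- Right-size and auto-suspend a warehouse to control credit burn
    alter warehouse transform_wh set
      warehouse_size = 'MEDIUM'
      auto_suspend   = 60      -- seconds idle before suspending
      auto_resume    = true;

    -- Cap monthly spend with a resource monitor (requires ACCOUNTADMIN)
    create or replace resource monitor monthly_cap
      with credit_quota = 100
      triggers on 90  percent do notify
               on 100 percent do suspend;
    alter warehouse transform_wh set resource_monitor = monthly_cap;

    -- Surface the slowest recent queries as candidates for profiling
    select query_id, total_elapsed_time, bytes_scanned
    from snowflake.account_usage.query_history
    order by total_elapsed_time desc
    limit 20;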
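
Sketch 4 is a custom generic test: dbt treats any rows the query returns as failures. The test name (positive_values) is hypothetical; it would be attached to a column under tests: in a model's YAML file.

    -- macros/test_positive_values.sql: fails for any non-positive value
    {% test positive_values(model, column_name) %}
    select *
    from {{ model }}
    where {{ column_name }} <= 0
    {% endtest %}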
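
Sketch 5 outlines read-only RBAC for a mart schema; the database, schema, role, and user names are all placeholders. The future-grants line keeps access consistent as new tables are created.

    -- Read-only analyst role scoped to the marts schema
    create role if not exists analyst_ro;
    grant usage  on database analytics                       to role analyst_ro;
    grant usage  on schema analytics.marts                   to role analyst_ro;
    grant select on all tables    in schema analytics.marts  to role analyst_ro;
    grant select on future tables in schema analytics.marts  to role analyst_ro;
    grant role analyst_ro to user some_analyst;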
Qualifications:
- 5 to 8 years of experience in data engineering roles, with 3+ years of hands-on experience working with Snowflake and dbt in production environments.

Skills:
Cloud Data Warehouse & Transformation Stack:
- Expert-level knowledge of SQL and Snowflake, including performance optimization, storage layers, query profiling, clustering, and cost management.
- Experience in dbt development: modular model design, macros, tests, documentation, and version control with Git.

Orchestration and Integration:
- Proficiency in orchestrating workflows using dbt Cloud, Airflow, or Azure Data Factory.
- Comfortable working with data ingestion from cloud storage (e.g., Azure Data Lake, AWS S3) and databases.

Data Modelling and Architecture:
- Dimensional modelling (star/snowflake schemas) and slowly changing dimensions (see the snapshot sketch below).
- Knowledge of modern data warehousing principles.
- Experience implementing Medallion Architecture (Bronze/Silver/Gold layers).
- Experience working with Parquet, JSON, CSV, or other data formats.

Programming Languages:
- Python: data transformation, notebook development, and automation.
- SQL: strong grasp of querying and performance tuning.
- Jinja (nice to have): exposure to Jinja for advanced dbt macros.

Engineering & Analytical Skills:
- ETL/ELT pipeline design and optimization.
- Exposure to AI/ML data pipelines, feature stores, or MLflow for model tracking (good to have).
- Exposure to data quality and validation frameworks.