Shift time: 4:30 PM - 1:30 AM IST
Experience Required: 7+ years in Data Engineering, with strong expertise in DBT
About the Role:
We are seeking a Senior Data Engineer with deep experience in DBT (Data Build Tool) to join our data team. You will be responsible for building scalable and maintainable data pipelines, transforming raw data into actionable insights, and helping shape the future of our data architecture and governance practices.
Key Responsibilities:
- Design, develop, and maintain data pipelines using DBT, SQL, and orchestration tools like Airflow or Prefect
- Collaborate with data analysts, scientists, and stakeholders to understand data needs and deliver clean, well-modeled datasets
- Optimize DBT models for performance and maintainability
- Implement data quality checks, version control, and documentation standards in DBT
- Work with cloud data warehouses like Snowflake, BigQuery, Redshift, or Databricks
- Own and drive best practices around data modeling (Kimball, Star/Snowflake schemas), transformation layers, and CI/CD for data
- Collaborate with cross-functional teams to integrate data from various sources (APIs, third-party tools, internal services)
- Monitor and troubleshoot data pipelines and ensure timely delivery of data to business stakeholders
- Mentor junior engineers and contribute to team growth and development
Required Skills:
- 7+ years of experience in Data Engineering or related fields
- 4+ years of hands-on experience with DBT (Core or Cloud)
- Strong SQL skills and experience with modular data modeling
- Experience with ELT/ETL pipelines using orchestration tools like Airflow, Dagster, Prefect, or similar
- Solid understanding of data warehouse architecture and performance tuning
- Proficient with one or more cloud platforms: AWS, GCP, or Azure
- Familiarity with version control (Git), CI/CD pipelines, and testing frameworks in data engineering
- Experience working with structured and semi-structured (JSON, Parquet) data
- Excellent communication and documentation skills
Preferred Qualifications:
- Experience with DataOps practices and monitoring tools
- Familiarity with Python or Scala for data processing
- Exposure to Looker, Tableau, or other BI tools
- Knowledge of data governance, cataloguing, or lineage tools (e.g., Great Expectations, Monte Carlo, Atlan)
(ref: hirist.tech)