Job Title : Data Engineer (Snowflake + dbt)
Location : Hyderabad, India
Job Type : Full-time
Job Description :
We are looking for an experienced, results-driven Data Engineer to join our growing Data Engineering team. The ideal candidate is proficient in building scalable, high-performance data transformation pipelines with Snowflake and dbt, and works effectively in a consulting setup. In this role, you will be instrumental in ingesting, transforming, and delivering high-quality data to enable data-driven decision-making across the client’s organization.
Key Responsibilities :
- Design and implement scalable ELT pipelines using dbt on Snowflake, following industry-accepted best practices.
- Build ingestion pipelines from various sources including relational databases, APIs, cloud storage, and flat files into Snowflake.
- Implement data modelling and transformation logic in a layered architecture (e.g., staging, intermediate, and mart layers, or a medallion architecture) to deliver reliable and reusable data assets (see the sketch after this list).
- Leverage orchestration tools (e.g., Airflow, dbt Cloud, or Azure Data Factory) to schedule and monitor data workflows.
- Apply dbt best practices: modular SQL development, testing, documentation, and version control.
- Optimize performance in dbt/Snowflake through clustering, query profiling, appropriate materializations, partition pruning, and efficient SQL design.
- Apply CI/CD and Git-based workflows for version-controlled deployments.
- Contribute to a growing internal knowledge base of dbt macros, conventions, and testing frameworks.
- Collaborate with multiple stakeholders such as data analysts, data scientists, and data architects to understand requirements and deliver clean, validated datasets.
- Write well-documented, maintainable code, using Git for version control and CI/CD processes.
- Participate in Agile ceremonies including sprint planning, stand-ups, and retrospectives.
- Support consulting engagements through clear documentation, demos, and delivery of client-ready solutions.
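To make the layered modelling, materialization, and performance responsibilities above concrete, here is a minimal dbt model sketch. All model and column names (fct_orders, stg_orders, stg_customers, order_id, and so on) are hypothetical, and the incremental materialization and clustering settings are illustrative assumptions rather than a prescribed standard:

```sql
-- models/marts/fct_orders.sql
-- Minimal sketch of a mart-layer dbt model; all names are hypothetical.
{{
    config(
        materialized = 'incremental',   -- illustrative materialization choice
        unique_key = 'order_id',
        cluster_by = ['order_date']     -- Snowflake clustering to aid pruning
    )
}}

select
    o.order_id,
    o.order_date,
    c.customer_id,
    o.order_amount
from {{ ref('stg_orders') }} o          -- staging layer (cleaned source data)
join {{ ref('stg_customers') }} c
    on o.customer_id = c.customer_id

{% if is_incremental() %}
  -- On incremental runs, only process records newer than what is
  -- already in the target table.
  where o.order_date >= (select max(order_date) from {{ this }})
{% endif %}
```

Materializing heavy marts incrementally and clustering on a common filter column are typical levers for the performance work described above; plain view or table materializations are often the better default for lighter models.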
Required Qualifications :
- 3 to 5 years of experience in data engineering roles, with 2+ years of hands-on experience in Snowflake and dbt.
- Experience building and deploying dbt models in a production environment.
- Expert-level SQL and a strong understanding of ELT patterns and data modelling (Kimball/dimensional preferred).
- Familiarity with data quality and validation techniques: dbt tests, dbt docs, etc. (see the test sketch below).
- Experience with Git, CI/CD, and deployment workflows in a team setting.
- Familiarity with orchestrating workflows using tools like dbt Cloud, Airflow, or Azure Data Factory.
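The dbt tests mentioned above can be generic schema tests or standalone SQL files. As a minimal sketch, here is a singular dbt data test; the stg_orders model and its columns are hypothetical names used purely for illustration:

```sql
-- tests/assert_no_negative_order_amounts.sql
-- Singular dbt test: selects rows that violate the expectation.
-- dbt marks the test as passing only when this query returns zero rows.
-- `stg_orders`, `order_id`, and `order_amount` are hypothetical names.
select
    order_id,
    order_amount
from {{ ref('stg_orders') }}
where order_amount < 0
```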
Core Competencies :
Data Engineering and ELT Development :
- Building robust and modular data pipelines using dbt.
- Writing efficient SQL for data transformation and performance tuning in Snowflake.
- Managing environments, sources, and deployment pipelines in dbt.
Cloud Data Platform Expertise :
- Strong proficiency with Snowflake: warehouse sizing, query profiling, data loading, and performance optimization.
- Experience working with cloud storage (Azure Data Lake, AWS S3, or GCS) for ingestion and external stages (see the loading sketch below).
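As an illustration of ingestion through an external stage, here is a minimal Snowflake SQL sketch; the bucket URL, storage integration, and all object names are hypothetical placeholders, and the details would vary with the cloud provider and file format:

```sql
-- Minimal sketch: load flat files from cloud storage into Snowflake
-- through an external stage. All names and the URL are placeholders.
create stage if not exists raw.ext_orders_stage
  url = 's3://example-bucket/orders/'            -- assumed bucket path
  storage_integration = s3_int                   -- assumed pre-configured integration
  file_format = (type = csv skip_header = 1);

copy into raw.orders                             -- assumed landing table
  from @raw.ext_orders_stage
  on_error = 'abort_statement';                  -- fail fast on bad records
```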
Technical Toolset :
Languages & Frameworks :
- Python: for data transformation, notebook development, and automation.
- SQL: strong grasp of SQL for querying and performance tuning.
Best Practices and Standards :
- Knowledge of modern data architecture concepts, including layered architecture (e.g., staging → intermediate → marts, or a medallion architecture).
- Familiarity with data quality, unit testing (dbt tests), and documentation (dbt docs).
Security & Governance :
Access and Permissions :
- Understanding of access control within Snowflake (RBAC), role hierarchies, and secure data handling (see the sketch below).
- Familiarity with data privacy policies (GDPR basics) and encryption at rest/in transit.
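To ground the RBAC expectation, here is a minimal sketch of a Snowflake role hierarchy; every role, database, and schema name is a hypothetical placeholder:

```sql
-- Minimal sketch of Snowflake RBAC. All names are placeholders.
create role if not exists analyst_ro;   -- read-only reporting role
create role if not exists transformer;  -- role assumed by dbt jobs

-- Grant object privileges to functional roles, not to users directly.
grant usage on database analytics to role analyst_ro;
grant usage on schema analytics.marts to role analyst_ro;
grant select on all tables in schema analytics.marts to role analyst_ro;

-- Roll the functional roles up the hierarchy.
grant role analyst_ro to role transformer;
grant role transformer to role sysadmin;
```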
Deployment & Monitoring :
DevOps and Automation :
- Version control using Git and experience with CI/CD practices in a data context.
- Monitoring and logging of pipeline executions, with alerting on failures (illustrated below).
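Monitoring specifics depend on the orchestrator in use, but where Snowflake tasks are part of the scheduling mix, failed runs can be surfaced directly from Snowflake's built-in task history. A minimal sketch, with the one-day window chosen purely for illustration:

```sql
-- Minimal sketch: list task runs that failed in the last 24 hours.
select name, state, error_message, scheduled_time
from table(information_schema.task_history(
    scheduled_time_range_start => dateadd('day', -1, current_timestamp())))
where state = 'FAILED'
order by scheduled_time desc;
```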
Soft Skills :
Communication & Collaboration :
- Ability to present solutions and handle client demos and discussions.
- Work closely with onshore and offshore teams of analysts, data scientists, and architects.
- Ability to document pipelines and transformations clearly.
- Basic Agile/Scrum familiarity: working in sprints and logging tasks.
- Comfort with ambiguity, competing priorities, and fast-changing client environments.
Nice to Have :
- Experience in client-facing roles or consulting engagements.
- Exposure to AI/ML data pipelines and feature stores.
- Exposure to MLflow for basic ML model tracking.
- Experience with, or exposure to, data quality tooling.
Education :
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- Certifications such as Snowflake SnowPro or dbt Certified Developer are a plus.
Why Join Us?
- Opportunity to work on diverse and challenging projects in a consulting environment.
- Collaborative work culture that values innovation and curiosity.
- Access to cutting-edge technologies and a focus on professional development.
- Competitive compensation and benefits package.
- Be part of a dynamic team delivering impactful data solutions.
About Us :
Logic Pursuits provides companies with innovative technology solutions for everyday business problems. Our passion is to help clients become intelligent, information-driven organizations where fact-based decision-making is embedded into daily operations, leading to better processes and outcomes. Our team combines strategic consulting services with growth-enabling technologies to evaluate risk, manage data, and leverage AI and automated processes more effectively. With deep Big Four consulting experience in business transformation and process efficiency, Logic Pursuits is a game-changer in any operations strategy.