Skills : Snowflake + DBT + Iceberg + Apache Flink
Experience : 5 to 10 years
Notice Period : Immediate to 15 days
Location : Hyderabad
About the Role
We are seeking a highly skilled Data Engineer to design, build, and optimize modern data pipelines and data models that power our analytics and data products. The ideal candidate will have strong experience with Snowflake and DBT, and familiarity or hands-on experience with stream processing using Apache Flink and data lakehouse technologies such as Apache Iceberg.
You will work closely with data analysts, data scientists, and platform engineers to ensure efficient, scalable, and reliable data workflows across the organization.
Key Responsibilities
- Design, develop, and maintain data pipelines and data models using Snowflake and DBT.
- Implement data transformations, data validation, and metadata management using modern ELT frameworks.
- Build and manage real-time data processing systems using Apache Flink (or similar stream-processing technologies).
- Integrate batch and streaming data into data lakehouse environments (Iceberg, Delta Lake, or similar).
- Optimize Snowflake performance through query tuning, clustering, and data partitioning strategies.
- Collaborate with cross-functional teams to understand business data needs and translate them into scalable solutions.
- Ensure data quality, security, and governance best practices are followed throughout the pipeline lifecycle.
- Support CI/CD automation for DBT and data infrastructure deployments.
- Monitor, troubleshoot, and enhance data systems for performance and reliability.
Required Qualifications
- 3–6 years of experience in Data Engineering or Analytics Engineering roles.
- Strong expertise in Snowflake (data modeling, performance tuning, security, cost optimization).
- Hands-on experience with DBT (Data Build Tool), building modular, testable, and documented data models.
- Proficiency in SQL and at least one programming language (e.g., Python, Java, or Scala).
- Working knowledge of Apache Flink or other streaming frameworks (Kafka Streams, Spark Structured Streaming).
- Experience with Apache Iceberg, Delta Lake, or similar lakehouse formats.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and modern data orchestration tools (Airflow, Dagster, Prefect).
- Strong understanding of data warehousing concepts, ETL/ELT, and data governance.
Preferred Qualifications
- Experience integrating Snowflake with streaming and lakehouse architectures.
- Knowledge of modern data stack tools (Fivetran, Airbyte, Great Expectations, etc.).
- Exposure to DevOps or DataOps principles (CI/CD, Git, Infrastructure as Code).
- Background in real-time analytics, event-driven architectures, or ML feature pipelines.
Soft Skills
- Excellent problem-solving and analytical thinking.
- Strong communication and collaboration across technical and non-technical teams.
- Ownership mindset and ability to work independently in a fast-paced environment.
Education
Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).