Lead Data Engineer
Location: Hyderabad or Ahmedabad
Experience: 8+ years
Skills: Snowflake, Python/PySpark, SQL
Only candidates able to join immediately or within 15 days should apply.
Key Responsibilities:
Lead the end-to-end Snowflake platform implementation, including architecture, design, data modeling, and governance.
Oversee the migration of data and pipelines from legacy platforms to Snowflake, ensuring quality, reliability, and business continuity.
Design and optimize Snowflake-specific data models, including use of clustering keys, materialized views, Streams, and Tasks.
Build and manage scalable ELT/ETL pipelines using modern tools and best practices.
Define and implement standards for Snowflake development, testing, and deployment, including CI/CD automation.
Collaborate with cross-functional teams including data engineering, analytics, DevOps, and business stakeholders.
Establish and enforce data security, privacy, and governance policies using Snowflake’s native capabilities.
Monitor and tune system performance and cost efficiency through appropriate warehouse sizing and usage patterns.
Lead code reviews, technical mentoring, and documentation for Snowflake-related processes.
Direct technical quality efforts, including performance tuning, to ensure the system meets high standards before deployment.
Lead technical planning for deployment, oversee the cutover process, and ensure the system is stable, monitored, and performant in production.
Required Snowflake Expertise:
Snowflake Architecture – Deep understanding of virtual warehouses, data sharing, multi-cluster warehouses, and zero-copy cloning.
Performance Optimization – Proficient in tuning queries, clustering, caching, and workload management.
Data Engineering – Experience with Snowpipe, Streams & Tasks, stored procedures (JavaScript-based), and data ingestion patterns.
Data Security & Governance – Strong experience with RBAC, dynamic data masking, row-level security, and tagging.
Advanced SQL – Expertise in complex SQL queries, transformations, semi-structured data handling (JSON, XML).
Cloud Integration – Integration with major cloud platforms (AWS/GCP/Azure) and services such as S3, Lambda, and Step Functions.
Experience with ETL orchestration and transformation tools such as Airflow, dbt, and Matillion.
Proficiency in handling semi-structured data formats, including JSON and Parquet.
Familiarity with Git-based version control systems, including branching, merging, and pull request workflows.