Role Overview:
We are seeking a skilled Snowflake Data Engineer with strong experience in Snowflake Data Cloud, PL/SQL, and modern data engineering practices. The ideal candidate will design, build, and optimize data pipelines, ETL workflows, and Snowflake data environments. The role requires deep expertise in SQL/PL/SQL, cloud platforms, Snowflake features, and data ingestion frameworks.
Key Responsibilities (KRA):
- Designing, developing, and managing end-to-end data pipelines on Snowflake
- Implementing data ingestion processes using formats such as CSV, JSON, Parquet, and Avro
- Writing advanced SQL, PL/SQL, SnowSQL, and stored procedures for data transformation (see the stored-procedure sketch after this list)
- Performing Snowflake staging (internal/external) and implementing efficient loading strategies (see the staging-and-loading sketch after this list)
- Optimizing Snowflake performance through query tuning and data model improvements
- Troubleshooting failures, root-cause analysis, and resolving technical issues across ETL workflows
- Developing ETL routines using Python, Scala, PySpark, or ETL tools (see the PySpark sketch after this list)
- Collaborating with cloud teams to manage AWS/Azure/GCP environments supporting Snowflake workflows
- Evaluating and improving existing staging, source, and reporting data structures
- Ensuring high data quality, reliability, and governance across all data assets
- Implementing dbt models and managing transformations within the data pipeline (a dbt model sketch follows this list)
- Creating documentation for data flows, transformation logic, and Snowflake architecture
- Working with cross-functional teams including data analysts, BI developers, and cloud engineers
- Ensuring best practices in data security, orchestration, and cloud integration
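To make the staging and ingestion items above concrete, here is a minimal sketch of loading a Parquet file through an internal stage using the snowflake-connector-python package. All connection parameters and object names (load_wh, my_db, my_stage, raw_events) are illustrative placeholders, not details from this posting.

```python
import snowflake.connector

# Placeholder credentials and object names -- substitute your own.
conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="load_wh",
    database="my_db",
    schema="raw",
)
cur = conn.cursor()

# Upload the local file to an internal stage, then bulk-load it.
cur.execute("PUT file:///tmp/events.parquet @my_stage AUTO_COMPRESS=FALSE")
cur.execute("""
    COPY INTO raw_events
    FROM @my_stage/events.parquet
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")

cur.close()
conn.close()
```

The same COPY INTO pattern extends to CSV, JSON, and Avro by switching the FILE_FORMAT type; external stages on S3/Azure/GCS swap the PUT step for cloud-side file placement.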
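For the stored-procedure item, a hedged sketch of a Snowflake Scripting procedure created and invoked through the same connector; merge_daily_events, curated_events, and raw_events are hypothetical names.

```python
import snowflake.connector

# Same placeholder connection details as in the previous sketch.
conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="load_wh", database="my_db", schema="raw",
)
cur = conn.cursor()

# Define a Snowflake Scripting procedure that upserts staged rows
# into a curated table, then invoke it.
cur.execute("""
CREATE OR REPLACE PROCEDURE merge_daily_events()
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
    MERGE INTO curated_events t
    USING raw_events s ON t.event_id = s.event_id
    WHEN MATCHED THEN UPDATE SET t.payload = s.payload
    WHEN NOT MATCHED THEN INSERT (event_id, payload)
        VALUES (s.event_id, s.payload);
    RETURN 'ok';
END;
$$
""")
cur.execute("CALL merge_daily_events()")

cur.close()
conn.close()
```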
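For the ETL-routine item, a minimal PySpark sketch that reads landing files, applies a light transformation, and writes to Snowflake. It assumes the spark-snowflake connector is available on the Spark classpath; the bucket path, connection options, and table name are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_etl").getOrCreate()

# Read raw JSON, drop rows without a usable key, and stamp ingest time.
raw = spark.read.json("s3://my-bucket/landing/events/")
cleaned = (raw
           .filter(F.col("event_id").isNotNull())
           .withColumn("ingested_at", F.current_timestamp()))

# Placeholder Snowflake connection options for the spark-snowflake connector.
sf_options = {
    "sfURL": "your_account.snowflakecomputing.com",
    "sfUser": "your_user",
    "sfPassword": "your_password",
    "sfDatabase": "my_db",
    "sfSchema": "raw",
    "sfWarehouse": "load_wh",
}

# Append the cleaned rows to the target table.
(cleaned.write
    .format("snowflake")
    .options(**sf_options)
    .option("dbtable", "raw_events")
    .mode("append")
    .save())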
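For the dbt item: dbt models are usually SQL, but dbt also supports Python models on Snowflake (executed via Snowpark). A minimal sketch, assuming a hypothetical upstream staging model named stg_events; nothing here is prescribed by the posting.

```python
# models/curated_events.py -- a dbt Python model (runs as Snowpark on Snowflake).
def model(dbt, session):
    # Materialize the result as a table in the target schema.
    dbt.config(materialized="table")

    # dbt.ref returns the upstream model as a Snowpark DataFrame.
    events = dbt.ref("stg_events")

    # Keep only rows with a usable event_id.
    return events.filter(events["event_id"].is_not_null())
```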
Required Skillsets:
- Strong hands-on experience with Snowflake Data Cloud
- Expertise in SQL, advanced PL/SQL, and Oracle database programming
- Experience with SnowSQL, stored procedures, and Snowflake-specific features
- Knowledge of internal and external Snowflake staging and data loading options
- Proficiency in data ingestion from multiple file formats (CSV, JSON, Parquet, Avro, etc.)
- Strong understanding of ETL development using Python, Scala, or PySpark
- Experience with AWS, Azure, or GCP cloud platforms
- Knowledge of dbt for data transformation and modeling
- Experience in SQL performance tuning and root-cause analysis
- Familiarity with modern data architectures and data modeling concepts
- Strong problem-solving skills and the ability to troubleshoot complex pipelines
- Experience evaluating and improving existing data structures
- Good communication, documentation, and teamwork abilities
Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or equivalent
- 4-7 years of experience in data engineering roles
- Minimum 2 years of hands-on experience with Snowflake
- Strong hands-on experience with PL/SQL and SQL development
- Experience working with cloud environments such as AWS, Azure, or GCP
(ref: hirist.tech)