Experience: 5+ Years
Skills: Snowflake + AWS
About the Role
We are seeking a highly skilled Snowflake + AWS Engineer with strong expertise in cloud-based data engineering, ETL pipeline development, and modern data warehouse architecture. The ideal candidate will have a proven track record of implementing scalable, high-performance data solutions using Snowflake on AWS Cloud.
Key Responsibilities
- Design, develop, and maintain data pipelines using Snowflake, AWS services, and Python/SQL-based ETL tools.
- Architect and implement data models, schemas, and warehouse structures optimized for analytics and reporting.
- Manage and optimize Snowflake environments, including performance tuning, cost optimization, and role-based access control.
- Build and orchestrate data workflows using AWS services such as Glue, Lambda, Step Functions, S3, and Athena (an illustrative sketch follows this list).
- Integrate data from multiple sources (APIs, RDBMS, flat files, streaming data, etc.) into Snowflake.
- Implement and manage data security, governance, and compliance standards across platforms.
- Collaborate with Data Scientists, Analysts, and DevOps teams to ensure data availability, accuracy, and reliability.
- Monitor and troubleshoot production data pipelines and performance issues.
- Implement CI/CD automation for data pipelines using tools like Terraform, CloudFormation, or Jenkins.
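To make the workflow expectations concrete, here is a minimal, illustrative sketch of the kind of event-driven load described above: an AWS Lambda handler that copies a newly landed S3 file into Snowflake. It assumes the snowflake-connector-python package; the stage, table, warehouse, and environment-variable names are hypothetical placeholders, not a prescribed implementation.

```python
# Minimal sketch: S3 -> Snowflake load triggered by an AWS Lambda event.
# All object names (stage, table, warehouse, env vars) are hypothetical.
import os
import snowflake.connector  # pip install snowflake-connector-python

def handler(event, context):
    # S3 put-notification event: extract the key of the newly landed file.
    record = event["Records"][0]
    key = record["s3"]["object"]["key"]

    # Credentials via environment variables for brevity; in production,
    # use Secrets Manager and IAM roles instead.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",   # hypothetical warehouse
        database="ANALYTICS",  # hypothetical database
        schema="RAW",          # hypothetical schema
    )
    try:
        cur = conn.cursor()
        # COPY from an external stage already pointing at the landing bucket.
        cur.execute(
            f"COPY INTO raw_events FROM @landing_stage/{key} "
            "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
        )
    finally:
        conn.close()
```

In practice, the candidate would be expected to harden a pattern like this with Secrets Manager, IAM roles, retries, and CloudWatch monitoring, in line with the responsibilities above.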
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering, with at least 2 years of hands-on Snowflake experience.
- Strong proficiency in SQL and Python for data transformation and automation.
- Hands-on experience with AWS Cloud services: S3, Glue, Lambda, Redshift, IAM, CloudWatch, and Step Functions.
- Experience with ETL/ELT tools such as dbt, Airflow, Talend, or Informatica Cloud.
- Solid understanding of data warehousing concepts, star/snowflake schemas, and data modeling techniques.
- Knowledge of data security, encryption, and key management on AWS and Snowflake.
- Strong analytical, problem-solving, and performance-tuning skills.
- Experience with version control (Git) and CI/CD pipelines for data projects.
Good to Have
- Experience with Databricks or Apache Spark.
- Exposure to Terraform, Kubernetes, or Docker for deployment automation.
- Familiarity with AWS cost optimization and FinOps best practices.
- Knowledge of modern data architectures such as Data Lakehouse or Data Mesh.
Soft Skills
- Excellent communication and documentation skills.
- Ability to work in agile, fast-paced environments.
- Strong sense of ownership, accountability, and teamwork.