At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Job Description – Snowflake Data Engineer
Objectives and Purpose
- The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with the Visualization team on data quality and troubleshooting needs.
- The Senior Data Engineer will:
  - Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
  - Support the development, testing, and maintenance of data pipelines and platforms so that quality data can be used within business dashboards and tools.
  - Create, maintain, and support the data platform and infrastructure that enables the analytics front end, including the construction, development, testing, and maintenance of architectures for high-volume, large-scale data processing and databases, with proper verification and validation processes.
Your key responsibilities
- Design, build, and optimize scalable data pipelines and ETL/ELT processes to ingest, transform, and load data into Snowflake (see the illustrative sketch after this list).
- Develop and maintain Snowflake data warehouse solutions, including schema design, performance tuning, and security implementation.
- Integrate data from diverse structured, semi-structured, and unstructured sources such as databases, APIs, CSV, XML, and JSON.
- Implement best practices for Snowflake features such as virtual warehouses, clustering, time travel, zero-copy cloning, and data sharing.
- Leverage Snowflake's advanced features (e.g., materialized views, result caching, query acceleration) to improve query performance and reduce costs.
- Collaborate with business users to understand requirements and deliver high-quality, reliable datasets for reporting and analytics.
- Work with AWS services such as Glue, S3, Redshift, Lambda, IAM, and CloudWatch to support data workflows.
- Ensure data governance, quality, and compliance across all pipelines.
- Implement and manage Snowflake security features such as role-based access control (RBAC), data masking, and row-level security to ensure compliance with organizational and regulatory requirements.
- Automate monitoring, logging, and alerting for data processes.
- Participate in code reviews, architecture discussions, and continuous process improvements.
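For illustration only, below is a minimal sketch of the kind of pipeline work described above: loading CSV files staged in S3 into a Snowflake table with the snowflake-connector-python library. The account, role, warehouse, database, stage, and table names are hypothetical placeholders, and the external stage and target table are assumed to already exist.

```python
# Minimal sketch (for illustration only): copy CSV files that have landed in an S3
# external stage into a Snowflake table, using snowflake-connector-python.
# All object names and credentials below are hypothetical placeholders.
import os

import snowflake.connector  # pip install snowflake-connector-python


def load_orders_from_s3() -> None:
    """Run COPY INTO against a pre-existing external stage and report the files loaded."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        role="ANALYTICS_LOADER",   # hypothetical role
        warehouse="LOAD_WH",       # hypothetical virtual warehouse
        database="ANALYTICS",      # hypothetical database
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # Assumes the external stage @RAW.ORDERS_STAGE (pointing at an S3 prefix)
        # and the target table RAW.ORDERS were created beforehand.
        cur.execute(
            """
            COPY INTO RAW.ORDERS
            FROM @RAW.ORDERS_STAGE
            FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
            ON_ERROR = 'ABORT_STATEMENT'
            """
        )
        # COPY INTO returns one result row per file it processed.
        results = cur.fetchall()
        print(f"Loaded {len(results)} file(s) into RAW.ORDERS")
    finally:
        conn.close()


if __name__ == "__main__":
    load_orders_from_s3()
```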
Required Skills & Qualifications
- Bachelor's degree in Engineering, Computer Science, Data Science, or a related field.
- 5+ years of experience in data engineering with a strong focus on cloud data warehouses.
- Proven track record of designing and implementing complex data solutions.
- Hands-on expertise in Snowflake (SQL, SnowSQL, tasks, streams, stored procedures, performance tuning).
- Strong knowledge of SQL, Python, and ETL/ELT frameworks.
- Experience with AWS services (S3, Glue, Lambda, Redshift, etc.).
- Familiarity with data modeling (3NF, Star Schema, Data Vault).
- Experience with orchestration tools such as Airflow, dbt, or Rundeck.
- Experience with CI/CD pipelines for data workflows.
- Solid understanding of data governance, security, and compliance best practices in Snowflake.
- Ability to troubleshoot and optimize SQL queries and Snowflake workloads.
Desired skillsets
- Exposure to BI/visualization tools (Power BI, Tableau, Looker, QuickSight).
- Knowledge of API integration and real-time data streaming (Kafka, Kinesis).
- Understanding of DevOps concepts in data engineering.
- Pharmaceutical, healthcare, or financial industry domain knowledge is a plus.
- Snowflake certification.
- Any cloud certification (AWS/Azure).
EY | Building a better working world