About the Role
Join our growing team to help build and optimize the data pipelines that power our analytics and machine learning models. This high-growth role focuses on modern data warehousing and offers hands-on mentorship and structured training in Snowflake.
Key Responsibilities
- Pipeline Development: Assist in building scalable ETL/ELT processes to move data into the data warehouse efficiently.
- Data Integrity: Perform data profiling and validation to ensure accuracy, consistency, and reliability.
- SQL Optimization: Write and optimize complex SQL queries to support business reporting and data extraction.
- Collaboration: Work closely with Data Scientists and Analysts to deliver clean, well-structured datasets.
- System Health: Monitor pipeline performance and assist in troubleshooting job failures.
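The "Data Integrity" responsibility above amounts to profiling incoming data before it is loaded. A minimal sketch of that kind of check, in Python; the record shape, key column, and metrics are illustrative assumptions, not part of the posting:

```python
# Hypothetical pre-load data-quality check: count rows, duplicate keys,
# and null values per column in a batch of row dicts.

def profile_records(records, key="id"):
    """Return basic data-quality metrics for a list of row dicts."""
    seen = set()
    duplicates = 0
    null_counts = {}
    for row in records:
        # Track repeated primary-key values.
        if row.get(key) in seen:
            duplicates += 1
        else:
            seen.add(row.get(key))
        # Tally nulls column by column.
        for col, value in row.items():
            if value is None:
                null_counts[col] = null_counts.get(col, 0) + 1
    return {"rows": len(records), "duplicates": duplicates, "nulls": null_counts}

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},
    {"id": 2, "email": "b@example.com"},  # duplicate key
]
report = profile_records(rows)
```

A real pipeline would typically run checks like these with a framework (e.g. dbt tests) rather than hand-rolled code, but the underlying logic is the same.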
Technical Requirements
- SQL Mastery: Strong knowledge of joins, subqueries, and window functions.
- Programming: Proficiency in Python (preferred) or Java.
- Databases: Understanding of relational databases (PostgreSQL/MySQL) and NoSQL environments.
- Tools: Familiarity with Git/GitHub for version control.
What We Offer
- Specialized Training: Direct training and hands-on experience with Snowflake and dbt.
- Modern Tech Stack: Exposure to tools such as Apache Airflow and BigQuery/Snowflake.
- Mentorship: Daily learning opportunities with senior engineers in a collaborative environment.
Skills Required
NoSQL, Java, Apache Airflow, GitHub, BigQuery, Git, MySQL, Snowflake, PostgreSQL, SQL, Python
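The window functions named under SQL Mastery can be tried directly with Python's standard-library sqlite3 module (window functions require SQLite 3.25+, bundled with modern Python builds). A minimal sketch; the `orders` table and its columns are illustrative only:

```python
import sqlite3

# In-memory database with a small, made-up orders table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount INTEGER);
    INSERT INTO orders VALUES
        ('east', 100), ('east', 300), ('west', 200), ('west', 50);
""")

# Window function: rank each order by amount within its region,
# without collapsing rows the way GROUP BY would.
rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM orders
    ORDER BY region, rnk
""").fetchall()
# rows == [('east', 300, 1), ('east', 100, 2), ('west', 200, 1), ('west', 50, 2)]
```

The same `PARTITION BY ... ORDER BY` pattern carries over to Snowflake, BigQuery, and PostgreSQL.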