Responsibilities:
- Data Pipeline Development: Design, build, and maintain scalable, efficient data pipelines in Databricks for data ingestion, transformation, and processing (see the first sketch after this list).
- Programming & Scripting: Develop robust data solutions and automation scripts, primarily in Python.
- Cloud Data Warehousing: Work with and optimize data solutions on cloud data warehouses such as Snowflake and BigQuery.
- SQL Expertise: Use advanced SQL for data manipulation, querying, and optimization across various data platforms.
- Data Modeling: Contribute to the design and implementation of data models for data lakes and data warehouses, ensuring data quality and consistency.
- Performance Tuning: Identify and resolve performance bottlenecks in data pipelines and queries within the Databricks, Snowflake, and BigQuery environments (see the second sketch after this list).
- Troubleshooting: Provide expert-level troubleshooting and support for data-related issues, ensuring data integrity and availability.
- Collaboration: Work closely with data scientists, data analysts, and other stakeholders to understand data requirements and translate them into effective data engineering solutions.
- Documentation: Create and maintain comprehensive documentation for data pipelines, processes, and architectures.
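For illustration only, the kind of pipeline the first responsibility describes might look like the minimal PySpark sketch below. The storage path, column names, and the curated.daily_events table are all hypothetical, and on Databricks a SparkSession is already provided as spark; the builder line simply keeps the sketch self-contained.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks `spark` already exists; this keeps the sketch runnable locally.
spark = SparkSession.builder.appName("daily-events-pipeline").getOrCreate()

# Ingest: read raw JSON events from cloud storage (hypothetical path).
raw = spark.read.json("s3://example-bucket/raw/events/")

# Transform: drop malformed rows, derive a date column, and aggregate.
daily = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("event_count"))
)

# Load: persist the curated aggregate as a Delta table (Delta Lake is
# available by default on Databricks) for downstream consumers.
daily.write.format("delta").mode("overwrite").saveAsTable("curated.daily_events")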
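Likewise, performance tuning usually starts from the query plan. The second sketch reuses the hypothetical table above and assumes it is partitioned by event_date: filtering on the partition column lets the engine prune partitions rather than scan the whole table, and explain() prints the physical plan for inspection.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Filtering on the (assumed) partition column event_date enables
# partition pruning instead of a full table scan.
report = spark.sql("""
    SELECT event_type, COUNT(*) AS event_count
    FROM curated.daily_events
    WHERE event_date >= DATE '2024-01-01'
    GROUP BY event_type
""")

# A full scan in the plan where a partition filter was expected is a
# common first clue when diagnosing a slow query.
report.explain()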
Required Skills:
- Proficiency in Databricks.
- Strong programming skills in Python.
- Experience with Snowflake.
- Experience with BigQuery.
- Strong proficiency in SQL.
- Ability to design, develop, and troubleshoot complex data pipelines.
- Understanding of data warehousing and data lake concepts.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
Skills Required:
Databricks, Python, Snowflake, BigQuery, SQL, Data Warehousing