About the Role
We are looking for a Senior Data Engineer with strong hands-on experience in Databricks, GCP, and data pipeline development. The ideal candidate will have a deep understanding of data architecture, automation, and cloud deployment practices, along with a passion for building scalable and efficient data solutions.
Key Responsibilities
- Design, develop, and optimize scalable data pipelines using Databricks (PySpark, SQL).
- Implement and manage Unity Catalog for data governance and security.
- Develop and maintain infrastructure as code using Terraform for GCP-based data platforms.
- Implement and maintain CI/CD pipelines for continuous integration and deployment.
- Work with GCP services such as Cloud Build, BigQuery, Firestore, and others to design robust data solutions.
- Collaborate with cross-functional teams (Data Scientists, Analysts, DevOps) to deliver high-quality data solutions.
- Ensure best practices in data quality, performance, and reliability.
Required Skills
- Databricks – Expert hands-on experience
- Unity Catalog – Strong understanding and implementation experience
- PySpark – Advanced proficiency in writing and optimizing Spark code
- SQL – Strong command of complex queries and data transformations
- Terraform – Hands-on experience in infrastructure automation
- CI/CD Pipelines – Experience setting up and managing data deployment pipelines
- GCP Services – Hands-on experience with Cloud Build, BigQuery, Firestore, etc.