About the Role
We are looking for a Senior Data Engineer with strong hands-on experience in Databricks, GCP, and data pipeline development. The ideal candidate will have a deep understanding of data architecture, automation, and cloud deployment practices, along with a passion for building scalable and efficient data solutions.
Key Responsibilities
Design, develop, and optimize scalable data pipelines using Databricks (PySpark, SQL).
Manage and implement Unity Catalog for data governance and security.
Develop and maintain infrastructure as code using Terraform for GCP-based data platforms.
Implement and maintain CI/CD pipelines for automated integration and deployment.
Work with GCP services such as Cloud Build, BigQuery, Firestore, and others to design robust data solutions.
Collaborate with cross-functional teams (Data Scientists, Analysts, DevOps) to deliver high-quality data solutions.
Ensure best practices in data quality, performance, and reliability.
Required Skills
Databricks – Expert hands-on experience
Unity Catalog – Strong understanding and implementation experience
PySpark – Advanced proficiency in writing and optimizing Spark code
SQL – Strong command of complex queries and data transformations
Terraform – Hands-on experience in infrastructure automation
CI/CD Pipelines – Experience setting up and managing data deployment pipelines
GCP Services – Hands-on experience with Cloud Build, BigQuery, Firestore, etc.
Senior Data Engineer • Dombivali, Maharashtra, India