We are looking for an experienced Databricks Engineer to join our data engineering team. The role involves designing, building, and optimizing data pipelines and workflows on Databricks, ensuring scalability, performance, and reliability for enterprise-grade projects. The ideal candidate has strong experience with Spark, PySpark, and SQL, and can work across diverse data sources in a modern cloud environment.
Key Responsibilities
- Design, build, and maintain data pipelines and workflows on Databricks.
- Implement data transformations, aggregations, and validations for large datasets.
- Work with structured, semi-structured, and unstructured data across multiple sources.
- Optimize Databricks jobs for performance, scalability, and cost efficiency.
- Collaborate with stakeholders to understand business requirements and deliver data-driven solutions.
- Ensure data integrity, security, and compliance across all processes.
- Document solutions and workflows, and provide knowledge transfer to team members.
- Stay up to date with the latest Databricks and cloud ecosystem features.
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or equivalent experience.
- Proven hands-on experience (X+ years) with Databricks and Apache Spark (PySpark/Scala).
- Strong proficiency in SQL and data modeling.
- Experience with Delta Lake and modern data lakehouse architectures.
- Familiarity with cloud platforms (Azure, AWS, or GCP).
- Understanding of data governance, lineage, and role-based security.
- Ability to work in agile teams and communicate effectively with both technical and business stakeholders.
Nice to Have
- Experience with orchestration tools (Airflow, Azure Data Factory, AWS Glue).
- Exposure to real-time / streaming data (Kafka, EventHub, Kinesis).
- Knowledge of CI/CD for data pipelines and Infrastructure-as-Code (Terraform, ARM templates).
- Familiarity with data warehouse solutions (Snowflake, BigQuery, Redshift).
Sr Engineer • India