Senior Databricks Data Engineer
Squash Apps • Alwar, India
22 hours ago
Job description

Company Description

Squash Apps is a top-rated full-stack consulting company dedicated to building the next generation of scalable and robust web applications for visionary clients. We specialize in modern technologies, including the MEAN, MERN, and MEVN stacks, the Java stack, SQL / NoSQL databases, Elasticsearch, Redis, and hybrid mobile applications. With innovative projects showcased on our website, Squash Apps is passionate about creating top-quality apps. We are committed to delivering excellence and invite like-minded individuals to join our dynamic team.

We’re looking for a Senior Databricks Data Engineer to lead large-scale data pipeline development on the Databricks Lakehouse Platform. If you’re strong in Spark, cloud platforms, and modern data engineering practices, this role is for you.

Responsibilities

  • Build & optimize ETL / ELT pipelines using Databricks (PySpark, SQL, Delta Lake).

  • Design Lakehouse Bronze–Silver–Gold architecture (see the sketch after this list).
  • Optimize Spark workloads & cluster performance.
  • Work across Azure / AWS / GCP data ecosystems.
  • Implement CI / CD, IaC (Terraform), and best-practice DevOps.
  • Ensure data quality, governance, and security (Delta Lake, Unity Catalog).
  • Collaborate with cross-functional teams + mentor junior engineers.
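
To give a concrete flavour of this work, below is a minimal PySpark sketch of a Bronze-to-Silver step on Delta Lake; the table and column names are hypothetical, and it assumes the spark session that Databricks notebooks provide.

    from pyspark.sql import functions as F

    # Read raw records from a hypothetical Bronze Delta table.
    bronze_df = spark.read.table("bronze.raw_events")

    # Cleanse and conform for the Silver layer: deduplicate, enforce types,
    # and drop malformed rows.
    silver_df = (
        bronze_df
        .dropDuplicates(["event_id"])
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .filter(F.col("event_id").isNotNull())
        .withColumn("event_date", F.to_date("event_ts"))
    )

    # Write a partitioned Delta table for downstream Gold-layer aggregations.
    (
        silver_df.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("event_date")
        .saveAsTable("silver.events")
    )

In practice, the exact layering, schemas, and write modes would follow the project's own standards.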

Required Skills

  • 6–12+ years of Data Engineering experience; 3+ years of strong Databricks and PySpark experience.
  • Expertise in Spark SQL, Delta Lake, and performance tuning.
  • Hands-on experience with cloud services: Azure.
  • Strong SQL, ETL / ELT concepts, orchestration (ADF / Airflow / Workflows).
  • Experience with Git, CI / CD pipelines, and automation.

Databricks Data Engineer (4–6 Years Experience)

We’re looking for a Databricks Data Engineer who can design and develop scalable data pipelines on the Databricks Lakehouse Platform. Ideal for someone strong in PySpark, SQL, and cloud data engineering.

Responsibilities

  • Develop and maintain ETL / ELT pipelines using Databricks (PySpark, Spark SQL, Delta Lake).
  • Work on data ingestion, transformation, and optimization across Lakehouse layers.

  • Manage and tune Spark jobs, clusters, and workflows.
  • Integrate with cloud services (Azure).
  • Work with ADF / Airflow / Databricks Workflows for orchestration.
  • Ensure data quality, reliability, and documentation (a minimal check is sketched after this list).
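
As a rough illustration only, a simple quality gate plus routine Delta table maintenance might look like the sketch below; the table name, key column, and threshold are hypothetical, and the spark session is the one Databricks provides.

    from pyspark.sql import functions as F

    silver = spark.read.table("silver.events")

    # Basic quality checks: the table is non-empty and the null rate on the
    # key column stays below a (hypothetical) 1% threshold.
    total = silver.count()
    null_keys = silver.filter(F.col("event_id").isNull()).count()
    if total == 0 or null_keys / total > 0.01:
        raise ValueError(f"Quality gate failed: rows={total}, null keys={null_keys}")

    # Routine Delta maintenance: compact small files and co-locate data on a
    # frequently filtered data column to speed up downstream reads.
    spark.sql("OPTIMIZE silver.events ZORDER BY (event_id)")

Real pipelines would typically use a dedicated framework (for example, Delta Live Tables expectations or scheduled Workflows) rather than ad hoc checks.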

Required Skills

  • 4–6 years of overall Data Engineering experience.
  • Strong hands-on experience in PySpark, Spark SQL, Delta Lake, and Databricks Notebooks.

  • Experience with Azure data services.
  • Solid SQL + data warehousing fundamentals.
  • Experience with Git, CI / CD processes.