Company Description
Squash Apps is a top-rated full-stack consulting company dedicated to building the next generation of scalable and robust web applications for visionary clients. We specialize in modern technologies, including the MEAN, MERN, and MEVN stacks, the Java stack, SQL/NoSQL databases, Elasticsearch, Redis, and hybrid mobile applications. With innovative projects showcased on our website, Squash Apps is passionate about creating top-quality apps. We are committed to delivering excellence and invite like-minded individuals to join our dynamic team.
We’re looking for a Senior Databricks Data Engineer to lead large-scale data pipeline development on the Databricks Lakehouse Platform. If you’re strong in Spark, cloud platforms, and modern data engineering practices, this role is for you.
🔧 Responsibilities
- Build & optimize ETL/ELT pipelines using Databricks (PySpark, SQL, Delta Lake).
- Design Lakehouse Bronze–Silver–Gold architecture (see the sketch after this list).
- Optimize Spark workloads & cluster performance.
- Work across Azure / AWS / GCP data ecosystems.
- Implement CI/CD, IaC (Terraform), and best-practice DevOps.
- Ensure data quality, governance, and security (Delta Lake, Unity Catalog).
- Collaborate with cross-functional teams and mentor junior engineers.
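To make the Bronze–Silver–Gold (medallion) pattern concrete, here is a minimal, illustrative PySpark/Delta Lake sketch. The paths, database names, tables, and columns (raw_events, event_id, event_ts, daily_event_counts) are hypothetical placeholders, not part of the role description.

```python
# Minimal medallion-architecture sketch on Databricks (PySpark + Delta Lake).
# All paths, schemas, and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

# Bronze: land raw data as-is, stamped with ingestion metadata.
bronze = (spark.read.json("/mnt/raw/events/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").saveAsTable("bronze.raw_events")

# Silver: cleaned, deduplicated, typed records.
silver = (spark.table("bronze.raw_events")
          .dropDuplicates(["event_id"])
          .withColumn("event_date", F.to_date("event_ts")))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")

# Gold: business-level aggregate for reporting.
gold = (spark.table("silver.events")
        .groupBy("event_date", "event_type")
        .agg(F.count("*").alias("event_count")))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_event_counts")
```

Each layer is persisted as a Delta table, which gives downstream consumers ACID guarantees and time travel out of the box.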
🎯 Required Skills
- 6–12+ years of Data Engineering; 3+ years of strong Databricks + PySpark experience.
- Expertise in Spark SQL, Delta Lake, and performance tuning.
- Hands-on with cloud services: Azure.
- Strong SQL and ETL/ELT concepts; orchestration (ADF / Airflow / Databricks Workflows).
- Experience with Git, CI/CD pipelines, and automation.

Databricks Data Engineer (4–6 Years Experience)
We’re looking for a Databricks Data Engineer who can design and develop
scalable data pipelines on the Databricks Lakehouse Platform. Ideal for someone
strong in PySpark, SQL, and cloud data engineering.
🔧 Responsibilities
- Develop and maintain ETL/ELT pipelines using Databricks (PySpark, Spark SQL, Delta Lake).
- Work on data ingestion, transformation, and optimization across Lakehouse layers.
- Manage and tune Spark jobs, clusters, and workflows.
- Integrate with cloud services (Azure).
- Work with ADF / Airflow / Databricks Workflows for orchestration (see the sketch after this list).
- Ensure data quality, reliability, and documentation.
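As a hedged illustration of the orchestration work, here is a minimal sketch that triggers an existing Databricks job from Airflow, assuming the apache-airflow-providers-databricks package; the DAG ID, connection ID, job ID, and schedule are hypothetical.

```python
# Illustrative sketch only: trigger a pre-defined Databricks job from Airflow.
# Assumes apache-airflow-providers-databricks is installed; the dag_id,
# connection ID, job_id, and schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="daily_lakehouse_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # nightly at 02:00 (Airflow 2.4+ `schedule` argument)
    catchup=False,
) as dag:
    # Kick off an existing Databricks Workflows job by its ID.
    run_pipeline = DatabricksRunNowOperator(
        task_id="run_databricks_job",
        databricks_conn_id="databricks_default",
        job_id=12345,  # hypothetical Databricks job ID
    )
```

The same job could equally be scheduled natively in Databricks Workflows or triggered from an ADF pipeline; Airflow is just one of the three orchestrators named above.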
🎯 Required Skills
- 4–6 years of overall Data Engineering experience.
- Strong hands-on skills in PySpark, Spark SQL, Delta Lake, and Databricks Notebooks.
- Experience with Azure data services.
- Solid SQL and data warehousing fundamentals.
- Experience with Git and CI/CD processes.