We are seeking an experienced Databricks Engineer to join our data team. The ideal candidate will have strong expertise in Databricks, Spark, and cloud platforms (Azure/AWS/GCP), along with a solid understanding of data engineering best practices. You will work closely with cross-functional teams to build scalable data pipelines, optimize big-data workloads, and support advanced analytics initiatives.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using Databricks and Apache Spark.
- Build and optimize ETL/ELT workflows for large-scale datasets.
- Develop Delta Lake tables and manage data quality, reliability, and performance.
- Integrate data from various sources (APIs, databases, cloud storage).
- Collaborate with data scientists, analysts, and business stakeholders to support analytics and ML workloads.
- Implement best practices for CI/CD, version control, and automation.
- Monitor platform performance and troubleshoot issues in Databricks clusters and jobs.
Required Skills & Experience
- 3+ years of experience as a Data Engineer or in a similar role.
- Hands-on experience with Databricks (Jobs, Workflows, Delta Lake, Unity Catalog, Notebooks).
- Strong proficiency in Apache Spark (PySpark/Scala).
- Experience with at least one cloud platform: Azure, AWS, or GCP.
- Strong SQL knowledge for data modelling, transformations, and performance tuning.
- Experience with CI/CD tools (Git, Azure DevOps, Jenkins, etc.).
- Solid understanding of data warehousing and data-lake architecture.
Preferred Qualifications
- Databricks certification(s) (Data Engineer Associate/Professional).
- Experience with MLflow or supporting data science workloads.
- Knowledge of streaming technologies (Structured Streaming/Kafka).
- Experience with orchestration tools (Airflow, ADF, AWS Glue, etc.).
What We Offer
- Competitive compensation and benefits.
- Opportunity to work with modern cloud and big-data technologies.
- Collaborative, growth-oriented team environment.
- Career development and certification support.