Job Title: Databricks Engineer (Remote)
Location: 100% Remote
Job Type: Full-Time
Shift Timings: 3 PM–11 PM IST
About the Role:
We are looking for an experienced Databricks Engineer with a strong background in data engineering and business intelligence to help build and optimize scalable, high-performance data solutions. The ideal candidate has hands-on experience in Databricks production environments, along with experience in Power BI development and a deep understanding of modern data architecture. You will collaborate with cross-functional teams to create robust data pipelines, enable insightful reporting, and ensure the reliability, quality, and observability of data across platforms.
Key Responsibilities
Design, develop, and maintain data pipelines and ETL processes on Databricks.
Lead the architecture and implementation of scalable, secure, and high-performance data solutions.
Develop and optimize semantic data models to support Power BI reporting and analytics.
Design, build, and maintain Power BI dashboards and reports for business stakeholders.
Collaborate with business users, data scientists, and analysts to gather requirements and deliver actionable data and visualization solutions.
Optimize data workflows for performance, cost-efficiency, and reliability.
Integrate data from multiple structured and unstructured sources into Delta Lake and cloud data warehouses.
Implement best practices in data governance, security, and data quality management.
Ensure seamless data flow from Databricks to Power BI for real-time and batch reporting.
Mentor junior engineers and provide technical leadership in data engineering and BI best practices.
Troubleshoot and resolve issues in data pipelines and reporting layers, ensuring high data availability and integrity.
Required Skills & Experience
6–7 years of experience in Data Engineering and Analytics.
Proven Subject Matter Expertise in Databricks (including Databricks SQL, Delta Lake, and MLflow).
Strong hands-on experience with Power BI, including DAX, Power Query, and data modeling.
Strong experience with PySpark, Spark SQL, and Scala.
Hands-on experience with cloud platforms (Azure preferred; AWS/GCP a plus).
Proficiency in data modeling, ETL development, and data warehousing concepts.
Solid understanding of big data ecosystems (Hadoop, Kafka, etc.).
Strong SQL development skills with a focus on performance tuning.
Experience in implementing CI/CD pipelines for data and BI solutions.
Experience integrating Databricks with Power BI using DirectQuery and Import modes.