Job description

- Lead the architecture, design, and implementation of data lakehouse solutions leveraging Databricks, Delta Lake, Unity Catalog, and Apache Spark.
- Define and enforce architectural best practices for data ingestion, ETL/ELT processing, data transformation, governance, and analytics.
- Oversee deployment, scaling, configuration, and cost optimization of Databricks environments across Azure, AWS, or GCP.
- Design highly available, resilient data pipelines for both streaming and batch processing at enterprise scale.
- Implement and manage data governance, privacy, and data lineage using Unity Catalog, fine-grained access controls, encryption, and audit logging.
- Guide teams in adopting CI/CD, version control, automated testing, and infrastructure-as-code practices for Databricks jobs, workflows, and notebooks.
- Collaborate with data engineers, analysts, and data scientists to integrate advanced analytics, machine learning, and BI solutions into unified data platforms.
- Lead performance tuning initiatives, including cluster optimization and job scheduling, to achieve high efficiency and cost-effectiveness.
- Mentor technical team members and review their work, ensuring compliance with coding standards and architectural guidelines.
- Monitor industry trends, evaluate new Databricks features and capabilities, and proactively recommend enhancements to platform reliability, scalability, and cost efficiency.