Role: Databricks Architect
- Lead architecture, design, and implementation of data lakehouse solutions using Databricks, Delta Lake, Unity Catalog, and Apache Spark.
- Define and enforce architectural best practices for data ingestion, ETL/ELT processing, transformation, governance, and analytics.
- Oversee the deployment, scaling, configuration, and cost optimization of Databricks environments (across Azure, AWS, or GCP).
- Design high-availability, resilient pipelines for both streaming and batch data processing at enterprise scale.
- Implement and manage data governance, privacy, and lineage (Unity Catalog, fine-grained access controls, encryption, audit logging).
- Guide teams in implementing CI/CD, version control, automated testing, and infrastructure-as-code for Databricks jobs and notebooks.
- Collaborate with engineers, analysts, and data scientists to integrate advanced analytics, ML, and BI into unified data platforms.
- Lead performance tuning, cluster optimization, and job scheduling for efficient and cost-effective processing.
- Mentor technical team members and enforce standards through code and architecture reviews.
- Monitor industry trends, evaluate new platform features, and proactively recommend improvements to platform reliability and cost efficiency.
(ref: hirist.tech)
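The automated-testing responsibility above can be illustrated with a minimal sketch: a transformation factored out of a notebook into a plain Python function so it can be unit-tested in a CI pipeline before deployment to a cluster. The function name, record fields, and deduplication rule here are hypothetical examples, not taken from the role description.

```python
def deduplicate_events(events):
    """Keep the latest record per event_id, using ts (epoch seconds) as the tiebreaker."""
    latest = {}
    for e in events:
        key = e["event_id"]
        # Retain only the most recent record for each event_id.
        if key not in latest or e["ts"] > latest[key]["ts"]:
            latest[key] = e
    # Return records in a deterministic order for stable test assertions.
    return sorted(latest.values(), key=lambda e: e["event_id"])

# CI-style check, runnable locally without a Databricks cluster.
raw = [
    {"event_id": "a", "ts": 1, "value": 10},
    {"event_id": "a", "ts": 2, "value": 20},  # newer duplicate of "a"
    {"event_id": "b", "ts": 1, "value": 30},
]
clean = deduplicate_events(raw)
assert [e["value"] for e in clean] == [20, 30]
```

Keeping business logic in plain, testable functions like this, with the Spark-specific I/O isolated at the job's edges, is one common pattern for making Databricks notebooks amenable to version control and automated testing.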