Experience: 3 to 6 years
Notice Period: Immediate to 15 days
Location: Bangalore
Work Mode: Work from office
Job Description: Databricks Platform Engineer / Administrator (AWS)
Department: Data Engineering / Cloud Platform / DevOps
About the Role
We are looking for a skilled and proactive Databricks Platform Engineer / Administrator to manage, optimize, and govern our Databricks environment on AWS. The ideal candidate will oversee platform operations, optimize performance and cost, and ensure secure and efficient use of Databricks and integrated AWS services.
This role sits at the intersection of platform engineering, data infrastructure, and cost governance, requiring both technical depth and cross-functional collaboration.
Key Responsibilities
Platform Administration
- Deploy, configure, and manage Databricks workspaces on AWS.
- Set up and maintain AWS infrastructure (VPCs, IAM roles, S3 buckets, security groups) to support Databricks operations.
- Manage user access, job permissions, cluster configurations, and workspace-level controls.
Cost Optimization
- Implement and enforce cluster policies to control resource consumption and prevent cost overruns.
- Promote efficient compute usage through job clusters, auto-termination, and spot instance strategies.
- Monitor and analyze costs using AWS Cost Explorer, Databricks REST APIs, and internal dashboards.
- Optimize storage costs through S3 lifecycle policies, Delta Lake optimization, and table version management.
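To illustrate the kind of cost-governance automation this role involves, here is a minimal sketch (the helper name and thresholds are hypothetical, not part of this posting) that audits cluster configurations shaped like the entries returned by the Databricks Clusters API for settings that commonly drive cost overruns:

```python
# Hypothetical audit helper: flags cluster configs that risk cost overruns.
# Input mirrors the shape of entries returned by the Databricks
# /api/2.0/clusters/list endpoint (only the fields used here).

def audit_clusters(clusters):
    """Return a list of (cluster_name, issue) pairs for risky configs."""
    findings = []
    for c in clusters:
        name = c.get("cluster_name", "<unnamed>")
        # autotermination_minutes = 0 means "never terminate" in Databricks.
        if c.get("autotermination_minutes", 0) == 0:
            findings.append((name, "auto-termination disabled"))
        # All-on-demand worker fleets forgo spot savings.
        aws = c.get("aws_attributes", {})
        if aws.get("availability") == "ON_DEMAND":
            findings.append((name, "no spot instances configured"))
    return findings

example = [
    {"cluster_name": "etl-nightly", "autotermination_minutes": 30,
     "aws_attributes": {"availability": "SPOT_WITH_FALLBACK"}},
    {"cluster_name": "adhoc-dev", "autotermination_minutes": 0,
     "aws_attributes": {"availability": "ON_DEMAND"}},
]
print(audit_clusters(example))
```

In practice such a check would run against live cluster listings and feed the internal cost dashboards mentioned above.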
Performance Tuning
- Analyze and tune Spark job performance, including partitioning, caching, and join strategies.
- Provide guidance on the Photon runtime, Delta Live Tables, and SQL optimization best practices.
- Collaborate with data engineers and analytics teams to improve job execution efficiency and pipeline throughput.
Monitoring & Observability
- Set up and manage logging, monitoring, and alerting via CloudWatch, Databricks audit logs, and external tools.
- Track metrics such as cluster utilization, job failures, job durations, and resource bottlenecks.
- Develop dashboards for real-time visibility into platform usage, performance, and cost.
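By way of illustration (field names follow Databricks Jobs API run records, but the function and data are made up for this sketch), this is the kind of metric aggregation such dashboards are built on:

```python
# Hypothetical example: summarize job-run health from run records shaped
# like those returned by the Databricks /api/2.1/jobs/runs/list endpoint.
from collections import Counter

def summarize_runs(runs):
    """Count terminal result states and compute the overall failure rate."""
    states = Counter(r["state"]["result_state"] for r in runs)
    total = sum(states.values())
    failure_rate = states.get("FAILED", 0) / total if total else 0.0
    return {"states": dict(states), "failure_rate": failure_rate}

runs = [
    {"state": {"result_state": "SUCCESS"}},
    {"state": {"result_state": "SUCCESS"}},
    {"state": {"result_state": "FAILED"}},
    {"state": {"result_state": "SUCCESS"}},
]
print(summarize_runs(runs))
```

A real pipeline would page through the runs endpoint, tag results by job, and push the aggregates to CloudWatch or a BI dashboard.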
Security & Governance
- Enforce access control through Unity Catalog, IAM roles, and cluster policies.
- Ensure compliance with enterprise security policies, data governance standards, and audit requirements.
- Prevent data duplication and misuse through strong access boundaries and usage monitoring.
Automation & Documentation
- Automate provisioning and configuration using Terraform, the Databricks CLI, and REST APIs.
- Maintain detailed documentation of cluster policies, platform standards, optimization playbooks, and cost governance guidelines.
- Drive platform standardization and advocate for best practices across teams.
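As a concrete example of policies-as-code, a cluster policy can be provisioned with the Databricks Terraform provider. This is a sketch only; the policy name and the specific limits are illustrative assumptions, not requirements stated in this posting:

```hcl
# Illustrative cluster policy enforcing the cost guardrails described above,
# managed as code via the Databricks Terraform provider.
resource "databricks_cluster_policy" "cost_guardrails" {
  name = "cost-guardrails"
  definition = jsonencode({
    "autotermination_minutes" : {
      "type" : "range", "minValue" : 10, "maxValue" : 120
    },
    "autoscale.max_workers" : {
      "type" : "range", "maxValue" : 20
    },
    "aws_attributes.availability" : {
      "type" : "fixed", "value" : "SPOT_WITH_FALLBACK"
    }
  })
}
```

Keeping such definitions in version control gives an auditable history of platform standards and makes policy changes reviewable.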
Qualifications
Required
- 3–6 years of experience managing Databricks on AWS or similar Spark-based platforms.
- Strong hands-on experience with AWS services (S3, IAM, VPC, EC2, CloudWatch).
- Proficiency in Spark performance tuning and Databricks architecture.
- Solid understanding of cost optimization strategies in cloud-based data platforms.
- Experience with Databricks cluster policies, job scheduling, and Delta Lake.
- Familiarity with Python, SQL, Terraform, and the Databricks REST APIs.
Preferred
- Experience with Unity Catalog, Delta Live Tables, or MLflow.
- Knowledge of AWS cost governance tools (Cost Explorer, Budgets, Trusted Advisor).
- Prior work in FinOps, DataOps, or Cloud Center of Excellence (CCoE) roles.
- Certifications such as Databricks Certified Data Engineer / Admin or AWS Certified Solutions Architect / DevOps Engineer.