Role : AWS Databricks Developer
Required Technical Skill Set : AWS / Databricks
Desired Experience Range : 6–8 years
Location of Requirement : Kolkata
Notice period : 90 days
Job Description :
- 5–6 years of total experience in data engineering or big data development.
- 2–3 years hands-on experience with Databricks and Apache Spark.
- Proficient in AWS cloud services (S3, Glue, Lambda, EMR, Redshift, CloudWatch, IAM).
- Strong programming skills in PySpark and Python; Scala is a plus.
- Solid understanding of data lakes, lakehouses, and Delta Lake concepts.
- Experience in SQL development and performance tuning.
- Familiarity with Airflow, dbt, or similar orchestration tools is a plus.
- Experience with CI/CD tools such as Jenkins, GitHub Actions, or AWS CodePipeline.
- Knowledge of data security, governance, and compliance frameworks.
Responsibilities :
- Develop and maintain scalable data pipelines using Apache Spark on Databricks.
- Build end-to-end ETL/ELT pipelines on AWS using services such as S3, Glue, Lambda, EMR, and Step Functions.
- Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.
- Design and implement data models, schemas, and lakehouse architecture in Databricks.
- Optimize and tune Spark jobs for performance and cost efficiency.
- Integrate data from multiple structured and unstructured sources.
- Monitor and manage data workflows, ensuring data quality, consistency, and security.
- Follow best practices in CI/CD, code versioning (Git), and DevOps for data applications.
- Write clean, reusable, well-documented code in Python, PySpark, or Scala.