AWS with Databricks
Job Location - Pan India
a. Strong experience in Databricks on AWS to design, build, and optimize scalable data pipelines. Responsibilities include developing ETL/ELT workflows, integrating structured and unstructured data, and ensuring data quality and performance.
b. Proficient in Python, PySpark, and SQL, with hands-on experience in AWS services such as S3, Glue, Lambda, and Redshift.
c. Skilled in Delta Lake, data lake architecture, and job orchestration. Strong understanding of data modelling, governance, and security best practices.
d. Agile practitioner with Git expertise and excellent communication skills.
e. Adept at translating business needs into technical user stories and working with large, complex codebases.
AWS Data • Gurugram, Haryana, IN