Description :
Key Responsibilities :
Design, develop, and maintain data pipelines using Databricks, Python, and Azure Data Services.
Implement and optimize ETL / ELT workflows for structured and unstructured data.
Work with Azure SQL Database, Data Lake, and Delta Lake to store and manage large datasets.
Collaborate with data architects and analysts to create logical and physical data models.
Develop and maintain RESTful APIs for data access and integration with external systems.
Ensure data quality, lineage, and governance using tools such as Informatica, Purview, or similar.
Monitor and troubleshoot performance issues in data pipelines and cloud infrastructure.
Participate in code reviews, testing, and deployment processes.
Document technical solutions and maintain best practices for data engineering.
Additional Details
Key Skills
Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Employment Type : Full Time
Experience : years
Vacancy : 1
Big Data • Bangalore, Karnataka, India