Role Description
This is a full-time role for a Data Engineer.
The Data Engineer will design, develop, and maintain data architecture, including databases and large-scale data processing systems.
Day-to-day tasks will include ETL development, data integration, performance tuning, ensuring data quality and integrity, and creating data pipelines.
The role also involves collaborating with other teams to support data needs and implementing real-time data streaming solutions.
Requirements:
- 4+ years of experience in data engineering or platform reliability roles.
- Strong hands-on experience with Azure Databricks, SQL, PySpark, and Delta Lake.
- Proven experience in building and maintaining production-grade data pipelines in cloud environments.
- Familiarity with Medallion Architecture and modern data engineering patterns.
- Experience implementing data testing, validation rules, and monitoring tools to maintain data quality.
- Experience with orchestration tools like Azure Data Factory, Airflow, or dbt Cloud.
- Exposure to data governance, privacy, and compliance practices in fintech or regulated industries.
- Familiarity with CI/CD for data pipelines and infrastructure-as-code.
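As a concrete illustration of the data testing and validation work this role calls for, the sketch below shows declarative quality rules in plain Python. The rule names, field names, and thresholds are illustrative assumptions only; in practice this logic would live in a framework such as Great Expectations or Delta Live Tables expectations.

```python
# Minimal sketch of declarative data-quality rules, assuming records
# arrive as plain dicts. Illustrative only; not a production framework.

def not_null(field):
    """Rule: the field must be present and non-null."""
    return lambda row: row.get(field) is not None

def in_range(field, lo, hi):
    """Rule: the field must be a value within [lo, hi]."""
    return lambda row: row.get(field) is not None and lo <= row[field] <= hi

def validate(rows, rules):
    """Split rows into (passed, failed) against all rules."""
    passed, failed = [], []
    for row in rows:
        (passed if all(rule(row) for rule in rules) else failed).append(row)
    return passed, failed

# Hypothetical transaction records and rules, for illustration only.
rules = [not_null("account_id"), in_range("amount", 0, 1_000_000)]
rows = [
    {"account_id": "a1", "amount": 250},
    {"account_id": None, "amount": 99},   # fails not_null
    {"account_id": "a2", "amount": -5},   # fails in_range
]
good, bad = validate(rows, rules)
```

Failed rows would typically be quarantined and surfaced through the monitoring and alerting mentioned in the responsibilities below, rather than silently dropped.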
Key Responsibilities:
- Own and operate the data platforms and day-to-day functions with a focus on 99%+ uptime and reliability.
- Build, monitor, and maintain data pipelines across bronze, silver, and gold layers using Azure Databricks, SQL, and PySpark.
- Implement and manage data testing and data quality frameworks to ensure trust and consistency in key datasets.
- Set up and maintain alerting, observability, and logging for data jobs and platform performance.
- Collaborate with stakeholders to understand data requirements and ensure data is delivered clean, complete, and on time.
- Drive root cause analysis and resolution for data issues, focusing on continuous improvement and automation.
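The bronze/silver/gold (Medallion) flow in these responsibilities can be sketched conceptually as follows. In the actual role this would be PySpark over Delta Lake tables; plain Python lists and dicts are used here so the example is self-contained, and all field names and records are illustrative assumptions.

```python
# Conceptual Medallion-style sketch: bronze holds raw ingested records,
# silver holds cleaned/validated records, gold holds business aggregates.
from collections import defaultdict

bronze = [  # raw events, possibly malformed (illustrative data)
    {"order_id": "o1", "customer": "c1", "amount": "120.50"},
    {"order_id": "o2", "customer": "c1", "amount": "bad-value"},
    {"order_id": "o3", "customer": "c2", "amount": "75.00"},
]

def to_silver(rows):
    """Clean and type-cast; drop rows that fail validation."""
    out = []
    for r in rows:
        try:
            out.append({**r, "amount": float(r["amount"])})
        except ValueError:
            pass  # in production, route to a quarantine table instead
    return out

def to_gold(rows):
    """Aggregate silver records into per-customer order totals."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["customer"]] += r["amount"]
    return dict(totals)

silver = to_silver(bronze)
gold = to_gold(silver)
```

Each layer boundary is also a natural place to attach the testing, logging, and alerting described above, since row counts and failure rates at each hop are cheap to measure and easy to alert on.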