Key Responsibilities:
- Design and implement scalable data architectures to support data integration, transformation, and storage.
- Build and maintain ETL pipelines to extract, transform, and load data from diverse sources into data warehouses or data lakes (see the PySpark ETL sketch after this list).
- Work with big data technologies such as Hadoop, Spark, and Kafka to process and manage large datasets efficiently (a Kafka streaming sketch also follows the list).
- Design data models and schemas to support business intelligence, analytics, and reporting requirements.
- Implement data quality checks and governance processes to ensure accuracy, consistency, and compliance with data policies (a minimal quality-check sketch follows the list).
- Identify performance bottlenecks in data processing workflows and optimize pipelines for efficiency.
- Implement data security measures to protect sensitive information and ensure regulatory compliance.
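For illustration, a minimal PySpark sketch of the extract-transform-load flow described above. The file paths, column names, and the "orders" dataset are hypothetical assumptions, not part of the role description.

```python
# Minimal ETL sketch: extract from CSV, transform, load to Parquet.
# Paths, column names, and the "orders" dataset are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw source data (header row assumed).
raw = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: type casting, deduplication, and basic filtering.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
)

# Load: write a partitioned Parquet layout into the curated zone.
orders.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders")

spark.stop()
```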
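Likewise, a sketch of consuming a Kafka topic with Spark Structured Streaming and landing micro-batches as Parquet. The broker address, topic name, and paths are assumptions, and running it requires the spark-sql-kafka connector package on the Spark classpath.

```python
# Structured Streaming sketch: read a Kafka topic, persist micro-batches.
# Broker address, topic, and output paths are hypothetical assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Source: subscribe to a Kafka topic; key/value arrive as binary.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
         .select(F.col("key").cast("string"), F.col("value").cast("string"))
)

# Sink: append micro-batches to Parquet with checkpointing for recovery.
query = (
    events.writeStream.format("parquet")
          .option("path", "/data/raw/events")
          .option("checkpointLocation", "/data/checkpoints/events")
          .start()
)
query.awaitTermination()
```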
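Finally, a hedged sketch of the kind of data quality check mentioned above, written as plain PySpark assertions rather than any specific governance framework; the column names and the 99% validity threshold are assumptions.

```python
# Simple data quality gate: fail the pipeline run if basic expectations break.
# The columns and thresholds below are illustrative assumptions.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def check_quality(df: DataFrame) -> None:
    total = df.count()

    # Completeness: no null order IDs allowed.
    null_ids = df.filter(F.col("order_id").isNull()).count()
    if null_ids > 0:
        raise ValueError(f"{null_ids} rows with null order_id")

    # Uniqueness: order_id must behave as a key.
    distinct_ids = df.select("order_id").distinct().count()
    if distinct_ids != total:
        raise ValueError("duplicate order_id values detected")

    # Validity: amounts must be positive in at least 99% of rows.
    valid = df.filter(F.col("amount") > 0).count()
    if total and valid / total < 0.99:
        raise ValueError("too many non-positive amounts")
```

A gate like this would typically run between the transform and load stages, so bad batches never reach the warehouse.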
Skills Required:
ETL, Data Architecture, Hadoop, Spark, Kafka, Data Modeling