Job Summary:
We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and architectures for our data platform. The ideal candidate will have strong experience in data integration, transformation, and optimization using modern cloud and big data technologies.
Key Responsibilities:
- Design, develop, and manage robust ETL/ELT data pipelines to support analytics, reporting, and machine learning initiatives.
- Integrate data from multiple structured and unstructured data sources into centralized data warehouses or data lakes.
- Optimize data storage, query performance, and data flow for efficiency and scalability.
- Collaborate with data analysts, data scientists, and business teams to understand data needs and deliver reliable solutions.
- Ensure data quality, integrity, and security across the organization.
- Implement best practices for data modeling, metadata management, and data governance.
- Monitor and troubleshoot data pipelines, ensuring timely and accurate data delivery.
Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 3–7 years of experience as a Data Engineer or in a similar role.
- Strong programming skills in Python, Scala, or Java.
- Expertise in SQL and data modeling.
- Experience with ETL tools (e.g., Apache Airflow, Informatica, Talend, AWS Glue, Azure Data Factory).
- Proficiency with cloud data platforms such as AWS (Redshift, S3, Glue), Azure (Synapse, Data Lake, ADF), or Google Cloud (BigQuery).
- Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka).
- Strong understanding of data warehousing concepts and data lake architectures.
- Knowledge of CI/CD, DevOps, and version control systems (e.g., Git).
Skills Required:
Java, Hadoop, AWS Redshift, Scala, Data Warehousing, AWS Glue, Kafka, Informatica, SQL, Apache Airflow, DevOps, Azure Synapse, Git, Azure Data Factory, Spark, ETL Tools, Talend, Python