About the Role:
We are looking for a skilled Data Engineer to design, build, and maintain efficient and scalable data pipelines that support business intelligence, analytics, and data science initiatives.
You will work closely with cross-functional teams to ensure reliable data flow, quality, and accessibility across multiple platforms.
Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and provide timely solutions.
- Build and optimize data warehouses, data lakes, and databases to support analytics and reporting needs.
- Ensure data quality, integrity, and security by implementing validation checks and monitoring.
- Work with cloud platforms and big data technologies to support distributed data processing.
- Develop automated scripts and tools to facilitate data integration and deployment processes.
- Troubleshoot and resolve data issues and optimize the performance of data systems.
- Document data flows, architecture, and operational procedures.
- Stay current with emerging technologies and best practices in data engineering.
Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
- 5+ years of experience in data engineering or software development.
- Proficiency in SQL and experience with relational databases.
- Experience with programming/scripting languages such as Python, Java, or Scala.
- Familiarity with ETL tools and frameworks.
- Knowledge of big data technologies such as Hadoop, Spark, or Kafka is a plus.
- Experience working with cloud platforms such as AWS, Azure, or Google Cloud.
- Strong problem-solving skills and attention to detail.
- Good communication and teamwork skills.
(ref: hirist.tech)