Position Summary:
- Experienced Data Engineer skilled in building data pipelines, managing databases, and handling ETL processes.
- Proficient in tools like SQL, Python, and cloud platforms.
- Focused on ensuring data quality and supporting analytics through efficient data systems.
Key Responsibilities:
- Design and develop scalable data pipelines.
- Extract, transform, and load (ETL/ELT) data from various sources.
- Build and maintain data warehouses and data lakes.
- Integrate structured and unstructured data.
- Manage and optimize databases (SQL and NoSQL).
- Ensure data quality, accuracy, and consistency.
- Monitor and improve data pipeline performance.
- Collaborate with data scientists, analysts, and business teams.
- Maintain data security and follow compliance regulations (e.g., GDPR).
- Document data workflows, architecture, and systems.
- Use tools such as Apache Spark, Kafka, Airflow, Hadoop, and Snowflake.
- Work with cloud platforms such as AWS, GCP, or Azure.
- Troubleshoot data issues and perform root cause analysis.
- Automate data validation and cleaning processes.
- Stay updated on emerging data technologies and best practices.
Qualifications & Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in data engineering.
- Strong experience with Python, Java, Scala, SQL, AWS, and TypeScript.
- Excellent verbal and written communication skills, with the ability to clearly explain technical concepts to stakeholders at all levels.
- Strong analytical and problem-solving skills, with a focus on continuous improvement.
(ref: hirist.tech)