Data Engineering Opportunity
We are seeking a highly skilled data engineer to design, develop and maintain ETL/ELT pipelines, ingest and transform data from various sources, build real-time and batch processing solutions, and ensure data security and compliance.
Key Responsibilities:
- Design and develop ETL/ELT pipelines using cloud data platforms and big data technologies such as Snowflake, AWS/GCP/Azure, Databricks, BigQuery, Azure Synapse, Kafka and Apache Hadoop.
- Ingest and transform data from APIs, databases, files and streams using PySpark, SQL and Airflow.
- Build real-time and batch processing solutions using Spark, Hadoop and other technologies.
- Ensure data quality and governance by implementing data validation, cleansing and access-control measures.
- Maintain documentation and best practices for data engineering processes.
Requirements:
- 5-10 years of experience in data engineering, with expertise in cloud-based data platforms.
- Strong understanding of data warehousing, ETL/ELT pipelines and big data technologies.
- Proficiency in programming languages such as Python, Java and Scala.
- Experience with data quality, governance and security measures.
- Familiarity with collaboration tools like Jira, Confluence and Slack.
About Us:
We believe in empowering our employees to drive change and innovation through their work. If you are passionate about data engineering and want to make a meaningful impact, we encourage you to apply.