Big Data Engineer Role Overview
We are seeking a seasoned Data Engineer to join our team, with deep expertise in designing and developing ETL/ELT pipelines.
The ideal candidate will have 5-10 years of experience in data engineering and a strong understanding of big data technologies such as Apache Spark, Hadoop, Kafka, Airflow, dbt, Databricks, Snowflake, BigQuery, Azure Data Factory, and at least one major cloud platform (AWS, GCP, or Azure).
Key Responsibilities:
- Design, develop, and maintain efficient ETL/ELT pipelines to process large datasets.
- Ingest and transform data from various sources including APIs, databases, files, and streams.
- Build real-time and batch processing solutions to meet business requirements.
- Implement robust safeguards to ensure data security, access control, and regulatory compliance.
- Maintain accurate documentation and adhere to industry best practices.
Requirements:
- 5-10 years of experience in data engineering with a focus on big data technologies.
- Expertise in designing and developing scalable ETL/ELT pipelines.
- Strong understanding of SQL and PySpark for data manipulation and analysis.
- Ability to work with multiple data sources and formats to deliver insights.
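Purely as an illustration of the pipeline work described above, here is a minimal PySpark sketch of a batch ETL step (extract, transform, load). Every path, column name, and app name in it (events.json, user_id, amount, daily_spend) is a hypothetical placeholder, not a reference to our actual systems.

```python
# Minimal PySpark batch ETL sketch. All paths and column names are
# hypothetical placeholders used only to illustrate the pattern.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw events from a (hypothetical) JSON landing zone.
raw = spark.read.json("s3://example-bucket/landing/events.json")

# Transform: drop malformed rows, then aggregate spend per user per day.
daily_spend = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("user_id", "event_date")
       .agg(F.sum("amount").alias("total_spend"))
)

# Load: write partitioned Parquet to a (hypothetical) curated zone.
daily_spend.write.mode("overwrite").partitionBy("event_date") \
    .parquet("s3://example-bucket/curated/daily_spend/")
```

In practice a job like this would typically be scheduled and monitored with an orchestrator such as Airflow, one of the tools listed above.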