Job description
Required Skills & Experience:
- 3-5 years of hands-on experience as a Data Engineer.
- Proficiency in Python, SQL, and PySpark.
- Strong knowledge of Big Data ecosystems, including Hadoop, Hive, Sqoop, HDFS, and HBase.
- Expertise in the Spark ecosystem: Spark Core, Spark Streaming, Spark SQL, and Databricks.
- Solid experience with AWS cloud services, including EMR, EC2/EKS, Lambda, Glue, and S3.
- In-depth understanding of data modeling, data warehousing methodologies, and ETL processes.
- Familiarity with data governance, quality, and security principles in cloud environments.
- Excellent problem-solving skills and the ability to work independently or collaboratively in a fast-paced environment.