Data Engineer - Spark / Hadoop

Growel Softech Pvt. Ltd., Bangalore
Job description

Data Engineer with GCP / Azure / AWS + PySpark + Hadoop + Hive

Mandatory key skills: PySpark, GCP (preferred) / AWS / Azure, Hadoop, Hive.

Experience: 5+ years (relevant)

Location: Any location (Bangalore, Pune, Chennai, Kolkata, Noida, Hyderabad, Kochi, Trivandrum)

Notice period: immediate joiners only

Budget: 5-7 yrs

Key Responsibilities:

  • Design, implement, and optimize data pipelines for large-scale data processing.
  • Develop and maintain ETL / ELT workflows using Spark, Hadoop, Hive, and Airflow.
  • Collaborate with data scientists, analysts, and engineers to ensure data availability and quality.
  • Write efficient and optimized SQL queries for data extraction, transformation, and analysis.
  • Leverage PySpark and cloud tools (preferably Google Cloud Platform) to build reliable and scalable solutions.
  • Monitor and troubleshoot data pipeline performance and reliability issues.

Required Skills:

  • 5-9 years of experience in a Data Engineering role.
  • Strong hands-on experience with PySpark and SQL.
  • Good working knowledge of GCP or any major cloud platform (AWS, Azure).
  • Experience with Hadoop, Hive, and distributed data systems.
  • Proficiency in data orchestration tools such as Apache Airflow.
  • Ability to work independently in a fast-paced, agile environment.

(ref: hirist.tech)
