If you are interested and available to join immediately, please submit your resume along with your total experience, current CTC, notice period, and current location to Nitin.patil@ust.com
Key Responsibilities:
Design, develop, and optimize data pipelines and ETL workflows.
Work with Apache Hadoop, Airflow, Kubernetes, and Containers to streamline data processing.
Implement data analytics and mining techniques to drive business insights.
Manage cloud-based big data solutions on GCP and Azure.
Troubleshoot issues using Hadoop log files and work with multiple data processing engines to deliver scalable data solutions (a brief illustrative sketch follows this list).
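To give a concrete sense of the pipeline work described above, here is a minimal Spark ETL sketch in Scala. It is purely illustrative: the input path, column names, and cleansing rules are hypothetical assumptions, not details from this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SalesEtlSketch {
  def main(args: Array[String]): Unit = {
    // Local SparkSession for illustration; a production job would run on a cluster (YARN/Kubernetes).
    val spark = SparkSession.builder()
      .appName("sales-etl-sketch")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical input path and columns -- not from the posting.
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/data/raw/sales.csv")

    // Basic cleansing: drop rows missing an order id, keep positive amounts,
    // and normalise the country column.
    val cleaned = raw
      .filter(col("order_id").isNotNull && col("amount") > 0)
      .withColumn("country", upper(trim(col("country"))))

    // Write a partitioned Parquet table for downstream analytics.
    cleaned.write
      .mode("overwrite")
      .partitionBy("country")
      .parquet("/data/curated/sales")

    spark.stop()
  }
}
```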
Required Skills & Qualifications :
Proficiency in Scala, Spark, PySpark, Python, and SQL.
Strong hands-on experience with the Hadoop ecosystem, including Hive, Pig, and MapReduce.
Experience in ETL, data warehouse design, and data cleansing.
Familiarity with data pipeline orchestration tools such as Apache Airflow.
Knowledge of Kubernetes, containers, and cloud platforms such as GCP and Azure.
If you are a seasoned big data engineer with a passion for Scala and cloud technologies, we invite you to apply for this exciting opportunity!
Big Data Engineer • Gandhinagar, Gujarat, India