Responsibilities:
Big Data Orchestration : Airflow, Spark on Kubernetes, Yarn, Oozie
Big Data Processing : Hadoop, Kafka, Spark & Spark Structured Streaming
Experience with SOLID & DRY principles and with implementing sound software architecture & design
Advanced Scala experience (e.g. functional programming, case classes, complex data structures & algorithms)
Proficient in developing automated frameworks for unit & integration testing
Proficient with Kubernetes, Docker, Helm and related container technologies
Proficient in deploying and managing Spark workloads on Kubernetes clusters
Candidates should have at least 5 years of hands-on experience in Spark and Scala, along with a solid grasp of data engineering concepts
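To illustrate the kind of functional Scala the role calls for, here is a minimal sketch (all names hypothetical, not from the posting) using an immutable case class and a pure fold over a collection:

```scala
// Hypothetical example: immutable domain model via a case class.
final case class Event(userId: String, amount: Double)

object EventStats {
  // Pure functional aggregation: sum amounts per user with foldLeft
  // over an immutable Map — no mutable state, no side effects.
  def totalsByUser(events: Seq[Event]): Map[String, Double] =
    events.foldLeft(Map.empty[String, Double]) { (acc, e) =>
      acc.updated(e.userId, acc.getOrElse(e.userId, 0.0) + e.amount)
    }

  def main(args: Array[String]): Unit = {
    val events = Seq(Event("a", 1.0), Event("b", 2.5), Event("a", 0.5))
    println(totalsByUser(events)) // Map(a -> 1.5, b -> 2.5)
  }
}
```

The same shape (case classes as typed records, fold/map/filter instead of loops) carries over directly to Spark Datasets and Structured Streaming aggregations.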
Data • Mohali, Punjab, India