Spark Core / Streaming with Scala, Machine Learning / AI, Data Structures, Algorithms, Data Transformation, Data Ingestion, Optimization Techniques, Big Data Technologies (Hadoop, MapReduce, Kafka, Cassandra)
Job Summary: We are looking for a candidate with strong experience in Spark Core / Streaming, the Spark RDD API, and the Spark DataFrame API using Scala, Java, or Python, along with Machine Learning and AI. The candidate should have hands-on programming experience, a good understanding of data structures, algorithms, data transformation, data ingestion, and optimization mechanisms/techniques, and a good understanding of Big Data technologies (Hadoop, MapReduce, Kafka, Cassandra).
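For illustration only, the following is a minimal Scala sketch of the kind of Spark RDD and DataFrame work referenced above; the object name, app name, master setting, and sample data are assumptions and not part of the role description.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical example: word count expressed with both the RDD API and the DataFrame API.
object SparkSkillsSketch {
  def main(args: Array[String]): Unit = {
    // Local session for demonstration; a real deployment would target a cluster master.
    val spark = SparkSession.builder()
      .appName("spark-skills-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // RDD API: word count over an in-memory collection.
    val rdd = spark.sparkContext.parallelize(Seq("spark streaming", "spark core"))
    rdd.flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .collect()
      .foreach(println)

    // DataFrame API: the same aggregation expressed declaratively.
    val df = Seq("spark streaming", "spark core").toDF("line")
    df.selectExpr("explode(split(line, ' ')) AS word")
      .groupBy("word")
      .count()
      .show()

    spark.stop()
  }
}
```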
Key Responsibilities:
Required Skills and Experience:
We deal with huge amounts of data at massive scale, so we are looking for engineers who love solving challenging problems through independent research and collaboration across our product teams to help improve the overall product experience.
(ref: hirist.tech)
Data Engineering • Hyderabad