Role: Senior Scala Developer
Experience: 7-14 Years
Location: Mumbai (Powai)
Skills: Scala, Spark, UNIX
Minimum 5 years of experience in Spark/Scala development
Experience in designing and developing Big Data solutions using Hadoop ecosystem components such as HDFS, Spark, Hive, the Parquet file format, YARN, MapReduce, and Sqoop
Strong experience in writing and optimizing Spark jobs and Spark SQL; must have worked on both batch and streaming data processing
Experience in writing and optimizing complex Hive and SQL queries to process large volumes of data; proficient with UDFs, tables, joins, and views
Experience in debugging Spark code
Working knowledge of basic UNIX commands and shell scripting
Experience with Autosys and Gradle
Good-to-Have
Good analytical and debugging skills
Ability to coordinate with SMEs and stakeholders, manage timelines and escalations, and provide on-time status updates
Write clear and precise documentation and specifications
Work in an agile environment
Create and maintain documentation for all developed mappings
Responsibilities / Expectations from the Role
Create Scala / Spark jobs for data transformation and aggregation
Produce unit tests for Spark transformations and helper methods
Write Scaladoc-style documentation for all code
Design data processing pipelines
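To illustrate the kind of work the responsibilities above describe, here is a minimal sketch of a unit-testable aggregation helper of the sort a Spark job would call; the record type, field names, and sample data are hypothetical, not from this posting, and the Spark DataFrame equivalent is noted only in a comment:

```scala
// Hypothetical sketch: a pure aggregation helper that a Spark job could
// reuse and that unit tests can cover without a cluster.
// Trade, account, and amount are assumed names for illustration only.
case class Trade(account: String, amount: Double)

object TradeAggregator {
  // Total traded amount per account.
  // In a Spark job this logic would correspond to:
  //   df.groupBy("account").agg(sum("amount"))
  def totalsByAccount(trades: Seq[Trade]): Map[String, Double] =
    trades.groupBy(_.account).map { case (acct, ts) =>
      acct -> ts.map(_.amount).sum
    }

  def main(args: Array[String]): Unit = {
    val sample = Seq(Trade("A", 10.0), Trade("A", 5.0), Trade("B", 2.5))
    println(totalsByAccount(sample))
  }
}
```

Keeping the transformation logic in a pure function like this is one common way to satisfy the unit-testing expectation above, since the function can be asserted on directly with plain Scala collections.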