Role: Senior Scala Developer
Experience: 7-14 Years
Location: Mumbai (Powai)
Skills: Scala, Spark, UNIX
- Minimum 5 years of experience in Spark and Scala development
- Experience designing and developing Big Data solutions using Hadoop ecosystem components such as HDFS, Spark, Hive, the Parquet file format, YARN, MapReduce, and Sqoop
- Strong experience writing and optimizing Spark jobs and Spark SQL; should have worked on both batch and streaming data processing (see the first sketch after this list)
- Experience writing and optimizing complex Hive and SQL queries over large data volumes; comfortable with UDFs, tables, joins, views, etc. (see the second sketch after this list)
- Experience debugging Spark code
- Working knowledge of basic UNIX commands and shell scripting
- Experience with Autosys and Gradle
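
By way of illustration, the following is a minimal sketch of the kind of batch Spark Scala job described above. The TradeAggregation object, the /data/trades path, and the column names are all hypothetical stand-ins, not part of this role's actual stack.

    import org.apache.spark.sql.{SparkSession, functions => F}

    // Illustrative only: a minimal batch job of the kind described above,
    // assuming a hypothetical Parquet dataset of trades with `symbol`,
    // `quantity`, and `price` columns.
    object TradeAggregation {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("TradeAggregation")
          .getOrCreate()

        // Read the Parquet source (path is a placeholder).
        val trades = spark.read.parquet("/data/trades")

        // Aggregate notional value per symbol using Spark SQL functions.
        val notionalBySymbol = trades
          .withColumn("notional", F.col("quantity") * F.col("price"))
          .groupBy("symbol")
          .agg(F.sum("notional").as("total_notional"))

        // Write the result back as Parquet for downstream consumers.
        notionalBySymbol.write.mode("overwrite").parquet("/data/trade_aggregates")

        spark.stop()
      }
    }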
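
And a second sketch of the Hive/SQL side: a Hive-style query with a registered UDF, run through Spark SQL. The trades and symbols tables and the normalize_symbol UDF are hypothetical, and Hive support on the classpath is an assumption.

    import org.apache.spark.sql.SparkSession

    // Illustrative only: a Hive-style join-and-aggregate query with a UDF.
    object HiveQueryExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("HiveQueryExample")
          .enableHiveSupport() // assumes Hive classes are available
          .getOrCreate()

        // Register a simple UDF that normalizes ticker symbols.
        spark.udf.register("normalize_symbol", (s: String) => s.trim.toUpperCase)

        // Join two (hypothetical) tables and aggregate notional per symbol.
        val summary = spark.sql(
          """
            |SELECT normalize_symbol(t.symbol) AS symbol,
            |       SUM(t.quantity * t.price)  AS total_notional
            |FROM trades t
            |JOIN symbols s ON t.symbol = s.symbol
            |GROUP BY normalize_symbol(t.symbol)
            |""".stripMargin)

        summary.show()
        spark.stop()
      }
    }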
Good-to-Have
- Good analytical and debugging skills
- Ability to coordinate with SMEs and stakeholders, manage timelines and escalations, and provide on-time status updates
- Write clear and precise documentation / specifications
- Work in an agile environment
- Create documentation and document all developed mappings

Responsibility of / Expectations from the Role
- Create Scala / Spark jobs for data transformation and aggregation
- Produce unit tests for Spark transformations and helper methods
- Write Scaladoc-style documentation with all code (see the sketch after this list)
- Design data processing pipelines
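
As an illustration of these expectations, here is a minimal sketch of a Scaladoc-documented Spark transformation together with a unit test for it. The Transformations object, column names, and test data are hypothetical, and ScalaTest with a local SparkSession is an assumption rather than a stated part of the stack.

    import org.apache.spark.sql.{DataFrame, SparkSession, functions => F}
    import org.scalatest.funsuite.AnyFunSuite

    /** Hypothetical transformation helpers of the kind this role produces. */
    object Transformations {

      /** Adds a `notional` column computed as `quantity * price`.
        *
        * @param trades input DataFrame with numeric `quantity` and `price` columns
        * @return the input DataFrame extended with a `notional` column
        */
      def withNotional(trades: DataFrame): DataFrame =
        trades.withColumn("notional", F.col("quantity") * F.col("price"))
    }

    /** Unit test sketch using ScalaTest and a local SparkSession. */
    class TransformationsSuite extends AnyFunSuite {
      private lazy val spark = SparkSession.builder()
        .master("local[1]")
        .appName("TransformationsSuite")
        .getOrCreate()

      test("withNotional multiplies quantity by price") {
        import spark.implicits._
        val input  = Seq(("AAPL", 10, 2.5)).toDF("symbol", "quantity", "price")
        val result = Transformations.withNotional(input)
        assert(result.select("notional").as[Double].head() == 25.0)
      }
    }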