Role: Senior Scala Developer
Experience: 7-14 Years
Location: Mumbai (Powai)
Skills: Scala, Spark, UNIX
1. Minimum 5 years of experience in Spark and Scala development
2. Experience in designing and developing Big Data solutions using Hadoop ecosystem technologies such as HDFS, Spark, Hive, the Parquet file format, YARN, MapReduce, and Sqoop
3. Good experience in writing and optimizing Spark jobs, Spark SQL, etc.; should have worked on both batch and streaming data processing
4. Experience in writing and optimizing complex Hive and SQL queries to process large volumes of data; proficient with UDFs, tables, joins, views, etc.
5. Experience in debugging Spark code
6. Working knowledge of basic UNIX commands and shell scripting
7. Experience with Autosys and Gradle
Good-to-Have
1. Good analytical and debugging skills
2. Ability to coordinate with SMEs and stakeholders, manage timelines and escalations, and provide on-time status updates
3. Write clear and precise documentation and specifications
4. Work in an agile environment
5. Document all developed mappings
Responsibilities / Expectations from the Role
Create Scala / Spark jobs for data transformation and aggregation
Produce unit tests for Spark transformations and helper methods
Write Scaladoc-style documentation for all code
Design data processing pipelines
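To illustrate the expectations above, here is a minimal sketch of the kind of work involved: a pure helper factored out of a Spark job so it can be unit-tested without a SparkSession, documented in Scaladoc style. The function name and data shape are hypothetical, not taken from this posting.

```scala
/** Aggregates amounts per account.
  *
  * Hypothetical helper: pure aggregation logic of the kind typically
  * factored out of a Spark transformation so it can be unit-tested
  * without spinning up a SparkSession.
  *
  * @param rows (accountId, amount) pairs, e.g. parsed from a Hive table
  * @return total amount per accountId
  */
def totalsByAccount(rows: Seq[(String, Double)]): Map[String, Double] =
  rows
    .groupBy { case (accountId, _) => accountId }
    .map { case (accountId, group) =>
      accountId -> group.map { case (_, amount) => amount }.sum
    }

// Usage: the same logic would run inside a Dataset/RDD transformation
// in the actual Spark job.
val totals = totalsByAccount(Seq(("a", 1.0), ("a", 2.0), ("b", 3.0)))
// totals("a") == 3.0, totals("b") == 3.0
```

Keeping transformation logic pure like this is one common way to satisfy both the "Produce unit tests" and "Write Scaladoc-style documentation" expectations at once.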