Job Description
Details
Role - Senior Developer
Required Technical Skill Set - Spark / Scala / Unix
Desired Experience Range - 5-8 years
Location of Requirement - Pune
Desired Competencies (Technical / Behavioral)
Must-Have
- Minimum 4 years of experience in Spark and Scala development
- Experience in designing and developing Big Data solutions using Hadoop ecosystem components such as HDFS, Spark, Hive, the Parquet file format, YARN, MapReduce, and Sqoop
- Strong experience in writing and optimizing Spark jobs and Spark SQL; should have worked on both batch and streaming data processing (see the sketch after this list)
- Experience in writing and optimizing complex Hive and SQL queries to process large volumes of data; proficient with UDFs, tables, joins, and views
- Experience in debugging Spark code
- Working knowledge of basic UNIX commands and shell scripting
- Experience with Autosys and Gradle
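To make the Spark / Spark SQL expectation concrete, here is a minimal batch-job sketch in Scala: it reads Parquet inputs, registers temporary views, and runs a Spark SQL join-and-aggregate before writing partitioned Parquet output. All object, path, table, and column names are hypothetical illustrations, not part of the actual assignment.

    import org.apache.spark.sql.SparkSession

    object OrderAggregationJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("OrderAggregationJob")
          .getOrCreate()

        // Hypothetical input paths; a real job would take these as arguments.
        val orders    = spark.read.parquet("/data/raw/orders")
        val customers = spark.read.parquet("/data/raw/customers")

        orders.createOrReplaceTempView("orders")
        customers.createOrReplaceTempView("customers")

        // Spark SQL: a typical batch transformation with a join and aggregation.
        val dailyTotals = spark.sql(
          """SELECT c.region, o.order_date, SUM(o.amount) AS total_amount
            |FROM orders o
            |JOIN customers c ON o.customer_id = c.customer_id
            |GROUP BY c.region, o.order_date""".stripMargin)

        // Write results back as Parquet, partitioned for downstream Hive queries.
        dailyTotals.write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("/data/curated/daily_totals")

        spark.stop()
      }
    }

A streaming variant would swap the batch reads for spark.readStream sources; the transformation logic itself stays largely the same.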
Good-to-Have
- Good analytical and debugging skills
- Ability to coordinate with SMEs and stakeholders, manage timelines and escalations, and provide on-time status updates
- Write clear and precise documentation / specifications
- Work in an agile environment
- Create documentation and document all developed mappings
Responsibility of / Expectations from the Role
1 Create Scala / Spark jobs for data transformation and aggregation
2 Produce unit tests for Spark transformations and helper methods (see the sketch after this list)
3 Write Scaladoc-style documentation for all code
4 Design data processing pipelines
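As a sketch of expectations 2 and 3 above, the fragment below pairs a small transformation helper, documented in Scaladoc style, with a ScalaTest unit test that exercises it against a local SparkSession. The helper, its column names, and the choice of ScalaTest are assumptions made for illustration only.

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions.{col, sum}
    import org.scalatest.funsuite.AnyFunSuite

    object Transformations {
      /** Aggregates order amounts per customer.
        *
        * @param orders input DataFrame with `customer_id` and `amount` columns
        * @return one row per customer with the summed `total_amount`
        */
      def totalsByCustomer(orders: DataFrame): DataFrame =
        orders.groupBy(col("customer_id"))
          .agg(sum(col("amount")).as("total_amount"))
    }

    class TransformationsSuite extends AnyFunSuite {
      test("totalsByCustomer sums amounts per customer") {
        val spark = SparkSession.builder()
          .appName("TransformationsSuite")
          .master("local[2]") // local mode, so the test needs no cluster
          .getOrCreate()
        import spark.implicits._

        val input = Seq(("c1", 10.0), ("c1", 5.0), ("c2", 7.0))
          .toDF("customer_id", "amount")
        val result = Transformations.totalsByCustomer(input)
          .collect()
          .map(r => r.getString(0) -> r.getDouble(1))
          .toMap

        assert(result == Map("c1" -> 15.0, "c2" -> 7.0))
        spark.stop()
      }
    }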