Years of experience : 4-7 years
Location : Bangalore, Gurgaon
Job Description :
- Experience working with the Spark framework; good understanding of its core concepts, optimizations, and best practices
- Good hands-on experience writing PySpark code; should understand design principles and OOP
- Good experience writing complex queries to derive business-critical insights
- Hands-on experience with stream data processing
- Understanding of Data Lake vs. Data Warehouse concepts
- Knowledge of machine learning would be an added advantage
- Experience with NoSQL technologies such as MongoDB and DynamoDB
- Good understanding of test-driven development
- Flexibility to learn new technologies
Roles & Responsibilities :
- Design and implement solutions for problems arising out of large-scale data processing
- Attend / drive various architectural, design, and status calls with multiple stakeholders
- Ensure end-to-end ownership of all assigned tasks, including development, testing, deployment, and support
- Design, build & maintain efficient, reusable & reliable code
- Test implementations, troubleshoot & correct problems
- Capable of working both as an individual contributor and within a team
- Ensure high-quality software development with complete documentation and traceability
- Fulfil organizational responsibilities (sharing knowledge & experience with other teams / groups)