Summary of Job
Build the Big Data infrastructure: understand the business need, recommend a solution, and then build and implement it. You will work closely with the operations and analytics teams to understand data-landscaping requirements and build the systems, processes, and teams required while maintaining scalability, security, and reliability.
Key Responsibilities:
Service Delivery:
- 6-8 years of total IT experience developing scalable, high-availability applications, including 2+ years with Big Data technologies such as Spark, Kafka, Hive, Oozie, and Flume
- Programming background with expertise in Spark using Python
- Well versed with REST API (JAX-RS / Flask / Bottle or any other framework in Python)
- Familiarity with databases (NoSQL / RDBMS)
- Hands-on experience in software design & development using Agile methodology
- Graduate / Postgraduate in Computer Sciences or a related field
Project Management:
- Vendor management: handling external vendors to facilitate data collection (preferred)
- Managing projects from start to end, independently and with 1-2 team members
Customer Relationship Management:
- Client servicing in task execution
- Seeking feedback from clients on solutions / deliverables
- Collaborating with multiple internal stakeholders for smooth delivery of projects
Process Improvement:
- Complete understanding of various internal processes and quality norms, ensuring self-compliance with the same
- Participating in Quality Improvement initiatives
Qualifications and Skills Required:
Qualification: Graduate or Postgraduate in any of these disciplines - B.Tech / MCA / M.Sc (Comp Sc), or MBA with IT specialization
Experience:
5-8 years of relevant experience
Must-haves:
- Experience developing scalable, high-availability applications, including 2+ years with Big Data technologies such as Spark, Kafka, Hive, Oozie, and Flume
Preferred:
- Experience with any cloud platform (AWS, Azure, GCP)
- Exposure to product development