Hadoop Admin
Location : Bangalore, Chennai, Pune
Experience : 5-12 Years
Employment Type : Full-time
Work Mode : Hybrid (during IND hours)
Job Overview :
We are seeking an experienced Hadoop Admin with 5-12 years of experience to join our PRE-Big Data team. This role is crucial for managing and optimizing our extensive Hadoop platforms, which span thousands of nodes per cluster. You will be responsible for Big Data Administration and Engineering activities across multiple open-source platforms, focusing on improving performance, reliability, and efficiency. A strong background in Hadoop support, automation, and DevOps practices is essential for this role.
Role Details :
This role is part of our PRE-Big Data team, which is responsible for managing our critical Hadoop platforms. Our Hadoop clusters are massive, with some spanning thousands of nodes. For senior resources, we expect at least 3-5 years of Hadoop Admin support experience specifically with clusters of 500+ nodes. The position is hybrid, working during India business hours, and will concentrate on enhancing the performance, reliability, and overall efficiency of our Big Data platforms.
Key Responsibilities :
- Perform comprehensive Big Data Administration and Engineering activities on multiple open-source platforms, including Hadoop, Kafka, HBase, and Spark.
- Provide expert-level Hadoop administration support, with a focus on large-scale clusters.
- Conduct effective root cause analysis for major production incidents and develop clear learning documentation.
- Identify and implement high-availability solutions for services with single points of failure to ensure continuous operation.
- Plan and execute capacity expansions and upgrades in a timely manner to prevent scaling issues and bugs.
- Automate repetitive tasks to significantly reduce manual effort and prevent human errors, leveraging tools like Ansible, Shell scripting, or Python scripting (see the first illustrative sketch after this list).
- Tune alerting mechanisms and set up robust observability to proactively identify issues and performance problems (see the second illustrative sketch after this list).
- Collaborate closely with Level 3 teams to review new use cases and implement cluster hardening techniques, building robust and reliable platforms.
- Create standard operating procedure (SOP) documents and guidelines for effectively managing and utilizing the Big Data platforms.
- Leverage DevOps tools and disciplines (Incident, Problem, and Change Management) and standards in day-to-day operations.
- Ensure that the Hadoop platform consistently meets performance and service level agreement (SLA) requirements.
- Perform security remediation, automation, and self-healing as per requirements to maintain a secure and resilient environment.
- Concentrate on developing automations and reports to minimize manual effort, utilizing various scripting and programming languages.
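To give a flavour of the task automation described above, here is a minimal, hedged sketch of the kind of scripted check a Hadoop Admin might write in Python: it parses standard `hdfs dfsadmin -report` output and flags DataNodes above a usage threshold. The 85% threshold and the plain `print` reporting are illustrative placeholders, not requirements from this posting.

```python
#!/usr/bin/env python3
"""Illustrative sketch: flag DataNodes whose DFS usage exceeds a threshold.

Assumes the `hdfs` CLI is on PATH and that `hdfs dfsadmin -report`
emits the usual "Name:" / "DFS Used%:" blocks per DataNode.
The 85% threshold is a placeholder, not a stated requirement.
"""
import re
import subprocess

THRESHOLD_PCT = 85.0  # placeholder value


def datanode_usage():
    """Yield (node, dfs_used_pct) pairs parsed from the dfsadmin report."""
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    node = None
    for line in report.splitlines():
        line = line.strip()
        if line.startswith("Name:"):
            node = line.split("Name:", 1)[1].strip()
        elif line.startswith("DFS Used%:") and node:
            pct = float(re.sub(r"[^\d.]", "", line.split(":", 1)[1]))
            yield node, pct
            node = None


def main():
    hot = [(n, p) for n, p in datanode_usage() if p >= THRESHOLD_PCT]
    for node, pct in hot:
        # In practice this would feed an alerting or ticketing hook.
        print(f"WARNING: {node} is at {pct:.1f}% DFS used")


if __name__ == "__main__":
    main()
```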
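Similarly, the alerting and observability bullet can be illustrated with a small polling check. The sketch below assumes a Hadoop 3.x NameNode exposing JMX over HTTP on the default port 9870; the host name and the thresholds are hypothetical placeholders, and a real deployment would push results into the alerting system rather than print them.

```python
#!/usr/bin/env python3
"""Illustrative sketch: poll NameNode JMX metrics and raise a simple alert.

Assumes an HDFS NameNode exposing JMX over HTTP (default port 9870 on
Hadoop 3.x); the host name and thresholds below are placeholders.
"""
import json
import urllib.request

NAMENODE_JMX = (
    "http://namenode.example.com:9870/jmx"
    "?qry=Hadoop:service=NameNode,name=FSNamesystem"
)


def fsnamesystem_metrics():
    """Return the FSNamesystem JMX bean as a dict."""
    with urllib.request.urlopen(NAMENODE_JMX, timeout=10) as resp:
        beans = json.load(resp)["beans"]
    return beans[0] if beans else {}


def main():
    metrics = fsnamesystem_metrics()
    checks = {
        "MissingBlocks": 0,             # anything above 0 is worth paging on
        "UnderReplicatedBlocks": 1000,  # placeholder tolerance
    }
    for metric, limit in checks.items():
        value = metrics.get(metric, 0)
        if value > limit:
            # A real setup would notify the on-call channel instead of printing.
            print(f"ALERT: {metric}={value} exceeds limit {limit}")


if __name__ == "__main__":
    main()
```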
Required Skills & Qualifications :
- 5-12 years of experience in Big Data Administration and Engineering.
- Minimum of 2 years of dedicated Hadoop admin support experience.
- For senior roles, 3-5 years of Hadoop Admin support experience with 500+ node clusters is expected.
- Strong proficiency in Hadoop administration automation using tools like Ansible, Shell scripting, or Python scripting.
- Solid DevOps skills, with the ability to code in at least one language, preferably Python.
- Hands-on experience with Hadoop, Kafka, HBase, and Spark platforms.
- Proven strong troubleshooting and debugging skills.
- Experience in identifying and implementing high-availability solutions.
- Knowledge of capacity planning, upgrades, and performance tuning for large-scale Big Data environments.
- Familiarity with monitoring, alerting, and observability best practices.
- Experience with security remediation and automation within Big Data platforms.
- Ability to create comprehensive standard operating procedure (SOP) documents and guidelines.
- Understanding and application of Incident, Problem, and Change Management processes.
(ref : hirist.tech)