Description :
- Skill : Big Data + PySpark
- Location : Hyderabad only
- Experience : 5-9 Years
- Notice Period : Immediate to 30 Days
- Interview Mode : L1 Virtual / L2 F2F (Mandatory)
- Work Mode : Hybrid
- Skills required : Big Data (Hadoop, Hive, Impala, Spark), PySpark, Python, Oracle, Exadata (RDBMS), Autosys, Bitbucket
Job Summary :
We are seeking a Specialist with 5 to 9 years of experience in the Big Data Hadoop ecosystem to design and implement scalable data solutions.
Detailed Job Description :
- Work extensively with the Big Data Hadoop ecosystem to process and analyze large datasets.
- Utilize Hadoop, Spark, and Spark SQL to build efficient, high-performance data pipelines.
- Leverage Python for data manipulation, automation, and integration tasks.
- Collaborate with cross-functional teams to gather data requirements and deliver optimized solutions.
- Participate in the design, development, and deployment of Big Data applications and frameworks.
- Ensure data quality, integrity, and security across Big Data environments.
- Stay current with the latest trends and advancements in Hadoop and Spark technologies.
- Optimize system performance and troubleshoot issues related to Big Data infrastructure.
Roles and Responsibilities :
- Design and develop scalable Big Data solutions using the Hadoop and Spark ecosystems.
- Implement and optimize Spark SQL queries for performance and efficient resource utilization.
- Develop Python scripts for data ingestion, transformation, and workflow automation.
- Monitor and maintain Hadoop clusters to ensure high availability and reliability.
- Collaborate with data engineers, analysts, and stakeholders to deliver actionable insights.
- Conduct code reviews and provide technical guidance to junior team members.
- Participate in capacity planning, system tuning, and performance benchmarking.
- Document technical specifications, processes, and best practices for Big Data projects.
(ref : hirist.tech)