We are seeking a Big Data Engineer with strong hands-on experience in Spark and AWS technologies. The ideal candidate should demonstrate a deep understanding of big data concepts, programming fundamentals, and the ability to solve complex problems related to scalability, failure handling, and optimization.
Key Responsibilities:
Design, develop, and optimize big data pipelines using Spark on AWS.
Implement scalable and fault-tolerant data processing solutions.
Troubleshoot and resolve performance bottlenecks in big data workflows.
Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
Write clean, efficient, and well-documented code following core programming principles.
Continuously improve existing data infrastructure for better reliability and performance.
Required Skills & Experience:
Strong practical experience with Apache Spark and big data ecosystems.
Hands-on experience with AWS services relevant to big data (e.g., EMR, S3, Lambda).
Solid understanding of core programming fundamentals, including Object-Oriented Programming (OOP) concepts.
Proven problem-solving skills related to scaling, failure handling, and performance optimization in big data environments.
Ability to explain not only which technologies are used, but why and how they work.
Familiarity with common big data terms and best practices.
Big Data Developer • India