Role: Mid-Level Big Data Engineer
Position Type: Full-Time Contract (40 hours/week)
Contract Duration: 12+ months
Work Schedule: 8 hours/day (Mon-Fri)
Location: Hybrid - Hyderabad, India (3 days onsite per week)
Our client is seeking a skilled and motivated Big Data Engineer to join their dynamic data engineering team. This hybrid role, based in Hyderabad, offers the opportunity to work on cutting-edge data platforms and services that power the client's global data ecosystem. You will be instrumental in building scalable data pipelines, integrating APIs, and optimizing big data solutions on AWS.
Key Responsibilities:
- Design, develop, and maintain scalable data processing pipelines using Spark/PySpark and Hadoop on AWS.
- Implement and optimize data workflows on AWS EMR, EC2, and ECS.
- Develop and integrate RESTful APIs and AWS API Gateway services using Scala/Java.
- Collaborate with cross-functional teams to understand data requirements and deliver robust solutions.
- Ensure data quality, performance, and reliability across all systems.
- Participate in code reviews, testing, and deployment processes.
Required Skills & Qualifications:
- 3–5 years of experience in Big Data Engineering.
- Strong hands-on experience with Spark/PySpark, Hadoop, and AWS services.
- Proficiency in Scala or Java for backend and API development.
- Experience with AWS EMR, EC2, ECS, and API Gateway.
- Solid understanding of RESTful API design and integration.
- Familiarity with CI/CD pipelines and version control (e.g., Git).
- Excellent problem-solving and communication skills.
Preferred Qualifications:
- Experience in data security and compliance within cloud environments.
- Exposure to data lake architectures and real-time data processing.
- Knowledge of containerization tools like Docker and orchestration with Kubernetes.