Key Responsibilities:
- Design, develop, and maintain data processing applications using Scala and Apache Spark.
- Collaborate with cross-functional teams to gather requirements and deliver scalable, efficient solutions.
- Implement test-driven development practices to improve reliability and maintainability.
- Manage deployment of artifacts from lower to higher environments, ensuring smooth transitions.
- Troubleshoot and optimize Spark performance issues for large-scale data processing.
- Actively participate in Agile ceremonies (sprint planning, development, reviews) and deliver high-quality, timely outcomes.
- Provide production support for critical data batches, including timely issue resolution.

Skills / Competencies:
- Strong hands-on experience with Scala programming language (mandatory).
- Proven expertise in Apache Spark (development and performance optimization).
- Solid understanding of data structures, algorithms, and design patterns for efficient data processing.
- Good knowledge of SQL and database concepts.
- Strong analytical and problem-solving skills.
- Familiarity with DevOps practices (CI/CD, deployment automation, monitoring) is a plus.

Skills (Non-Negotiable):
- Scala (Pure Scala coding experience is mandatory).
- Apache Spark (application development & optimization).
- Big Data ecosystem knowledge.
- Strong problem-solving & debugging skills.
- Agile delivery experience.

Note:
Looking specifically for a Pure Scala Developer.
Scala coding is mandatory; candidates without solid Scala hands-on experience will not be considered.
(ref: hirist.tech)