Job Description:
Key Responsibilities:
- Design, develop, and maintain high-performance, scalable applications using Java and Big Data technologies
- Build and manage data pipelines to process structured and unstructured data from multiple sources
- Develop and maintain microservices, RESTful APIs, and other distributed systems
- Work with modern Big Data tools and technologies such as:
1. Apache Spark
2. HDFS
3. Ceph Storage
4. Solr/Elasticsearch
5. Apache Kafka
6. Delta Lake
- Write clean, efficient, and well-documented code
- Collaborate with cross-functional teams including Data Engineers, QA, and DevOps
- Participate in code reviews and contribute to continuous improvement of development processes
- Mentor and support junior developers in the team
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 2-4 years of hands-on experience in Java development with exposure to Big Data ecosystems
- Strong knowledge of core Java, multithreading, and object-oriented programming
- Experience in building data processing pipelines and working with large-scale datasets
- Familiarity with Big Data components like Spark, HDFS, Kafka, Ceph, and Delta Lake
- Understanding of REST APIs, microservices architecture, and distributed systems
- Strong problem-solving and analytical skills
- Good verbal and written communication skills
- Ability to work in a fast-paced, collaborative environment
Preferred Skills (Nice to Have):
- Exposure to cloud platforms (AWS, Azure, GCP) for Big Data deployments
- Experience with NoSQL databases or search engines like Solr/Elasticsearch
- Familiarity with CI/CD tools and agile development practices
(ref: hirist.tech)