Experience : 5 - 9 Years
Location : Bangalore / Pune / Hyderabad
Work Mode : WFO (Hybrid)
Note : Should be open to short / medium-term travel opportunities within India and to foreign locations
Technical Skills
Must Have :
Java (primary backend development language)
Kafka (real-time messaging and streaming)
Big Data Processing (Spark, distributed systems)
Strong backend architecture & microservices design
Azure (preferred) or any major cloud (AWS, GCP)
SQL Databases (Postgres or equivalent)
NoSQL Databases (Elasticsearch, MongoDB, or similar)
Python (secondary scripting / data workflows)
Kubernetes & Docker (containerization and orchestration)
Good to Have (Optional) :
Golang (for performance-critical services)
Databricks (managed Spark environment)
Roles and Responsibilities
Backend Development (Java focus)
Build and maintain high-performance, scalable backend services using Java.
Design APIs, microservices, and distributed backend systems.
Implement clean, maintainable, and test-driven code (TDD / BDD).
Data Engineering & Processing
Design and implement data pipelines with Spark, Kafka, and distributed storage systems.
Integrate SQL / NoSQL databases for backend and analytics needs.
Incorporate data quality checks, lineage, and governance in pipelines.
Cloud-Native Engineering
Deploy, operate, and scale services on Kubernetes and Docker.
Implement CI / CD pipelines and DevOps best practices.
Ensure reliability, observability, and monitoring of deployed services.
Client Engagement & Consulting
Advise clients on backend system design and distributed storage / compute options.
Present trade-offs between different modeling and architecture approaches.
Encourage collaboration and agile practices across teams.
Architecture & End-to-End Solutions
Drive discussions on backend scalability, fault tolerance, and high availability.
Ensure solutions align with business goals and client objectives.