Spark Scala Developer
Location: Bengaluru, Mumbai
Employment Type: Full-time
What We're Looking For
We're hiring a Spark Scala Developer with real-world experience in Big Data environments, on-prem and/or in the cloud. You should know how to write production-grade Spark applications, fine-tune their performance, and work fluently in Scala's functional style. Experience with cloud platforms and modern data tools like Snowflake or Databricks is a strong plus.
Your Responsibilities:
- Design and develop scalable data pipelines using Apache Spark and Scala
- Optimize and troubleshoot Spark jobs for performance (e.g. memory management, shuffles, skew)
- Work with massive datasets in on-prem Hadoop clusters or cloud platforms like AWS, GCP, or Azure
- Write clean, modular Scala code using functional programming principles
- Collaborate with data teams to integrate with platforms like Snowflake, Databricks, or data lakes
- Ensure code quality, documentation, and CI/CD practices are followed
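To give candidates a feel for the "clean, modular, functional" style these responsibilities describe, here is a hedged plain-Scala sketch (no Spark dependency; the record shape and stage names are hypothetical, not from this posting) modeling pipeline stages as pure functions composed with `andThen`:

```scala
// Plain-Scala sketch of a data pipeline as composed pure functions.
// Record shape and stage names are hypothetical illustrations.
object PipelineComposition {
  final case class Event(userId: String, amount: Double, eventType: String)

  // Each stage is a pure function, so stages compose without shared mutable state.
  val onlyPurchases: List[Event] => List[Event] =
    _.filter(_.eventType == "purchase")

  // Aggregate amounts per user in one pass (Scala 2.13+ groupMapReduce).
  val totalsByUser: List[Event] => Map[String, Double] =
    _.groupMapReduce(_.userId)(_.amount)(_ + _)

  val pipeline: List[Event] => Map[String, Double] =
    onlyPurchases andThen totalsByUser

  def main(args: Array[String]): Unit = {
    val events = List(
      Event("u1", 10.0, "purchase"),
      Event("u1", 5.0, "purchase"),
      Event("u2", 7.5, "view")
    )
    println(pipeline(events)) // only purchase rows contribute to the totals
  }
}
```

The same filter-then-aggregate shape carries over directly to Spark DataFrames or Datasets; modeling each stage as a function keeps it independently testable.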
Must-Have Skills:
- 3+ years of experience with Apache Spark in Scala
- Deep understanding of Spark internals: DAG, stages, tasks, caching, joins, partitioning
- Hands-on experience with performance tuning in production Spark jobs
- Proficiency in Scala functional programming (e.g. immutability, higher-order functions, Option/Either)
- Proficiency in SQL
- Experience with any major cloud platform: AWS, Azure, or GCP
Nice-to-Have:
- Worked with Databricks, Snowflake, or Delta Lake
- Exposure to data pipeline tools like Airflow, Kafka, Glue, or BigQuery
- Familiarity with CI/CD pipelines and Git-based workflows
- Comfortable with SQL optimization and schema design in distributed environments
(ref: hirist.tech)
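As a hedged illustration of the Option/Either and higher-order-function proficiency listed above (a minimal sketch; the field name and validation rule are hypothetical, not taken from this posting):

```scala
// Illustrative sketch of Option/Either and higher-order functions for
// error handling without exceptions. All names are hypothetical.
object FunctionalStyleDemo {
  // Parse a numeric field, capturing failure as Either instead of throwing.
  // toDoubleOption (Scala 2.13+) yields None on malformed input.
  def parseAmount(raw: String): Either[String, Double] =
    raw.toDoubleOption.toRight(s"not a number: '$raw'")

  // Higher-order function: apply any validation to every row, splitting
  // failures from successes with no mutation (Scala 2.13+ partitionMap).
  def validateAll[A, B](rows: List[A])(check: A => Either[String, B]): (List[String], List[B]) =
    rows.map(check).partitionMap(identity)

  def main(args: Array[String]): Unit = {
    val (errors, amounts) = validateAll(List("10.5", "oops", "3.0"))(parseAmount)
    println(amounts.sum) // sum of the rows that parsed cleanly
    println(errors)      // one message per rejected row
  }
}
```

This accumulate-errors style scales naturally to Spark jobs, where bad records are routed to a quarantine output instead of failing the whole job.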