Experience Range : 4 - 10 Years
Education : Bachelor's degree in Computer Science
Hiring Location : PAN India
Mandatory skills : Java / Python + Scala, Big Data using Apache Spark, AWS, Terraform / Ansible / CloudFormation
Must Have Skills :
- 3+ years of hands-on experience in Big Data processing using Apache Spark (Scala)
- Strong programming experience in Java or Python, along with Scala
- Proven expertise in AWS Cloud (S3, EC2, EMR, Lambda)
- Experience in Spark Streaming, Hadoop, Spark SQL, Presto / Hive
- Proficiency in RDBMS (PostgreSQL, MySQL, or Oracle)
- Experience with orchestration tools like Apache Airflow
- Strong skills in Unix / Linux OS, Shell scripting, Python, JSON, and YAML
- Source control experience with GitLab or Bitbucket
- Agile / Scrum methodology experience in application development
- Strong troubleshooting, problem-solving, and communication skills
Good to Have Skills :
- Experience with Docker and containerized application deployment
- Familiarity with Infrastructure-as-Code tools like Terraform, Ansible, or CloudFormation
- AWS Certification (any level)
- Exposure to service-oriented architecture
- Ability to develop reusable code / components across multiple projects
- Experience collaborating with Data Science teams and building rapid technical prototypes
- Mentoring and guiding junior team members
- Awareness of industry trends and participation in peer networks