Big Data Developer - PySpark / Scala

Prorsum Technologies, Hyderabad
30+ days ago
Job description

Key Responsibilities:

  • Data Development: Design, develop, and maintain big data processing jobs using PySpark or Scala/Java (a representative sketch follows this list).
  • AWS Integration: Work extensively with AWS services such as EMR, S3, Glue, Airflow, RDS, and DynamoDB to build and manage data solutions.
  • Database Management: Utilize both Relational and NoSQL databases for data storage and retrieval.
  • Microservices & Containers: Develop and deploy microservices or domain services, and work with technologies like Docker and Kubernetes.
  • CI/CD: Implement and maintain CI/CD pipelines using tools like Jenkins to ensure efficient deployment.
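
For illustration only, below is a minimal sketch of the kind of PySpark batch job the role describes: read raw data from S3, aggregate it, and write curated output back to S3. The bucket paths and the orders/customer_id/order_ts/amount names are hypothetical placeholders, not details from the posting.

    # Illustrative sketch (not from the job posting): a PySpark batch job that
    # reads raw order events from S3, computes daily revenue per customer, and
    # writes the result back to S3. All paths and column names are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    def main():
        spark = SparkSession.builder.appName("daily-revenue-job").getOrCreate()

        # Read raw order events from S3 (placeholder path)
        orders = spark.read.parquet("s3://example-raw-bucket/orders/")

        # Aggregate: total revenue per customer per day
        daily_revenue = (
            orders
            .withColumn("order_date", F.to_date("order_ts"))
            .groupBy("customer_id", "order_date")
            .agg(F.sum("amount").alias("daily_revenue"))
        )

        # Write curated output back to S3, partitioned by date (placeholder path)
        (daily_revenue.write
            .mode("overwrite")
            .partitionBy("order_date")
            .parquet("s3://example-curated-bucket/daily_revenue/"))

        spark.stop()

    if __name__ == "__main__":
        main()

A job like this would typically be submitted to an EMR cluster via spark-submit or scheduled as an Airflow task, in line with the AWS services listed above.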

Required Skills & Qualifications:

  • Experience: 6-10 years of experience in Big Data development.
  • Mandatory Skills:
      • PySpark
      • Scala/Java
      • AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar)
      • Jenkins (or other CI/CD tools)
  • Technical Knowledge:
      • Experience with Relational and NoSQL databases.
      • Knowledge of microservices, domain services, or API gateways.
      • Familiarity with containers (Docker, Kubernetes).

(ref: hirist.tech)
