Years of experience: 5+ yrs
Role Description
About the role:
The position is open with the DaaS Core Data Assets Development and Support team, which focuses on serving external customers/clients. The opening is part of a new hiring projection for upcoming client onboardings as well as serving current clients.
Roles & Responsibilities:
The new hire will work to optimize the data engineering pipeline: taking care of active jobs running in batch and mini-batch, and using Spark and Scala to write code or make modifications. He/she will be responsible for activities such as designing, fine-tuning, development, supporting deployment, giving final sign-off for releases, and overall optimization of data and pipelines.
1. Hadoop using core Java programming
2. Spark or Scala
3. Hive tables and DB experience (SQL)
4. Expertise in OOP – Java/Python (Java is most preferred; it is okay to consider Java developers who have transitioned into a Python programming role)
Experience: 5-12 yrs
Candidates need to be culturally aligned. Key aspects are: being independent and self-reliant, having the technical expertise to offer suggestions and innovative ideas, and being eager to get things done with a zeal to learn.
Work model: Hybrid – Wed, Thu, Fri work from office (Mahadevapura, BWTC)
Work timings: General shift
Employee Value Proposition:
1. The team deals with petabytes of data and is also responsible for client-related revenue generation.
2. The candidate will take part in designing, fine-tuning, development, supporting deployment, and giving final sign-off for releases.
3. He/she will get an opportunity to upskill technically on Kubernetes and Golang.
4. The candidate can expect ongoing Gen AI initiatives while collaborating with the team, and will come to understand the use of Gen AI in such a large environment.
Data Engineer • Bangalore, Bangalore (district)