The Offer
- Work within a company with a solid track record of success
- Excellent career development opportunities
- Great work environment
The Job
- Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark
- Work extensively with the Kafka and Hadoop ecosystem, including HDFS, Hive, and other related technologies
- Write efficient SQL queries for data extraction, transformation, and analysis
- Implement and manage Kafka streams for real-time data processing
- Utilize scheduling tools to automate data workflows and processes
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions
- Ensure data quality and integrity by implementing robust data validation processes
- Optimize existing data processes for performance and scalability
The Profile
Our client is looking for an experienced Big Data Developer with a strong background in Kafka, PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 5 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.
The Employer
Our client has deep expertise in technology domains such as Cognitive Process Automation, Data Engineering, Statistical Analysis and Optimization, Data Science, Natural Language Processing, and Computer Vision, and is well prepared to transform organizations into Intelligent Enterprises.