We are seeking an experienced Big Data Developer with 5–7 years of hands-on expertise in designing, developing, and optimizing large-scale data processing systems. The ideal candidate will have strong proficiency in Scala, Hadoop, Hive, and Spark, along with a solid understanding of distributed computing and data engineering best practices.
Key Responsibilities
- Design, develop, and maintain big data solutions leveraging Hadoop, Hive, Spark, and Scala.
- Build scalable and reliable data pipelines to ingest, transform, and process structured and unstructured data.
- Optimize data processing workflows for performance, scalability, and cost efficiency.
- Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.
- Implement best practices for data governance, security, and compliance.
- Troubleshoot and resolve performance, data quality, and scalability issues in big data applications.
- Contribute to architectural decisions and provide technical guidance to junior developers.
Required Skills & Qualifications
Experience: 5–7 years in big data development or data engineering roles.
Technical Skills:
- Proficiency in Scala for data processing and application development.
- Strong experience with Apache Spark (batch and streaming).
- Hands-on expertise with the Hadoop ecosystem (HDFS, YARN, MapReduce).
- Proficient in Hive for data warehousing and querying.
- Knowledge of SQL and performance tuning techniques.
- Experience with distributed computing concepts and large-scale data processing.
- Strong understanding of data structures, algorithms, and design patterns.
- Familiarity with version control (Git), CI/CD, and Agile methodologies.
Skills Required
Hadoop, Spark, Scala, SQL, Big Data, Hive
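For illustration only, the sketch below shows the kind of Spark-with-Scala batch work described above: reading raw data from HDFS, applying a simple transformation, and persisting the result to a Hive table. All paths, table names, and column names are assumed for the example and are not part of the role definition.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyEventAggregation {
  def main(args: Array[String]): Unit = {
    // Spark session with Hive support so results can be queried from Hive
    val spark = SparkSession.builder()
      .appName("daily-event-aggregation")
      .enableHiveSupport()
      .getOrCreate()

    // Ingest raw events from HDFS (hypothetical Parquet location)
    val events = spark.read.parquet("hdfs:///data/raw/events")

    // Transform: drop invalid records and aggregate events per user per day
    val daily = events
      .filter(col("user_id").isNotNull)
      .groupBy(col("user_id"), to_date(col("event_ts")).as("event_date"))
      .agg(count("*").as("event_count"))

    // Persist results to a Hive table (hypothetical name) for downstream querying
    daily.write.mode("overwrite").saveAsTable("analytics.daily_user_events")

    spark.stop()
  }
}
```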