Role Name : Data Engineer
Experience : 3+ Years
Location : Bangalore (no relocation)
Notice Period : 20-30 days (candidates currently serving their notice period)
Mode : Hybrid
Type : Full-time
Job Description :
As a Data Engineer on our team, you will work on our Hadoop-based data warehouse, contributing to scalable and reliable big data solutions for analytics and business insights. This is a hands-on role focused on building, optimizing, and maintaining large-scale data pipelines and warehouse infrastructure.
Key Responsibilities
- Design, develop, and maintain robust data pipelines in Hadoop and related ecosystems, ensuring data reliability, scalability, and performance.
- Implement ETL processes for batch and streaming analytics requirements.
- Optimize and troubleshoot distributed systems for ingestion, storage, and processing.
- Collaborate with data engineers, analysts, and platform engineers to align solutions with business needs.
- Ensure data security, integrity, and compliance throughout the infrastructure.
- Maintain documentation and contribute to architecture reviews.
- Participate in incident response and operational excellence initiatives for the data warehouse.
- Maintain a continuous-learning mindset and apply new Hadoop ecosystem tools and data technologies.
Required Skills and Experience
- Proficiency in the Hadoop ecosystem, including Spark, HDFS, Hive, Iceberg, and Spark SQL.
- Extensive experience with Apache Kafka, Apache Flink, and other relevant streaming technologies.
- Proven ability to design and implement automated data pipelines and materialized views.
- Proficiency in Python, Unix shell scripting, or similar languages.
- Good understanding of SQL (Oracle, SQL Server, or similar databases).
- Ops & CI/CD : monitoring (Prometheus/Grafana), logging, pipelines (Jenkins/GitHub Actions).
- Core engineering : data structures/algorithms, testing (JUnit/pytest), Git, clean code.
- 5+ years of directly applicable experience.
- BS in Computer Science, Engineering, or equivalent experience.