Work Model : Work From Office (Hyderabad)
Experience : 4 to 7 years
Notice Period : Immediate joiners or up to 15 days
Roles & Responsibilities :
- Understand and translate design specifications into robust product features.
- Develop and optimize core product development modules with a focus on performance, scalability, and reliability.
- Design and implement data ingestion pipelines for distributed systems with parallel processing, using Golang, C++, or Java.
- Build connectors to ingest data from diverse sources, including :
1. Cloud storage : Amazon S3, Azure Cloud Storage, Google Cloud Storage
2. Databases : Snowflake, Google BigQuery, PostgreSQL
3. Streaming and messaging : Kafka
4. Data lakehouses : Apache Iceberg
- Implement high-availability (HA) solutions, including cross-region replication and failover strategies.
- Develop and maintain monitoring, logging, and error reporting for data-loading pipelines.
- Work with Spark connectors and third-party tools (Kafka, Kafka Connect, etc.).
- Collaborate with product managers, architects, and design engineers in an Agile development environment.
- Apply CI/CD best practices to ensure smooth deployments and releases.

Requirements :
- Proven experience building distributed systems, with strong knowledge of parallel processing.
- Hands-on experience with any of the following : Kafka, Zookeeper, Spark, or stream processing frameworks.
- Strong understanding of event-driven architectures.
- Programming experience in Golang, C++, or Java.
- Solid knowledge of cloud platforms (AWS, Azure, or GCP) and modern data platforms (Snowflake, BigQuery, PostgreSQL).
- Familiarity with Agile software development practices.
- Strong expertise in CI/CD pipelines (Jenkins, GitLab, GitHub Actions, etc.).

(ref : hirist.tech)