Role Description -
We’re seeking a hands-on engineer with deep expertise in Confluent Cloud Kafka and Apache Flink. The ideal candidate can design, build, and operate real-time streaming pipelines and analytics, write and optimize complex Flink SQL queries, and administer Confluent Cloud environments, including topic management and schema governance.
Key responsibilities -
Design, implement, and optimize Flink streaming jobs.
Administer Confluent Cloud Kafka environments.
Apply strong Java knowledge when building and maintaining streaming applications.
Enforce TLS, RBAC / ACLs, schema governance, secrets management, and audit logging standards.
Partner with data engineers, platform / SRE, and product teams to define and align on SLAs / SLOs and data contracts.
Review designs / PRs, mentor engineers on streaming best practices; contribute to standards and documentation.
Required qualifications -
7+ years building streaming / data-in-motion solutions; 3+ years focused on Kafka (Confluent Cloud strongly preferred).
3+ years hands-on with Apache Flink (Flink SQL); expert at writing and tuning complex Flink queries.
Strong troubleshooting skills with Kafka and Flink.
Excellent communication and documentation habits; can translate requirements into robust streaming designs.
Nice to have -
Experience with Confluent Cloud Flink (managed), ksqlDB, Kafka Streams.
Cloud experience (Azure) and observability stacks (Dynatrace / Prometheus / Grafana).
Data Engineer • Amritsar, Punjab, India