Description: We are looking for a Senior Data Infrastructure Engineer to scale and evolve the data backbone that powers Hivel's analytics and AI insights. You'll own and optimise how engineering data flows through our systems, from multiple third-party integrations to processing pipelines and analytics stores, ensuring it's fast, reliable, and ready for insight:
- Build and scale multi-source data ingestion from Git, Jira, and other developer tools using APIs, webhooks, and incremental syncs.
- Refactor and harden existing Java-based ETL pipelines for modularity, reusability, and scale. Implement parallel and event-driven processing (Kafka / SQS, batch + streaming).
- Optimise Postgres schema design, partitioning, and query performance for 100GB+ datasets.
- Design and own data orchestration, lineage, and observability (Airflow, Temporal, OpenTelemetry, or similar).
- Collaborate with backend, product, and AI teams to make data easily consumable for insights and ML workflows.
- Maintain cost efficiency and scalability across AWS infrastructure (S3, ECS, Lambda, RDS, CloudWatch).
- Create self-healing and monitored pipelines that let you sleep through the night.
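The incremental syncs mentioned above typically follow a watermark pattern: each run fetches only records updated since a stored cursor, then advances the cursor. A minimal sketch in Java — the `Item` shape and in-memory source stand in for a real API client (e.g. filtering Jira issues by last-updated timestamp), so names here are illustrative assumptions, not Hivel's actual code:

```java
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

// Minimal incremental-sync watermark sketch: process only records updated
// after the stored cursor, then advance the cursor so the next run skips
// everything already seen. Item is a hypothetical stand-in for an API record.
class IncrementalSync {
    record Item(String id, Instant updatedAt) {}

    private Instant watermark;

    IncrementalSync(Instant initialWatermark) {
        this.watermark = initialWatermark;
    }

    // Returns only items modified strictly after the current watermark,
    // and moves the watermark forward to the newest timestamp seen.
    List<Item> sync(List<Item> fetched) {
        List<Item> fresh = fetched.stream()
                .filter(i -> i.updatedAt().isAfter(watermark))
                .collect(Collectors.toList());
        fresh.stream()
                .map(Item::updatedAt)
                .max(Instant::compareTo)
                .ifPresent(newest -> watermark = newest);
        return fresh;
    }

    Instant watermark() { return watermark; }
}
```

In a real pipeline the watermark would be persisted (e.g. in Postgres) per source and per tenant, so a crashed or redeployed worker resumes from the last committed cursor rather than re-ingesting everything.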
Requirements:
- 6-10 years of experience as a Backend Engineer or Data Engineer in data-heavy or analytics-driven startups.
- Strong hands-on experience with Java and AWS (S3, ECS, RDS, Lambda, CloudWatch).
- Proven experience fetching and transforming data from multiple external APIs (GitHub, Jira, Jenkins, Bitbucket, etc.).
- Solid understanding of data modelling, incremental updates, and schema evolution.
- Deep knowledge of Postgres optimisation, indexing, partitioning, and query tuning.
- Experience building data pipelines or analytics platforms at scale (100M+ records, multi-tenant systems).
- Bonus: exposure to dbt, ClickHouse, Kafka, Temporal, or developer analytics ecosystems.
(ref: hirist.tech)