Job Description
Key Responsibilities
- Debug and resolve data ingestion and mapping issues for clients.
- Understand the current data mappings for each client's file ingestions (FinTech files).
- Work with ETL / ELT pipelines and messaging systems (Kafka or custom ingestion services).
- Validate and process data in common formats (CSV, JSON, Parquet).
- Monitor and optimize data pipelines with Prometheus / Grafana alerts.
- Collaborate with software engineers on schema evolution and data contracts.
- Contribute automation scripts in Python / Shell to reduce repetitive tasks.
- Participate in on-call rotations for production data issues.
Requirements
Required Skills
- Strong knowledge of SQL and PostgreSQL.
- Experience with ETL / ELT pipelines and messaging systems (Kafka, Spark optional).
- Understanding of data formats (CSV, JSON, Parquet).
- Familiarity with MySQL, Snowflake, or BigQuery.
- Exposure to Kubernetes, Docker, and AKS for running data jobs.
- Ability to debug ingestion errors and runtime failures.
Good to Have
- Knowledge of observability for data pipelines (Prometheus / Grafana).
- Familiarity with GitOps (FluxCD / Helm) and CI / CD (GitHub Actions).
- Interest in an SRE / Platform engineering mindset (reliability, automation).