Job Description
Role Overview
As a Data Engineer at Divami, you will be responsible for designing, building, and maintaining efficient data pipelines. You will collaborate closely with engineering, product, and business teams to ensure data is clean, reliable, and accessible for analytics, reporting, and product innovation.
This is a mid-level role ideal for someone who has worked on end-to-end data engineering projects and is eager to grow with a company where ownership, innovation, and resourcefulness matter more than hierarchy.
Key Responsibilities
Build and maintain data pipelines (ETL / ELT) to move data from multiple sources into data stores / warehouses.
Collaborate with cross-functional teams to define data requirements, models, and metrics.
Manage and optimize data storage solutions (Postgres, BigQuery, Redshift, or cost-effective alternatives).
Ensure data quality, validation, and governance across all pipelines.
Implement automation, monitoring, and alerting for data workflows.
Support business intelligence, dashboards, and analytics use cases.
Work on API integrations and structured / unstructured data ingestion.
Requirements
3–5 years of experience in data engineering / backend development.
Proficiency in SQL and Python (or equivalent programming language).
Hands-on experience with ETL frameworks (Airbyte, Airflow, dbt, Luigi, or similar).
Good understanding of data warehousing concepts, schema design, and optimization.
Exposure to cloud platforms (AWS / GCP / Azure) or open-source alternatives.
Ability to work independently, prioritise tasks, and deliver in a lean setup.
Strong problem-solving mindset and a passion for building scalable systems.
Nice-to-Haves
Experience working in a startup / bootstrapped environment.
Knowledge of real-time data processing (Kafka, Spark, Flink).
Familiarity with BI tools (Metabase, Tableau, Power BI, Looker).
Exposure to analytics engineering / ML pipelines.
Benefits
What We Offer
Opportunity to own and grow the data function at Divami.
A chance to work on impactful projects that directly influence product and business strategy.
Collaborative culture with designers, engineers, and product thinkers.
Flexibility, autonomy, and career growth in a fast-moving, bootstrapped setup.
Data Engineer • Hyderabad, Telangana, India