Job Role: Data Engineer
Job Type: Full-Time
Work Mode: Remote (6-day work week)
We’re looking for a hands-on Data Engineer to build and own end-to-end data pipelines powering downstream AI agent applications. You’ll design scalable data models, automate ETL / ELT workflows, and collaborate on ML deployments in a fast-paced startup environment.
Key Responsibilities:
- Design and maintain data models, schemas, and pipelines (batch & streaming with Spark)
- Automate ingestion from databases, APIs, files, and other sources
- Enable GenAI workflows by enriching data and surfacing real-time context
- Support ML model deployment using MLflow, Docker, or Kubernetes
- Implement monitoring, governance, and CI/CD (Azure DevOps, GitHub Actions, Terraform)
Skills & Experience:
- 5+ years in Data Engineering or related role
- Strong Python, SQL, and Azure ecosystem experience
- Hands-on with ETL/ELT (dlt, duckDB, dvc, Prefect / Azure Data Factory)
- Experience with RAG pipelines and ML deployment tools
- Good understanding of DevOps, Git, and data governance
Nice to Have:
- Prompt Engineering / Agent workflows
- ML or Computer Vision experience
- Familiarity with GDPR, CCPA
Why Join Us:
- Fast-growing, revenue-generating PropTech startup
- Steep learning curve & direct production impact
- Remote-first with quarterly meetups and multi-market exposure