Senior Data Engineer – ETL & Pipeline Development
Experience : 7 to 12 Years
Location : Remote (Work from Home) / Bangalore / India
Mode of Engagement : Full-time
No. of Positions :
Educational Qualification : B.E / B.Tech / M.Tech / MCA / Computer Science / IT
Industry : IT / Data / AI / LegalTech / Enterprise Solutions
Notice Period : Immediate Joiners Preferred
What We Are Looking For :
7–12 years of hands-on experience designing, developing, and deploying end-to-end data pipelines and ETL workflows, built from scratch in Python and SQL rather than assembled from prebuilt tools (a minimal sketch of such a pipeline follows this list).
Strong command of Python programming for data transformation, orchestration, and automation (e.g., using Pandas, Airflow, or custom schedulers).
Solid experience in writing complex SQL queries, optimizing database performance, and designing schemas for large-scale systems.
Proven experience integrating RESTful APIs for data ingestion, transformation, and delivery pipelines.
Working knowledge of AWS / GCP / Azure for data storage, processing, and deployment (S3, EC2, Lambda, BigQuery, etc.).
Practical exposure to Docker, Kubernetes, and CI/CD pipelines for automating and deploying ETL and data workloads.
Familiarity with AI-driven data pipelines or automation workflows is an added advantage.
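For context, a from-scratch pipeline of the kind described above might look like the minimal sketch below. The endpoint, field names, and table are hypothetical stand-ins, and the sketch uses only the Python standard library; a production pipeline would add retries, incremental loads, and configuration management.

```python
"""Minimal end-to-end ETL sketch: REST extract -> transform -> SQL load.

The endpoint and schema below are hypothetical placeholders; a real
pipeline would add retries, incremental loads, and config management.
"""
import json
import sqlite3
import urllib.request

API_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint


def extract(url: str) -> list[dict]:
    """Pull raw JSON records from a REST API."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)


def transform(records: list[dict]) -> list[tuple]:
    """Clean records: drop rows missing an id, normalize amounts to float."""
    rows = []
    for r in records:
        if r.get("order_id") is None:
            continue  # skip unusable rows rather than failing the batch
        rows.append((r["order_id"], r.get("customer", "unknown"),
                     float(r.get("amount", 0))))
    return rows


def load(rows: list[tuple], db_path: str = "warehouse.db") -> int:
    """Upsert transformed rows into SQLite (a stand-in for a warehouse)."""
    con = sqlite3.connect(db_path)
    with con:  # commits on success, rolls back on error
        con.execute("""CREATE TABLE IF NOT EXISTS orders (
                           order_id INTEGER PRIMARY KEY,
                           customer TEXT,
                           amount REAL)""")
        con.executemany(
            "INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
    con.close()
    return len(rows)


if __name__ == "__main__":
    loaded = load(transform(extract(API_URL)))
    print(f"loaded {loaded} rows")
```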
Responsibilities :
Design, architect, and build ETL pipelines from the ground up to extract, clean, transform, and load data across multiple systems.
Develop and deploy custom Python-based data frameworks to automate workflows and improve data quality.
Build and maintain high-performance SQL queries and database structures to support analytics and AI teams.
Develop and integrate API-based data ingestion systems (internal and external).
Deploy and manage workloads using Docker, Kubernetes, and CI/CD tools, ensuring high availability and version control.
Work closely with product, AI, and analytics teams to deliver intelligent, automated data solutions.
Implement data validation, monitoring, and alerting mechanisms for production pipelines (see the sketch after this list).
Continuously optimize pipeline performance, cost efficiency, and scalability in cloud environments.
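As a rough illustration of the validation and alerting responsibility above, the sketch below runs three post-load checks against the hypothetical orders table from the earlier example; the thresholds and the log-based alert hook are placeholders for whatever metric and paging systems are in use.

```python
"""Sketch of post-load validation checks with a simple alert hook.

Table name, thresholds, and the alert transport are hypothetical;
real pipelines would emit metrics and page on-call, not just log.
"""
import logging
import sqlite3

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline.checks")


def alert(message: str) -> None:
    # Placeholder: swap for Slack / PagerDuty / CloudWatch in production.
    log.error("ALERT: %s", message)


def validate(db_path: str = "warehouse.db") -> bool:
    con = sqlite3.connect(db_path)
    ok = True

    # Check 1: the load actually produced rows.
    (row_count,) = con.execute("SELECT COUNT(*) FROM orders").fetchone()
    if row_count == 0:
        alert("orders table is empty after load")
        ok = False

    # Check 2: null rate on a required column stays under 1%.
    (nulls,) = con.execute(
        "SELECT COUNT(*) FROM orders WHERE customer IS NULL").fetchone()
    if row_count and nulls / row_count > 0.01:
        alert(f"customer null rate {nulls / row_count:.1%} exceeds 1%")
        ok = False

    # Check 3: no negative amounts slipped through transformation.
    (bad,) = con.execute(
        "SELECT COUNT(*) FROM orders WHERE amount < 0").fetchone()
    if bad:
        alert(f"{bad} rows with negative amount")
        ok = False

    con.close()
    log.info("validation %s", "passed" if ok else "failed")
    return ok


if __name__ == "__main__":
    raise SystemExit(0 if validate() else 1)
```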
Qualifications :
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Proficiency in Python (FastAPI, Flask, or Django), SQL, and REST API design.
Strong understanding of ETL principles, data modeling, and pipeline orchestration (Airflow / Prefect / Dagster); a minimal DAG skeleton follows this list.
Experience working with AWS (S3, Lambda, EC2, Glue, Athena) or equivalent GCP / Azure components.
Hands-on exposure to Docker, Kubernetes, and Git-based CI/CD workflows.
Excellent problem-solving, debugging, and analytical skills with an ownership mindset.
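For the orchestration tools named above, a minimal Airflow DAG skeleton (TaskFlow API, Airflow 2.4+, where the schedule parameter applies) might look like the following; the DAG id, schedule, and task bodies are placeholder assumptions, not a prescribed design.

```python
"""Minimal Airflow TaskFlow DAG wiring extract -> transform -> load.

Task bodies, schedule, and DAG id are placeholders; a real DAG would
add retries, SLAs, and connections managed through Airflow itself.
"""
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    dag_id="example_etl",  # hypothetical DAG id
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
)
def example_etl():
    @task
    def extract() -> list[dict]:
        return [{"order_id": 1, "amount": 9.5}]  # stand-in for an API call

    @task
    def transform(records: list[dict]) -> list[dict]:
        return [r for r in records if r.get("order_id") is not None]

    @task
    def load(rows: list[dict]) -> int:
        return len(rows)  # stand-in for a warehouse write

    load(transform(extract()))


example_etl()
```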