Senior Data Engineer
Location : Bangalore, Gurugram (Hybrid)
Experience : 4-8 Years
Type : Full Time | Permanent
Job Summary :
We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You'll be responsible for building scalable ETL / ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.
This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.
Key Responsibilities :
PostgreSQL & Data Modeling :
- Design and optimize complex SQL queries, stored procedures, and indexes
- Perform performance tuning and query plan analysis (see the sketch after this list)
- Contribute to schema design and data normalization
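For context, a minimal sketch of routine query-plan analysis, assuming psycopg2; the connection string and the orders table are hypothetical placeholders, not part of any real system:

```python
# Minimal sketch: run EXPLAIN (ANALYZE, BUFFERS) on a slow query and print
# the plan. Connection string and the orders table are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl host=localhost")

query = """
    SELECT customer_id, SUM(amount)
    FROM orders
    WHERE order_date >= %s
    GROUP BY customer_id
"""

with conn.cursor() as cur:
    # EXPLAIN (ANALYZE, BUFFERS) executes the query and reports actual row
    # counts, timings, and buffer hits: the raw material for index decisions.
    cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + query, ("2024-01-01",))
    for (line,) in cur.fetchall():
        print(line)

conn.close()
```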
Data Migration & Transformation :
- Migrate data from multiple sources to cloud or ODS platforms
- Design schema mapping and implement transformation logic (see the sketch after this list)
- Ensure consistency, integrity, and accuracy in migrated data
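For context, a minimal migration sketch, assuming both source and target are PostgreSQL reached via psycopg2; the connection strings, schemas, and column mapping are all hypothetical:

```python
# Minimal migration sketch: copy one table into an ODS schema with an
# explicit column mapping and a row-count integrity check afterwards.
# Connection strings, schemas, and column names are all placeholders.
import psycopg2

# Source column -> target column (the schema-mapping step); static and
# trusted here, so building SQL from it is safe in this sketch.
MAPPING = {"cust_id": "customer_id", "cust_nm": "customer_name"}

src = psycopg2.connect("dbname=legacy user=etl")
dst = psycopg2.connect("dbname=ods user=etl")

src_cols = ", ".join(MAPPING)
dst_cols = ", ".join(MAPPING.values())
params = ", ".join(["%s"] * len(MAPPING))

with src.cursor() as s, dst.cursor() as d:
    s.execute(f"SELECT {src_cols} FROM crm.customers")
    d.executemany(
        f"INSERT INTO ods.customers ({dst_cols}) VALUES ({params})",
        s.fetchall(),
    )
    dst.commit()

    # Integrity check: the target must hold exactly as many rows as the source.
    s.execute("SELECT count(*) FROM crm.customers")
    d.execute("SELECT count(*) FROM ods.customers")
    assert s.fetchone()[0] == d.fetchone()[0], "row-count mismatch after migration"

src.close()
dst.close()
```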
Python Scripting for Data Engineering :
- Build automation scripts for data ingestion, cleansing, and transformation
- Handle file formats (JSON, CSV, XML), REST APIs, cloud SDKs (e.g., Boto3), as in the sketch below
- Maintain reusable script modules for operational pipelines
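For context, a sketch of a reusable ingestion helper that parses CSV or JSON and lands newline-delimited JSON in S3 via Boto3; the bucket, key, and input path are placeholders:

```python
# Sketch of a reusable ingestion helper: parse CSV or JSON, then land the
# records in S3 as newline-delimited JSON via Boto3. The bucket, key, and
# input path are placeholders.
import csv
import json
from pathlib import Path

import boto3

def read_records(path: Path) -> list[dict]:
    """Parse a CSV or JSON file into a list of dicts, keyed on extension."""
    if path.suffix == ".csv":
        with path.open(newline="") as f:
            return list(csv.DictReader(f))
    if path.suffix == ".json":
        return json.loads(path.read_text())
    raise ValueError(f"unsupported format: {path.suffix}")

def land_in_s3(records: list[dict], bucket: str, key: str) -> None:
    """Upload the records as a single newline-delimited JSON object."""
    body = "\n".join(json.dumps(r) for r in records)
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body.encode())

if __name__ == "__main__":
    records = read_records(Path("exports/customers.csv"))
    land_in_s3(records, bucket="raw-zone", key="customers/2024-01-01.jsonl")
```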
Data Orchestration with Apache Airflow :
- Develop and manage DAGs for batch / stream workflows
- Implement retries, task dependencies, notifications, and failure handling (see the DAG sketch below)
- Integrate Airflow with cloud services, data lakes, and data warehouses
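For context, a minimal Airflow DAG sketch (Airflow 2.4+ assumed for the schedule argument) showing retries, task dependencies, and a failure callback; the task bodies and the alerting hook are placeholders:

```python
# Minimal Airflow DAG sketch: three dependent tasks with retries and a
# failure callback. Task bodies and the alerting hook are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_failure(context):
    # Placeholder: point this at Slack or email in a real deployment.
    print(f"task failed: {context['task_instance'].task_id}")

default_args = {
    "retries": 3,                           # retry each task up to 3 times
    "retry_delay": timedelta(minutes=5),    # wait 5 minutes between attempts
    "on_failure_callback": notify_failure,  # alert once retries are exhausted
}

with DAG(
    dag_id="daily_sales_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    default_args=default_args,
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=lambda: print("extract"))
    transform = PythonOperator(task_id="transform", python_callable=lambda: print("transform"))
    load = PythonOperator(task_id="load", python_callable=lambda: print("load"))

    extract >> transform >> load  # explicit task dependencies
```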
Cloud Platforms (AWS / Azure / GCP) :
- Manage data storage (S3, GCS, Blob), compute services, and data pipelines
- Set up permissions, IAM roles, encryption, and logging for security
- Monitor and optimize cost and performance of cloud-based data operations (both illustrated in the sketch below)
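For context, a short Boto3 sketch covering two of the tasks above: encryption at rest on write and a lifecycle rule for cost control; the bucket, keys, and local file are hypothetical:

```python
# Boto3 sketch of two routine tasks: encrypt objects at rest on write, and
# add a lifecycle rule that shifts cold data to cheaper storage. The bucket,
# key, and local file are placeholders.
import boto3

s3 = boto3.client("s3")

# Server-side encryption with a KMS-managed key on every write.
with open("events.parquet", "rb") as f:
    s3.put_object(
        Bucket="data-lake-raw",
        Key="events/2024-01-01.parquet",
        Body=f,
        ServerSideEncryption="aws:kms",
    )

# Cost control: transition objects under events/ to Glacier after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="data-lake-raw",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-cold-events",
                "Status": "Enabled",
                "Filter": {"Prefix": "events/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```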
Data Marts & Analytics Layer :
- Design and manage data marts using dimensional models
- Build star / snowflake schemas to support BI and self-serve analytics
- Enable incremental load strategies and partitioning (see the sketch after this list)
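For context, a sketch of a star-schema fact table with monthly range partitioning, applied via psycopg2; it assumes PostgreSQL 11+ (for foreign keys on partitioned tables) and that the mart schema and keyed dimension tables already exist. All names are illustrative:

```python
# Sketch: star-schema fact table with monthly range partitioning, created
# through psycopg2. Assumes PostgreSQL 11+ and that mart.dim_date and
# mart.dim_customer already exist with unique keys; all names illustrative.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS mart.fact_sales (
    sale_id      bigint,
    date_key     int REFERENCES mart.dim_date (date_key),
    customer_key int REFERENCES mart.dim_customer (customer_key),
    amount       numeric(12, 2),
    sale_date    date NOT NULL
) PARTITION BY RANGE (sale_date);

-- One partition per month keeps incremental loads and pruning cheap.
CREATE TABLE IF NOT EXISTS mart.fact_sales_2024_01
    PARTITION OF mart.fact_sales
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
"""

with psycopg2.connect("dbname=warehouse user=etl") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)  # psycopg2 commits on clean exit from the connection block
```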
Modern Data Stack Integration :
- Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
- Support modular pipeline design and metadata-driven frameworks
- Ensure high availability and scalability of the stack
BI & Reporting Tools (Power BI / Superset / Supertech) :
- Collaborate with BI teams to design datasets and optimize queries
- Support development of dashboards and reporting layers
- Manage access, data refreshes, and performance for BI tools
Required Skills & Qualifications :
- 4-6 years of hands-on experience in data engineering roles
- Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
- Advanced Python scripting skills for automation and ETL
- Proven experience with Apache Airflow (custom DAGs, error handling)
- Solid understanding of cloud architecture (especially AWS)
- Experience with data marts and dimensional data modeling
- Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
- Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI
- Version control (Git) and CI / CD pipeline knowledge is a plus
- Excellent problem-solving and communication skills

(ref : hirist.tech)