Data Engineer - ETL / Data Pipeline
Brainhunter Recruiting (India) Private Limited
Bangalore
30+ days ago
Job description
Job Responsibilities:
Design, build, and maintain scalable ETL pipelines and data workflows using AWS Glue, Spark, and Python.
Implement data ingestion from structured and unstructured data sources into AWS Data Lake or data warehouse systems.
Work with large-scale datasets to ensure efficient data transformation and loading across various AWS storage layers (S3, Redshift, RDS).
Write complex SQL queries for data validation, transformation, and reporting.
Develop and maintain metadata, data lineage, and logging for data quality and traceability.
Optimize data workflows for performance, scalability, and cost efficiency.
Collaborate with Data Scientists, Analysts, and Business teams to understand data needs and deliver robust solutions.
Ensure data governance, security, and compliance in cloud data pipelines.
Perform unit and integration testing, and support deployment in lower and production environments.

Skills:
8+ years of experience in Data Engineering or related roles
Strong hands-on experience with AWS Glue (Jobs, Crawlers, Workflows, Dynamic Frames)
Experience with AWS services: S3, Redshift, Lambda, Athena, Step Functions, CloudWatch, IAM
Strong experience in Apache Spark (PySpark preferred) for distributed data processing
Advanced skills in Python (data manipulation, scripting, exception handling, performance tuning)
Strong SQL skills: writing complex queries, stored procedures, and query optimization
Experience in building and orchestrating ETL/ELT pipelines
Familiarity with schema design, data partitioning, compression formats (Parquet, ORC, Avro)
Hands-on with data cataloging, metadata management, and data warehousing

Skills (Nice to Have):
Experience with Airflow, AWS Step Functions, or other orchestration tools
Knowledge of DevOps for data: CI/CD pipelines using tools like Git, Jenkins, and CodePipeline
Exposure to data warehousing concepts and experience with Redshift or Snowflake
Experience working in Agile Scrum environments
Understanding of data security, privacy, and compliance standards (GDPR, HIPAA)

Qualities:
Strong communication and collaboration skills, with experience in stakeholder interaction
Excellent analytical thinking and problem-solving abilities
Ability to work independently and within a cross-functional team
Detail-oriented, with a commitment to delivering high-quality, reliable solutions
(ref: hirist.tech)
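The responsibilities above describe the classic extract-transform-load pattern: ingest raw records, validate and reshape them, and write them out partitioned for a data lake. A toy sketch in plain Python illustrates the shape of such a pipeline; in a real role this would run on AWS Glue or PySpark and write Parquet to S3, and every function and field name here is hypothetical:

```python
import csv
import io
import json


def extract(csv_text):
    """Extract: parse raw CSV rows into dicts (stand-in for reading from S3)."""
    return list(csv.DictReader(io.StringIO(csv_text)))


def transform(rows):
    """Transform: cast types, drop malformed records, derive a partition key."""
    cleaned = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # data-quality rule: skip records with a bad amount
        cleaned.append({
            "order_id": row["order_id"],
            "amount": round(amount, 2),
            # Hive-style partition key, as used in S3 data-lake layouts
            "partition": f"year={row['date'][:4]}/month={row['date'][5:7]}",
        })
    return cleaned


def load(records):
    """Load: group records by partition (stand-in for writing Parquet to S3)."""
    partitions = {}
    for rec in records:
        partitions.setdefault(rec["partition"], []).append(rec)
    return {key: json.dumps(recs) for key, recs in partitions.items()}
```

Chaining the three stages, `load(transform(extract(raw_csv)))`, mirrors a Glue job's read-apply-write flow, with malformed rows filtered out during the transform step.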