Note : Candidates are requested to apply only if they are available for an in-person interview on 15th Nov at the Bangalore Neon Office.
Job Title : Senior Data Engineer – Python, AWS, PySpark
Experience Required : 4 to 9 Years
Location : PAN India
Employment Type : Full-Time
Job Summary :
We are looking for a skilled and experienced Data Engineer with strong expertise in Python, AWS, and PySpark to join our team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and solutions in a cloud environment, enabling efficient data processing and analytics.
Key Responsibilities :
- Develop and optimize data pipelines using PySpark and Python.
- Design and implement scalable data solutions on AWS (e.g., S3, Lambda, Glue, EMR, Redshift).
- Collaborate with data scientists, analysts, and other engineers to understand data requirements.
- Ensure data quality, integrity, and security across all data platforms.
- Monitor and troubleshoot data workflows and performance issues.
- Automate data ingestion, transformation, and loading processes.
- Participate in code reviews, testing, and deployment activities.
Required Skills :
- Strong programming skills in Python.
- Hands-on experience with PySpark for distributed data processing.
- Proficiency in AWS services such as S3, Lambda, Glue, EMR, Redshift, CloudWatch, etc.
- Experience with data modeling, ETL/ELT processes, and big data technologies.
- Familiarity with CI/CD pipelines and version control (Git).
- Good understanding of data warehousing concepts and performance tuning.
Preferred Qualifications :
- AWS certification (e.g., AWS Certified Data Analytics, Solutions Architect).
- Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions.
- Knowledge of SQL and NoSQL databases.
- Exposure to Agile/Scrum methodologies.
Soft Skills :
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work independently and in a team environment.