About the Role:
We are looking for a highly skilled and motivated Senior Data Engineer with strong experience in Python, IBM DataStage, Perl, and Java. The ideal candidate will be responsible for designing and maintaining robust data pipelines and ETL solutions, ensuring high data quality, scalability, and performance.
Key Responsibilities:
Data Pipeline Development:
- Design, develop, and optimize scalable and efficient data pipelines using Python.
ETL with IBM DataStage:
- Build and maintain ETL processes integrating data from diverse sources.
Legacy Script Maintenance:
- Maintain and enhance existing Perl and Java scripts, supporting migration to or integration with modern systems.
Stakeholder Collaboration:
- Work closely with analysts, architects, and business teams to understand requirements and deliver robust solutions.
Data Quality & Troubleshooting:
- Identify and resolve data issues, ensuring data integrity across systems.
Code Best Practices:
- Develop unit/integration tests, participate in code reviews, and follow best practices in documentation and deployment.
Required Skills:
- 5+ years of experience in data or software engineering roles.
- Expertise in Python for scripting, automation, and data processing.
- Experience with IBM DataStage for ETL development and integration.
- Hands-on knowledge of Perl and Java, especially for legacy systems.
- Proficiency in relational databases such as Oracle, SQL Server, or PostgreSQL.
- Familiarity with Git and CI/CD pipelines.
- Strong analytical and troubleshooting skills.
- Ability to work independently and collaboratively.
Preferred Qualifications:
- Experience with cloud platforms like AWS, Azure, or GCP.
- Knowledge of data warehousing and big data tools/technologies.
- Familiarity with Agile/Scrum development.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
(ref: hirist.tech)