About the Company :
We are a forward-looking organization with a strong focus on GenAI adoption across the entire company. The role is part of the TCO (Technology & Cloud Operations) Line of Business, supporting innovative data-driven solutions to enhance business outcomes.
Role Summary :
We are seeking a Data Science Engineer with strong expertise in Data Pipelining, Reporting Tools, and GenAI concepts. The ideal candidate will be responsible for building scalable data pipelines, working with cloud platforms, implementing reporting solutions, and supporting GenAI adoption. This role involves collaborating with cross-functional teams and requires hands-on experience in Python, PowerBI, and cloud technologies.
Key Responsibilities :
- Design, develop, and maintain robust data pipelines using Python to process large-scale data from multiple sources.
- Implement and manage reporting solutions using PowerBI, Microsoft Excel, and Confluence.
- Collaborate with teams to integrate GenAI capabilities into workflows and products.
- Work on cloud-based solutions, preferably GCP, for data storage, compute, and processing tasks.
- Apply best practices in data modeling, data quality, and data governance.
- Support MLOps practices, including infrastructure as code (Terraform) and CI/CD pipelines for ML workflows.
- Assist in implementing advanced GenAI workflows such as RAG (Retrieval Augmented Generation) and agentic systems.
- Collaborate with a US-based panel in the second round of interviews.
Mandatory Skills :
- Python : Strong core programming skills, specifically for building data pipelines.
- Reporting Tools : PowerBI, Microsoft Excel, Confluence.
- Cloud Platforms : Experience with GCP is preferred; AWS is acceptable.
- GenAI : Basic understanding of Generative AI concepts and use cases.
Nice-to-Have Skills :
- RAG (Retrieval Augmented Generation) : ~1 year of experience implementing GenAI workflows.
- Agentic Workflows : ~1 year of experience with agentic GenAI systems.
- MLOps : Experience with Terraform and setting up MLOps pipelines.
(ref : hirist.tech)