Description
Design and implement clean, modular, efficient Python (3.x) codebases for backend services, data pipelines, and LLM integrations.
Integrate with external Document AI / LLM systems via RESTful APIs, codifying prompts into production-grade code and managing their lifecycle (versioning, tuning concepts, template integration).
Architect and evolve MongoDB schemas with expert handling of embedding vs. referencing strategies, schema migrations, and performance tuning.
Perform CRUD operations, indexing, backup strategies, and monitoring on MongoDB Atlas hosted on AWS; manage VPC peering, IAM roles, and serverless triggers if needed.
Build and maintain cloud-native, scalable, secure data systems, primarily on Azure or AWS.
Ensure high standards of quality with unit testing, CI/CD pipelines, and coding best practices.
Lead hands-on development while collaborating closely with Data Scientists, Product Managers, and DevOps teams.
Champion a high-quality, production-grade approach to LLM prompt engineering and backend data services.
Monitor technological trends in AI integration, NoSQL technologies, and cloud-native data architectures to keep Neurons' tech stack future-proof.
The Requirements
Mandatory Skills
Python Development
- Strong proficiency in Python 3.x for backend and scripting tasks
- Experience integrating with Document AI / LLM systems (e.g. OpenAI, Azure OpenAI) via APIs
- Good understanding of RESTful API concepts and integration patterns
- Ability to codify prompts, manage their lifecycle, and integrate templates into production LLM pipelines
- Familiarity with unit testing and CI/CD pipelines (e.g. GitHub Actions, Azure DevOps)
Data Analytics Exposure
- Familiarity with end-to-end data analytics workflows, including data preparation, transformation, and insight delivery
- Ability to support or collaborate with analytics teams to ensure backend systems support analytical use cases
- Experience working with WTW's Radar platform is a strong plus
Document Database Expertise
- Strong experience working with document databases for high-performance applications, MongoDB preferred
- Proficiency in schema design, including embedding vs. referencing strategies
- Hands-on experience with CRUD operations, indexing, performance tuning, and schema evolution
Cloud Platform Familiarity
- Strong familiarity with Azure or AWS services relevant to data and backend application hosting
- Bonus: experience with AWS / Azure SDKs in Python
Qualifications
General
- 8-12 years of total experience in data engineering, backend development, and/or cloud-native application development
- Ability to operate both strategically (solution architecture) and tactically (hands-on coding)
- Excellent communication and documentation skills to share complex ideas with technical and non-technical stakeholders
Nice-to-Have Skills
Experience fine-tuning or customizing LLMs beyond just API integration.
Familiarity with data governance frameworks and data quality best practices.
Experience in insurance, financial services, or digital platform environments.
Exposure to serverless, cloud-native architectures.
Understanding of secure software development practices in regulated environments.
Employment Type: Full-Time
Experience: years
Vacancy: 1
Lead Data Engineering • Gurugram, Haryana, India