The LLM Applications Engineer transforms AI prototypes into modular, secure, and production-ready workflows. This role focuses on orchestrating AI agents, implementing observability, and building toolkits that accelerate internal and external adoption.
Key Duties / Responsibilities
- Develop agent orchestration using frameworks such as LangChain, CrewAI, or AutoGen.
- Build reusable APIs, SDKs, and configuration layers for internal consumption.
- Implement prompt safety measures and fallback handling using Rebuff or Guardrails AI (see the fallback sketch after this list).
- Ensure agent workflows are observable and CI/CD-friendly.
- Collaborate with platform and backend engineers for deployment enablement.
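As a rough illustration of the orchestration and fallback-handling duties above, the minimal sketch below chains a prompt, model, and output parser with LCEL and attaches a fallback chain. The model names, prompt text, and package choices (langchain_openai, langchain_core) are illustrative assumptions, not a prescribed stack for this role; running it also assumes an OpenAI API key is configured.

    # Minimal LCEL sketch: a primary chain with a cheaper fallback model.
    # Model names and the prompt text are illustrative assumptions only.
    from langchain_openai import ChatOpenAI
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser

    prompt = ChatPromptTemplate.from_template("Summarize for the internal wiki: {text}")
    parser = StrOutputParser()

    primary = prompt | ChatOpenAI(model="gpt-4o") | parser        # assumed primary model
    fallback = prompt | ChatOpenAI(model="gpt-4o-mini") | parser  # assumed fallback model

    # with_fallbacks runs the fallback chain only if the primary chain raises.
    chain = primary.with_fallbacks([fallback])

    print(chain.invoke({"text": "Notes on agent toolkit adoption..."}))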
Leadership Skills:
- Ownership of agent toolkit delivery.
- Cross-functional collaboration with platform teams and AI scientists.
- Documentation and internal adoption support.
Required Technical Skills:
- Node.js, Python, Docker, REST APIs.
- LangChain, Rebuff, LCEL, Guardrails, LangSmith.
- CI/CD tooling, OpenTelemetry, workflow orchestration (Airflow, Prefect); a tracing sketch follows this list.
- Experience implementing agent safety using Rebuff, Guardrails, or similar tools.
- Familiarity with one or more orchestration frameworks: LangChain, CrewAI, AutoGen (or equivalents).
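For the observability expectation (OpenTelemetry in the list above, and the CI/CD-friendly workflows duty), here is a minimal sketch that traces one agent tool call. The span name, attribute keys, and console exporter are illustrative assumptions rather than a prescribed setup.

    # Minimal OpenTelemetry sketch: emit one span per agent tool call to the console.
    # Span and attribute names are illustrative assumptions.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("agent.workflow")

    def run_tool(name: str, payload: dict) -> dict:
        # One span per tool call so latency and failures are visible per step.
        with tracer.start_as_current_span("agent.tool_call") as span:
            span.set_attribute("tool.name", name)
            # ... invoke the real tool here ...
            return {"tool": name, "ok": True}

    print(run_tool("search", {"query": "internal adoption docs"}))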
Qualification:
- Bachelor's degree in Engineering or a related field.
- 5+ years in backend/DevOps roles, with 2+ years in AI workflow orchestration.
Skills Required
Python, Node.js, Docker, REST APIs