As a Staff AI Engineer, you will be a technical leader responsible for designing and implementing the core architectural components of the AI Workbench platform and for ensuring all delivered applications achieve sustained production quality and scale. You will work within small, skilled AI-native teams using a "Human + AI Co-creation" approach to deliver products and AI agents rapidly, in weeks, not months.
Your focus will be on the critical journey from a rapid MVP to sustained production.
### Key Responsibilities
1. Architectural Leadership for Scale and Reliability: Lead the design and implementation of the capabilities necessary to ensure applications become robust, scalable, and continuously managed in production. Ensure the platform is designed for seamless growth on demand. Drive the architectural structure of the AI Workbench, ensuring it is built using the Model Context Protocol (MCP) to maximise composability and modularity.
2. End-to-End Application Lifecycle Ownership: Take responsibility for the entire application lifecycle. Design and implement core features that handle the complex capabilities required for continuous operations, including deployment, operations, monitoring, and support.
3. Proprietary Generative AI Economics: Design, implement, and maintain the innovative "thought caching" mechanism within the Agentic Core of the AI Workbench (an illustrative sketch follows this list). Solve the significant barrier of the high and sometimes prohibitive cost of Large Language Model (LLM) access for large-scale use cases. This mechanism is a key differentiator that provides a significant cost advantage to customers.
4. Addressing Talent Scarcity: Contribute to overcoming the global scarcity of AI talent by mentoring team members and establishing best practices within the AI-native teams.
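The posting names a "thought caching" mechanism but does not describe its internals. As a purely illustrative sketch, the snippet below shows the general cost lever such a mechanism relies on: caching LLM outputs so that repeated agent "thoughts" are served locally instead of triggering new, billed LLM calls. The `ThoughtCache` class, its exact-match keying, and the stub `llm_call` function are hypothetical simplifications, not the AI Workbench's proprietary design.

```python
import hashlib
import json

class ThoughtCache:
    """Illustrative LLM response cache: repeated prompts are answered from
    the cache, so only cache misses incur LLM access costs."""

    def __init__(self, llm_call):
        self._llm_call = llm_call  # function (model, prompt) -> completion text
        self._store = {}           # cache_key -> completion text
        self.hits = 0
        self.misses = 0

    def _key(self, model: str, prompt: str) -> str:
        # Exact-match key over model and prompt; a real system might use
        # semantic or normalised keying instead.
        raw = json.dumps({"model": model, "prompt": prompt}, sort_keys=True)
        return hashlib.sha256(raw.encode("utf-8")).hexdigest()

    def complete(self, model: str, prompt: str) -> str:
        key = self._key(model, prompt)
        if key in self._store:
            self.hits += 1         # cached "thought": no LLM cost incurred
            return self._store[key]
        self.misses += 1
        result = self._llm_call(model, prompt)  # only misses pay for LLM access
        self._store[key] = result
        return result


if __name__ == "__main__":
    # Stub LLM call for demonstration; a real deployment would call a provider API.
    cache = ThoughtCache(lambda model, prompt: f"[{model}] answer to: {prompt}")
    cache.complete("demo-model", "Summarise the deployment checklist.")
    cache.complete("demo-model", "Summarise the deployment checklist.")  # served from cache
    print(f"hits={cache.hits} misses={cache.misses}")  # hits=1 misses=1
```

A production mechanism would also need eviction, persistence, and invalidation policies, but the economic idea is the same: every cache hit is an LLM call that is never billed.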
### Required Qualifications

- Extensive experience successfully moving sophisticated software (ideally AI-based solutions) from initial development (MVP) to a robust, scalable, and continuously managed application in production.
- Proven ability to design and implement complex architectural components, such as agents and proprietary core mechanisms (e.g., thought caching), that provide a competitive advantage and serve as a key differentiator.
- Deep technical understanding of the complexities associated with LLM access, performance, and the economic barriers to scaling AI applications.
- Experience with architectural principles that support sustained growth, including modularity and maximising composability.
- A strong desire to operate in an entrepreneurial environment, tackling fundamental challenges related to production complexity and application economics.