Company Overview
Location: Bangalore | Type: Full-time (WFO)
Lifesight is a fast-growing SaaS company focused on helping businesses leverage data & AI to improve customer acquisition and retention. We have a team of 130 serving 300+ customers across 5 offices in the US, Singapore, India, Australia, and the UK. Our mission is to make it easy for non-technical marketers to leverage advanced, AI-powered data activation and marketing measurement tools to improve their performance and achieve their KPIs. Our product is being adopted rapidly around the world, and we need the best people on the team to accelerate our growth.
What You’ll Do
Build & Ship
- Translate problem statements into architectures and implementation plans; deliver functional MVPs within a few weeks.
- Develop end‑to‑end features across frontend, backend, data, and AI orchestration; own code quality and reviews.
- Leverage the latest AI coding stacks and technologies to speed-run app development, and establish best practices for templatizing SaaS business apps.
- Create production‑ready APIs; build agentic systems, background automation jobs, and integrations with third‑party services.
AI / LLM Engineering
- Design prompts, system instructions, and guardrails; implement function / tool calling and multi‑step reasoning flows.
- Build retrieval‑augmented generation (RAG): data pipelines, embeddings, vector indexes, and context strategies.
- Evaluate and compare models (latency, quality, cost); route requests across providers / models; implement eval harnesses and feedback‑improvement loops.
- Instrument and monitor AI behavior (e.g., response‑quality signals, hallucination detection, safety filters).
Architecture & Ops
- Choose pragmatic, modern patterns (serverless where possible) with clear boundaries and failure handling.
- Set up CI / CD, IaC, runtime observability (logs, traces, metrics), and cost controls (rate limiting, caching, quotas).
- Ensure data security, privacy, and compliance best practices from day one.
Quality, Safety & Reliability
- Establish automated testing, including AI‑aware tests (prompt / response assertions, red‑team suites).
- Define SLIs / SLOs (availability, latency) and implement graceful degradation and fallbacks.
Collaboration & Leadership
- Work as the technical counterpart to the PM; shape scope, manage trade‑offs, and de‑risk early.
- Mentor a junior engineer; create lightweight tech docs and run quick design reviews.
- Contribute to the studio’s reusable components, templates, and playbooks.
Minimum Qualifications
- 4+ years building production software; 3+ years in modern web stacks; 1+ years in AI products. Must have shipped products end‑to‑end.
- Deep fluency with Next.js / TypeScript / JavaScript and Python.
- Hands-on experience integrating LLMs (e.g., OpenAI, Gemini, Anthropic), designing prompts / system instructions, and implementing function / tool calling.
- Built at least one RAG system (embeddings, vector DB, chunking, retrieval strategies) and can explain the trade‑offs.
- Comfortable with cloud‑native deployment (e.g., Vercel / Cloud Run), CI / CD, IaC, and production observability.
- Strong product sense; bias to ship; excellent collaboration and written communication.
Nice to Have
- Prior startup or venture‑studio experience; comfort with ambiguity and rapid iteration.
- Exposure to fine‑tuning / LoRA, model distillation, or on‑device inference for mobile.
- Experience with analytics / event pipelines, A/B testing, and feature flags.
- Security background: secrets management, authN / Z patterns, data governance.
Our Stack
- Frontend / App: Next.js (App Router), React, Tailwind, shadcn/ui, React Native (as needed)
- Backend: Next.js API routes and/or Python FastAPI; tRPC / REST; background jobs (Cloud Tasks / Queues)
- AI: OpenAI, Anthropic; orchestration via LangChain / LlamaIndex or lightweight custom code; prompt repos & evals (Langfuse / Humanloop); observability (Helicone)
- Data: PostgreSQL (Neon / Supabase) + pgvector; Prisma / SQLAlchemy; Redis / Upstash; optional vector DB (Pinecone / Weaviate)
- Infra & DevEx: Vercel / Cloud Run, Docker, Terraform, GitHub Actions, Sentry / Datadog, OpenTelemetry
- Quality: Jest / Playwright, PyTest; contract testing; red‑team / safety test suites; load testing (k6 / Locust)

We choose tools pragmatically per product; you’ll help decide on the future stack and set conventions and reusable templates.