About Giggso
Giggso is an award-winning, Michigan-based AI startup, recognized in the Top 50 Michigan Startups of 2023 & 2024. Founded in 2017, we deliver a unified platform for AI agent orchestration, governance, and observability, simplifying complex enterprise workflows.
Our solutions extend to model risk management, security, and blockchain enablement, ensuring trustworthy AI across diverse industries. By automating operations and providing real-time monitoring, Giggso drives cost savings and boosts organizational intelligence.
We champion responsible AI to help businesses optimize decision-making and enhance customer experiences at scale.
Skills:
- Proven experience in Generative AI (LLMs, RAG, prompt engineering, evaluation frameworks)
- Hands-on with Agentic AI workflows (multi-agent orchestration, tool integration, safety guardrails)
- Experience with LLMOps frameworks and vector DBs
- Ability to co-create solutions with the IFLI AI team for insurance-specific use cases
- ML modeling exposure (predictive analytics, feature engineering, insurance KPIs)
- Model evaluation, Responsible AI, and explainability
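The RAG and vector-DB skills listed above center on similarity-based retrieval. As a minimal illustrative sketch only (a toy in-memory corpus with hand-written 3-dimensional vectors; a production system would use a vector database and learned embeddings), retrieval by cosine similarity looks like this:

```python
import math

# Hypothetical toy corpus: document id -> embedding vector.
# Real pipelines store learned embeddings in a vector DB.
CORPUS = {
    "doc1": [0.9, 0.1, 0.0],
    "doc2": [0.1, 0.8, 0.1],
    "doc3": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=2):
    """Return the top-k document ids ranked by similarity to the query."""
    ranked = sorted(CORPUS, key=lambda d: cosine(query_vec, CORPUS[d]),
                    reverse=True)
    return ranked[:k]
```

The retrieved documents would then be passed to the LLM as grounding context; chunking, indexing, and caching (mentioned in the responsibilities below) all happen around this core ranking step.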
Key responsibilities:
- Participate in solution workshops with IFLI stakeholders for solution design and prioritization.
- Share best practices, reusable accelerators, and frameworks while respecting IP boundaries.
- Deliver end-to-end AI/ML solutions.
- Design agentic workflows (multi-tool/multi-agent orchestration) across the enterprise.
- Optimize RAG pipelines (chunking, retrievers, indexing, caching) for accuracy, groundedness, and cost efficiency.
- Extend and upgrade existing solutions with measurable relevance gains and safety controls.
- Contribute to ML use cases such as lead-conversion uplift and risk or claims propensity, including feature engineering and model evaluation.
- Support ML models with experimentation, input/output monitoring, and recalibration in partnership with IFLI teams.
- Ensure security, compliance, and Responsible AI adherence.
- Perform bias/fairness checks and maintain explainability artifacts (e.g., SHAP/LIME).
- Conduct red-teaming and adversarial testing (prompt injection, hallucination, toxicity) and implement guardrails for safe outputs.
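The guardrail and prompt-injection duties above can be sketched in miniature. This is a deliberately simplistic, hypothetical deny-list filter (real guardrails combine classifiers, policy engines, and output-side checks, not pattern matching alone), shown only to make the workflow concrete:

```python
import re

# Hypothetical patterns for common injection phrasings; illustrative only.
INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"reveal .*system prompt",
    r"disregard .*guardrails",
]

def flag_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

A flagged input would typically be blocked or routed to a safe-completion path before it ever reaches the model; red-teaming then consists of systematically probing for inputs that slip past checks like this one.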
- Design optimized prompts and implement caching and model-selection strategies to reduce token usage and inference costs.
- Monitor model drift, retrain or fine-tune models, and enhance existing solutions with measurable relevance and safety improvements.
- Establish evaluation harnesses (automatic and human-in-the-loop) and red-teaming procedures.
- Deliver architecture diagrams, playbooks, evaluation reports, documentation, and knowledge transfer.
- Ensure acceptance criteria and compliance documentation are met.
Requirements:
- At least 3 years of experience in AI/ML or MLOps (≥2 years in production-like settings for the respective role).
- Strong Python engineering (typing, testing, packaging) for AI/ML roles.
- Strong experience in DevSecOps practices for ML pipelines, including CI/CD automation, container security, vulnerability scanning, and Infrastructure as Code for cloud environments.
- Ability to implement cost governance, build dashboards, and optimize AI/ML workloads.
- Experience working in regulated domains (BFSI) and handling PII/PHI.
- Excellent collaboration, documentation, and stakeholder communication.
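One of the cost-optimization responsibilities above is caching to reduce token usage. A minimal sketch, assuming exact-match caching keyed on a hash of the model name and prompt (all class and function names here are hypothetical; production systems often add TTLs and semantic-similarity matching):

```python
import hashlib

class PromptCache:
    """Exact-match cache for LLM responses, keyed on (model, prompt)."""

    def __init__(self):
        self._store = {}
        self.hits = 0  # track savings for cost dashboards

    def _key(self, model: str, prompt: str) -> str:
        return hashlib.sha256(f"{model}|{prompt}".encode()).hexdigest()

    def get_or_call(self, model, prompt, call_fn):
        """Return a cached response, or invoke call_fn and cache the result."""
        key = self._key(model, prompt)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        result = call_fn(prompt)  # stand-in for the actual LLM API call
        self._store[key] = result
        return result
```

Every cache hit avoids one paid inference call, and the `hits` counter feeds directly into the cost-governance dashboards mentioned in the requirements.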