Artificial Intelligence

Case study: IT support with AI — 70% less time

· 7 min read · SISCON Blog

Sometimes a real case explains better than any theory. This is a recent PoC for a financial-sector client (anonymized) where we automated 62% of L1 tickets in 4 weeks.

The starting point

A 600-employee fintech with ~400 daily IT tickets handled by an L1 team of 8 people. Average resolution time: 45 minutes. Cost per ticket: ~$150 MXN. Coverage: 8x5. The problem wasn't lack of resources: 70% of tickets were repetitive questions already answered in their own internal documentation.

The solution

An AI agent connected to their knowledge base (Confluence + procedure PDFs), able to: identify the type of incident, search for the documented solution, guide the user step by step, and escalate to a human if the issue isn't in the documentation or the user prefers it. Stack: Llama 3.1 70B running on Ollama, orchestration with LangGraph, vector store in ChromaDB, and observability with Langfuse.

The 4-week process

Week 1: knowledge-base audit and selection of the top 50 most frequent ticket types.
Week 2: infrastructure setup and ingestion of the corpus.
Week 3: prompt engineering, evaluation pipeline, and integration with the ticketing system.
Week 4: closed pilot with a sample of users, adjustments, and metrics.

Concrete results

Average resolution time: from 45 to 15 minutes (-67%).
Cost per ticket: from $150 to $25 MXN (-83%).
Coverage: from 8x5 to 24/7.
Tickets resolved without human intervention: 62%.
End-user satisfaction (CSAT): 4.3/5, slightly higher than the human-only baseline (4.1/5).

The L1 team didn't shrink: they refocused on more complex tickets and on improving the documentation that powers the agent.
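The headline percentages follow directly from the figures above; a quick back-of-the-envelope check, using only the numbers cited in this case study:

```python
# Sanity-check of the reported savings (amounts in MXN).
tickets_per_day = 400
cost_before = 150          # per ticket, human-only L1
cost_after = 25            # per ticket, with the agent in front
time_before_min = 45
time_after_min = 15

cost_reduction = 1 - cost_after / cost_before
time_reduction = 1 - time_after_min / time_before_min
daily_savings = tickets_per_day * (cost_before - cost_after)

print(f"cost reduction: {cost_reduction:.0%}")    # ~83%
print(f"time reduction: {time_reduction:.0%}")    # ~67%
print(f"daily savings:  ${daily_savings:,} MXN")  # $50,000 MXN/day
```

Note the daily-savings figure assumes the per-ticket cost applies uniformly across all ~400 daily tickets, which is a simplification.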

What was unexpected

Two findings surprised us. First: the agent identified gaps in the documentation that nobody had noticed (questions with no documented answer). Second: when the agent resolves the trivial questions, the L1 team's satisfaction goes up; they spend less time on repetitive cases and more on cases where they actually add value.
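One way that gap-finding can work, sketched below: log every query whose best retrieval score falls under a threshold; those queries are, by definition, questions the documentation doesn't cover. The real system would score with embeddings; here a toy word-overlap score keeps the example self-contained, and every name and the threshold value are illustrative.

```python
# Sketch: surface "documentation gaps" as queries with no sufficiently
# similar document. Scoring is a toy word-overlap stand-in for the
# embedding similarity a real vector store would return.

DOCS = {
    "reset your password": "Use the self-service portal ...",
    "request vpn access": "Open a request in the access portal ...",
}
GAP_THRESHOLD = 0.5

def score(query: str, doc_title: str) -> float:
    """Toy similarity: fraction of query words present in the title."""
    q = set(query.lower().split())
    d = set(doc_title.lower().split())
    return len(q & d) / len(q) if q else 0.0

def find_gaps(queries: list[str]) -> list[str]:
    """Return the queries with no sufficiently similar documentation."""
    gaps = []
    for query in queries:
        best = max(score(query, title) for title in DOCS)
        if best < GAP_THRESHOLD:
            gaps.append(query)
    return gaps

queries = ["reset my password", "install the new badge printer"]
print(find_gaps(queries))  # only the undocumented question shows up
```

In production this log would feed the documentation backlog, which is exactly the loop the refocused L1 team closes.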

What didn't work

The honest part: in the first iteration the agent confidently answered questions outside the documentation (hallucinations). It took us a week of work on guardrails and confidence checks to bring that down to acceptable levels (currently below 3% of cases). It's the kind of problem demos don't show, but production reveals.
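The guardrail pattern behind that fix can be reduced to a single rule: answer only when retrieval confidence clears a threshold, otherwise escalate. A minimal sketch, where the threshold value and the `Retrieval` type are assumptions for illustration, not the client's actual numbers:

```python
# Guardrail sketch: below a confidence threshold, refuse and escalate
# instead of letting the model improvise an answer.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75  # illustrative value

@dataclass
class Retrieval:
    answer: str
    confidence: float  # e.g. similarity score of the best-matching chunk

def guarded_answer(retrieval: Retrieval) -> str:
    """Return a grounded answer, or an explicit escalation message."""
    if retrieval.confidence < CONFIDENCE_THRESHOLD:
        # Refusing beats hallucinating: below the bar, hand off to L1.
        return "I'm not sure about this one - routing you to a human agent."
    return retrieval.answer

print(guarded_answer(Retrieval("Restart the VPN client.", 0.91)))
print(guarded_answer(Retrieval("Plausible but unsupported guess.", 0.40)))
```

Tuning the threshold is the real work: too high and the agent escalates tickets it could resolve; too low and hallucinations leak through, which is why an evaluation pipeline and observability (Langfuse, in this stack) matter.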

Want a similar case study?

Schedule a discovery session. In one hour we can tell you whether your support volume justifies the investment and what the expected ROI would be. If it makes sense, you get a detailed proposal within a week.

Want to learn more? Book your free session →
