De-mystifying AI for Service Management

No doubt AI is the talk of the town right now. And yes, there is some hype around it, too 😉 With this blog, I want to de-mystify AI and discuss how it can concretely help IT support in their demanding daily work.

 

Starting with the problem 

Let’s start by framing the problem that we are trying to address with AI.  

When we meet different Service Desk and IT teams, we usually hear a pretty consistent message:

  • IT support workloads are increasing.
  • The complexity of contacts to the Service Desk is rising.
  • Infrastructure and the number of services are growing, along with the dependencies between them.

So the job of the IT Support team is getting harder by the day. 

At the same time, the statistics indicate that automation has not been able to keep up and provide much-needed help.  

  • On average, over 80% of the contacts still require manual work by an agent. 
  • A big portion of these contacts – over 40% – come through work-intensive submission channels, like email and voice.

All of this has led to a more than 20% increase in the cost-per-agent contact over the last three years.

We feel this is not sustainable. How do we turn this tide, and help IT Support work smarter?  

 

How can AI help IT Support work smarter?

We don't think there is a single magic trick to improving productivity. Real-life IT support involves a vast collection of smaller tasks, many of which still rely heavily on human actions. This implies that:

 

AI should be integrated into those areas where actual work is happening.

 

We want AI to be seamlessly available when and where it's needed, with a focus on addressing the specific task at hand. Moreover, we want it to serve many user roles across the organization. 

Let’s go through a few concrete examples:

#1 Imagine you are an employee who needs to update your tax card.  

Not many employees remember the routine for this by heart. Instead of trying to call somebody or hunting for instructions across a myriad of web pages and links, the employee can simply call up the AI assistant and ask what to do, in plain language. The AI recognizes the intent and starts guiding the employee with clear instructions on how to update the tax card. This reduces contacts to support teams – which in turn helps with the productivity challenge we described earlier.

#2 Or imagine you are an agent who works in the Service Desk frontline and deals with high ticket volumes on a daily basis.  

When we visit different service desks, it's striking to see how much time frontline agents spend purely on writing emails. This is a great use case for AI, which can serve as an extra pair of hands for handling emails. AI can auto-generate responses – a really useful productivity tool, e.g. for first responses or status updates. AI can also complete and correct agent-written emails, a handy time-saver and quality booster for all email correspondence.
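To make this concrete, here is a minimal sketch of how such a first-response draft could be generated with a general-purpose language model via the OpenAI Python SDK. The ticket fields, prompt wording, and model name are illustrative assumptions, not Effie AI's actual implementation.

```python
# Illustrative sketch only: drafting a first response with a general-purpose
# language model. Ticket fields, prompt, and model name are hypothetical,
# not Effie AI's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def draft_first_response(ticket_title: str, ticket_description: str) -> str:
    """Ask the model for a short first response that the agent can review."""
    prompt = (
        "Write a short, friendly first response to this IT support ticket. "
        "Acknowledge the issue and say it is being looked into.\n\n"
        f"Title: {ticket_title}\nDescription: {ticket_description}"
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content


# The draft is only a suggestion: the agent edits and approves it before sending.
print(draft_first_response("VPN not connecting", "VPN client fails with error 809."))
```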

Efecte’s approach is called Effie AI. Effie AI is a new type of personal AI-based assistant that knows you and can assist you in various daily tasks. In short, it helps you work smarter. Besides these two examples, Effie AI can also help with other important daily tasks, such as ticket resolution and live chat conversations.

 

Humans remain in control

Many AI models, and especially the new language models, are effectively black boxes. Users cannot really understand how and why AI generated the result.

That’s why we believe:

 

It is fundamentally important to provide mechanisms for users to remain in control and decide when and where they apply AI assistance.

 

With Effie AI, admins can define which data elements are visible to the AI model. Based on this, agents can choose which specific context attributes (like ticket ID, status, or device type) are passed to the AI model for each task (by default excluding any personal data). And no AI-generated content goes out without a human being able to validate and approve it first.
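As a rough illustration of this idea (the attribute names and allow-list below are invented for the example; this is not Efecte's actual configuration model), the context passed to the AI model could be filtered like this:

```python
# Illustrative sketch: only allow-listed ticket attributes are passed to the AI
# model, so personal data is excluded by default. Field names are hypothetical.
ALLOWED_ATTRIBUTES = {"ticket_id", "status", "device_type", "category"}


def build_ai_context(ticket: dict, extra_allowed: set = None) -> dict:
    """Return only the attributes that an admin has allowed for AI use."""
    allowed = ALLOWED_ATTRIBUTES | (extra_allowed or set())
    return {key: value for key, value in ticket.items() if key in allowed}


ticket = {
    "ticket_id": "INC-1042",
    "status": "open",
    "device_type": "laptop",
    "requester_email": "jane.doe@example.com",  # personal data, filtered out
}
print(build_ai_context(ticket))
# {'ticket_id': 'INC-1042', 'status': 'open', 'device_type': 'laptop'}
```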

That's why at Efecte we have also started to build more transparency into our own AI models, like the NLP (natural language processing) model that powers the Effie AI Chatbot feature. With that model, the admin can see inside the AI decision flow. This provides unique transparency – which we believe is one of the key success factors for AI going forward.

 

Does the user need to know how AI behaves? 

This also raises the question: do users even need to know how AI behaves? I think the answer is both yes and no.

Yes, working with AI is one of the new citizen skills for the future, and users do need to understand its limitations and risks. They should be aware that AI can and will make mistakes. And while it's really good at certain tasks, it's not always the right solution to every problem. As a solution provider, we also have a responsibility to explain the limitations and risks of the AI in our software.

At the same time, no, users don’t need to know the ins and outs of an AI model to benefit from it.

 

Using features powered by AI should be super easy. We talk about the design principle of "AI with a single click".

 

The point is to ensure users do not need special training or skillsets in things like neural networks or model prompting to be able to use these features. This is the only way to drive adoption and benefits at scale.

 

Is this just ChatGPT with a different name? 

As many users are already familiar with OpenAI's ChatGPT product, we are often asked whether we can just use that to support IT work. While some Effie AI features do use a language model similar to ChatGPT's, there are also unique characteristics and value that you get only with AI designed specifically for Service Management.

First, Effie AI answers are contextualized to the IT support situation and the user in question. Whereas ChatGPT on its own gives great generic answers, Effie can provide more personalized, accurate, and situational answers. This is enabled by combining local, contextual data from the Efecte platform (like the ticket description, conversation history, impacted services, and the device type/model in question) with the language model's ability to generate responses.
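To illustrate the idea (the field names and prompt structure below are assumptions made for the sketch, not Effie AI's internals), combining platform context with a question could look roughly like this:

```python
# Illustrative sketch: building a contextualized prompt from ticket data so the
# language model answers for this specific situation rather than generically.
def contextualized_prompt(question: str, ticket: dict, history: list) -> str:
    context_lines = [
        f"Ticket description: {ticket.get('description', '')}",
        f"Impacted service: {ticket.get('service', '')}",
        f"Device model: {ticket.get('device_model', '')}",
        "Conversation so far:",
        *history,
    ]
    return (
        "You are assisting an IT support agent. Use only the context below.\n\n"
        + "\n".join(context_lines)
        + f"\n\nQuestion: {question}"
    )


prompt = contextualized_prompt(
    question="What should the agent try next?",
    ticket={"description": "Laptop will not charge", "service": "Workstations",
            "device_model": "ThinkPad T14"},
    history=["Agent: Have you tried another charger?", "User: Yes, same result."],
)
print(prompt)
```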

Second, Effie AI provides a seamless user experience for specific tasks in IT support. With Effie AI, users don't need to context-switch between different tools (with separate logins, copy-pasting, etc.). Effie AI already resides where the work is done – i.e. directly in the tool agents use for IT support. It can be used as part of the natural workflow, when needed – literally with a single click.

 

AI on your terms

It's also important to recognize that organizations have concerns about where their data resides when they use AI. Many organizations also have regulatory restrictions in place related to this.

That's why one of our core design principles has been to enable organizations to also run AI locally. In practice, this means AI can run in any data center the customer chooses – though there are certain processing-capacity requirements, especially for AI features powered by Large Language Models.

 

With the local approach, organizations can implement more secure AI, with control over where data resides and is processed.

 

Efecte’s Effie AI is powered by two main types of models: Generative AI and NLP AI models. 

The Generative AI model (often referred to as a Large Language Model) is good at creating new content based on learned patterns. It provides an optimal engine for features like Effie AI Email and Chat. We started our journey by using OpenAI's GPT model but have also been testing our own open-source-based model. This will enable our customers to run Effie features that require Generative AI locally*, without any data going to OpenAI or other provider-based models (* first PoC during Q4).
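As a rough illustration of the local approach (not the model or stack Efecte uses; the model name below is just an example of an openly available one), an open-source generative model can be run entirely inside your own environment, for instance with the Hugging Face transformers library:

```python
# Illustrative sketch: running an open-source generative model locally with the
# Hugging Face transformers library, so no ticket data leaves the environment.
from transformers import pipeline

# Example open-source model; downloaded once and then served locally.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

draft = generator(
    "Write a short status update for a ticket about a broken laptop screen:",
    max_new_tokens=120,
)
print(draft[0]["generated_text"])
```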

The NLP AI model, on the other hand, is good at classifying intent and sentiment and deciding how to respond based on pre-defined categories. This makes it a good fit for features like the Effie AI Chatbot. Here we use our own NLP model, which can already run in local environments today.
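To show what intent classification with pre-defined categories means in practice, here is a toy sketch using scikit-learn (this is not Efecte's NLP model; the phrases and intent labels are invented for the example):

```python
# Illustrative sketch: a tiny intent classifier over pre-defined categories,
# the kind of task an NLP model handles. Not Efecte's actual NLP model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "I forgot my password and cannot log in",
    "please reset my password",
    "my laptop screen is broken",
    "the display on my laptop flickers",
    "how do I update my tax card",
    "where can I change my tax card information",
]
intents = [
    "password_reset", "password_reset",
    "hardware_issue", "hardware_issue",
    "tax_card_update", "tax_card_update",
]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(training_phrases, intents)

print(classifier.predict(["I need a new tax card"]))  # -> ['tax_card_update']
```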

 

Hope you enjoyed this blog! If you are interested in talking more and learning how to get started, please don't hesitate to reach out to me at santeri.jussila@efecte.com.

Written by Santeri Jussila

Santeri Jussila has been the Chief Product Officer (CPO) of Efecte since March 2021. He is responsible for leading Efecte's product management, vision, and strategy. Before joining Efecte, Santeri worked for 14 years in various international management positions at Nokia and, before that, at Comptel. He has considerable experience in international product and customer experience development in the tech industry.
