It's been two years since ChatGPT captivated the world. When it first launched, everybody was amazed at its seemingly magical ability to answer any kind of question in any manner a user requested. Not only was its potential to disrupt the world of knowledge quickly recognised, but natural language and voice interfaces were predicted to become standard in technology applications, changing the way software is designed.
If 2023 was the year of ChatGPT and the world's introduction to generative AI capabilities, then 2024 has seen interest turn increasingly to so-called agentic AI. Agentic AI is an extension of existing generative AI capabilities.
Where existing large language model capabilities produce a single output in response to a single input, agentic AI takes a single input and carries out a series of "reasoning" steps to arrive at a more considered response to the initial prompt. It does this by feeding each AI-produced output back into the large language model to determine the next step to take. A simple prompt given by a human can therefore be broken down into a series of tasks, producing a more considered result than classic generative AI capabilities can manage.
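To make that loop concrete, here is a minimal sketch in Python. The `call_llm` function is a hypothetical stand-in for any large language model API, stubbed here so the example runs end to end; the point is the control flow, in which each model output is fed back in to decide the next step.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a large language model API call."""
    # A real implementation would send `prompt` to a model endpoint.
    # This stub returns a canned reply so the sketch runs end to end.
    return "DONE: summary of the completed task"


def run_agent(task: str, max_steps: int = 5) -> str:
    """Break one task into steps by feeding each output back to the model."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # Ask the model what to do next, given everything done so far.
        reply = call_llm(
            "\n".join(history)
            + "\nWhat is the next step? Reply 'DONE: <answer>' when finished."
        )
        if reply.startswith("DONE:"):
            return reply.removeprefix("DONE:").strip()
        history.append(f"Step taken: {reply}")
    return "Step limit reached without a final answer."


print(run_agent("Summarise the key risks in this contract"))
```

The essential difference from a single prompt-and-response exchange is the loop: the model's own output becomes part of the next input until the task is judged complete.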
For many, this kind of technology is potentially game-changing. It democratises bureaucratic processes, allowing a human to get from A to B without necessarily knowing the route needed to get there. When combined with other generative AI capabilities and wider system integrations, this technology comes closer to the autonomous working many people envision when thinking about artificial intelligence.
Nonetheless, the adoption of agentic AI is not without its risks. While it may be suitable for quick or one-off tasks, many professionals are likely to be concerned about a computer deciding the exact steps to take in a given process. This is especially likely in risk-averse professions, such as the legal industry.
Product designers would be wise to bear this in mind when leveraging generative AI capabilities. If control is important in a given process, they should ensure that users can see every step the AI takes and "course correct" it if it heads in an incomprehensible or inadvisable direction.
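In practice, such a control point can be as simple as a checkpoint between the agent proposing a step and that step being executed. The sketch below is one illustrative way to add this to the `run_agent` loop above; it is not a prescribed implementation.

```python
def confirm_step(proposed: str) -> str | None:
    """Let the user approve, rewrite, or abort a step the agent proposes."""
    print(f"Agent proposes: {proposed}")
    choice = input("[a]pprove / [e]dit / [s]top: ").strip().lower()
    if choice == "e":
        # Course correction: the human rewrites the step before it runs.
        return input("Enter the corrected step: ")
    if choice == "s":
        return None  # the human halts the agent entirely
    return proposed  # approved as proposed
```

Calling `confirm_step` on each proposed step before acting on it keeps the human in the driving seat while preserving the agent's flexibility.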
At an organisational level, there may be concerns about leveraging agentic AI, given how large language models operate. These models are inherently stochastic rather than deterministic: in practice, their output is not always predictable, and different actions can be taken in response to the same prompt. This can have negative repercussions if different people across the business are trying to do the same thing but generative AI takes a different route for each of them, with knock-on effects for efficiency, risk, and record-keeping.
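The contrast can be illustrated with a toy next-word distribution; the words and probabilities below are invented for the example. Sampling from the distribution, as generative models typically do, can return a different word on each run, whereas a deterministic rule always returns the same one.

```python
import random

# Invented toy distribution over possible next words.
next_token_probs = {"contract": 0.5, "email": 0.3, "memo": 0.2}

def sample_token(probs: dict[str, float]) -> str:
    # Stochastic: a weighted random draw, so repeated calls can differ.
    return random.choices(list(probs), weights=list(probs.values()))[0]

def greedy_token(probs: dict[str, float]) -> str:
    # Deterministic: always the single most probable word.
    return max(probs, key=probs.get)

print([sample_token(next_token_probs) for _ in range(5)])  # varies per run
print(greedy_token(next_token_probs))                      # always "contract"
```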
Indeed, for many processes, organisations may prefer to give users as little latitude as possible in how they conduct their work. In practice, this means that some degree of design needs to take place when considering agentic AI. For example, is the process in question best accomplished through an agentic AI-powered route, or through a deterministic route such as an expert logic application? Or perhaps a bit of both?
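One way to picture the "bit of both" option is as a router that sends well-defined, high-stakes tasks down a fixed expert-logic path and hands only open-ended requests to an agent. The sketch below assumes the `run_agent` loop from earlier; the task name and the deadline rule are invented for illustration.

```python
def check_filing_deadline(request: str) -> str:
    """Deterministic expert logic: same input, same steps, same output."""
    return f"Deadline computed by fixed rules for: {request}"

# Tasks with a known, fixed procedure are mapped to expert logic.
FIXED_WORKFLOWS = {"filing_deadline": check_filing_deadline}

def route(task_type: str, request: str) -> str:
    handler = FIXED_WORKFLOWS.get(task_type)
    if handler is not None:
        return handler(request)  # deterministic route
    return run_agent(request)    # agentic route: flexible, but steps may vary
```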
A common thread runs through all applications of generative AI. Many creators of generative AI models are keen to emphasise that these models mimic human reasoning. In reality, the fundamental mechanism through which large language models work is "next token prediction": the model produces its output by repeatedly predicting the most likely next word or token, based on relationships between words and phrases learned from its training data.
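A toy example makes the point. In the sketch below, "prediction" is nothing more than looking up the most probable next word in a tiny, invented bigram table; real models do the same thing at vastly greater scale, with probabilities learned from enormous corpora rather than written by hand.

```python
# Invented bigram table: probability of each next word, given the last word.
BIGRAMS = {
    "the":   {"court": 0.6, "client": 0.4},
    "court": {"ruled": 0.7, "heard": 0.3},
    "ruled": {"that": 0.9, ".": 0.1},
}

def generate(start: str, max_tokens: int = 4) -> str:
    words = [start]
    for _ in range(max_tokens):
        options = BIGRAMS.get(words[-1])
        if not options:
            break  # no learned continuation for this word
        # "Reasoning" here is just picking the most probable next word.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # -> "the court ruled that"
```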
While we do not know exactly how the human brain carries out its reasoning, claiming a large degree of commonality between how machines "think" and how humans think seems an oversimplification. We should bear this in mind when considering whether a process requires a human in the driving seat, or whether it is sufficient for AI to drive with a human providing oversight. We should also consider the impact on our own capacity to think, and whether we lose something if AI repeatedly carries out a cognitive task that humans should be performing. All of these concerns have existed since the introduction of large language models, but agentic AI capabilities bring them to the forefront.
Agentic AI is an advancement that is exciting in multiple ways. First, it is a new tool in the armoury for creating game-changing applications that can transform the lives of consumers and businesses. Second, it calls into question the nature of what we do in our day-to-day lives as humans, why we act as we do, and whether it is time for that to change. The true repercussions of embedding agentic AI in applications will continue to be felt over the coming years, but in the meantime this is an extremely exciting space to watch.