What LLMs Are And Are Not

Large Language Models, or LLMs, have become synonymous with Artificial Intelligence. It's important to understand what LLMs are and what they are not. Within the vast field of Artificial Intelligence, I would argue that LLMs are better defined as the user interface into a broader and much deeper artificial intelligence application architecture. Just as Windows became a better user interface that drove mass adoption of computing in general (over DOS-based textual interfaces, ironically), I posit that LLMs and conversational AI provide a better human user interface into AI. We are currently at that inflection point in the industry.

LLMs have remarkable capabilities to understand, generate, and interpret human language. However, it's important to clarify that when we say language models "understand," we mean that they can process and generate text in ways that appear coherent and contextually relevant, not that they possess human-like consciousness or comprehension.

Enabled by advancements in deep learning, which is a subset of machine learning and artificial intelligence (AI) focused on neural networks, LLMs are trained on vast quantities of text data. This allows LLMs to capture deeper contextual information and subtleties of human language compared to previous approaches. As a result, LLMs have significantly improved performance in a wide range of NLP tasks, including text translation, sentiment analysis, question answering, and many more.
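
As a concrete illustration of one of those tasks, here is a minimal sentiment-analysis sketch using the open-source Hugging Face transformers library. The library and its pipeline API are real; the example sentences are invented purely for illustration:

```python
# Minimal sentiment-analysis sketch with Hugging Face transformers.
# Requires: pip install transformers torch
from transformers import pipeline

# Loads a default pretrained sentiment model on first use.
classifier = pipeline("sentiment-analysis")

results = classifier([
    "This new conversational interface is a joy to use.",
    "The old text-based workflow was painful and error prone.",
])

for result in results:
    # Each result is a dict like {"label": "POSITIVE", "score": 0.999}.
    print(result["label"], round(result["score"], 3))
```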

This allows us to construct conversational interfaces with the other tribes of AI. LLMs, with their specialized neural networks, are themselves a subset of the Connectionist tribe. Connectionist AI encompasses a vast world of neural network techniques that can yield astonishing specialized results in their own right, and LLMs are a window into that vast, deep world.

We've all experienced the Bayesian tribe, the most universal example being spam detection. LLMs can help us interface with the entire world of Bayesian AI (see the naive Bayes sketch below).

The Evolutionaries include Arthur Burks, John Holland, and the Santa Fe Institute, and their work yielded Genetic Algorithms and Genetic Programming; Conway's Game of Life is a familiar example from the same lineage. Genetic Algorithms can help us, among other things, optimize business processes, or any process for that matter. Again, LLMs can provide a nice way to work with this entire body of work in a conversational way (see the genetic algorithm sketch below).

Symbolic AI, such as expert systems, is among the oldest and most effective techniques. Good Old Fashioned AI, or GOFAI, is in use every time you buy an airplane ticket, and there are millions of use cases out there for it (see the rule-engine sketch below).

Analogy-style AI of the Goedel, Escher, Bach variety gave us Support Vector Machines (see the SVM sketch below). I think you can see that there is an astonishingly vast array of AI techniques and systems, all of which can leverage LLMs.
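
To make the Bayesian example concrete, here is a minimal naive Bayes spam-classifier sketch using scikit-learn. The tiny training set and its labels are invented purely for illustration:

```python
# Naive Bayes spam-detection sketch using scikit-learn.
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training set: 1 = spam, 0 = not spam.
messages = [
    "Win a free prize now, click here",
    "Limited offer, claim your free money",
    "Are we still meeting for lunch tomorrow?",
    "Here are the notes from today's standup",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a multinomial naive Bayes model.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Estimated probability that a new message is spam.
print(model.predict_proba(["Claim your free prize today"])[0][1])
```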
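
For the Evolutionaries, here is a bare-bones genetic algorithm in plain Python. The fitness function (maximizing the number of 1-bits in a bit string) is a deliberately trivial stand-in for whatever business metric you would actually optimize:

```python
# Bare-bones genetic algorithm: evolve bit strings toward all 1s.
import random

GENES, POP, GENERATIONS, MUTATION = 20, 30, 40, 0.02

def fitness(individual):
    # Toy objective: count the 1-bits (swap in a real metric here).
    return sum(individual)

def crossover(a, b):
    # Single-point crossover between two parent bit strings.
    point = random.randrange(1, GENES)
    return a[:point] + b[point:]

def mutate(individual):
    # Flip each bit with a small probability.
    return [g ^ 1 if random.random() < MUTATION else g for g in individual]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENERATIONS):
    # Keep the fitter half as parents, then breed a fresh population.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    population = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP)
    ]

print("best fitness:", max(fitness(ind) for ind in population))
```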
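
The symbolic tribe can be sketched just as briefly. Below is a toy forward-chaining rule engine in the expert-system style; the facts and rules loosely mimic an airline check-in and are entirely invented:

```python
# Toy forward-chaining rule engine in the expert-system (GOFAI) style.
facts = {"passenger_has_ticket", "passport_valid"}

# Each rule: if every premise is a known fact, assert the conclusion.
rules = [
    ({"passenger_has_ticket", "passport_valid"}, "check_in_allowed"),
    ({"check_in_allowed"}, "boarding_pass_issued"),
    ({"check_in_allowed", "bag_within_limit"}, "bag_accepted"),
]

# Keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

# bag_accepted never fires because bag_within_limit was never asserted.
print(sorted(facts))
```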
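
And for the Analogizers, a Support Vector Machine is a few lines in scikit-learn. The toy two-class dataset here is, again, invented purely for illustration:

```python
# Minimal Support Vector Machine sketch using scikit-learn.
# Requires: pip install scikit-learn
from sklearn import svm

# Toy 2D dataset: two classes roughly separated along both axes.
X = [[0.0, 0.0], [0.2, 0.3], [1.0, 1.0], [0.9, 0.8]]
y = [0, 0, 1, 1]

# Fit a linear-kernel SVM and classify a new point.
classifier = svm.SVC(kernel="linear")
classifier.fit(X, y)
print(classifier.predict([[0.8, 0.9]]))  # expected: [1]
```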

To really turn all of this technology, research, and technique into AI applications, what is required is a runtime that manages the long-running processes and long-term memory of all of these components and systems. Running millions of these processes in parallel, keeping them fault tolerant, interacting with current and long-term memory, retraining and updating models, and using AI itself to manage the logic and sequences of the application is what NeuroAGI does. The implementation of the runtime and cognition architecture is called Mathison. The lab that created it over the last decade or so is Mathison Labs. We have a select few early users in production. To find out if there is alignment between your AI concept and how we might bring it to life, book an Alignment Call For Founders to discuss.