As he watched the eyeless face with the jaw moving rapidly up and down, Winston had a curious feeling that this was not a human being but some kind of dummy. It was not the man’s brain that was speaking, it was his larynx. The stuff that was coming out of him consisted of words, but it was not speech in the true sense: it was a noise uttered in unconsciousness, like the quacking of a duck.
George Orwell, 1984 (Penguin Classics, 2021).
Anyone who has read George Orwell’s 1984 might remember this passage, or any of the other references to such mindless quacking. Orwell wrote about the importance of maintaining the connection of the human brain to individual (and collective) thinking, acting, and speaking, and warned of the dire consequences that would come if humans allowed their brains to disconnect. If ever there were a time to heed Orwell’s warning, it is now.
In November 2022, a new duck descended onto the world scene: ChatGPT, an AI-powered chatbot developed by OpenAI and driven by a large language model (LLM), a form of machine learning (ML). (See OpenAI, “Introducing ChatGPT”.) The launch promised that the chatbot “interacts in a conversational way” and can “answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate questions.” (See id.) The seemingly so-cool bot has since made talking about AI fashionable among non-techies (and even lawyers). With AI on (and manipulating) the brAIn, I figured this would be a good time to start writing about it in a way that non-techies (and lawyers) can appreciate and understand.
Though artificial intelligence has been lurking under the surface of much of what we do for years—think, for example, Siri, Alexa, Grammarly, chatbots, your Roomba—many, if not most, ordinary people still do not know much about it. The United States Congress, too, is struggling to understand the role of AI in society and legislation. Even the Supreme Court is confounded by these issues. It was only last month during oral arguments in the case Gonzalez v. Google LLC that several Supreme Court justices expressly stated that they were confused by the arguments before the Court concerning algorithms and the internet. Thus, regardless of education level, many people find AI and algorithms befuddling. And only several months ago, before ChatGPT became all the rage, they also found the subject boring.
But AI and algorithms have very serious implications for society, business, and law. It is therefore time for average people, businesspersons, and lawyers (gasp) to begin understanding at least the basic principles of artificial intelligence. Otherwise, how can we engage in (rather than be confused by) the conversation, debate, and hype?
To understand and follow AI, you must have a relevant working vocabulary. Below, I provide a few key terms with understandable definitions, explanations, and references: a starting point for non-tech people in business and law to learn the common vocabulary and begin engaging in discussions and decision-making about AI.
Key Vocabulary and Concepts
Artificial Intelligence. In describing artificial intelligence (AI), IBM writes: “At its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving.” (See What is Artificial Intelligence?, IBM.) IBM’s definition continues: “It also encompasses sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines are comprised of AI algorithms which seek to create expert systems which make predictions or classifications based on input data.” (See id.) Significantly, this definition references both machine problem-solving, the “simple form” of AI, and the sub-fields that have been pushing the boundaries of AI for the past several years, such as machine learning and deep learning, described below.
Algorithm. According to TechTarget, “[a]n algorithm is a procedure used for solving a problem or performing a computation. Algorithms act as an exact list of instructions that conduct specified actions step by step in either hardware- or software-based routines.” (See techtarget.com/whatis/definition/algorithm.) This definition is easy to understand and visualize; it lets you think of an algorithm like a recipe. When the recipe is well written and followed exactly, the dish turns out as anticipated. When the recipe is missing ingredients, is printed out of order, or includes the wrong temperatures and measurements, the dish is prone to fail. In other words, the exactness of the recipe, and of the algorithm, directly determines the outcome.
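For readers who want to see a “recipe” written as actual code, here is a minimal sketch (in Python, with made-up numbers) of an algorithm in the TechTarget sense: an exact list of instructions that, followed step by step, always produces the same result for the same inputs.

```python
# A toy illustration of the "algorithm as recipe" idea: an exact,
# step-by-step procedure that yields the same result every time it is
# given the same inputs. (Illustrative only; not from the cited source.)

def average(values):
    """Step 1: add up the ingredients; Step 2: divide by the count."""
    total = 0
    for v in values:       # follow the recipe exactly, one step at a time
        total += v
    return total / len(values)

print(average([70, 80, 90]))  # → 80.0
```

Change a step, skip a step, or reorder the steps, and (as with the recipe) the result is no longer what the cook intended.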
Machine Learning. “Machine learning is a subfield of artificial intelligence, which is broadly defined as the capability of a machine to imitate intelligent human behavior. Artificial intelligence systems are used to perform complex tasks in a way that is similar to how humans solve problems.” (See Machine Learning Explained, Sara Brown, MIT Sloan School of Management, April 21, 2021.) The operative phrase in this definition is “imitate intelligent human behavior.” Though machine learning (ML) can make the machine appear to be thinking, reasoning, and using human-like judgment, it is not. Rather, the machine is typically describing, predicting, or prescribing outcomes based upon its interpretation of large data sets. (See id.) For more information about AI generally and its future implications, see Artificial Intelligence and the Future of Work.
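To make the “predicting outcomes from data” point concrete, here is a minimal, hypothetical sketch in Python. The program is never told the rule; it estimates one (a straight line, fit by ordinary least squares) from example data and then uses it to predict. In this toy case, that estimation is all the “learning” amounts to.

```python
# A minimal sketch of "learning" from data: the rule is not programmed
# in; it is estimated from examples. All numbers are hypothetical.

xs = [1, 2, 3, 4]          # inputs (say, pages in a contract)
ys = [2, 4, 6, 8]          # observed outcomes (say, hours of review)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# ordinary least-squares fit of a line y = slope * x + intercept
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# "predict" the outcome for an input the machine has never seen
print(slope * 5 + intercept)  # → 10.0
```

Real ML systems use vastly larger data sets and far more complex models, but the character of the exercise is the same: a statistical imitation of judgment, not judgment itself.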
Neural Networks. IBM explains: “Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another.” (See What are Neural Networks, IBM.) Diagrams of neural networks in action depict how the machine takes in particular data/information, passes it through layers of neural networks working with the inputs (sometimes referred to as a black box), and produces an outcome. (See id.) The point here is that humans do not control or teach the machine algorithms as the inputs pass through the neural networks (and often do not even understand the process). Rather, the machine learns on its own; hence the term “machine learning.” Much of the concern about bias and explainability in AI centers on neural networks, machine learning, and deep learning. It goes without saying that machines working with garbage input will produce garbage output.
Deep Learning. “Deep learning is a subset of machine learning, which is essentially a neural network with three or more layers. These neural networks attempt to simulate the behavior of the human brain—albeit far from matching its ability—allowing it to “learn” from large amounts of data. While a neural network with a single layer can still make approximate predictions, additional hidden layers can help to optimize and refine for accuracy.” (See What is deep learning?, IBM; see also What is Artificial Intelligence?, IBM.) In case you missed it, deep learning (DL) “attempts to simulate the behavior of the human brain” to “learn” from large data sets. But just as I noted in the section on ML, this subset of ML is still only a facsimile of intelligent human behavior.
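For the curious, here is a bare-bones Python sketch of data passing through stacked layers, the structure the IBM passages describe. The weights below are invented for illustration; in a real network they would be learned from data, not written by hand, which is precisely why the middle of the process can feel like a black box.

```python
# A bare-bones sketch of input data flowing through stacked "layers."
# All weights here are made up; in a real network they are learned.

def layer(inputs, weights, bias):
    # each layer: weighted sum of its inputs, then a simple
    # nonlinearity (ReLU: negative totals become zero)
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)

x = [0.5, 0.2]                    # input data
h1 = layer(x, [0.4, 0.6], 0.1)    # hidden layer 1
h2 = layer([h1], [0.9], -0.05)    # hidden layer 2
out = layer([h2], [1.5], 0.0)     # output layer
print(out)                        # the network's "answer"
```

Deep learning stacks many such layers (three or more, per the IBM definition), with each layer transforming the previous layer’s output before the final result emerges.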
Natural Language Processing. “Natural language processing is a field of machine learning in which machines learn to understand natural language as spoken and written by humans, instead of the data and numbers normally used to program computers.” (See Machine Learning Explained, Sara Brown, MIT Sloan School of Management, April 21, 2021; see also What is natural language processing (NLP)?, IBM.) Natural language processing (NLP) “allows machines to recognize language, understand it, and respond to it, as well as create new text and translate between languages. Natural language processing enables familiar technology like chatbots and digital assistants like Siri or Alexa.” (See Machine Learning Explained, Sara Brown, MIT Sloan School of Management, April 21, 2021.) NLP is what drives language-based AI programs. Until recently, it has been somewhat underwhelming for general application to certain fields (like law).
Large Language Models. Techopedia explains: “A large language model (LLM) is a type of machine learning model that can perform a variety of natural language processing (NLP) tasks, including generating and classifying text, answering questions in a conversational manner and translating text from one language to another.” (See Large Language Model, Techopedia.) The operative word in the term is “large,” which “refers to the number of values (parameters) the model can change autonomously as it learns. Some of the most successful LLMs have hundreds of billions of parameters.” (See id.)
Generative AI. “Generative AI is a type of artificial intelligence technology that can produce various types of content including text, imagery, audio and synthetic data.” (See What is Generative AI? Everything You Need to Know, by George Lawton.) Lawton’s article provides a more detailed explanation of how generative AI evolved and how it works. The key takeaway for this article is that generative AI generates content that looks and sounds impressive to humans. It is the AI that drives ChatGPT and a slew of other new technologies coming to market at record pace. Generative AI, unlike traditional NLP, is not underwhelming but overwhelmingly catching on (even in law).
Generative AI v. AI. “Generative AI produces new content, chat responses, designs, synthetic data or deep fakes. Traditional AI has focused on detecting patterns, making decisions, honing analytics, classifying data and detecting fraud.” (See What is Generative AI? Everything You Need to Know, by George Lawton.)
Weak/Narrow AI. The terms “weak” or “narrow” AI have been used to describe the current state of artificial intelligence. As explained by IBM, however, the terms may be misnomers in that they suggest that the machine is not strong or powerful, which is not necessarily the case:
Weak AI—also called Narrow AI or Artificial Narrow Intelligence (ANI)—is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. ‘Narrow’ might be a more accurate descriptor for this type of AI as it is anything but weak; it enables some very robust applications, such as Apple’s Siri, Amazon’s Alexa, IBM Watson, and autonomous vehicles. (See What is Artificial Intelligence?, IBM.)
The development of AI today has laid a path of narrow-AI bricks that, when combined and extended, likely leads to general AI, described below (a/k/a Nirvana (?), heaven (?), hell (?), human extinction (?)). Where this AI path takes us, in my view, will depend on how quickly and thoughtfully leaders, lawyers, and general citizens get involved in directing the future of AI.
Strong/General AI. I turn again to IBM to describe strong/general AI:
Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have an intelligence equaled to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI)—also known as superintelligence—would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn’t mean AI researchers aren’t also exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey. (See What is Artificial Intelligence?, IBM.)
The key to general AI is that a machine with general AI would not merely be mimicking the intelligence, consciousness, and judgment of humans; it would actually possess those traits (and, in the case of superintelligence, surpass them).
Why Talk About Orwell and AI Now?
AI is rapidly becoming ubiquitous in business and society, and generative AI technology will only hasten its prevalence. Now is the critical time for leaders and citizens alike to understand the machines that are guiding us (literally). Very recently, MIT professor Aleksander Mądry spoke before Congress, urging lawmakers to get involved in shaping the future of AI rather than leave it up to Big Tech, testifying:
We are at an inflection point in terms of what future AI will bring. Seizing this opportunity means discussing the role of AI, what exactly we want it to do for us, and how to ensure it benefits us all. This will be a difficult conversation but we do need to have it, and have it now[.]
(See MIT professor to Congress: “We are at an inflection point” with AI, MIT Washington Office, March 10, 2023.) I could not agree more.
I love reading George Orwell and learning about (and using) AI, but both require effort, thinking, and conscious intent. Like the quacking speaker in 1984, today’s artificial intelligence speaks not with a brain but with a larynx. It produces “noise uttered in unconsciousness”—at least for now. We are still the humans in the room with voices connected to our brains—at least for now. It is time to use them to guide the conversation.
In December 2021, Shannon Boettjer, Esq. successfully completed the course Artificial Intelligence: Implications for Business Strategy through the Massachusetts Institute of Technology (MIT) in conjunction with MIT Sloan School of Management and MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Shannon is passionate about fostering partnership with machines in business and law and about empowering leaders and general citizens to engage in shaping the future of artificial intelligence in business, law, and society.
 The purpose of this article is not to provide a comprehensive, technical explanation of artificial intelligence. Rather, it is meant to be a guide for business and legal professionals to become familiar with AI.
 This overly simplified description is intended to highlight the general process of machine learning. For a more detailed and technical explanation, see the referenced articles.
Though I would argue that AI has been exciting and yet overwhelmingly underused by the vast majority of attorneys (particularly in eDiscovery), in part because law schools and law firms alike have not yet made technological competence/literacy a core part of legal training for every law student and practicing attorney.
While this article was being reviewed for print, Elon Musk and others signed an open letter seeking a six-month pause on giant AI experiments. (See Pause Giant AI Experiments: An Open Letter.) Responding in large part to GPT, the Open Letter asserts: “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” (See id.) Like Professor Mądry, the signatories to the Open Letter opine that Congress and policymakers must be involved in the process, stating: “AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.” (See id.)