An AI Primer for Businesses

Written by

Emily Friedman

November 13, 2025

What is Artificial Intelligence (AI)? 

AI is a branch of computer science concerned with training machines to mimic human intelligence in order to perform tasks ranging from simple perception to complex problem solving and reasoning. AI systems learn to simulate human cognitive functions by consuming and analyzing large amounts of data, looking for patterns, and creating rules or algorithms to inform decisions. 

You can think of AI as the broader concept or overarching field, under which there is machine learning, generative AI, agentic AI, etc.

Enterprise AI is the application of AI within a company - across a range of functions (operations, customer service, sales and marketing, cybersecurity, etc.) - to solve complex problems, improve decision making, automate (routine) tasks, optimize processes, and drive innovation through new products and services. 

Current State of AI in Enterprise 

Companies are eager to harness AI for immediate gains in efficiency, agility and innovation (and to replace labor). We’re seeing great interest and rapid experimentation but not necessarily positive outcomes. Generative AI has arguably had the greatest mainstream success. 

A recent MIT study provides a sobering reality check, finding that 95% of enterprise AI projects fail to deliver measurable impact. The majority of AI initiatives aren’t translating into operational or revenue gains because they’re disconnected from systems, processes, and the physical reality of business. 

“Powered by AI” is also at risk of becoming a buzz phrase: it seems every product and service is now “powered by AI.”

Fields/Subfields of AI

There’s machine learning (ML), deep learning (DL), computer vision (CV), natural language processing (NLP), generative AI, conversational AI, embodied AI, spatial AI (think Niantic Spatial), contextual AI, agentic AI, and more—often overlapping. Some familiar examples of AI include voice assistants like Alexa and Siri, predictive text, facial recognition, and self-driving cars. 

All AI in use today is considered narrow or weak AI: Current AI systems are trained for a single or limited set of specific tasks. In these tasks, they often outperform humans in speed and efficiency, but they’re still specialized. The opposite would be strong AI, a theoretical form of AI capable of performing any intellectual task that a human can do.

Machine learning: A subset of AI involving training algorithms on large datasets to find patterns and make predictions or decisions. Machine learning encompasses a broad array of models, ranging from basic ones like decision trees to more complex ones like neural networks, allowing systems to learn from data without being explicitly programmed to do so. 

While all ML is a form of AI, not all artificial intelligence is machine learning. ML is just one subfield of AI (along with NLP, robotics, etc.). Consider this: when your credit card is flagged for a large purchase in a foreign country, that’s AI. The fraud detection system flags transactions meeting specific criteria. A system that learns to detect fraud by analyzing historical transaction data—that’s machine learning. 
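The credit-card contrast above can be sketched in a few lines of code. This is a minimal illustration, not a real fraud system: the transactions, thresholds, and country rule are all invented, and the "learning" is a brute-force one-dimensional classifier standing in for a real model like a decision tree.

```python
# Rule-based "AI": a human picked the threshold and country rule.
def rule_based_flag(amount, country):
    return amount > 1000 or country != "US"

# "Machine learning": derive the threshold from labeled history instead.
def learn_threshold(history):
    """Pick the amount threshold that best separates fraud from normal
    transactions in the labeled history (a brute-force 1-D classifier)."""
    best_threshold, best_correct = 0, -1
    for threshold in sorted(amount for amount, _ in history):
        correct = sum(
            (amount > threshold) == is_fraud for amount, is_fraud in history
        )
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

# Invented labeled historical transactions: (amount, was_fraud)
history = [(12, False), (40, False), (85, False), (950, True), (2200, True)]

threshold = learn_threshold(history)
print(rule_based_flag(2500, "US"))  # the hand-written rule flags this
print(2500 > threshold)             # so does the learned rule
```

The difference is where the rule comes from: in the first function a person wrote it; in the second, the system derived it from data.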

Deep learning is a subset of ML that uses artificial neural networks (ANNs) inspired by the structure of the human brain to learn hierarchical features and complex patterns from large, often unstructured datasets. (Traditional ML, by contrast, typically relies on structured, labeled data.) ‘Deep’ refers to the layers of the neural network - like layers of data processing - that enable the system to perform complex regression (prediction) and classification tasks with minimal human intervention. Each layer extracts progressively higher-level features from the raw data. Deep learning powers most modern AI applications, including object and speech recognition and self-driving cars.  
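The "layers" idea can be sketched concretely: data flows through stacked layers, each applying weighted sums and a nonlinearity. The weights below are invented for illustration; a real network learns them from data via backpropagation, which is omitted here.

```python
def relu(x):
    """A common nonlinearity: pass positives through, zero out negatives."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer: a weighted sum plus nonlinearity per neuron."""
    return [
        relu(sum(w * x for w, x in zip(neuron_weights, inputs)) + b)
        for neuron_weights, b in zip(weights, biases)
    ]

# A tiny network: 2 inputs -> 3 hidden units -> 1 output.
x = [0.5, -1.2]
hidden = layer(x, weights=[[0.4, -0.6], [0.1, 0.9], [-0.3, 0.2]],
               biases=[0.0, 0.1, 0.05])
output = layer(hidden, weights=[[0.7, -0.2, 0.5]], biases=[0.0])
print(output)
```

Stacking more such layers is what makes a network "deep"; each layer's outputs become the next layer's inputs, which is how progressively higher-level features emerge.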

Large Language Models (LLMs): Deep learning models trained on massive amounts of (text) data to understand and generate human-like text. Compared to traditional NLP models, LLMs can handle more complex and nuanced language tasks like creative writing (generative AI) and open-ended conversation (conversational AI), and can be adapted to specific tasks through fine-tuning or prompt engineering. Examples include GPTs (generative pre-trained transformers), translation apps, and code generators. As DL models, LLMs are built on a type of neural network architecture called a transformer and are trained with self-supervised learning. 
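The core idea behind language models is predicting the next word from what came before. Here a toy bigram count stands in for a transformer trained on massive text; the corpus is invented for illustration, and real LLMs predict over subword tokens with billions of learned parameters rather than raw counts.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice; "mat"/"fish" once
```

Scaled up enormously, with context windows far longer than one word, this predict-the-next-token objective is what self-supervised training optimizes: the text itself supplies the labels.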

Natural Language Processing (NLP): LLMs are a specific type of advanced model within NLP, the branch of AI focused on enabling machines to process and understand human language. Traditional NLP models often use rule-based systems or simpler machine learning approaches trained on smaller, domain-specific datasets, and thus excel at specific, focused tasks like text classification, named entity recognition, and basic sentiment analysis. Examples include spam filters, speech recognition, translation tools, and auto-correct. 
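A spam filter of the simple, focused kind described above can be sketched with word counts from a tiny labeled corpus. The messages are invented for illustration; production filters use far larger corpora and probabilistic models such as naive Bayes, but the shape is the same: classify text by which class its words resemble.

```python
from collections import Counter

# Invented labeled training messages.
spam_examples = ["win a free prize now", "free money click now"]
ham_examples = ["meeting moved to friday", "lunch at noon"]

def word_counts(messages):
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_counts = word_counts(spam_examples)
ham_counts = word_counts(ham_examples)

def is_spam(message):
    """Classify by whether the message's words look more like spam than ham."""
    words = message.split()
    spam_score = sum(spam_counts[w] for w in words)
    ham_score = sum(ham_counts[w] for w in words)
    return spam_score > ham_score

print(is_spam("claim your free prize"))  # True
print(is_spam("see you at lunch"))       # False
```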

Computer vision: Another branch of AI, this one focused on enabling machines to “see” - interpret and understand - visual information like images and videos. Like NLP, CV uses ML and DL to process and analyze visual data to perform tasks like object recognition, image segmentation, and scene understanding. Applications include facial recognition, medical image analysis, self-driving cars, scene reconstruction, and augmented reality. 

Conversational AI: A field of AI that uses natural language processing, machine learning and other AI technologies to enable human-like conversation with machines through text or voice. Conversational AI allows machines to understand, interpret and respond to human queries or commands in an intuitive, natural way. It works by processing user input, analyzing the input to determine intent, and providing a relevant response. Conversational AI learns from each interaction to improve over time. Examples include virtual assistants like Alexa and Siri and chatbots for customer support. 
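The three-step pipeline just described - process input, determine intent, return a relevant response - can be sketched as follows. A keyword lookup stands in for a trained intent model, and the intents and replies are invented for illustration; real systems also learn from interactions, which is omitted here.

```python
import string

# Invented intents: keywords that signal them, and a canned reply for each.
INTENTS = {
    "hours": (["open", "hours", "close"], "We're open 9am-5pm, Mon-Fri."),
    "returns": (["return", "refund"], "Returns are accepted within 30 days."),
}

def detect_intent(user_input):
    """Normalize the input and match it to an intent by keyword overlap."""
    cleaned = user_input.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())
    for intent, (keywords, _) in INTENTS.items():
        if words & set(keywords):
            return intent
    return None

def respond(user_input):
    intent = detect_intent(user_input)
    if intent is None:
        return "Sorry, I didn't understand. Could you rephrase?"
    return INTENTS[intent][1]

print(respond("What are your hours?"))
print(respond("How do I return an item?"))
```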

Generative AI: A subset of ML that uses DL methods (LLMs, transformers, diffusion models) to generate new, original content such as text, images, audio, video, and even code in response to user prompts. It differs from traditional ML by focusing on generation rather than prediction or classification. Gen AI models generate new data that mimics the characteristics and patterns of their training data. A model like ChatGPT, for instance, is initially trained on a vast and diverse collection of text and other written content from books, articles, websites, public forums, and other sources. The algorithms are further refined through human feedback and real user interactions.

Physical and Embodied AI: The terms appear to be used interchangeably, though physical AI could be seen as a broader category encompassing any AI with a physical presence. Embodied AI is then a subtype of physical AI referring to AI that interacts with the physical world and learns through experience in the real world. Embodied AI combines ML, CV, and NLP with robotics, sensors, and actuators to form systems that can perceive, reason, and act in physical environments. Another way to think of it: Embodied AI makes decisions resulting in physical actions, such as a robot moving an object or a car steering to avoid a pedestrian. Examples include self-driving cars, autonomous robots, and virtual agents that can navigate and act within a space. 

That brings us to AI agents and agentic AI.

AI agents: ‘AI agent’ and ‘agentic AI’ are often used interchangeably. When it comes to AI agents, in particular, sources disagree on level of complexity, degree of autonomy, and ability to learn. Some describe AI agents as little more than basic chatbots - designed to perform specific tasks within defined parameters - while others assign them more advanced capabilities like the ability to use/integrate with tools like databases and search engines. 

Let’s go with: An AI agent is a software system or application that uses AI to complete tasks on behalf of users. AI agents require a human input or trigger and operate based on rules or algorithms; they can’t think autonomously or adapt but excel in precision and speed at their specific tasks. An example would be a chatbot that answers customer FAQs or a virtual assistant that responds to voice commands. Siri, for instance, is an example of deep learning, conversational AI, and also an AI agent. She can set a reminder, even consult the internet or an app, but cannot, say, book a doctor’s appointment for you. 

Agentic AI: A field of AI focused on creating systems that can autonomously plan and execute complex, multi-step tasks. Agentic AI systems are characterized by the ability to adapt, learn from experience, and use reasoning. Whereas AI agents are task-specific and traditional AI focuses on recognizing patterns, agentic AI is goal-oriented and can handle more complex, multi-step workflows with minimal oversight. Use cases include enterprise automation (e.g. streamlining IT workflows), project management (a system that autonomously assigns and tracks tasks), supply chain optimization (adjusting logistics and inventory in real time), and cybersecurity (detecting and actively responding to security threats).
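The contrast between a task-specific agent and an agentic system can be sketched side by side. Everything here is invented for illustration: the commands, the plan, and the execution are stubs, where a real agentic system would plan with an LLM and act through external tools and APIs.

```python
def simple_agent(command):
    """Task-specific AI agent: one trigger, one fixed rule, one action."""
    if command == "set_reminder":
        return "Reminder set."
    return "Unknown command."

def agentic_loop(goal, plan_fn, execute_fn):
    """Goal-oriented loop: decompose the goal into steps, execute each,
    and record outcomes, stopping to replan if a step fails."""
    steps = plan_fn(goal)
    log = []
    for step in steps:
        result = execute_fn(step)  # in reality: call a tool, API, or service
        log.append((step, result))
        if result == "failed":
            break
    return log

# Invented plan/execute stubs for a hypothetical appointment-booking goal.
def plan(goal):
    return ["find open slot", "confirm with user", "book slot"]

def execute(step):
    return "done"

print(simple_agent("set_reminder"))
print(agentic_loop("book doctor appointment", plan, execute))
```

The simple agent maps one input to one output and stops; the agentic loop owns a goal across multiple steps and reacts to what happens along the way, which is the distinction the paragraphs above draw.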

Some refer to agentic AI as the next generation of intelligent systems capable of coordinating multiple AI agents to solve complex problems. Whatever the case, the line between an AI agent and agentic AI is blurry. 

Next Frontier for AI 

Agentic AI is arguably what has been overhyped. More realistic, and closer at hand, is what many call spatial or spatially aware AI. AI may understand text and images, but it doesn’t understand the real world. As a recent Forbes article put it, generative AI cannot move a warehouse robot, coordinate a drone fleet, or train a digital twin to anticipate the next bottleneck that will take down a supply chain. Niantic Spatial’s Tom Gewecke explains that AI cannot move from an advisory role to an operational one without spatial intelligence.

Further Reading