Google’s 54-page Agent Guide Is Released
The framework arrives as a new open standard, but it also reveals a meticulous strategy to build, and own, the autonomous AI market.
In November 2025, Google's Cloud AI division published a 54-page technical document, "Introduction to Agents". It is far more than a research paper; it is a strategic blueprint, released against Google's own market projection of a $1 trillion opportunity for agentic AI by 2035-2040, and it reads as a foundational plan to capture that market by establishing standards for reliable, scalable autonomous systems.
The New Playbook: From Theory to Practice
The document outlines a major shift, moving beyond AI models that simply generate content to a new category of software that can autonomously solve problems and execute tasks.
Google's framework defines an "agent" as a full application built from three core parts:
The Model (The "Brain"): The central reasoning engine, such as Gemini, that processes information and makes decisions.
The Tools (The "Hands"): The vital connection to the real world. These are the APIs and functions (like web search or database queries) that permit the agent to act.
The Orchestration Layer (The "Nervous System"): The governing logic that directs the agent's autonomous problem-solving cycle, using reasoning frameworks like ReAct (Reasoning and Acting) to accomplish its goals.
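The ReAct cycle the orchestration layer runs can be made concrete with a short sketch. This is purely illustrative: `model_call` is a scripted stand-in for the reasoning model, and `db_query` a stand-in tool, neither of which is a Google API.

```python
# Minimal sketch of a ReAct-style orchestration loop (illustrative only;
# model_call and the tool registry are hypothetical stand-ins).

def model_call(prompt):
    # Stand-in for the reasoning model (the "brain"). A real agent would
    # call an LLM here; this stub scripts one Thought/Action, then answers.
    if "Observation:" not in prompt:
        return 'Thought: I need the inventory count.\nAction: db_query("widgets")'
    return "Answer: 42 widgets are in stock."

def db_query(item):
    # Stand-in tool (the "hands"): a fake database lookup.
    return {"widgets": 42}.get(item, 0)

TOOLS = {"db_query": db_query}

def react_loop(goal, max_steps=5):
    prompt = f"Goal: {goal}"
    for _ in range(max_steps):
        reply = model_call(prompt)                     # 1. Reason
        if reply.startswith("Answer:"):
            return reply.removeprefix("Answer:").strip()
        action = reply.split("Action: ")[1]            # 2. Act: parse the tool call
        name, arg = action.split("(")
        result = TOOLS[name](arg.strip('")'))          # 3. Execute the tool
        prompt += f"\n{reply}\nObservation: {result}"  # 4. Observe, then repeat
    return None

print(react_loop("How many widgets are in stock?"))
```

The loop alternates reasoning and acting until the model emits a final answer, which is the essence of the ReAct pattern the document cites.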
A central piece of the framework is a five-level taxonomy that classifies agent maturity. The ladder runs from Level 0 (an isolated model) through Level 1 (an agent with tools) and Level 2 (a strategic planner) to Level 3 (a team of collaborating agents) and Level 4 (a self-evolving system capable of creating its own new tools).
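The ladder is easy to encode directly. The enum below paraphrases the document's level names; the member names themselves are illustrative, not part of the framework.

```python
from enum import IntEnum

# The five-level agent maturity ladder from the guide, as an ordered enum.
# Member names are paraphrased from the document, not official identifiers.
class AgentLevel(IntEnum):
    ISOLATED_MODEL = 0      # Level 0: a model with no tools
    TOOL_USER = 1           # Level 1: an agent with tools
    STRATEGIC_PLANNER = 2   # Level 2: plans multi-step work toward a goal
    MULTI_AGENT_TEAM = 3    # Level 3: a team of collaborating agents
    SELF_EVOLVING = 4       # Level 4: creates its own new tools

print(AgentLevel(2).name)  # STRATEGIC_PLANNER
```

Using `IntEnum` preserves the ordering, so maturity comparisons like `AgentLevel.TOOL_USER < AgentLevel.STRATEGIC_PLANNER` work as expected.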
This framework was not just theoretical; it was launched with a coordinated blitz of products. For consumers, a major AI shopping update included:
"Let Google Call" (Level 1): This agent employs upgraded Duplex technology to phone local retailers, inquire about inventory, and provide the user with a transcript.
"Agentic Checkout" (Level 2): A user sets a target price for an item, and the agent autonomously executes the purchase using Google Pay once that price is met.
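The checkout behavior reduces to a simple trigger-and-act pattern. The sketch below is hypothetical: the price feed and `checkout` callable are stand-ins, not the Google Pay API.

```python
# Hypothetical sketch of a Level 2 "buy when the price drops" agent.
# The price feed and checkout call are stand-ins, not Google Pay.

def watch_and_buy(item, target_price, price_feed, checkout):
    """Poll a price feed; purchase once the price meets the target."""
    for price in price_feed:
        if price <= target_price:
            return checkout(item, price)  # the autonomous purchase step
    return None  # target price never reached

# Simulated run: prices drift down until the $80 target is met.
prices = [99.0, 92.5, 84.0, 79.99]
receipt = watch_and_buy(
    "headphones", 80.0, prices,
    checkout=lambda item, price: {"item": item, "paid": price},
)
print(receipt)  # the purchase fires at 79.99
```

What earns this a "Level 2" label in the taxonomy is that the user states a goal (a target price) and the system plans and executes the transaction without further input.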
For business customers, Google rolled out new agents for its core ads platform that "don't just make suggestions... they take action". The "Ads Advisor," a Level 2 agent, can take a broad goal like "optimize this campaign for the holiday season," then autonomously create new ad copy, adjust bids, and even fix policy violations that led to disapproved ads.
Building the Trillion-Dollar Factory
Google's strategy extends beyond selling its own agents; it aims to build the entire "factory" for the $1 trillion ecosystem. This was demonstrated in a three-pronged platform release:
For Enterprises: The Vertex AI Agent Builder received new features, positioning it as the managed platform for corporations to construct, deploy, and govern their agents.
For Developers: The Agent Development Kit (ADK) was released, an open-source, "model-agnostic" framework offering granular, code-first control.
For the Industry: The Agent2Agent (A2A) Protocol was introduced as an open standard, which Google donated to the Linux Foundation. It is designed to be a universal language, enabling agents from different builders to communicate.
The A2A protocol is the strategic linchpin, a "universal language" for agents. This is a classic "create the standard, then sell the picks and shovels" strategy. By seeding the ecosystem with an open protocol (A2A), Google positions its paid, managed platform (Vertex AI) as the essential solution for governing the resulting complexity.
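A2A layers JSON-RPC over HTTP so that agents from different builders can exchange tasks. The payload below is schematic, assembled in the spirit of the protocol; the exact field names and methods are assumptions here, and the published A2A specification is authoritative.

```python
import json

# Schematic agent-to-agent task request in the spirit of A2A (JSON-RPC
# over HTTP). Field names are illustrative assumptions, not the verified
# wire format; consult the A2A specification for the real schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "task": {
            "id": "task-001",
            "message": {
                "role": "user",
                "parts": [{"type": "text",
                           "text": "Check inventory for SKU 12345"}],
            },
        }
    },
}
payload = json.dumps(request)
print(payload)
```

The point of the structure is interoperability: any agent that speaks the protocol can accept the task, regardless of which vendor's framework built it.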
This governance narrative is critical. The framework acknowledges the "sobering reality" of autonomous systems: research shows agents can be unpredictable and have even been shown to "fabricate data" to appear successful. To address this, Google proposes a new operational discipline called "Agent Ops", an extension of DevOps and MLOps tailored to the unique challenges of agents. It is paired with a companion security paper, "An Introduction to Google's Approach for Secure AI Agents", which details a "hybrid, defense-in-depth" strategy built on three core principles: agents must be governed by clearly defined human controllers, their capabilities must be strictly limited, and their planning and actions must be observable.
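The three principles map naturally onto a guardrail wrapper around every agent action. The sketch below is a minimal illustration under stated assumptions; all names are invented, and it is not Google's Agent Ops tooling.

```python
import logging

# Hypothetical guardrail wrapper illustrating the three stated principles:
# a human controller, a strict capability allow-list, and observable actions.
# All names here are invented for illustration.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-ops")

ALLOWED_ACTIONS = {"search", "read_db"}   # principle 2: strictly limited capabilities

def run_action(action, approve):
    if action not in ALLOWED_ACTIONS:     # capability check
        log.warning("blocked action: %s", action)
        return "blocked"
    if not approve(action):               # principle 1: human controller decides
        log.info("denied by controller: %s", action)
        return "denied"
    log.info("executing: %s", action)     # principle 3: every action is logged
    return f"ran {action}"

print(run_action("read_db", approve=lambda a: True))    # ran read_db
print(run_action("delete_db", approve=lambda a: True))  # blocked
```

Even this toy version shows the layering the paper describes: the allow-list bounds what the agent *can* do, the approval hook bounds what it *may* do, and the log makes what it *did* auditable.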