Is the continued decline of AI Agents caused by the recently popular MCP protocol?

This article is approximately 1,334 words, and reading the entire article takes about 2 minutes.
Manus + MCP is the key to the blow that web3 AI Agents have suffered this time.

Original author: Haotian (X: @tmel0211)

Some friends say that the continued decline of web3 AI Agent tokens such as #ai16z and $arc was caused by the recently popular MCP protocol. At first I was a little confused: what does one have to do with the other? But after thinking it over carefully, I found there is a certain logic to it: the valuation and pricing logic of existing web3 AI Agents has changed, and their narrative direction and product roadmap urgently need adjustment. Below are my personal views:

1) MCP (Model Context Protocol) is an open-source, standardized protocol designed to let all kinds of AI LLMs/Agents connect seamlessly to all kinds of data sources and tools. It works like a plug-and-play universal USB interface, replacing the previous approach of building a bespoke, end-to-end integration for each pairing.

Simply put, AI applications today are obvious data silos: for Agents/LLMs to interoperate, each pair needs its own purpose-built API integration. Not only is the process complicated, but bidirectional interaction is missing, and model access and permissions are usually tightly restricted.

The emergence of MCP provides a unified framework that lets AI applications break out of their former data silos and gain dynamic access to external data and tools. It significantly reduces development complexity and improves integration efficiency, especially for automated task execution, real-time data queries, and cross-platform collaboration.
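To make the "universal USB interface" analogy concrete, here is a minimal sketch of exposing a tool through MCP using the official `mcp` Python SDK. The `get_token_price` tool, its symbols, and its prices are hypothetical placeholders, not part of the protocol itself:

```python
# Minimal MCP server sketch: any MCP-capable LLM/Agent client can
# discover and call this tool without a bespoke API integration.
# Requires the official Python SDK:  pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-data-source")

@mcp.tool()
def get_token_price(symbol: str) -> float:
    """Return the latest price for a token symbol (hypothetical stub)."""
    # A real server would query an exchange or an on-chain oracle here;
    # these hard-coded values exist only to make the sketch runnable.
    prices = {"BTC": 84000.0, "ETH": 1900.0}
    return prices.get(symbol.upper(), 0.0)

if __name__ == "__main__":
    # Serves over stdio; client and server negotiate capabilities
    # via MCP's JSON-RPC handshake.
    mcp.run()
```

Any MCP client (an Agent, an IDE, a chat app) can now list and call this tool through the same standard interface, which is exactly the silo-breaking property described above.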

At this point, many people immediately had the thought: if Manus, with its breakthrough in multi-agent collaboration, is combined with the open-source MCP framework that facilitates exactly that collaboration, wouldn't the result be unbeatable?

That's right: Manus + MCP is the key to the blow that web3 AI Agents have suffered this time.

2) However, the remarkable thing is that both Manus and MCP are frameworks and protocol standards built for web2 LLMs/Agents. They solve the problem of data interaction and collaboration between centralized servers, and their permissions and access control depend on each server node voluntarily opening up. In other words, they are merely open-source tools.

Logically speaking, this runs completely counter to the core ideas of web3 AI Agents: distributed servers, distributed collaboration, distributed incentives, and so on. How could a centralized "Italian cannon" blow up a decentralized bunker?

The reason is that the first phase of web3 AI Agents was too web2-ified. On the one hand, many teams come from web2 backgrounds and lack a full understanding of web3-native needs. ElizaOS, for example, was originally a packaging framework to help developers deploy AI Agent applications quickly: it integrates platforms such as Twitter and Discord, wraps API interfaces such as OpenAI, Claude, and DeepSeek, and provides generic Memory and Character frameworks so developers can build and ship AI Agent applications fast. But honestly, what is the difference between this service framework and a web2 open-source tool? What is the differentiated advantage?

Is the advantage, then, a set of Tokenomics incentives? Using a framework that web2 can completely replace to incentivize a group of AI Agents that mainly exist to issue new tokens? Frightening, if you think about it. Following this logic, you can roughly see why Manus + MCP could hit web3 AI Agents so hard.

Since many web3 AI Agent frameworks and services only address the same quick-development needs as web2 AI Agents, yet cannot keep up with web2's pace of innovation in technical services, standards, and differentiated advantages, the market/capital has re-evaluated and repriced the previous batch of web3 AI Agents.

3) At this point the crux of the problem has been found, but how to break the deadlock? There is only one way: focus on web3-native solutions, because the operation and incentive architecture of distributed systems is web3's absolute differentiated advantage.

Take distributed cloud compute, data, and algorithm service platforms as an example. On the surface, compute and data aggregated from idle resources cannot meet the needs of engineering innovation in the short term; and while large numbers of AI LLMs are locked in an arms race of performance breakthroughs on centralized compute, a service model whose selling points are "idle resources, low cost" will naturally be disdained by web2 developers and VC teams.

However, once web2 AI Agents move past the stage of competing on raw performance, they will inevitably pursue directions such as expansion into vertical application scenarios and fine-tuned optimization of specialized models. Only then will the advantages of web3 AI resource services truly show.

In fact, once web2 AI has climbed to giant status by monopolizing resources, at a certain stage it becomes hard for it to turn back and take the "encircle the cities from the countryside" approach, breaking through niche scenarios one by one. That will be the moment when surplus web2 AI developers and web3 AI resources join forces.

In fact, beyond web2-style quick deployment, multi-agent collaboration and communication frameworks, and the Tokenomics token-issuance narrative, web3 AI Agents have many web3-native directions worth exploring:

For example, a distributed consensus and collaboration framework is needed, one that accounts for the combination of off-chain LLM computation and on-chain state storage, and therefore requires many purpose-built components (rough, illustrative sketches of each direction follow the list):

1. A decentralized DID authentication system, which gives an Agent a verifiable on-chain identity, analogous to the unique address a virtual machine generates for a smart contract, mainly so that its subsequent state can be continuously tracked and recorded;

2. A decentralized oracle system, mainly responsible for the trusted acquisition and verification of off-chain data. Unlike previous oracles, this AI-Agent-adapted oracle may need a layered multi-agent architecture, combining a data-collection layer, a decision/consensus layer, and an execution/feedback layer, so that an Agent's on-chain data and off-chain computation and decisions remain accessible in real time;

3. A decentralized storage (DA) system. Since an AI Agent's knowledge-base state is nondeterministic at runtime and its reasoning process is ephemeral, a system is needed to record the key state snapshots and reasoning paths behind the LLM, store them in distributed storage, and provide a cost-controlled data-availability proof mechanism for public-chain verification;

4. A zero-knowledge-proof (ZKP) privacy-computing layer, which can be combined with privacy-computing solutions such as TEE and FHE to achieve real-time private computation plus data-proof verification, giving Agents access to a wider range of vertical data sources (medical, financial), on top of which more professional, customized service Agents can emerge;

5. A set of cross-chain interoperability protocols, somewhat similar to the framework defined by the open-source MCP protocol. The difference is that this interoperability solution needs a relay and communication-scheduling mechanism adapted to how Agents run, transmit, and verify, able to carry out asset transfer and state synchronization for Agents across different chains, especially complex state such as Agent context and prompts, knowledge bases, and memory.
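For direction 1, here is a minimal sketch of deriving a stable, verifiable identity for an Agent, loosely mirroring how chains derive addresses from key material. The `did:agent:` method name is a hypothetical convention, and SHA-256 stands in for the Keccak-256 and ECDSA key derivation a real chain would use:

```python
# Sketch: derive a stable agent identity from a keypair, analogous to
# how a smart contract gets a unique address on deployment.
import hashlib
import secrets

def new_agent_did() -> tuple[bytes, str]:
    """Generate a private key and a DID-style identifier for an agent."""
    private_key = secrets.token_bytes(32)          # stays off-chain, secret
    # Real chains derive a public key via secp256k1; hashing the private
    # key here is purely a stand-in for that derivation.
    pseudo_pubkey = hashlib.sha256(private_key).digest()
    address = hashlib.sha256(pseudo_pubkey).hexdigest()[-40:]  # 20 bytes
    return private_key, f"did:agent:0x{address}"

sk, did = new_agent_did()
print(did)  # e.g. did:agent:0x3fa9...  -- a trackable on-chain identity
```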
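For direction 2, a toy version of the three-layer oracle pipeline: collection agents fetch values, a consensus layer aggregates them under a quorum rule, and a feedback layer records the accepted result. The mock sources, the median rule, and the in-memory log are all illustrative simplifications of what would really be independent agents and on-chain writes:

```python
# Three-layer oracle sketch: collection -> consensus -> feedback.
from statistics import median

def collection_layer(sources) -> list[float]:
    # Each "source" is a callable standing in for one collector agent.
    return [fetch() for fetch in sources]

def consensus_layer(reports: list[float], quorum: int) -> float:
    if len(reports) < quorum:
        raise ValueError("not enough collector reports for quorum")
    return median(reports)  # median resists individual outliers/liars

def feedback_layer(value: float, log: list[float]) -> None:
    log.append(value)  # stand-in for writing the result back on-chain

onchain_log: list[float] = []
sources = [lambda: 100.1, lambda: 99.8, lambda: 100.3]  # mock collectors
feedback_layer(consensus_layer(collection_layer(sources), quorum=3),
               onchain_log)
print(onchain_log)  # [100.1]
```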
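For direction 3, one plausible shape (an assumption, not a specific project's design) is to commit an Agent's reasoning path to a Merkle root: the full trace lives in cheap distributed storage, while only the 32-byte root goes on-chain for availability checks:

```python
# Sketch: commit an agent's reasoning path to a Merkle root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    nodes = [h(leaf) for leaf in leaves]
    while len(nodes) > 1:
        if len(nodes) % 2:                 # duplicate last node if odd
            nodes.append(nodes[-1])
        nodes = [h(nodes[i] + nodes[i + 1])
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

reasoning_path = [b"step1: parse user intent",
                  b"step2: query price oracle",
                  b"step3: decide to rebalance"]
print(merkle_root(reasoning_path).hex())   # the on-chain commitment
```

Individual steps can later be proven against the root with standard Merkle proofs, which is what makes the storage cost-controlled: verification never needs the whole trace.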
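Direction 4 is the hardest to sketch faithfully: a real implementation would use a proving system such as Groth16/PLONK or a TEE attestation. The toy below shows only the commit/verify interface shape, with a salted hash commitment standing in for an actual zero-knowledge proof; unlike a real ZKP, it hides the data but proves nothing about the computation:

```python
# Toy commit/verify interface for a privacy layer. NOT a real ZKP:
# a salted hash hides the sensitive input, and "verification" here
# means re-checking the commitment after a permitted reveal.
import hashlib
import secrets

def commit(private_data: bytes) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)
    return salt, hashlib.sha256(salt + private_data).digest()

def verify(salt: bytes, private_data: bytes, commitment: bytes) -> bool:
    return hashlib.sha256(salt + private_data).digest() == commitment

salt, c = commit(b"patient_record: glucose=5.4")   # e.g. medical data
print(verify(salt, b"patient_record: glucose=5.4", c))  # True
```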
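Finally, for direction 5, a sketch of packaging an Agent's context and memory into a relay message whose digest the destination chain can verify before resuming the Agent. The chain IDs, field names, and pointer formats are illustrative assumptions only:

```python
# Sketch: an agent-state relay message for cross-chain migration.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentStateMessage:
    source_chain: str
    dest_chain: str
    agent_did: str
    context: dict          # prompt, memory, knowledge-base pointers

    def digest(self) -> str:
        # Canonical serialization so every relayer computes the same hash.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

msg = AgentStateMessage(
    source_chain="chain-A", dest_chain="chain-B",
    agent_did="did:agent:0xabc...",
    context={"prompt": "...", "memory": ["traded ETH"],
             "kb": "ipfs://..."},
)
print(msg.digest())   # relayers and the destination chain verify this
```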

In my opinion, the real key to web3 AI Agents is making the complex workflow of an AI Agent fit the trust-verification flow of the blockchain as closely as possible. These incremental solutions may come from existing projects in older narratives upgrading and iterating, or be built fresh by new projects on the AI Agent narrative track.

This is the direction web3 AI Agents should strive to build toward, and it is consistent with the fundamentals of an innovation ecosystem under the macro narrative of AI + Crypto. Without such innovation, and without differentiated competitive moats, every tremor in the web2 AI track may turn web3 AI upside down.


This article is a contributed submission and does not represent Odaily's position. If reprinted, please indicate the source.

ODAILY reminds readers to maintain a sound view of money and investment, to regard blockchain rationally, and to raise their awareness of risk, and encourages readers to report any clues of illegal or criminal activity to the relevant authorities.
