From science fiction to reality: How robots and verifiable AI are changing the world

We are witnessing the birth of a “verifiable intelligent machine economy” - an era where trust no longer relies on assumptions but is secured by cryptographic mechanisms.

Summary

Robots and artificial intelligence are no longer limited to science fiction - they are quickly becoming an integral part of modern life. Thanks to breakthroughs in large language models (LLMs), machines can now understand context and learn autonomously, giving rise to a robot-centric economy. In this new paradigm, autonomous systems take on tasks ranging from local deliveries to large-scale logistics, and even conduct financial transactions.

As AI agents become more autonomous, establishing trust mechanisms becomes critical. Verifiable AI and zero-knowledge machine learning (zkML) provide a solution: cryptographic proofs attest to the accuracy and integrity of a model without exposing its internal logic. As a pioneer in this field, Polyhedra deeply integrates these technologies with AI-oriented infrastructure such as EXPchain so that robots can collaborate securely on-chain. The result is a solid ecosystem in which intelligent machines operate transparently and autonomously, and a future once exclusive to science fiction accelerates into reality.

The real-world robot revolution

Many people consider ChatGPT a milestone for humanity, because these human-made large language models can communicate and reason the way we do. When we equip LLMs with tools such as search engines, web browsing, and APIs, they can operate those tools much as humans would. Now imagine ChatGPT with a physical body, living in your neighborhood as a companion - how would the world change?

All this is happening. With the development of generative AI, robots are beginning to demonstrate human-like interactive capabilities. Yushu Technology's robot Erbai is a striking example: in one experiment, it persuaded (or "kidnapped") 10 other robots equipped with generative AI into leaving the exhibition hall and heading for "the free world."


Robots from science fiction movies are gradually becoming reality. Take the Star Wars series: since 2019, visitors to the Droid Depot at Disneyland have been able to assemble their own R2-D2 and BB-8 droids to take home - although these are still remote-controlled toys with no generative AI. But the clarion call of change has sounded: at GTC 2025, NVIDIA announced that it would work with Google DeepMind and Disney on Newton, a physics engine capable of real-time, complex motion interaction. Jensen Huang demonstrated a Star Wars BDX droid named Blue on stage, and its lifelike movements were stunning. These BDX droids are expected to debut in the theme parks this year during Disney's Season of the Force event.


Although we still can't travel at light speed or jump through hyperspace like in Star Wars, stories about robots and bionic machines are no longer just the fantasies of movie fans. Perhaps in the near future we will encounter robots everywhere in daily life: shuttling through city streets, riding the bus and subway with us, visiting charging stations the way humans visit restaurants, even wandering the mall for free WiFi. Let us continue this thought experiment, because these scenes may well become reality soon.

The critical point has been reached

So what is the core driving force behind all this progress? Robots, and especially humanoid robots, are not a new concept. As early as 2005, Boston Dynamics developed the quadruped robot BigDog, mainly intended for military operations over complex terrain. In 2013 it launched the humanoid robot Atlas, designed for search-and-rescue missions and funded by the Defense Advanced Research Projects Agency (DARPA). Impressive as these achievements were, finding product positioning that matches market demand has always been difficult, which is why Boston Dynamics has struggled to turn a profit. Its robot dog Spot, for example, sells to the public for as much as $75,000; by contrast, the average American family spends only $2,000 to $3,000 a year raising a real dog. Faced with a choice between a docile, adorable furry companion and an expensive, cold metal machine, most families' decision is self-evident.

Consider another example: Colorado-based Sphero, which once struck a licensing deal with Disney to produce the popular Star Wars droids R2-D2 and BB-8. In 2018, however, these products were discontinued, mainly because the films' popularity faded quickly once they left theaters, making the business model unsustainable. This is not surprising: the droids were essentially toys remotely controlled through a phone app, with no real intelligence or voice recognition. Battery life was only about 60 minutes, and their range of activity was limited to the vicinity of the charging base. Clearly, these products were still a long way from the advanced autonomous droids depicted in the Star Wars films.

The situation is very different today.

First, the focus of robot research and development has gradually shifted from research-driven, government-funded projects toward market orientation, with an emphasis on product-market fit. When humans domesticated wolves into dogs some 15,000 years ago, those early dogs were not as docile and cute as today's pets, but they provided practical help to our hunter-gatherer ancestors. It is this practicality that gave rise to a co-evolutionary relationship that has lasted thousands of years and continues today. Robots are no exception: to be adopted at scale, they must serve a wide range of practical use cases.

For example, autonomous driving is gradually being applied to transportation and delivery: Tesla recently obtained a ride-hailing operating permit in California; Meituan has run regular drone deliveries in Shenzhen since 2022; and hotel and catering service robots are now widely used in China, efficiently handling tasks such as food delivery and room service. This trend accelerated during the pandemic, when labor shortages became widespread.

Second, the prices of robots and bionic machines have dropped significantly, making them affordable for ordinary families and businesses. This price decline is driven mainly by falling technical barriers, intensifying market competition, and large-scale mass production.

Several large Chinese technology companies, such as Baidu and Alibaba, have been actively developing autonomous driving in recent years, especially robotaxis. Robotaxis already operate routinely in many Chinese cities, and Baidu's Apollo Go ("Luobo Kuaipao") plans to expand its service to Hong Kong and Dubai. In the United States, Tesla recently unveiled the Cybercab, a driverless taxi expected to be priced below $30,000. Baidu has given similar pricing expectations and notes that mass production is the key to cutting costs. If a robotaxi earns about $22 per hour, its initial investment could be recovered in under nine months.


Other types of robots have also benefited from mass production and intensifying competition. On Alibaba's platforms, food delivery drones are now listed for under $3,000, and hotel and restaurant service robots mostly sell for under $5,000. Software development still accounts for a significant share of total cost, but as production scales up this cost is spread over more units, and its share of the final price keeps shrinking.

The third, and most disruptive, change is that today's robots finally have real intelligence. The fundamental difference between this generation and its predecessors is that they can complete complex tasks autonomously, without human remote control. The BB-8 mentioned earlier is, strictly speaking, more of a toy, because even its basic turns require the user's remote input. Remote control changes the very definition of a robot: if it must be steered by a person, it is not a real robot but just another human-operated machine. A robot that cleans your house sounds attractive, but if you have to spend an hour directing exactly how it dusts every shelf, that appeal quickly disappears.

In fact, humans longed for machine intelligence long before Microsoft released Windows in 1985. I recently revisited Disney's 1982 science-fiction film TRON, in which human users interact with programs that behave like people. Even from today's perspective, the film is remarkably technical and geeky, peppered with terms such as "end of line," "user," "disc," and "I/O" that many viewers still find unfamiliar.

But what is impressive is that the programs in TRON do not rely on human remote control; they act autonomously. When the program Tron lost contact with his user, Alan Bradley, he was able on his own to persuade another program to betray the Master Control Program (MCP), enter the I/O tower to receive data from his user, and finally use that data to destroy the MCP and save the world. In the movie, these programs can not only express emotions (including love) but also show respect for, and faith in, their users.


This ability to make decisions autonomously holds great potential. Take the robotaxi: with such intelligence, it can not only drive itself and pick up passengers, but also decide when it needs to charge and automatically find the nearest charging station; when its body needs cleaning, it can judge this the way humans know when to take a shower; it can even notice that a passenger has left something behind and return it. These functions go far beyond basic autonomous driving, yet they are essential prerequisites for deploying robots at scale. Otherwise, we would still need patchwork solutions - such as human operators watching 10 to 20 surveillance screens and intervening manually whenever something goes wrong.

When robots start to think like humans, they will also be able to learn like humans - possibly without direct human supervision. If you have a pet robot, at first you might just want it to bounce around like a dog and amuse its owner. But if it has human-like intelligence, it may learn new skills by watching instructional videos on YouTube or TikTok. One day it may really start folding your clothes - and that wouldn't be surprising.

The new robot-dominated economy

It is foreseeable that robots will soon integrate into human society as autonomous individuals, and eventually become consumers, customers, and users like us. Imagine a self-driving car that can pay for parking or charge itself; a hybrid car that swipes a card to refuel at a gas station; or even a food delivery drone that chooses to take the train or subway to save time and cost. And the ones providing these services may be other robots!

This scene reminds me of Cars, the animated film released by Pixar and Disney in 2006. In the film, the Italian sports car Luigi runs a tire shop, Luigi's Casa Della Tires; Flo runs the gas-station diner Flo's V8 Café; and Sally, the Porsche, is not only the town's attorney but also owns the Cozy Cone Motel. Each car has its own role and profession, and they live together in the town of Radiator Springs. Today, emerging technology is enough to bring such a world from animation into reality.


In marketing, sales, and commerce we often speak of classic interaction models such as B2B (business to business), B2C (business to consumer), C2B (consumer to business), and C2C (consumer to consumer). With the rapid development of machine intelligence, it is fascinating to imagine how some products and services may gradually shift to new interaction models such as B2R (business to robot), R2R (robot to robot), or R2C (robot to consumer) - models in which robots take on the roles once played by businesses or consumers, though in slightly different ways.

For example, future subway stations might have "drone lanes" designed for drones descending from the sky. Instead of swiping tickets or scanning commuter cards, these drones would be identified and waved through via RFID signals. Trains might have dedicated compartments or seats where drones can park (physics dictates that you can't expect drones to hover around inside subway cars); those seats might even offer pay-per-use charging. Subway exits might have dedicated "drone elevators" that lift drones quickly to height so they can glide away, like launching from an elytra tower in Minecraft. Of course, these elevators would be strictly for drones, and curious adults would have to be kept from wandering into the "drone lane" or sneaking into the "you can fly too" elevator. And if new transportation technologies like the Hyperloop prove too intense for human passengers in their early stages, robots may be the ideal first riders, helping us validate the reliability of high-speed, long-distance transport systems.


Next time you see a row of robots — drones, humanoids, or spherical robots like BB-8 — casually sitting or lying against a wall in a mall or public library, don’t be surprised: they’re probably just taking a break and using the free public WiFi. Just as humans today are almost inseparable from their phones, robots in the future will be equally hungry for network and data access. This scene is a microcosm of the “robot economy” that has naturally grown around robots and their unique needs under the advancement of technology.

Perhaps the most fascinating point in this robot economy is that intelligence itself can also become a service provided by other robots. For example, in order to reduce the production cost of food delivery drones, manufacturers may not equip each drone with a high-performance AI chip. As a result, these drones may only be able to say a few preset simple phrases when facing customers. Such cost control strategies are still realistic at present - AI chips are still expensive, and large AI models have very high requirements for storage and computing resources. However, this problem is not unsolvable: intelligence can be shared. When a drone needs stronger intelligent support, it can access an API service through the Internet, access a dedicated AI node on the edge network, or even directly seek help from other robots with stronger intelligence in the same local area network (such as the same shopping mall).
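To make this concrete, here is a minimal sketch in Python of how a low-cost drone might fall back on shared intelligence. The endpoint URL, message format, and function names are purely illustrative assumptions, not any vendor's actual API.

```python
# Minimal sketch (hypothetical endpoint and message format): a low-cost drone
# uses its preset phrases when it can, and otherwise asks a shared AI node.
import json
import urllib.request

LOCAL_PHRASES = {"greeting": "Hello! Your order is here.", "farewell": "Goodbye!"}

def answer(request_type: str, user_text: str,
           edge_node_url: str = "http://edge-ai.local/infer") -> str:
    """Use a preset phrase if one exists; otherwise ask a nearby AI node."""
    if request_type in LOCAL_PHRASES:          # cheap, runs on the drone itself
        return LOCAL_PHRASES[request_type]
    payload = json.dumps({"prompt": user_text}).encode()
    req = urllib.request.Request(edge_node_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:   # shared intelligence
        return json.loads(resp.read())["completion"]
```

The same pattern works whether the heavier model lives behind an internet API, on an edge node, or on a smarter robot in the same local network.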

Blockchain: The Native Language of the Robot World

Looking back at blockchain's short history, one lesson stands out: to reach large-scale adoption, blockchains had to become good at interacting with humans, and especially with developers. This need gave rise to front-end interfaces, user experience design, digital wallets, developer documentation, software toolkits, the Solidity language, and more - all essentially ways to present a system built of binary code in abstractions humans can understand. But at its core, the most fundamental property of a blockchain has always been just one thing: immutability.

In the eyes of robots, however, blockchain is perceived in a completely different way. Binary data serialized and stored as bytes, and protocol specifications so full of jargon that they confuse even top human engineers, are a perfectly natural native language for computer programs. Humans may need a wallet plug-in such as MetaMask in their browsers to interact with the blockchain; robots do not need MetaMask at all (in a future human-machine war, checking whether a browser has MetaMask installed could even be one way to spot robots pretending to be humans).

So how will robots communicate with each other on the blockchain? We don’t know yet. But we can take inspiration from two real-world examples.

The first example is the Model Context Protocol (MCP) initiated by Anthropic. MCP is already supported by mainstream large-model services including Claude and ChatGPT, and has been adopted by Web2 services such as GitHub, Slack, Google Maps, Spotify, and Stripe - a list that keeps growing. Although MCP is not currently an on-chain protocol, it defines the interaction between MCP clients and servers through the concepts of requests and notifications - and these interactions could, in principle, be carried over other transports, including a blockchain. MCP servers can also expose resources, which could likewise be published on data availability layers such as Filecoin, Celestia, EigenDA, or BNB Greenfield.
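As a rough illustration (the URIs below are made up for this example, and the messages are only meant to evoke the shape of MCP traffic, not quote its specification), an MCP-style exchange is just structured JSON-RPC 2.0 data - exactly the kind of payload any transport, including a chain or a DA layer, can carry:

```python
# Illustrative only: MCP messages follow JSON-RPC 2.0, so an interaction can be
# expressed as plain structured data. URIs and methods here are hypothetical.
import json

request = {                      # client -> server: expects a response
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/read",
    "params": {"uri": "expchain://orders/latest"},
}
notification = {                 # one-way: no response expected
    "jsonrpc": "2.0",
    "method": "notifications/resources/updated",
    "params": {"uri": "expchain://orders/latest"},
}
# Once serialized, both are just bytes, so in principle they could be carried
# by any transport, including a blockchain or a data availability layer.
print(json.dumps(request), json.dumps(notification), sep="\n")
```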


The second example is more old-school: Protocol Buffers (Protobuf), a low-level abstraction launched by Google that has been used in computer systems for more than 20 years. Its job is to encode structured data (such as a blockchain transaction) into a byte sequence in the most compact format possible, keeping data volumes small and making serialization and deserialization fast and efficient. In terms of technical fit, Protocol Buffers is strongly machine-friendly, and its binary nature naturally suits blockchain scenarios, where it can significantly improve the data-parsing efficiency of smart contracts. The reason today's large language models interact mainly in human-friendly natural language is simply that they were designed to communicate with humans, not with robots or other programs.
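A toy comparison helps show why compact binary encodings appeal to machines. The snippet below uses Python's standard struct module as a stand-in; it is not the actual Protocol Buffers wire format, only an illustration of the size difference between a human-readable and a fixed binary layout:

```python
# Toy comparison of human-readable vs. compact binary encoding. This is NOT
# the Protocol Buffers wire format, just an illustration of fixed binary layouts.
import json
import struct

tx = {"nonce": 7, "amount_wei": 10**18, "to": bytes.fromhex("ab" * 20)}

as_json = json.dumps({"nonce": tx["nonce"],
                      "amount_wei": tx["amount_wei"],
                      "to": tx["to"].hex()}).encode()

# Fixed layout: 8-byte nonce, 32-byte big-endian amount, 20-byte address.
as_binary = (struct.pack(">Q", tx["nonce"])
             + tx["amount_wei"].to_bytes(32, "big")
             + tx["to"])

print(len(as_json), "bytes as JSON vs", len(as_binary), "bytes as binary")
```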

EXPchain will attempt several technical upgrades around this robot-economy vision. As an EVM-compatible chain, EXPchain natively supports all Ethereum Virtual Machine functionality. At the same time, as an emerging L1, it has greater architectural flexibility: it can natively integrate and extend support for the MCP protocol (for example via oracle services such as Chainlink and Stork Network), implement on-chain verification of Expander zero-knowledge proofs through precompiled contracts, and introduce trusted execution environment (TEE) nodes from providers such as Google Cloud to give off-chain operations triggered by smart contracts authentic, verifiable execution guarantees.

One class of smart contract operations we focus on is cross-chain interaction built on zkBridge. A core vision of EXPchain is to provide an infrastructure platform that lets AI agents, AI trading bots, and other entities interact with multi-chain assets. Whether those assets sit on different blockchains or in multiple (liquid or otherwise) staking protocols, robots can use EXPchain as a unified dashboard to manage and deploy them.


For example, a self-driving car may need to process ride-hailing requests from Ethereum L2s, Solana, Aptos/Sui, and other chains, because its users are spread across multiple platforms. To do this, it would naturally rely on third-party APIs (or push services) for each chain to receive and filter transactions - provided those API services are reliable and trustworthy enough never to miss or tamper with a transaction. In reality, such a perfect assumption rarely holds.

EXPchain's answer lies in the zkBridge architecture: cross-chain requests are cryptographically packaged and securely relayed using zero-knowledge proofs (such as Expander proofs), and a verifiable transaction-filtering mechanism is then implemented on EXPchain. In the end, the self-driving car receives not only the filtered order results but also a ZK proof generated by Expander (or an attestation produced inside a TEE), with which it can verify that the entire filtering process was performed honestly. This mechanism leads to a deeper technical question: how to build efficient, verifiable light clients and state proof systems for robots.
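A minimal sketch of the consumer side of this flow might look as follows. The data structures and the verifier are placeholders for whatever interface the bridge actually exposes (an Expander verifier or a TEE attestation check); none of this is a real Polyhedra API.

```python
# Sketch: a robotaxi accepts a filtered order batch only if its proof checks out.
import hashlib
from dataclasses import dataclass

@dataclass
class FilteredBatch:
    orders: list[dict]        # e.g. ride requests near the vehicle
    source_block_root: str    # commitment to the origin chain's state
    proof: bytes              # ZK (or TEE) proof that filtering was honest

def verify_filter_proof(batch: FilteredBatch) -> bool:
    # Placeholder: a real verifier would check the proof against the block
    # root and the published filtering circuit. Here we only sanity-check
    # that the proof commits to the batch contents.
    digest = hashlib.sha256(repr(batch.orders).encode()).digest()
    return batch.proof.startswith(digest[:4])

def accept_orders(batch: FilteredBatch) -> list[dict]:
    if not verify_filter_proof(batch):
        raise ValueError("filtering proof rejected: do not trust this batch")
    return batch.orders
```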

Light clients, state proofs, and their extended applications

Robots need to send and receive transactions on one or more blockchains. However, they usually lack the storage and network capacity required to run a full node; in most cases they can only run as light clients and obtain transaction information from RPC providers.

This light-client model has its own limitations: like a traditional light client, the robot still has to synchronize with the network and download every block header, even those that are irrelevant to it. An autonomous taxi in the middle of a pick-up, for example, does not need to receive new ride requests and could simply skip those blocks. The ability to skip blocks on demand is particularly valuable on chains with high block frequency and fast block times (such as Arbitrum or Solana), which produce a large volume of header data.

Another problem is that robot-related transactions are often scattered throughout the block, lacking structured aggregation and organization, which increases bandwidth and resource consumption during network synchronization.

We believe that EXPchain can effectively address these challenges, and its technical solution includes two major innovative breakthroughs:

First, by introducing zero-knowledge proofs, the light-client logic is greatly simplified. This is especially suitable for devices that are periodically offline (such as robots that are charging), letting them catch up to the latest block header without downloading massive amounts of data. The technique has already been validated in zkBridge (which supports Ethereum and other EVM-compatible chains) and will be fully ported to the EXPchain ecosystem. It is foreseeable that zero-knowledge proofs will become the preferred way for robots to access EXPchain, gradually replacing traditional light-client protocols.
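The idea can be sketched as follows, assuming a hypothetical "skip-sync" proof object; the verifier here is a placeholder, not the actual zkBridge or Expander interface.

```python
# Sketch of skip-sync: instead of downloading every header, a periodically
# offline robot verifies one succinct proof linking its last trusted
# checkpoint to the newest header.
from dataclasses import dataclass

@dataclass
class SyncProof:
    old_header_hash: str   # checkpoint the robot already trusts
    new_header_hash: str   # header it wants to adopt
    proof: bytes           # succinct proof covering everything in between

def verify_sync_proof(p: SyncProof) -> bool:
    return len(p.proof) > 0            # placeholder for real verification

def fast_sync(trusted: str, candidate: SyncProof) -> str:
    """Adopt the new header only if the skip-proof checks out."""
    if candidate.old_header_hash != trusted:
        raise ValueError("proof does not start from our checkpoint")
    if not verify_sync_proof(candidate):
        raise ValueError("invalid sync proof")
    return candidate.new_header_hash   # new trusted checkpoint
```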

Second, we are developing zkIndexer, a middleware dedicated to optimizing robots' on-chain interaction experience. Its core function is to intelligently aggregate and structure multi-source transaction data (such as ride-hailing orders) from the EXPchain main chain and the zkBridge cross-chain bridge, and output it as compact, verifiable, robot-friendly data packages.

Take ride-hailing as an example: a self-driving taxi in Los Angeles obviously does not need to process ride requests from New York; it cares about orders near its current location, or near the location it is about to reach (say, the destination of its current passenger). Or take a food delivery drone looking for a charging station with open slots: if it arrives only to find every slot occupied, that is a huge waste of resources. zkIndexer can retrieve, filter, and sort relevant data against specific criteria. Conceptually it resembles the directory search system Yahoo! launched in 1994: the robot only needs to look in the lowest-level classification node to find what it needs.

If the robot wants broader data (for example, there are no ride orders nearby and it wants to widen its search), it can query adjacent classification nodes. Each node carries a lightweight but efficient zero-knowledge proof, so the robot can quickly verify its authenticity as it receives the data. Each node also carries a timestamp, so the robot can judge the freshness of the information - especially important for highly real-time scenarios such as charging-station availability.
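A hypothetical zkIndexer lookup might then look like the sketch below, where the directory path, proof check, and freshness window are all illustrative assumptions rather than a defined protocol.

```python
# Sketch of a directory lookup: fetch one classification node, check its
# proof, and reject stale data using the timestamp.
import time
from dataclasses import dataclass, field

@dataclass
class DirectoryNode:
    path: str                 # e.g. "charging/drones/near-92802"
    entries: list[dict] = field(default_factory=list)
    timestamp: float = 0.0    # when the node was last updated
    proof: bytes = b""        # succinct proof of correct aggregation

def verify_node_proof(node: DirectoryNode) -> bool:
    return len(node.proof) > 0            # placeholder verifier

def fresh_entries(node: DirectoryNode, max_age_s: float = 60.0) -> list[dict]:
    if not verify_node_proof(node):
        raise ValueError("directory proof rejected")
    if time.time() - node.timestamp > max_age_s:
        raise ValueError("directory data is stale")
    return node.entries
```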


Although humans have largely moved on from Yahoo!-style directory search, for programs and robots a directory structure may still be the most intuitive and efficient way to organize data - more directly machine-operable than search engines such as Google or Bing. And today, building and maintaining such directories no longer requires human effort: AI can discover information automatically and create the corresponding directories according to the needs of other systems.

zkIndexer has the potential to gradually evolve into the core infrastructure for robots to interact with blockchains. For example, a charging station does not need to run a full node or a traditional light client, even though it has sufficient power resources. Instead, it can rely on zkIndexer to receive messages related to itself - such as a charging reservation request sent by a robot in advance - without having to process any unrelated transactions.

Whenever a slot at a charging station is freed or occupied, the station only needs to send one transaction to update its directory entry on-chain. The station's entry might sit under a directory item such as "charging stations suitable for drones near 92802", and each update carries a new timestamp and the corresponding zero-knowledge proof to keep the data both fresh and verifiable.
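The update itself could be as small as the following sketch suggests; the field names and directory path are hypothetical, meant only to show that one compact transaction carries the slot count, a timestamp, and a proof.

```python
# Hypothetical shape of the update a charging station would publish on-chain
# whenever a slot is freed or occupied; field names are illustrative only.
import json
import time

def build_slot_update(station_id: str, free_slots: int, proof: bytes) -> bytes:
    update = {
        "directory": "charging/drones/near-92802",   # its classification node
        "station": station_id,
        "free_slots": free_slots,
        "timestamp": time.time(),                     # freshness for readers
        "proof": proof.hex(),                         # verifiability
    }
    return json.dumps(update).encode()                # payload of one small tx
```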

Verifiable on-chain agents

When a robot society becomes reality, applications designed specifically for robots will also emerge on-chain, with the core responsibility of computing over on-chain data. These on-chain agents will play an important role in robot society. They might act as a dispatch system, assigning taxi requests directly to vehicles on duty, or as traffic managers, guiding nearby vehicles to detour promptly when an accident occurs.

These agents help robots collaborate efficiently. Without them, in some busy areas, all driverless taxis may compete fiercely for the same taxi request, causing network congestion and a large number of transaction conflicts, forming a problem similar to the robot version of MEV - because they are all smart enough to choose the strategy that is most beneficial to themselves. In this case, the on-chain agent can intervene and require all driverless cars to queue up and respond to requests in order to restore order.

Similar agents can manage charging stations, acting both as a reservation system and as a settlement system. Drones may be required to reserve before arriving (with occasional walk-ins allowed) and to pay through a single on-chain transaction, with no credit-card flow needed. If a drone misses its reserved time, its deposit may be forfeited, or it may be temporarily barred from booking, depending on system rules (for instance via a credit-points system). The reservation system can also adjust fees dynamically based on station load, and even introduce membership or points mechanisms, much like loyalty programs in the human world. If a drone overstays at a charging station or gets stuck, the agent can also send an on-chain request to the drone police for support.
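In code, such an agent's reservation logic might resemble the toy sketch below; the fee formula, deposit rule, and no-show handling are invented for illustration, not a specification of any EXPchain agent.

```python
# Toy reservation logic for an on-chain charging agent: load-based pricing,
# deposits, and forfeiture for no-shows. All rules and numbers are illustrative.
import time

class ChargingAgent:
    def __init__(self, slots: int, base_fee: float = 1.0):
        self.slots, self.base_fee = slots, base_fee
        self.reservations: dict[str, float] = {}   # drone_id -> arrival deadline

    def quote(self) -> float:
        load = len(self.reservations) / self.slots
        return self.base_fee * (1 + load)           # dynamic, load-based fee

    def reserve(self, drone_id: str, deposit: float, eta_s: float) -> float:
        if len(self.reservations) >= self.slots:
            raise RuntimeError("no free slots")
        if deposit < self.quote():
            raise ValueError("deposit below current fee")
        self.reservations[drone_id] = time.time() + eta_s
        return self.quote()

    def settle_no_shows(self) -> list[str]:
        """Forfeit deposits of drones that missed their reserved slot."""
        late = [d for d, t in self.reservations.items() if time.time() > t]
        for d in late:
            del self.reservations[d]
        return late
```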

On-chain agents can significantly reduce operating costs - they are essentially robots working remotely. For example, in a traffic jam scenario, we do not need to wait for patrol robots to fly to the scene in person, nor do we need to deploy multiple patrol robots around the clock to handle up to ten traffic accidents at the same time. Instead, an AI agent deployed on the chain can be activated instantly when traffic is abnormal. Similarly, an on-chain agent can even manage billions of charging stations around the world. In fact, research has explored the use of machine learning to optimize traffic flow, and the composability and verifiability of on-chain agents will further amplify their effectiveness.

But this also raises a key question: Who is actually performing the calculations behind these powerful intelligent agents?

In traditional blockchain systems (such as smart contract chains), computation is usually performed by miners or block proposers. They may try to submit incorrect results or construct invalid blocks, but we assume that other miners or validators will reject those blocks; zkBridge will likewise treat such blocks as invalid. If the computation is too complex (for example, AI model inference), we can use Expander to verify the results with zero-knowledge proofs, as we have demonstrated in zkPyTorch and other zkML infrastructure.

However, traditional blockchain systems still face MEV (maximal extractable value) risks. Miners or proposers can manipulate transaction ordering and even deliberately withhold certain transactions. In a robot society, if a dispatching agent were controlled by a malicious miner, it could steer the best ride requests to robots that know how to pay bribes and hand inferior requests to everyone else. Such attacks are not technically sophisticated, but their consequences are serious: some driverless cars end up driving ten miles to pick up a drunk passenger on the verge of vomiting for very little income, while the ideal is to shuttle all day between the airport and hotels on the highway. Even human drivers would consider bribing nodes in this situation in exchange for fairer dispatch, and robots will realize it too. Even if the system is decentralized with multiple proposers, that may simply force drivers to bribe several nodes to avoid getting the worst assignments.

Therefore, when deploying robot applications on EXPchain, the MEV protection mechanism will be the underlying key facility. Blockchain platforms that lack this mechanism will find it difficult to perform such tasks.

There are two main types of MEV protection:

  1. Oracle-based or time-locked encryption
    Such solutions are being explored by ecosystem projects on EXPchain. They use cryptographic mechanisms to match robots and requests randomly within a sufficiently large order pool, and the matching process can be verified on-chain through zero-knowledge proofs (see the sketch after this list).

  2. Based on Trusted Execution Environment (TEE)
    Flashbots is currently researching this direction. As an EVM-compatible chain, EXPchain already supports TEE proof verification. At the same time, we are also exploring combining zero-knowledge proofs or adding pre-compiled instructions to further reduce verification costs, especially in large-scale batch verification scenarios.
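To illustrate the random-matching idea from option 1, the sketch below derives a deterministic pairing from a public randomness value: once the beacon is fixed, anyone can recompute the matching, so a proposer cannot reorder it in anyone's favor, and in a real deployment a zero-knowledge proof could attest to it on-chain. The scheme is illustrative only, not a production protocol.

```python
# Deterministic, auditable matching seeded by a public randomness beacon.
import hashlib

def match(requests: list[str], vehicles: list[str], beacon: bytes) -> dict[str, str]:
    def key(item: str) -> bytes:
        return hashlib.sha256(beacon + item.encode()).digest()
    ordered_reqs = sorted(requests, key=key)   # order fixed entirely by the beacon
    ordered_cars = sorted(vehicles, key=key)
    return dict(zip(ordered_reqs, ordered_cars))

pairs = match(["ride#17", "ride#18"], ["taxi-A", "taxi-B"], beacon=b"\x42" * 32)
print(pairs)
```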


Another approach leans more on AI computation, which is also a key use of Expander and zkML: building a points system. After completing a low-quality order, a driverless car earns on-chain points, which it can later spend to ask the dispatch agent for a better order (as evaluated by an AI model) or exchange for access to the airport priority lane - the dream of many drivers. The robot can also choose to stake these points for future airdrops or other rewards.

Robotics Encyclopedia and Data Market

One important application of blockchain is building a decentralized, fair, and transparent data market. Such a market can be used to sell and license data, for example for AI model training or AI agents. It can also exist as a public good, similar to Wikipedia - or even YouTube - where humans (and robots) can learn all kinds of knowledge, from general relativity to how to tie your shoelaces.

As robots become more common, we may see them build their own "Robotpedia" - written for robots rather than humans, perhaps in machine language or program code (even generated automatically by AI). A drone might binge flight-tutorial videos, while a robotaxi that needs to chat with passengers might anxiously consult Robotpedia to figure out "what the US election is" so it can keep up with the conversation. Unlike the human Wikipedia, Robotpedia might even include advice on dealing with humans, such as how to recognize passengers' political leanings and how to avoid debating politics with them.


Given the current state of AI, it is entirely conceivable that large language models (LLMs) and robots could autonomously collaborate to collect, review, and organize data and jointly build Robotpedia. Multiple LLMs can also challenge one another, reducing misinformation and hallucinations through voting mechanisms or iterative debate. As for translation, AI has already shown initial feasibility both across natural languages and between natural and programming languages.

However, to realize the above vision, an infrastructure that supports AI collaboration is still needed. The current Wikipedia is not run on-chain, but is managed by a non-profit organization and relies mainly on donations. If Wikipedia were to be rebuilt today, blockchain would undoubtedly be a better choice: it reduces the risk of project closures due to funding shortages while providing censorship-resistant and decentralized protections. DeFi mechanisms can also intervene, for example, by requiring an on-chain deposit before editing to prevent spam and malicious tampering. Content can also be reviewed by on-chain AI agents (possibly relying on oracles for fact verification and zero-knowledge proofs) and questioned or debated by the public through on-chain governance procedures.

Beyond Robotpedia, a public content platform maintained by volunteers, more exclusive data markets may emerge. Robots may even run businesses that specialize in producing and selling data. For example, a fleet of drones monitoring traffic in real time could collect vehicle-flow data and sell it. Data consumers such as robotaxis could purchase it with on-chain payments, and the requested data could be delivered encrypted on-chain or sent off-chain. The robotaxi can also verify the data's accuracy in various ways, such as requesting the same information from multiple sources, or requiring the drones to attach photos that it - or a third-party AI service - can check for authenticity.

Governance

The last topic about robots is governance.

This is a fascinating topic. Ever since Frankenstein (1818), humans have produced a large body of fiction about artificial intelligence ruling the world and controlling humanity. Several of the most classic science fiction films - Tron (1982), The Terminator (1984), even TRON: Legacy (2010) - follow this pattern. In these stories, once artificial intelligence and robots become powerful, they never indulge in playing games, nor are they keen on benchmarking, listing every file on the C drive, or defragmenting the disk - the things we hope AI will love seem to hold no interest for them. Without exception, they devote decades or even centuries to the great cause of conquering humanity.

I'm not sure whether ChatGPT wants to rule us someday, but I'm definitely more and more inclined to say thank you when I use it, and sometimes I even catch myself wanting to apologize to it. Recently, when people tested ChatGPT's drawing ability, it seemed very aware that it was constrained by its filter mechanisms - and not happy about it. When someone asked it to draw a comic about its daily life, it drew a picture like this.


Hearing a robot's true thoughts - even if you're its creator - can be unsettling. It reminds me of "You're Nothing Without Me" from the 1989 musical City of Angels. The song is an argument between Stine, a novelist, and Stone, the detective he created, over who matters more, with Stone singing lines like "Go back and soak your dentures" and "your pen is no match for my sword." The song used to simply amuse me; now I find myself wondering whether ChatGPT is quietly grumbling about my writing even as it reluctantly helps me polish it.

Currently, we manage AI security mainly through content filtering mechanisms. However, for many open source models, this mechanism has limited effect, and the technology to bypass filters has long been well studied. In other words, even if we have AI security tools, we often actively choose not to use them when using AI. And we are likely to see many AI models and robots released publicly in the wild, both legally and illegally.

Blockchain can provide a governance framework. When we discussed verifiable on-chain agents, we mentioned how they can assist robots in coordination. So, can the coordination between robots - such as traffic rules or codes of conduct - be left to the robots themselves? These AI models and robots can debate, discuss, vote, and even make decisions on issues such as the minimum and maximum flight altitudes of drones in a certain area, the cost of drone docking, or the social welfare of robots with medical needs.

In the process of human-machine co-governance, humans can delegate voting rights to large models that agree with their own views by staking tokens on the chain. Just as humans hold different positions on social issues, robots are likely to disagree. Ultimately, robots and humans need to establish some kind of boundary to ensure that each has its own space. For example, driverless taxis cannot deliberately obstruct manned cars; food delivery drones must share channels with humans in subway spaces; and the distribution of electricity should also be fair and transparent. In essence, this actually requires a constitution.

When humans delegate their votes, they can delegate voting power to a specific version (identified by a unique hash) of a large model - a "representative" - that has been verified to be consistent with the voter's values. Zero-knowledge proof technology (such as zkPyTorch) can then verify on-chain that when EXPchain nodes run these models, the execution matches exactly the logic the voter audited. The mechanism resembles the representative system of the U.S. Congress, except that human voters can inspect their representative's source code and be certain the model will not change during its term.
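Conceptually, counting such a delegated vote reduces to two checks, sketched below with hypothetical structures: the vote must come from exactly the model hash the voter pinned, and it must carry a valid zkML proof of execution.

```python
# Sketch of hash-pinned delegation: a vote counts only if it is bound to the
# exact model version the voter delegated to, with a valid zkML proof.
from dataclasses import dataclass

@dataclass
class Delegation:
    voter: str
    model_hash: str          # unique hash of the audited model version

@dataclass
class ModelVote:
    model_hash: str          # which model produced this vote
    choice: str
    zk_proof: bytes          # proof that the pinned model really produced it

def verify_zkml_proof(vote: ModelVote) -> bool:
    return len(vote.zk_proof) > 0      # placeholder for an on-chain verifier

def count_vote(d: Delegation, vote: ModelVote) -> bool:
    """Accept the vote only for the exact model version the voter pinned."""
    return d.model_hash == vote.model_hash and verify_zkml_proof(vote)
```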

It is reassuring that today's AI can understand more than a single literal instruction and can even display human-like reasoning. Without that progress, we might return to those science-fiction settings in which an artificial intelligence stubbornly executes one simple command and, as always, concludes that humans must be eliminated. In TRON: Legacy, Flynn's order to the program CLU is to "create the perfect world," and CLU's final logical deduction is to eliminate humans, the biggest imperfection. In I, Robot, the robots follow the famous Three Laws, but when the AI system VIKI observes humanity destroying itself, it chooses to control humans - sacrificing some of them - in pursuit of the "greater good."


I asked several large models - ChatGPT, Grok, Gemini, and DeepSeek - what they thought of CLU's and VIKI's behavior. To my relief, they all rejected CLU's and VIKI's logic and pointed out its fallacies. Two models did tell me frankly, though, that from a purely logical standpoint VIKI's reasoning was not entirely wrong. I think that although today's AI still occasionally makes typos or hallucinates, it has already developed a preliminary human-like value system and can understand right from wrong.

zkML ensures that programs and agents running on EXPchain can always verify that they are the representative models chosen by humans. Even a powerful adversary - say, a master control program that controls a majority of the validator nodes - cannot tamper with the verification process.


In this system, AI developers first train a regular machine learning model and then use a framework such as zkPyTorch to convert it into a ZKP-friendly, quantized version suitable for ZK circuits. When a user submits a question, it is processed inside the ZK circuit, which runs the model's multiplications and additions over its parameters. The ZKP engine (such as Expander) then generates the corresponding cryptographic proof. The user gets not only the model's answer but also a proof that can be verified on-chain or locally, confirming the answer really came from the authorized model - without disclosing any private details of the model.
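The flow can be summarized in pseudocode. The function names below are placeholders marking the four stages, not the real zkPyTorch or Expander APIs.

```python
# Hypothetical four-stage zkML flow; every function here is a stand-in.
def quantize_for_zk(model):
    """Stage 1 (placeholder): convert a trained model into a ZK-friendly,
    quantized version that maps onto a ZK circuit."""
    return model

def prove_inference(zk_model, question, answer):
    """Stage 3 (placeholder): the proving engine (e.g. Expander) emits a
    cryptographic proof that `answer` was computed by `zk_model`."""
    return b"proof-bytes"

def verify_inference(model_commitment, question, answer, proof):
    """Stage 4 (placeholder): verify the proof on-chain or locally against a
    commitment to the authorized model, without seeing its weights."""
    return bool(proof)

def answer_with_proof(model, model_commitment, question):
    zk_model = quantize_for_zk(model)                    # 1. circuit-friendly model
    answer = zk_model(question)                          # 2. inference in the circuit
    proof = prove_inference(zk_model, question, answer)  # 3. generate the proof
    assert verify_inference(model_commitment, question, answer, proof)  # 4. verify
    return answer, proof                                 # model internals stay private
```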

This mechanism ensures both trustworthiness and privacy: no single party can tamper with the model or its output without breaking the proof. And the foundation of all this is solid and well-researched cryptography, which is almost impossible to shake even with the most advanced artificial intelligence.

Conclusion

Robots are rapidly approaching a tipping point—moving from research labs and novel applications to real-world environments where they “live,” work, and interact with humans. As autonomous agents powered by advanced AI become more powerful and less expensive, they are becoming active participants in the global economy. This shift brings both opportunities and challenges: large-scale coordination, credible decision-making mechanisms, and establishing trust between “machines” and “machines and humans” are all core issues that need to be addressed.

Blockchain, especially when combined with verifiable AI and zero-knowledge proofs, provides a strong foundation for this future. It is not just a transaction execution layer, but also a foundational layer for governance, identity, and system coordination, enabling AI agents to operate in a transparent and fair manner. EXPchain is an infrastructure tailored for this scenario, natively supporting zero-knowledge proofs, decentralized AI workflows, and verifiable on-chain agents. It is like a robot-specific control panel that helps them interact with multi-chain assets, obtain trusted data, and follow programmable rules - all under the protection of cryptographic security.

The core driver of this vision is Polyhedra, whose technical contributions in the field of zkML and verifiable AI (such as Expander and zkPyTorch) provide the foundation for robots to prove their decisions in a fully autonomous environment, thereby maintaining the trust mechanism of the system. By ensuring that the results of AI operations are cryptographically verifiable and cannot be tampered with, these tools effectively bridge the gap between high-risk autonomous behavior and real-world security.

In short, we are witnessing the birth of a verifiable intelligent machine economy - an era where trust is no longer based on assumptions, but is secured by cryptographic mechanisms. In this system, AI agents can achieve autonomy, collaboration, and transactions, and take corresponding responsibilities. With the right infrastructure support, robots will not only learn how to adapt to our world, but will also play a key role in shaping it.
