Fluence is building AI infrastructure that centralized clouds cannot deliver: an open, low-cost, enterprise-grade compute layer that is sovereign, transparent, and accessible to everyone.
2025 has continued the trend set in 2024: the cloud giants are accelerating their race for dominance in AI infrastructure. Microsoft plans to invest more than $80 billion in data centers, Google has launched an AI supercomputer, Oracle is investing $25 billion in the Stargate AI cluster, and AWS is shifting its focus toward native AI services.
At the same time, specialized players are growing rapidly: CoreWeave raised $1.5 billion in its IPO in March 2025 and is currently valued at over $70 billion.
As AI becomes critical infrastructure, access to compute will be one of the defining battlegrounds of this era. Centralized giants are monopolizing compute by building their own data centers and vertically integrating down to the chip; Fluence proposes a different vision: a decentralized, open, and neutral AI computing platform. Fluence tokenizes compute, using FLT as an on-chain real-world asset (RWA) token, to meet the exponentially growing demands of AI.
Fluence collaborates with multiple decentralized infrastructure projects, including AI networks (Spheron, Aethir, IO.net) and storage networks (Filecoin, Arweave, Akave, IPFS), to jointly build a neutral compute-and-data base layer.
From 2025 to 2026, Fluence's technology roadmap focuses on the following core directions:
1. Building a Global GPU Computing Network
Fluence will bring GPU nodes online worldwide to supply the high-performance hardware AI workloads require, adding inference, fine-tuning, and model-serving capabilities to the network. This upgrades the current CPU-based computing platform into a truly AI-oriented compute layer. The platform will integrate a containerized runtime to keep workloads secure and portable.
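To make the shape of such a workload concrete, here is a minimal Python sketch of a containerized GPU job description; the `GpuJobSpec` fields, the image name, and the JSON submission format are illustrative assumptions, not a published Fluence schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class GpuJobSpec:
    """Hypothetical description of a containerized workload for a GPU node."""
    image: str        # OCI container image holding the model server
    gpu_model: str    # requested accelerator class
    gpu_count: int    # number of GPUs the task needs
    env: dict         # environment passed into the container
    command: list     # entrypoint override

spec = GpuJobSpec(
    image="ghcr.io/example/llm-server:latest",
    gpu_model="nvidia-h100",
    gpu_count=1,
    env={"MODEL_NAME": "llama-3-8b-instruct"},
    command=["python", "serve.py", "--port", "8000"],
)

# Serialize the spec as it might be submitted to a network job API.
print(json.dumps(asdict(spec), indent=2))
```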
In addition, Fluence will explore GPU confidential computing to support secure inference and execution over private data. With trusted execution environments (TEEs) and encrypted memory, sensitive business data can be processed even on a decentralized architecture, paving the way for sovereign AI agents.
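Below is a minimal sketch of the gate a confidential inference task implies: encrypted inputs are released only after a node proves it runs an approved image inside a TEE. The quote format is assumed, and the HMAC stands in for the hardware-rooted signature chain a real attestation would carry.

```python
import hashlib
import hmac

# Hash of the approved inference image (illustrative value).
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-inference-image-v1").hexdigest()

def verify_attestation(quote: dict, signing_key: bytes) -> bool:
    """Accept a node only if its attested measurement matches the approved
    image and the quote's signature verifies. A real TEE quote would be
    checked against a hardware vendor's certificate chain, not an HMAC."""
    expected_sig = hmac.new(
        signing_key, quote["measurement"].encode(), hashlib.sha256
    ).hexdigest()
    return (
        quote["measurement"] == EXPECTED_MEASUREMENT
        and hmac.compare_digest(quote["signature"], expected_sig)
    )

key = b"demo-key"
quote = {
    "measurement": EXPECTED_MEASUREMENT,
    "signature": hmac.new(key, EXPECTED_MEASUREMENT.encode(), hashlib.sha256).hexdigest(),
}
if verify_attestation(quote, key):
    print("attestation ok - safe to release encrypted inputs")
```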
Key milestones:
GPU node access program - Q3 2025
GPU container runtime launch - Q4 2025
GPU confidential computing R&D begins - Q4 2025
Confidential inference task pilot - Q2 2026
2. Managed AI Models and Unified Inference Interface
Fluence will provide one-click deployment templates covering mainstream open-source models (such as LLMs), orchestration frameworks such as LangChain, agent systems, and MCP servers, expanding the platform's AI stack. Deploying models will become more convenient, and community developers will be invited to contribute, strengthening the ecosystem.
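If the unified inference interface ends up exposing an OpenAI-compatible endpoint, a common convention for hosting open-source models that is assumed here rather than confirmed by the roadmap, querying a template-deployed model could look like this sketch; the URL, API key, and model name are placeholders.

```python
from openai import OpenAI  # pip install openai

# Placeholder endpoint and credentials - not a published Fluence API.
client = OpenAI(
    base_url="https://inference.example.fluence.dev/v1",
    api_key="YOUR_API_KEY",
)

# Route a chat request to a template-deployed open-source model.
reply = client.chat.completions.create(
    model="llama-3-8b-instruct",
    messages=[{"role": "user", "content": "Summarize what a cloudless VM is."}],
)
print(reply.choices[0].message.content)
```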
Key milestones:
Model + orchestration template launch - Q4 2025
Inference endpoints and routing system deployment - Q2 2026
3. Implementing a Verifiable Community-Driven SLA
Fluence is building a decentralized trust and service-guarantee mechanism by introducing Guardians. These participants, whether individuals or institutions, verify the availability of network compute and supervise the execution of service agreements through on-chain telemetry, earning FLT rewards in return.
Guardians can participate in infrastructure governance without any hardware investment, turning an enterprise-grade computing network into a public platform anyone can join. The mechanism will be paired with the Pointless Program, which rewards community activity and builds the track record needed to qualify as a Guardian.
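As a rough illustration of the telemetry side, here is a sketch of one availability probe a Guardian might run against a compute node; the health path, report fields, and unsigned output are assumptions, since the actual on-chain telemetry protocol is not specified here.

```python
import json
import time
import urllib.request

def probe(node_url: str, timeout: float = 5.0) -> dict:
    """One availability check against a compute node's assumed /health endpoint."""
    started = time.time()
    try:
        with urllib.request.urlopen(f"{node_url}/health", timeout=timeout) as resp:
            up = resp.status == 200
    except OSError:
        up = False
    return {
        "node": node_url,
        "up": up,
        "latency_ms": round((time.time() - started) * 1000),
        "checked_at": int(time.time()),
    }

# In the real mechanism the report would be signed and posted on-chain,
# with honest Guardians earning FLT; here it is just printed.
print(json.dumps(probe("https://node-42.example.net")))
```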
Key milestones:
First batch of Guardians launched - Q3 2025
Guardians fully deployed with live SLA agreements - Q4 2025
4. Integrating AI Compute with a Composable Data Stack
The future of AI is not just compute; it is compute plus data. Fluence is integrating deeply with decentralized storage networks (such as Filecoin, Arweave, Akave, IPFS), giving developers access to verifiable datasets that can be combined with GPU nodes to execute tasks.
Developers will be able to easily define AI jobs that access distributed data, run them in a GPU environment, and build a complete AI backend, with all tasks coordinated through FLT. The platform will also provide SDK modules and composable templates that connect storage buckets to on-chain data, suitable for building AI agents, LLM tools, or research applications.
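A minimal sketch of that pattern: fetch a content-addressed dataset through a public IPFS gateway and pair it with a GPU job description. The CID, container image, and job fields are placeholders rather than Fluence's actual SDK, but content addressing is what makes the input verifiable, since the fetched bytes must hash back to the declared CID.

```python
import urllib.request

IPFS_GATEWAY = "https://ipfs.io/ipfs"
DATASET_CID = "bafy..."  # placeholder CID for a published dataset

def fetch_dataset(cid: str) -> bytes:
    """Pull a content-addressed dataset through a public IPFS gateway."""
    with urllib.request.urlopen(f"{IPFS_GATEWAY}/{cid}") as resp:
        return resp.read()

# Illustrative AI job pairing the dataset with a containerized GPU task.
job = {
    "input_cid": DATASET_CID,
    "image": "ghcr.io/example/finetune:latest",
    "gpu_count": 1,
}

data = fetch_dataset(job["input_cid"])  # would run on the assigned GPU node
print(f"fetched {len(data)} bytes for job against {job['input_cid']}")
```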
Key milestones:
Distributed storage backup launch - Q1 2026
Dataset integration into AI workflows - Q3 2026
From Cloud Independence to Intelligent Collaboration
With GPU access, verifiable execution, and data composability at its core, Fluence is building a decentralized, censorship-resistant, open, and collaborative compute foundation for the AI era, driven not by a handful of hyperscale cloud vendors but by developers and compute nodes around the world.
The infrastructure of future AI should reflect the values we want AI itself to embody: openness, collaboration, verifiability, and accountability. Fluence is encoding these principles into the protocol.
How to join Fluence:
Apply to become a GPU node provider
Sign up for the Fluence Cloudless VM beta
Participate in the Pointless Program and unlock Guardian status