Nous Research secured $50 million in Series A funding at a $1 billion token valuation, accelerating its decentralized AI development and reinforcing the burgeoning open-source AI sector.
Financial Overview and Strategic Investments
Nous Research, a decentralized artificial intelligence collective, has announced a $50 million Series A funding round led by Paradigm at a $1 billion token valuation. This investment brings Nous Research's total funding to over $70 million, following a $5.2 million seed round in January 2024 and a $14.8 million seed round in January 2025. Concurrently, Pluralis Research, another entity focused on decentralized AI training, recently completed a $7.6 million seed funding round co-led by USV and CoinFund.
Decentralized AI Strategy and Technical Innovations
Nous Research is dedicated to developing open-source, human-centric large language models (LLMs) and supporting infrastructure. The organization leverages the Solana blockchain to coordinate global compute resources for distributed training, aiming to democratize access to advanced AI capabilities and foster transparency and community ownership. Its flagship model, Hermes 3, is built upon LLaMA and Mistral architectures.
Karan Malhotra, Co-founder of Nous Research, stated, "We believe the future of AI lies at the intersection of open-source development and the crypto ethos." The integration of blockchain technology serves to ensure transparency, incentivize participation, and mitigate data poisoning.
Pluralis Research is pioneering a novel approach called Protocol Learning, which enables collaborative, multi-party training runs for foundation models without a single entity controlling the full model weights. Alexander Long, Founder and CEO of Pluralis Research, emphasized, "Pluralis paves the way to true collective ownership at the model layer, by training foundation models that are split across geographically separated devices connected only over the internet."
Distributed AI training hinges on communication-efficient techniques, including optimizers such as DeMo, DisTrO, OpenDiLoCo, SparseLoCo, and Skip-Pipe, alongside diverse parallelism schemes. Nous Research has reported a framework that it claims cuts data-transmission requirements by a factor of 3,000 for pre-training, and potentially up to 10,000 for post-training, without degrading model performance. DeepMind's DiLoCo approach, based on federated averaging, minimizes communication by letting each worker train its model independently for many local steps before synchronizing weight updates.
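The DiLoCo-style pattern described above can be sketched in a few lines. The following is an illustrative toy simulation, not code from Nous Research or DeepMind: several simulated workers each run many local gradient steps on their own data shard, and weights are averaged only once per outer round, so communication rounds are reduced by roughly the number of local steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression problem: y = X @ w_true + noise.
w_true = np.array([2.0, -3.0])

def make_shard(n):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

shards = [make_shard(200) for _ in range(4)]   # 4 simulated workers
weights = [np.zeros(2) for _ in shards]        # each worker's local copy

lr, local_steps, outer_rounds = 0.05, 20, 10
for _ in range(outer_rounds):
    # Inner loop: independent local SGD, no communication between workers.
    for i, (X, y) in enumerate(shards):
        w = weights[i]
        for _ in range(local_steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        weights[i] = w
    # Outer step: a single communication round (federated averaging).
    avg = np.mean(weights, axis=0)
    weights = [avg.copy() for _ in shards]

print(np.round(weights[0], 2))  # converges close to w_true
```

Here synchronization happens once every 20 local steps rather than every step, a 20x reduction in communication rounds; production systems such as DiLoCo add an outer optimizer on top of the averaged update, which this sketch omits.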
Broader Market Impact and Ethical Considerations
The advancements in decentralized AI training by projects like Nous Research and Pluralis Research signify a potential reshaping of AI development. By democratizing access to training compute and fostering open-source models, these initiatives challenge the centralized control historically held by major AI laboratories such as OpenAI, Anthropic, Meta, Google, and xAI. This shift could lead to new crypto-economic models and significant market shifts in AI infrastructure and applications.
The adoption of permissionless, decentralized systems introduces new complexities regarding governance, bias, and accountability for AI. Traditional regulatory frameworks are often unsuited for autonomous agents operating on a global, distributed network. New governance models are emerging, with Decentralized Autonomous Organizations (DAOs) serving as a primary mechanism for on-chain AI governance, utilizing smart contracts for transparent and participatory structures. Hybrid architectures, which perform intensive computations off-chain while using blockchain as an immutable, verifiable ledger, are also being explored. For instance, OORT has introduced DataHub Launchpad, a platform for crowdsourced AI data collection, which rewards contributors with tokens, fostering both community growth and data generation for AI teams.
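The hybrid architecture mentioned above can be illustrated with a minimal sketch (all names here are hypothetical, and the ledger is simulated as an in-memory list): heavy computation runs off-chain, while only a compact commitment, a hash of the result, is recorded on an append-only ledger so anyone can later verify the result was not altered.

```python
import hashlib
import json

ledger = []  # stand-in for an on-chain, append-only log

def run_inference(prompt):
    # Placeholder for an expensive off-chain model call.
    return f"response-to:{prompt}"

def commit(record):
    # Serialize deterministically, then record only the hash "on-chain".
    payload = json.dumps(record, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    ledger.append(digest)
    return digest

def verify(record, digest):
    # A record checks out only if its hash matches and appears on the ledger.
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == digest and digest in ledger

record = {"prompt": "hello", "output": run_inference("hello")}
d = commit(record)
assert verify(record, d)                          # result is attestable
assert not verify({**record, "output": "x"}, d)   # tampering is detectable
```

The design choice is that the ledger stores only fixed-size digests, keeping on-chain costs constant regardless of how large the off-chain computation is.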