This weekend, Andrej Karpathy, the former director of AI at Tesla and a founding member of OpenAI, decided he wanted to read a book. But he did not want to read it alone. He wanted to read it accompanied by a committee of artificial intelligences, each offering its own perspective, critiquing the others, and eventually synthesizing a final answer under the guidance of a "Chairman."
To make this happen, Karpathy wrote what he called a "vibe code project" — a piece of software written quickly, largely by AI assistants, intended for fun rather than function. He posted the result, a repository called "LLM Council," to GitHub with a stark disclaimer: "I’m not going to support it in any way... Code is ephemeral now and libraries are over."
Yet, for technical decision-makers across the enterprise landscape, looking past the casual disclaimer reveals something far more significant than a weekend toy. In a few hundred lines of Python and JavaScript, Karpathy has sketched a reference architecture for the most critical, undefined layer of the modern software stack: the orchestration middleware sitting between corporate applications and the volatile market of AI models.
As companies finalize their platform investments for 2026, LLM Council offers a stripped-down look at the "build vs. buy" reality of AI infrastructure. It demonstrates that while the logic of routing and aggregating AI models is surprisingly simple, the operational wrapper required to make it enterprise-ready is where the true complexity lies.
To the casual observer, the LLM Council web application looks almost identical to ChatGPT. A user types a query into a chat box. But behind the scenes, the application triggers a sophisticated, three-stage workflow that mirrors how human decision-making bodies operate.
First, the system dispatches the user’s query to a panel of frontier models. In Karpathy’s default configuration, this includes OpenAI’s GPT-5.1, Google’s Gemini 3.0 Pro, Anthropic’s Claude Sonnet 4.5, and xAI’s Grok 4. These models generate their initial responses in parallel.
In the second stage, the software performs a peer review. Each model is fed the anonymized responses of its counterparts and asked to evaluate them based on accuracy and insight. This step transforms the AI from a generator into a critic, forcing a layer of quality control that is rare in standard chatbot interactions.
Finally, a designated "Chairman LLM" — currently configured as Google’s Gemini 3 — receives the original query, the individual responses, and the peer rankings. It synthesizes this mass of context into a single, authoritative answer for the user.
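In code, the three stages reduce to a short orchestration loop. The sketch below is a minimal reconstruction of that flow rather than Karpathy's actual implementation; the model slugs, prompts, and helper names are illustrative, and requests are routed through OpenRouter's OpenAI-compatible endpoint, the same broker the project relies on.

```python
# Minimal reconstruction of the three-stage council flow -- not Karpathy's code.
# Model slugs, prompts, and environment-variable names are illustrative.
import asyncio
import os

from openai import AsyncOpenAI

# OpenRouter exposes an OpenAI-compatible endpoint, so one client covers all providers.
client = AsyncOpenAI(base_url="https://openrouter.ai/api/v1",
                     api_key=os.environ["OPENROUTER_API_KEY"])

COUNCIL_MODELS = ["openai/gpt-5.1", "google/gemini-3-pro",
                  "anthropic/claude-sonnet-4.5", "x-ai/grok-4"]  # illustrative slugs
CHAIRMAN_MODEL = "google/gemini-3-pro"

async def ask(model: str, prompt: str) -> str:
    resp = await client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

async def council(query: str) -> str:
    # Stage 1: every council member answers the query in parallel.
    drafts = await asyncio.gather(*(ask(m, query) for m in COUNCIL_MODELS))

    # Stage 2: each member ranks the anonymized drafts of its peers.
    anon = "\n\n".join(f"Response {i + 1}:\n{d}" for i, d in enumerate(drafts))
    review_prompt = (f"Question: {query}\n\n{anon}\n\n"
                     "Rank these responses by accuracy and insight.")
    reviews = await asyncio.gather(*(ask(m, review_prompt) for m in COUNCIL_MODELS))

    # Stage 3: the Chairman synthesizes drafts and rankings into one answer.
    chairman_prompt = (
        f"Question: {query}\n\nDraft answers:\n{anon}\n\n"
        "Peer rankings:\n" + "\n\n".join(reviews) +
        "\n\nWrite one final, synthesized answer for the user."
    )
    return await ask(CHAIRMAN_MODEL, chairman_prompt)
```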
Karpathy noted that the results were often surprising. "Quite often, the models are surprisingly willing to select another LLM's response as superior to their own," he wrote on X (formerly Twitter). He described using the tool to read book chapters, observing that the models consistently praised GPT-5.1 as the most insightful while rating Claude the lowest. However, Karpathy’s own qualitative assessment diverged from his digital council; he found GPT-5.1 "too wordy" and preferred the "condensed and processed" output of Gemini.
For CTOs and platform architects, the value of LLM Council lies not in its literary criticism, but in its construction. The repository serves as a primary document showing exactly what a modern, minimal AI stack looks like in late 2025.
The application is built on a "thin" architecture. The backend uses FastAPI, a modern Python framework, while the frontend is a standard React application built with Vite. Data storage is handled not by a complex database, but by simple JSON files written to the local disk.
The linchpin of the entire operation is OpenRouter, an API aggregator that normalizes the differences between various model providers. By routing requests through this single broker, Karpathy avoided writing separate integration code for OpenAI, Google, and Anthropic. The application does not know or care which company provides the intelligence; it simply sends a prompt and awaits a response.
This design choice highlights a growing trend in enterprise architecture: the commoditization of the model layer. By treating frontier models as interchangeable components that can be swapped by editing a single line in a configuration file — specifically the COUNCIL_MODELS list in the backend code — the architecture protects the application from vendor lock-in. If a new model from Meta or Mistral tops the leaderboards next week, it can be added to the council in seconds.
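The "thin" shape of that stack is easy to picture in code. The sketch below shows what a minimal FastAPI backend of this kind could look like, reusing the council() function from the earlier sketch; the route name, request schema, and file layout are illustrative, not taken from the repository.

```python
# A sketch of the "thin" backend shape described above: one FastAPI route,
# flat JSON files on disk, no database. Names and paths are illustrative.
import json
import time
import uuid
from pathlib import Path

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
DATA_DIR = Path("./conversations")
DATA_DIR.mkdir(exist_ok=True)

class Query(BaseModel):
    text: str

@app.post("/api/council")
async def run_council(query: Query):
    answer = await council(query.text)  # orchestration function from the sketch above
    record = {"id": str(uuid.uuid4()), "ts": time.time(),
              "query": query.text, "answer": answer}
    # Persistence is just a JSON file per conversation -- no database layer.
    (DATA_DIR / f"{record['id']}.json").write_text(json.dumps(record, indent=2))
    return record
```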
While the core logic of LLM Council is elegant, it also serves as a stark illustration of the gap between a "weekend hack" and a production system. For an enterprise platform team, cloning Karpathy’s repository is merely step one of a marathon.
A technical audit of the code reveals the missing "boring" infrastructure that commercial vendors sell for premium prices. The system lacks authentication; anyone with access to the web interface can query the models. There is no concept of user roles, meaning a junior developer has the same access rights as the CIO.
Furthermore, the governance layer is nonexistent. In a corporate environment, sending data to four different external AI providers simultaneously triggers immediate compliance concerns. There is no mechanism here to redact Personally Identifiable Information (PII) before it leaves the local network, nor is there an audit log to track who asked what.
Reliability is another open question. The system assumes the OpenRouter API is always up and that the models will respond in a timely fashion. It lacks the circuit breakers, fallback strategies, and retry logic that keep business-critical applications running when a provider suffers an outage.
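None of this hardening is exotic; the sketch below shows the kind of timeout, retry, and provider-fallback wrapper a platform team would typically add first. It is a generic pattern, not code from the repository, and the limits and fallback model are placeholders.

```python
# Generic timeout / retry / fallback wrapper of the kind the project omits.
# Thresholds and the fallback model are placeholders, not repository values.
import asyncio

RETRIES = 3
FALLBACK_MODEL = "anthropic/claude-sonnet-4.5"  # illustrative fallback provider

async def ask_with_fallback(ask, model: str, prompt: str) -> str:
    """Wrap a single council call with a timeout, retries, and a provider fallback."""
    for attempt in range(RETRIES):
        try:
            # Bound each call so one slow provider cannot stall the whole council.
            return await asyncio.wait_for(ask(model, prompt), timeout=60)
        except Exception:
            # Exponential backoff before retrying the same provider.
            await asyncio.sleep(2 ** attempt)
    # Last resort: route the request to a different provider entirely.
    return await ask(FALLBACK_MODEL, prompt)
```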
These absences are not flaws in Karpathy’s code — he explicitly stated he does not intend to support or improve the project — but they define the value proposition for the commercial AI infrastructure market.
Companies like LangChain, AWS Bedrock, and various AI gateway startups are essentially selling the "hardening" around the core logic that Karpathy demonstrated. They provide the security, observability, and compliance wrappers that turn a raw orchestration script into a viable enterprise platform.
Perhaps the most provocative aspect of the project is the philosophy under which it was built. Karpathy described the development process as "99% vibe-coded," implying he relied heavily on AI assistants to generate the code rather than writing it line-by-line himself.
"Code is ephemeral now and libraries are over, ask your LLM to change it in whatever way you like," he wrote in the repository’s documentation.
This statement marks a radical shift in software engineering capability. Traditionally, companies build internal libraries and abstractions to manage complexity, maintaining them for years. Karpathy is suggesting a future where code is treated as "promptable scaffolding" — disposable, easily rewritten by AI, and not meant to last.
For enterprise decision-makers, this poses a difficult strategic question. If internal tools can be "vibe coded" in a weekend, does it make sense to buy expensive, rigid software suites for internal workflows? Or should platform teams empower their engineers to generate custom, disposable tools that fit their exact needs for a fraction of the cost?
Beyond the architecture, the LLM Council project inadvertently shines a light on a specific risk in automated AI deployment: the divergence between human and machine judgment.
Karpathy’s observation that his models preferred GPT-5.1, while he preferred Gemini, suggests that AI models may have shared biases. They might favor verbosity, specific formatting, or rhetorical confidence that does not necessarily align with human business needs for brevity and accuracy.
As enterprises increasingly rely on "LLM-as-a-Judge" systems to evaluate the quality of their customer-facing bots, this discrepancy matters. If the automated evaluator consistently rewards "wordy and sprawled" answers while human customers want concise solutions, the metrics will show success while customer satisfaction plummets. Karpathy’s experiment suggests that relying solely on AI to grade AI is a strategy fraught with hidden alignment issues.
Ultimately, LLM Council acts as a Rorschach test for the AI industry. For the hobbyist, it is a fun way to read books. For the vendor, it is a threat, proving that the core functionality of their products can be replicated in a few hundred lines of code.
But for the enterprise technology leader, it is a reference architecture. It demystifies the orchestration layer, showing that the technical challenge is not in routing the prompts, but in governing the data.
As platform teams head into 2026, many will likely find themselves staring at Karpathy’s code, not to deploy it, but to understand it. It proves that a multi-model strategy is not technically out of reach. The question remains whether companies will build the governance layer themselves or pay someone else to wrap the "vibe code" in enterprise-grade armor.
It's not just Google's Gemini 3, Nano Banana Pro, and Anthropic's Claude Opus 4.5 we have to be thankful for this year around the Thanksgiving holiday here in the U.S.
No, today the German AI startup Black Forest Labs released FLUX.2, a new image generation and editing system complete with four different models designed to support production-grade creative workflows.
FLUX.2 introduces multi-reference conditioning, higher-fidelity outputs, and improved text rendering, and it expands the company’s open-core ecosystem with both commercial endpoints and open-weight checkpoints.
While Black Forest Labs launched with, and made its name on, open source text-to-image models in its Flux family, today's release includes just one fully open-source component: the Flux.2 VAE, available now under the Apache 2.0 license.
Three other models — Flux.2 [Pro], Flux.2 [Flex], and Flux.2 [Dev] — are not open source: Pro and Flex remain proprietary hosted offerings, while Dev is an open-weight downloadable model that requires a commercial license obtained directly from Black Forest Labs for any commercial use. A fourth model, Flux.2 [Klein], is still to come and will be released under Apache 2.0 when available.
But the open source Flux.2 VAE, or variational autoencoder, is important and useful to enterprises for several reasons. This is the module that compresses images into a latent space and reconstructs them back into high-resolution outputs; in Flux.2, it defines the latent representation shared across the other model variants (see below), enabling higher-quality reconstructions, more efficient training, and 4-megapixel editing.
Because this VAE is open and freely usable, enterprises can adopt the same latent space used by BFL’s commercial models in their own self-hosted pipelines, gaining interoperability between internal systems and external providers while avoiding vendor lock-in.
The availability of a fully open, standardized latent space also enables practical benefits beyond media-focused organizations. Enterprises can use an open-source VAE as a stable, shared foundation for multiple image-generation models, allowing them to switch or mix generators without reworking downstream tools or workflows.
Standardizing on a transparent, Apache-licensed VAE supports auditability and compliance requirements, ensures consistent reconstruction quality across internal assets, and allows future models trained for the same latent space to function as drop-in replacements.
This transparency also enables downstream customization such as lightweight fine-tuning for brand styles or internal visual templates—even for organizations that do not specialize in media but rely on consistent, controllable image generation for marketing materials, product imagery, documentation, or stock-style visuals.
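In practice, adopting the shared latent space means encoding and decoding images through the published autoencoder. The sketch below assumes the Flux.2 VAE ships in a diffusers-compatible format, as the FLUX.1 autoencoder does; the repository id, subfolder, and pixel scaling are assumptions rather than confirmed details.

```python
# Round-trip an image through an open VAE: encode into the shared latent space,
# then decode back to pixels. Assumes a diffusers-compatible checkpoint; the
# repo id and subfolder below are hypothetical and may differ from BFL's release.
import torch
from diffusers import AutoencoderKL
from diffusers.utils import load_image
from torchvision.transforms.functional import to_tensor

vae = AutoencoderKL.from_pretrained(
    "black-forest-labs/FLUX.2-dev",  # hypothetical repo id
    subfolder="vae",
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("product_shot.png")
pixels = to_tensor(image).unsqueeze(0).half().to("cuda") * 2 - 1  # scale to [-1, 1]

with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample()  # compressed latent representation
    recon = vae.decode(latents).sample                 # reconstruction at full resolution
```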
The announcement positions FLUX.2 as an evolution of the FLUX.1 family, with an emphasis on reliability, controllability, and integration into existing creative pipelines rather than one-off demos.
FLUX.2 extends the prior FLUX.1 architecture with more consistent character, layout, and style adherence across up to ten reference images.
The system maintains coherence at 4-megapixel resolutions for both generation and editing tasks, enabling use cases such as product visualization, brand-aligned asset creation, and structured design workflows.
The model also improves prompt following across multi-part instructions while reducing failure modes related to lighting, spatial logic, and world knowledge.
In parallel, Black Forest Labs continues to follow an open-core release strategy. The company provides hosted, performance-optimized versions of FLUX.2 for commercial deployments, while also publishing inspectable open-weight models that researchers and independent developers can run locally. This approach extends a track record begun with FLUX.1, which became the most widely used open image model globally.
Flux.2 arrives in five variants:
Flux.2 [Pro]: This is the highest-performance tier, intended for applications that require minimal latency and maximal visual fidelity. It is available through the BFL Playground, the FLUX API, and partner platforms. The model aims to match leading closed-weight systems in prompt adherence and image quality while reducing compute demand.
Flux.2 [Flex]: This version exposes parameters such as the number of sampling steps and the guidance scale, letting developers tune the trade-offs between speed, text accuracy, and detail fidelity. In practice, this supports workflows where low-step previews are generated quickly before higher-step renders are invoked, as sketched in the example after this list.
Flux.2 [Dev]: The most notable release for the open ecosystem is the 32-billion-parameter open-weight checkpoint which integrates text-to-image generation and image editing into a single model. It supports multi-reference conditioning without requiring separate modules or pipelines. The model can run locally using BFL’s reference inference code or optimized fp8 implementations developed in partnership with NVIDIA and ComfyUI. Hosted inference is also available via FAL, Replicate, Runware, Verda, TogetherAI, Cloudflare, and DeepInfra.
Flux.2 [Klein]: Coming soon, this size-distilled model is released under Apache 2.0 and is intended to offer improved performance relative to comparable models of the same size trained from scratch. A beta program is currently open.
Flux.2 – VAE: Released under the enterprise-friendly Apache 2.0 license, which permits commercial use, the updated variational autoencoder provides the latent space that underpins all Flux.2 variants. The VAE emphasizes an optimized balance between reconstruction fidelity, learnability, and compression rate—a long-standing challenge for latent-space generative architectures.
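To make the Flex tier's preview-then-render pattern concrete, here is a minimal sketch of two calls at different step counts. It assumes a hosted endpoint that accepts steps and guidance fields; the endpoint URL, auth header, and parameter names are placeholders rather than documented BFL API fields.

```python
# Hypothetical sketch: a fast low-step preview followed by a high-step final
# render against a Flex-style endpoint. URL, header, and field names are
# placeholders, not documented BFL API fields.
import os

import requests

API_URL = os.environ["FLUX2_FLEX_ENDPOINT"]  # placeholder: wherever the Flex endpoint is hosted
HEADERS = {"Authorization": f"Bearer {os.environ['FLUX2_API_KEY']}"}  # placeholder auth

def generate(prompt: str, steps: int, guidance: float) -> bytes:
    resp = requests.post(API_URL, headers=HEADERS, json={
        "prompt": prompt,
        "steps": steps,        # fewer steps -> faster, rougher output
        "guidance": guidance,  # higher guidance -> tighter prompt adherence
    })
    resp.raise_for_status()
    return resp.content

prompt = "flat-lay product photo of a ceramic mug on linen, soft morning light"
preview = generate(prompt, steps=6, guidance=2.5)  # quick draft for review
final = generate(prompt, steps=50, guidance=4.0)   # full-quality render
```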
Black Forest Labs published two sets of evaluations highlighting FLUX.2’s performance relative to other open-weight and hosted image-generation models. In head-to-head win-rate comparisons across three categories—text-to-image generation, single-reference editing, and multi-reference editing—FLUX.2 [Dev] led all open-weight alternatives by a substantial margin.
It achieved a 66.6% win rate in text-to-image generation (vs. 51.3% for Qwen-Image and 48.1% for Hunyuan Image 3.0), 59.8% in single-reference editing (vs. 49.3% for Qwen-Image and 41.2% for FLUX.1 Kontext), and 63.6% in multi-reference editing (vs. 36.4% for Qwen-Image). These results reflect consistent gains over both earlier FLUX.1 models and contemporary open-weight systems.
A second benchmark compared model quality using ELO scores against approximate per-image cost. In this analysis, FLUX.2 [Pro], FLUX.2 [Flex], and FLUX.2 [Dev] cluster in the upper-quality, lower-cost region of the chart, with ELO scores in the ~1030–1050 band while operating in the 2–6 cent range.
By contrast, earlier models such as FLUX.1 Kontext [max] and Hunyuan Image 3.0 appear significantly lower on the ELO axis despite similar or higher per-image costs. Only proprietary competitors like Nano Banana 2 reach higher ELO levels, but at noticeably elevated cost. According to BFL, this positions FLUX.2’s variants as offering strong quality–cost efficiency across performance tiers, with FLUX.2 [Dev] in particular delivering near–top-tier quality while remaining one of the lowest-cost options in its class.
A pricing calculator on BFL’s site indicates that FLUX.2 [Pro] is billed at roughly $0.03 per megapixel of combined input and output. A standard 1024×1024 (1 MP) generation costs $0.030, and higher resolutions scale proportionally. The calculator also counts input images toward total megapixels, suggesting that multi-image reference workflows will have higher per-call costs.
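Working from those published figures, a rough cost estimate is simple arithmetic: megapixels in plus megapixels out, multiplied by roughly $0.03. The helper below is an approximation based only on the numbers above, treating 1024×1024 as one megapixel; BFL's actual billing and rounding may differ.

```python
# Rough FLUX.2 [Pro] cost estimate from the ~$0.03-per-megapixel figure above.
# Treats 1024x1024 as one megapixel and counts input reference images toward the
# total, per the calculator's description; actual billing may round differently.
RATE_PER_MP = 0.03

def estimate_cost(output_wh, *input_whs):
    mp = lambda wh: (wh[0] * wh[1]) / (1024 * 1024)
    total_mp = mp(output_wh) + sum(mp(wh) for wh in input_whs)
    return round(total_mp * RATE_PER_MP, 3)

print(estimate_cost((1024, 1024)))                              # 0.03, matching the $0.030 figure
print(estimate_cost((2048, 2048), (1024, 1024), (1024, 1024)))  # multi-reference edit: 0.18
```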
By contrast, Google’s Gemini 3 Pro Image Preview, aka "Nano Banana Pro," currently prices image output at $120 per 1M tokens, resulting in a cost of $0.134 per 1K–2K image (up to 2048×2048) and $0.24 per 4K image. Image input is billed at $0.0011 per image, which is negligible compared to output costs.
While Gemini’s model uses token-based billing, its effective per-image pricing places 1K–2K images at more than 4× the cost of a 1 MP FLUX.2 [Pro] generation, and 4K outputs at roughly 8× the cost of a similar-resolution FLUX.2 output if scaled proportionally.
In practical terms, the available data suggests that FLUX.2 [Pro] currently offers significantly lower per-image pricing, particularly for high-resolution outputs or multi-image editing workflows, whereas Gemini 3 Pro’s preview tier is positioned as a higher-cost, token-metered service with more variability depending on resolution.
FLUX.2 is built on a latent flow matching architecture, combining a rectified flow transformer with a vision-language model based on Mistral-3 (24B). The VLM contributes semantic grounding and contextual understanding, while the transformer handles spatial structure, material representation, and lighting behavior.
A major component of the update is the re-training of the model’s latent space. The FLUX.2 VAE integrates advances in semantic alignment, reconstruction quality, and representational learnability drawn from recent research on autoencoder optimization. Earlier models often faced trade-offs in the learnability–quality–compression triad: highly compressed spaces increase training efficiency but degrade reconstructions, while wider bottlenecks can reduce the ability of generative models to learn consistent transformations.
According to BFL’s research data, the FLUX.2 VAE achieves lower LPIPS distortion than the FLUX.1 and SD autoencoders while also improving generative FID. This balance allows FLUX.2 to support high-fidelity editing—an area that typically demands reconstruction accuracy—and still maintain competitive learnability for large-scale generative training.
The most significant functional upgrade is multi-reference support. FLUX.2 can ingest up to ten reference images and maintain identity, product details, or stylistic elements across the output. This feature is relevant for commercial applications such as merchandising, virtual photography, storyboarding, and branded campaign development.
The system’s typography improvements address a persistent challenge for diffusion- and flow-based architectures. FLUX.2 is able to generate legible fine text, structured layouts, UI elements, and infographic-style assets with greater reliability. This capability, combined with flexible aspect ratios and high-resolution editing, broadens the use cases where text and image jointly define the final output.
FLUX.2 enhances instruction following for multi-step, compositional prompts, enabling more predictable outcomes in constrained workflows. The model exhibits better grounding in physical attributes—such as lighting and material behavior—reducing inconsistencies in scenes requiring photoreal equilibrium.
Black Forest Labs continues to position its models within an ecosystem that blends open research with commercial reliability. The FLUX.1 open models helped establish the company’s reach across both the developer and enterprise markets, and FLUX.2 expands this structure: tightly optimized commercial endpoints for production deployments and open, composable checkpoints for research and community experimentation.
The company emphasizes transparency through published inference code, open-weight VAE release, prompting guides, and detailed architectural documentation. It also continues to recruit talent in Freiburg and San Francisco as it pursues a longer-term roadmap toward multimodal models that unify perception, memory, reasoning, and generation.
Black Forest Labs (BFL) was founded in 2024 by Robin Rombach, Patrick Esser, and Andreas Blattmann, the original creators of Stable Diffusion. Their move from Stability AI came at a moment of turbulence for the broader open-source generative AI community, and the launch of BFL signaled a renewed effort to build accessible, high-performance image models. The company secured $31 million in seed funding led by Andreessen Horowitz, with additional support from Brendan Iribe, Michael Ovitz, and Garry Tan, providing early validation for its technical direction.
BFL’s first major release, FLUX.1, introduced a 12-billion-parameter architecture available in Pro, Dev, and Schnell variants. It quickly gained a reputation for output quality that matched or exceeded closed-source competitors such as Midjourney v6 and DALL·E 3, while the Dev and Schnell versions reinforced the company’s commitment to open distribution. FLUX.1 also saw rapid adoption in downstream products, including xAI’s Grok 2, and arrived amid ongoing industry discussions about dataset transparency, responsible model usage, and the role of open-source distribution. BFL published strict usage policies aimed at preventing misuse and non-consensual content generation.
In late 2024, BFL expanded the lineup with Flux 1.1 Pro, a proprietary high-speed model delivering sixfold generation speed improvements and achieving leading ELO scores on Artificial Analysis. The company launched a paid API alongside the release, enabling configurable integrations with adjustable resolution, model choice, and moderation settings at pricing that began at $0.04 per image.
Partnerships with TogetherAI, Replicate, FAL, and Freepik broadened access and made the model available to users without the need for self-hosting, extending BFL’s reach across commercial and creator-oriented platforms.
These developments unfolded against a backdrop of accelerating competition in generative media.
The FLUX.2 release carries distinct operational implications for enterprise teams responsible for AI engineering, orchestration, data management, and security. For AI engineers responsible for model lifecycle management, the availability of both hosted endpoints and open-weight checkpoints enables flexible integration paths.
FLUX.2’s multi-reference capabilities and expanded resolution support reduce the need for bespoke fine-tuning pipelines when handling brand-specific or identity-consistent outputs, lowering development overhead and accelerating deployment timelines. The model’s improved prompt adherence and typography performance also reduce iterative prompting cycles, which can have a measurable impact on production workload efficiency.
Teams focused on AI orchestration and operational scaling benefit from the structure of FLUX.2’s product family. The Pro tier offers predictable latency characteristics suitable for pipeline-critical workloads, while the Flex tier enables direct control over sampling steps and guidance parameters, aligning with environments that require strict performance tuning.
Open-weight access for the Dev model facilitates the creation of custom containerized deployments and allows orchestration platforms to manage the model under existing CI/CD practices. This is particularly relevant for organizations balancing cutting-edge tooling with budget constraints, as self-hosted deployments offer cost control at the expense of in-house optimization requirements.
Data engineering stakeholders gain advantages from the model’s latent architecture and improved reconstruction fidelity. High-quality, predictable image representations reduce downstream data-cleaning burdens in workflows where generated assets feed into analytics systems, creative automation pipelines, or multimodal model development.
Because FLUX.2 consolidates text-to-image and image-editing functions into a single model, it simplifies integration points and reduces the complexity of data flows across storage, versioning, and monitoring layers. For teams managing large volumes of reference imagery, the ability to incorporate up to ten inputs per generation may also streamline asset management processes by shifting more variation handling into the model rather than external tooling.
For security teams, FLUX.2’s open-core approach introduces considerations related to access control, model governance, and API usage monitoring. Hosted FLUX.2 endpoints allow for centralized enforcement of security policies and reduce local exposure to model weights, which may be preferable for organizations with stricter compliance requirements.
Conversely, open-weight deployments require internal controls for model integrity, version tracking, and inference-time monitoring to prevent misuse or unapproved modifications. The model’s handling of typography and realistic compositions also reinforces the need for established content governance frameworks, particularly where generative systems interface with public-facing channels.
Across these roles, FLUX.2’s design emphasizes predictable performance characteristics, modular deployment options, and reduced operational friction. For enterprises with lean teams or rapidly evolving requirements, the release offers a set of capabilities aligned with practical constraints around speed, quality, budget, and model governance.
FLUX.2 marks a substantial iterative improvement in Black Forest Labs’ generative image stack, with notable gains in multi-reference consistency, text rendering, latent space quality, and structured prompt adherence. By pairing fully managed offerings with open-weight checkpoints, BFL maintains its open-core model while extending its relevance to commercial creative workflows. The release demonstrates a shift from experimental image generation toward more predictable, scalable, and controllable systems suited for operational use.
Researchers at Alibaba’s Tongyi Lab have developed a new framework for self-evolving agents that create their own training data by exploring their application environments. The framework, AgentEvolver, uses the knowledge and reasoning capabilities of large language models for autonomous learning, addressing the high costs and manual effort typically required to gather task-specific datasets.
Experiments show that compared to traditional reinforcement learning–based frameworks, AgentEvolver is more efficient at exploring its environment, makes better use of data, and adapts faster to application environments. For the enterprise, this is significant because it lowers the barrier to training agents for bespoke applications, making powerful, custom AI assistants more accessible to a wider range of organizations.
Reinforcement learning has become a major paradigm for training LLMs to act as agents that can interact with digital environments and learn from feedback. However, developing agents with RL faces fundamental challenges. First, gathering the necessary training datasets is often prohibitively expensive, requiring significant manual labor to create examples of tasks, especially in novel or proprietary software environments where there are no available off-the-shelf datasets.
Second, the RL techniques commonly used for LLMs require the model to run through a massive number of trial-and-error attempts to learn effectively. This process is computationally costly and inefficient. As a result, training capable LLM agents through RL remains laborious and expensive, limiting their deployment in custom enterprise settings.
The main idea behind AgentEvolver is to give models greater autonomy in their own learning process. The researchers describe it as a “self-evolving agent system” designed to “achieve autonomous and efficient capability evolution through environmental interaction.” It uses the reasoning power of an LLM to create a self-training loop, allowing the agent to continuously improve by directly interacting with its target environment without needing predefined tasks or reward functions.
“We envision an agent system where the LLM actively guides exploration, task generation, and performance refinement,” the researchers wrote in their paper.
The self-evolution process is driven by three core mechanisms that work together.
The first is self-questioning, where the agent explores its environment to discover the boundaries of its functions and identify useful states. It’s like a new user clicking around an application to see what’s possible. Based on this exploration, the agent generates its own diverse set of tasks that align with a user’s general preferences. This reduces the need for handcrafted datasets and allows the agent and its tasks to co-evolve, progressively enabling it to handle more complex challenges.
According to Yunpeng Zhai, researcher at Alibaba and co-author of the paper, who spoke to VentureBeat, the self-questioning mechanism effectively turns the model from a “data consumer into a data producer,” dramatically reducing the time and cost required to deploy an agent in a proprietary environment.
The second mechanism is self-navigating, which improves exploration efficiency by reusing and generalizing from past experiences. AgentEvolver extracts insights from both successful and unsuccessful attempts and uses them to guide future actions. For example, if an agent tries to use an API function that doesn't exist in an application, it registers this as an experience and learns to verify the existence of functions before attempting to use them in the future.
The third mechanism, self-attributing, enhances learning efficiency by providing more detailed feedback. Instead of just a final success or failure signal (a common practice in RL that can result in sparse rewards), this mechanism uses an LLM to assess the contribution of each individual action in a multi-step task. It retrospectively determines whether each step contributed positively or negatively to the final outcome, giving the agent fine-grained feedback that accelerates learning.
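Taken together, the three mechanisms compose into a single self-training loop. The sketch below is a conceptual reconstruction of that loop from the description above, not Tongyi Lab's implementation; the objects, methods, and names are placeholders.

```python
# Conceptual sketch of the AgentEvolver loop described above -- not the authors' code.
# agent, environment, and experience_bank are placeholder objects with assumed methods.

def self_evolve(agent, environment, experience_bank, num_rounds: int):
    for _ in range(num_rounds):
        # Self-questioning: explore the environment and propose new training tasks
        # instead of relying on a handcrafted dataset.
        observations = agent.explore(environment)
        tasks = agent.generate_tasks(observations)

        for task in tasks:
            # Self-navigating: retrieve relevant past successes and failures
            # and use them to guide the next attempt.
            hints = experience_bank.retrieve(task)
            trajectory = agent.run_episode(environment, task, hints)
            experience_bank.add(task, trajectory)

            # Self-attributing: an LLM judge scores each step of the trajectory,
            # not just the final outcome, producing dense per-step feedback.
            step_scores = [agent.llm_judge_step(task, step, trajectory.outcome)
                           for step in trajectory.steps]
            agent.update_policy(trajectory, step_scores)
```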
This is crucial for regulated industries where how an agent solves a problem is as important as the result. “Instead of rewarding a student only for the final answer, we also evaluate the clarity and correctness of each step in their reasoning,” Zhai explained. This improves transparency and encourages the agent to adopt more robust and auditable problem-solving patterns.
“By shifting the training initiative from human-engineered pipelines to LLM-guided self-improvement, AgentEvolver establishes a new paradigm that paves the way toward scalable, cost-effective, and continually improving intelligent systems,” the researchers state.
The team has also developed a practical, end-to-end training framework that integrates these three mechanisms. A key part of this foundation is the Context Manager, a component that controls the agent's memory and interaction history. While today's benchmarks test a limited number of tools, real enterprise environments can involve thousands of APIs.
Zhai acknowledges this is a core challenge for the field, but notes that AgentEvolver was designed to be extended. “Retrieval over extremely large action spaces will always introduce computational challenges, but AgentEvolver’s architecture provides a clear path toward scalable tool reasoning in enterprise settings,” he said.
To measure the effectiveness of their framework, the researchers tested it on AppWorld and BFCL v3, two benchmarks that require agents to perform long, multi-step tasks using external tools. They used models from Alibaba’s Qwen2.5 family (7B and 14B parameters) and compared their performance against a baseline model trained with GRPO, a popular RL technique used to develop reasoning models like DeepSeek-R1.
The results showed that integrating all three mechanisms in AgentEvolver led to substantial performance gains. For the 7B model, the average score improved by 29.4%, and for the 14B model, it increased by 27.8% over the baseline. The framework consistently enhanced the models' reasoning and task-execution capabilities across both benchmarks. The most significant improvement came from the self-questioning module, which autonomously generates diverse training tasks and directly addresses the data scarcity problem.
The experiments also demonstrated that AgentEvolver can efficiently synthesize a large volume of high-quality training data. The tasks generated by the self-questioning module proved diverse enough to achieve good training efficiency even with a small amount of data.
For enterprises, this provides a path to creating agents for bespoke applications and internal workflows while minimizing the need for manual data annotation. By providing high-level goals and letting the agent generate its own training experiences, organizations can develop custom AI assistants more simply and cost-effectively.
“This combination of algorithmic design and engineering pragmatics positions AgentEvolver as both a research vehicle and a reusable foundation for building adaptive, tool-augmented agents,” the researchers conclude.
Looking ahead, the ultimate goal is much bigger. “A truly ‘singular model’ that can drop into any software environment and master it overnight is certainly the holy grail of agentic AI,” Zhai said. “We see AgentEvolver as a necessary step in that direction.” While that future still requires breakthroughs in model reasoning and infrastructure, self-evolving approaches are paving the way.
President Donald Trump’s new “Genesis Mission,” unveiled Monday, November 24, 2025, is billed as a generational leap in how the United States does science, akin to the Manhattan Project that created the atomic bomb during World War II.
The executive order directs the Department of Energy (DOE) to build a “closed-loop AI experimentation platform” that links the country’s 17 national laboratories, federal supercomputers, and decades of government scientific data into “one cooperative system for research.”
The White House fact sheet casts the initiative as a way to “transform how scientific research is conducted” and “accelerate the speed of scientific discovery,” with priorities spanning biotechnology, critical materials, nuclear fission and fusion, quantum information science, and semiconductors.
DOE’s own release calls it “the world’s most complex and powerful scientific instrument ever built” and quotes Under Secretary for Science Darío Gil describing it as a “closed-loop system” linking the nation’s most advanced facilities, data, and computing into “an engine for discovery that doubles R&D productivity.”
The text of the order outlines mandatory steps DOE must complete within 60, 90, 120, 240, and 270 days—including identifying all Federal and partner compute resources, cataloging datasets and model assets, assessing robotic laboratory infrastructure across national labs, and demonstrating an initial operating capability for at least one scientific challenge within nine months.
The DOE’s own Genesis Mission website adds important context: the initiative is launching with a broad coalition of private-sector, nonprofit, academic, and utility collaborators. The list spans multiple sectors—from advanced materials to aerospace to cloud computing—and includes participants such as Albemarle, Applied Materials, Collins Aerospace, GE Aerospace, Micron, PMT Critical Metals, and the Tennessee Valley Authority. That breadth signals DOE’s intent to position Genesis not just as an internal research overhaul but as a national industrial effort connected to manufacturing, energy infrastructure, and scientific supply chains.
The collaborator list also includes many of the most influential AI and compute firms in the United States: OpenAI for Government, Anthropic, Scale AI, Google, Microsoft, NVIDIA, AWS, IBM, Cerebras, HPE, Hugging Face, and Dell Technologies.
The DOE frames Genesis as a national-scale instrument — a single “intelligent network,” an “end-to-end discovery engine,” one intended to generate new classes of high-fidelity data, accelerate experimental cycles, and reduce research timelines from “years to months.” The agency casts the mission as foundational infrastructure for the next era of American science.
Taken together, the roster outlines the technical backbone likely to shape the mission’s early development—hardware vendors, hyperscale cloud providers, frontier-model developers, and orchestration-layer companies. DOE does not describe these entities as contractors or beneficiaries, but their inclusion demonstrates that private-sector technical capacity will play a defining role in building and operating the Genesis platform.
What the administration has not provided is just as striking: no public cost estimate, no explicit appropriation, and no breakdown of who will pay for what. Major news outlets including Reuters, Associated Press, Politico, and others have all noted that the order “does not specify new spending or a budget request,” or that funding will depend on future appropriations and previously passed legislation.
That omission, combined with the initiative’s scope and timing, raises questions not only about how Genesis will be funded and to what extent, but about who it might quietly benefit.
Soon after DOE promoted the mission on X, Teknium of the small U.S. AI lab Nous Research posted a blunt reaction: “So is this just a subsidy for big labs or what.”
The line has become a shorthand for a growing concern in the AI community: that the U.S. government could offer some sort of public subsidy for large AI firms facing staggering and rising compute and data costs.
That concern is grounded in recent, well-sourced reporting on OpenAI’s finances and infrastructure commitments. Documents obtained and analyzed by tech public relations professional and AI critic Ed Zitron describe a cost structure that has exploded as the company has scaled models like GPT-4, GPT-4.1, and GPT-5.1.
The Register has separately inferred from Microsoft quarterly earnings statements that OpenAI lost about $13.5 billion on $4.3 billion in revenue in the first half of 2025 alone. Other outlets and analysts have highlighted projections that show tens of billions in annual losses later this decade if spending and revenue follow current trajectories.
By contrast, Google DeepMind trained its recent Gemini 3 flagship LLM on the company’s own TPU hardware and in its own data centers, giving it a structural advantage in cost per training run and energy management, as covered in Google’s own technical blogs and subsequent financial reporting.
Viewed against that backdrop, an ambitious federal project that promises to integrate “world-class supercomputers and datasets into a unified, closed-loop AI platform” and “power robotic laboratories” sounds, to some observers, like more than a pure science accelerator. It could, depending on how access is structured, also ease the capital bottlenecks facing private frontier-model labs.
The aggressive DOE deadlines and the order’s requirement to build a national AI compute-and-experimentation stack amplify those questions: the government is now constructing something strikingly similar to what private labs have been spending billions to build for themselves.
The order directs DOE to create standardized agreements governing model sharing, intellectual-property ownership, licensing rules, and commercialization pathways—effectively setting the legal and governance infrastructure needed for private AI companies to plug into the federal platform. While access is not guaranteed and pricing is not specified, the framework for deep public-private integration is now fully established.
What the order does not do is guarantee those companies access, spell out subsidized pricing, or earmark public money for their training runs. Any claim that OpenAI, Anthropic, or Google “just got access” to federal supercomputing or national-lab data is, at this point, an interpretation of how the framework could be used, not something the text actually promises.
Furthermore, the executive order makes no mention of open-source model development — an omission that stands out in light of remarks from Vice President JD Vance last year when, before assuming office and while serving as a Senator from Ohio, he warned during a hearing against regulations designed to protect incumbent tech firms, comments that open-source advocates widely praised.
That silence is notable given Vance’s earlier testimony, which many in the AI community interpreted as support for open-source AI or, at minimum, skepticism of policies that entrench incumbent advantages. Genesis instead sketches a controlled-access ecosystem governed by classification rules, export controls, and federal vetting requirements—far from the open-source model some expected this administration to champion.
Another viral reaction came from AI influencer Chris (@chatgpt21 on X), who wrote in an X post that OpenAI, Anthropic, and Google have already “got access to petabytes of proprietary data” from national labs, and that DOE labs have been “hoarding experimental data for decades.” The public record supports a narrower claim.
The order and fact sheet describe “federal scientific datasets—the world’s largest collection of such datasets, developed over decades of Federal investments” and direct agencies to identify data that can be integrated into the platform “to the extent permitted by law.”
DOE’s announcement similarly talks about unleashing “the full power of our National Laboratories, supercomputers, and data resources.”
It is true that the national labs hold enormous troves of experimental data. Some of it is already public via the Office of Scientific and Technical Information (OSTI) and other repositories; some is classified or export-controlled; much is under-used because it sits in fragmented formats and systems. But there is no public document so far that states private AI companies have now been granted blanket access to this data, or that DOE characterizes past practice as “hoarding.”
What is clear is that the administration wants to unlock more of this data for AI-driven research and to do so in coordination with external partners. Section 5 of the order instructs DOE and the Assistant to the President for Science and Technology to create standardized partnership frameworks, define IP and licensing rules, and set “stringent data access and management processes and cybersecurity standards for non-Federal collaborators accessing datasets, models, and computing environments.”
Equally notable is the national-security framing woven throughout the order. Multiple sections invoke classification rules, export controls, supply-chain security, and vetting requirements that place Genesis at the junction of open scientific inquiry and restricted national-security operations. Access to the platform will be mediated through federal security norms rather than open-science principles.
Taken at face value, the Genesis Mission is an ambitious attempt to use AI and high-performance computing to speed up everything from fusion research to materials discovery and pediatric cancer work, using decades of taxpayer-funded data and instruments that already exist inside the federal system. The executive order spends considerable space on governance: coordination through the National Science and Technology Council, new fellowship programs, and annual reporting on platform status, integration progress, partnerships, and scientific outcomes.
The order also codifies, for the first time, the development of AI agents capable of generating hypotheses, designing experiments, interpreting results, and directing robotic laboratories—an explicit embrace of automated scientific discovery and a significant departure from prior U.S. science directives.
Yet the initiative also lands at a moment when frontline AI labs are buckling under their own compute bills, when one of them—OpenAI—is reported to be spending more on running models than it earns in revenue, and when investors are openly debating whether the current business model for proprietary frontier AI is sustainable without some form of outside support.
In that environment, a federally funded, closed-loop AI discovery platform that centralizes the country’s most powerful supercomputers and data is inevitably going to be read in more than one way. It may become a genuine engine for public science. It may also become a crucial piece of infrastructure for the very companies driving today’s AI arms race.
Standing up a platform of this scale—complete with robotic labs, synthetic data generation pipelines, multi-agency datasets, and industrial-grade AI agents—would typically require substantial, dedicated appropriations and a multi-year budget roadmap. Yet the order remains silent on cost, leaving observers to speculate whether the administration will repurpose existing resources, seek congressional appropriations later, or rely heavily on private-sector partnerships to build the platform.
For now, one fact is undeniable: the administration has launched a mission it compares to the Manhattan Project without telling the public what it will cost, how the money will flow, or exactly who will be allowed to plug into it.
For enterprise teams already building or scaling AI systems, the Genesis Mission signals a shift in how national infrastructure, data governance, and high-performance compute will evolve in the U.S.—and those signals matter even before the government publishes a budget.
The initiative outlines a federated, AI-driven scientific ecosystem where supercomputers, datasets, and automated experimentation loops operate as tightly integrated pipelines.
That direction mirrors the trajectory many companies are already moving toward: larger models, more experimentation, heavier orchestration, and a growing need for systems that can manage complex workloads with reliability and traceability.
Even though Genesis is aimed at science, its architecture hints at what will become expected norms across American industries.
The specificity of the order’s deadlines also signals where enterprise expectations may shift next: toward standardized metadata, provenance tracking, multi-cloud interoperability, AI pipeline observability, and rigorous access controls. As DOE operationalizes Genesis, enterprises—particularly in regulated sectors such as biotech, energy, pharmaceuticals, and advanced manufacturing—may find themselves evaluated against emerging federal norms for data governance and AI-system integrity.
The lack of cost detail around Genesis does not directly alter enterprise roadmaps, but it does reinforce the broader reality that compute scarcity, escalating cloud costs, and rising standards for AI model governance will remain central challenges.
Companies that already struggle with constrained budgets or tight headcount—particularly those responsible for deployment pipelines, data integrity, or AI security—should view Genesis as early confirmation that efficiency, observability, and modular AI infrastructure will remain essential.
As the federal government formalizes frameworks for data access, experiment traceability, and AI agent oversight, enterprises may find that future compliance regimes or partnership expectations take cues from these federal standards.
Genesis also underscores the growing importance of unifying data sources and ensuring that models can operate across diverse, sometimes sensitive environments. Whether managing pipelines across multiple clouds, fine-tuning models with domain-specific datasets, or securing inference endpoints, enterprise technical leaders will likely see increased pressure to harden systems, standardize interfaces, and invest in complex orchestration that can scale safely.
The mission’s emphasis on automation, robotic workflows, and closed-loop model refinement may shape how enterprises structure their internal AI R&D, encouraging them to adopt more repeatable, automated, and governable approaches to experimentation. In this sense, Genesis may serve as an early signal of how national-level AI infrastructure is likely to influence private-sector requirements, especially for companies operating in critical industries or scientific supply chains.
Here is what enterprise leaders should be doing now:
Expect increased federal involvement in AI infrastructure and data governance. This may indirectly shape cloud availability, interoperability standards, and model-governance expectations.
Track “closed-loop” AI experimentation models. This may preview future enterprise R&D workflows and reshape how ML teams build automated pipelines.
Prepare for rising compute costs and consider efficiency strategies. This includes smaller models, retrieval-augmented systems, and mixed-precision training.
Strengthen AI-specific security practices. Genesis signals that the federal government is escalating expectations for AI system integrity and controlled access.
Plan for potential public–private interoperability standards. Enterprises that align early may gain a competitive edge in partnerships and procurement.
Overall, Genesis does not change day-to-day enterprise AI operations today. But it strongly signals where federal and scientific AI infrastructure is heading—and that direction will inevitably influence the expectations, constraints, and opportunities enterprises face as they scale their own AI capabilities.
OpenAI expanded its data residency regions for ChatGPT and its API, giving enterprise users the option to store and process their data closest to their business operations and better comply with local regulations. This expansion removes one of the biggest compliance blockers preventing global enterprises from deploying ChatGPT at scale.
Data residency, an often overlooked piece of the enterprise AI puzzle, refers to storing, processing, and governing data according to the laws and customs of the country where it is held.
ChatGPT Enterprise and Edu subscribers can now choose to have their data processed in:
Europe (European Economic Area and Switzerland)
United Kingdom
United States
Canada
Japan
South Korea
Singapore
India
Australia
United Arab Emirates
OpenAI said in a blog post that it “plans to expand availability to additional regions over time.”
Customers can store data such as conversations, uploaded files, custom GPTs, and image-generation artifacts. This applies only to data at rest, not while it moves through a system or when it is used for inference. OpenAI’s documentation notes that, for now, inference residency remains available only in the U.S.
ChatGPT Enterprise and Edu users can set up new workspaces with data residency. Enterprise customers on the API who have been approved for advanced data controls can enable data residency by creating a new project and selecting their preferred region.
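On the API side, residency follows the project a request is attributed to rather than anything in the request itself. Below is a minimal sketch assuming a project that was already created with a non-U.S. region and approved for advanced data controls; the project ID and model are placeholders.

```python
# Minimal sketch: route API traffic through a project created with a specific
# data-residency region. The region is chosen when the project is created in the
# dashboard; the request code itself is unchanged. IDs here are placeholders.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    project="proj_eu_residency_example",  # placeholder: a project created with an EU region
)

resp = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Summarize our data-retention policy."}],
)
print(resp.choices[0].message.content)
```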
OpenAI first began offering data residency in Europe in February this year. The European Union has some of the strictest data regulations globally, based on the GDPR.
Until now, enterprises had fewer choices for where data flowing through ChatGPT was processed; some organizational data, for example, would be handled under U.S. law rather than under European rules.
Enterprises risk violating compliance rules if their data at rest is stored or processed in jurisdictions whose protections do not meet their regulatory obligations.
“With over 1 million business customers around the world directly using OpenAI, we have expanded where we offer data residency — allowing business customers to store data in certain regions, helping organizations meet local regulatory and data protection requirements,” the company said in its blog post.
However, enterprises must also understand that if they are using a connector or integration within ChatGPT, those applications have different data residency rules. When OpenAI launched company knowledge for ChatGPT, it warned users that depending on the connector they use, data residency may be limited to the U.S.