
The buzzy but still secretive New York City startup Augmented Intelligence Inc (AUI), which aims to go beyond the popular Transformer architecture used by most of today's LLMs such as ChatGPT and Gemini, has raised $20 million in a bridge SAFE round at a $750 million valuation, bringing its total funding to nearly $60 million, VentureBeat can exclusively reveal.
The round, which closed in less than a week, comes amid growing interest in deterministic conversational AI and precedes a larger raise already in advanced stages.
AUI relies on a mixture of Transformer technology and an approach called neuro-symbolic AI, described in more detail below.
"We realize that you can combine LLM talent in linguistic abilities with guaranteed symbolic AI," Said ohad elhello, Co-Founder and CEO of AUI In a recent interview with VentureBeat. Alhello launched the company in 2017 Co-founder and Chief Product Officer Ori Cohen.
The new financing includes participation from eGateway Ventures, New Era Capital Partners, existing shareholders and other strategic investors. It follows a $10 million raise at a $350 million valuation cap in September 2024, shortly before the company announced a go-to-market partnership with Google in October 2024. Early investors include Vertex Pharmaceuticals founder Joshua Boger, UKG chairman Aron Ain and former IBM president Jim Whitehurst.
According to the company, the bridge round is a precursor to significantly larger raises already in advanced stages.
AUI is the company behind Apollo-1, a new foundation model designed for task-oriented communication, which it describes as the "economic half" of conversational AI – distinct from the open-ended dialogue driven by LLMs like ChatGPT and Gemini.
The firm argues that existing LLMs lack the determinism, policy enforcement and operational certainty required for enterprises, especially in regulated sectors.
Chris Varelas, co-founder of Redwood Capital and advisor to AUI, said in a press release provided to VentureBeat: “I’ve seen some of today’s top AI leaders walk away with their heads spinning after interacting with Apollo-1.”
A specific neuro-symbolic architecture
The main innovation of Apollo-1 is its neuro-symbolic architecture, which separates linguistic flow from functional logic. Instead of relying solely on the most common technology underpinning most LLM and conversational AI systems today – the acclaimed Transformer architecture described in the seminal 2017 Google paper "Attention Is All You Need" – AUI's system integrates two layers:
- Neural modules, powered by LLMs, handle perception: encoding user input and generating natural-language responses.
- A symbolic logic engine, developed over many years, interprets structured task elements such as intents, entities, and parameters. This symbolic state engine determines the appropriate next action using deterministic logic.
This hybrid architecture allows Apollo-1 to maintain state consistency, enforce organizational policies, and reliably trigger tool or API calls – capabilities that Transformer agents alone lack.
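AUI has not published Apollo-1's internals, so the loop described above can only be sketched from the article's description. In this illustrative Python sketch, every function and field name is an assumption: a neural layer (stubbed here, but an LLM call in a real system) parses free text into structured task elements, a deterministic symbolic layer picks the next action, and a neural layer renders the reply.

```python
# Hedged sketch of the neural/symbolic split AUI describes.
# All names, intents, and templates are illustrative assumptions.

def encode(utterance: str) -> dict:
    """Neural layer (would be an LLM call): free text -> structured
    task elements such as an intent and its parameters."""
    text = utterance.lower()
    intent = "book_flight" if "book" in text else "small_talk"
    return {"intent": intent, "params": {}}

def decide(state: dict, parsed: dict) -> str:
    """Symbolic layer: a deterministic transition function. The same
    (state, parsed input) pair always yields the same action."""
    if parsed["intent"] == "book_flight" and not state.get("payment_verified"):
        return "request_payment_details"
    if parsed["intent"] == "book_flight":
        return "call_booking_api"
    return "generate_chitchat_reply"

def respond(action: str) -> str:
    """Neural layer again: render the chosen action as natural
    language (stubbed with canned strings here)."""
    templates = {
        "request_payment_details": "Sure -- could you confirm your payment details first?",
        "call_booking_api": "Booking your flight now.",
        "generate_chitchat_reply": "Happy to help with anything travel-related!",
    }
    return templates[action]

state = {"payment_verified": False}
action = decide(state, encode("I want to book a flight"))
reply = respond(action)
```

The point of the split is that `decide` contains no sampling: auditing or certifying the agent's behavior reduces to reading its rules, not probing a probabilistic model.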
Elhelo said the design emerged from a multi-year data collection effort: "We created a consumer service and recorded millions of human-agent interactions across 60,000 live agents. From this, we extracted a symbolic language that defines the structure of task-based interactions separate from their domain-specific content."
However, enterprises that have already built systems around Transformer LLMs need not worry: AUI wants to make its new technology as easy as possible to adopt.
"Apollo-1 deploys like any modern foundation model," Elhello told VentureBeat in a text last night. "It does not require a dedicated or proprietary cluster to run. It works in standard cloud and hybrid environments, taking advantage of both GPUs and CPUs, and is significantly more cost-effective to deploy than frontier reasoning models. Apollo-1 can also be deployed in an isolated environment on all major clouds for increased security."
Generalization and domain flexibility
Apollo-1 is described as a base model for task-oriented communication, meaning it is domain-agnostic and generalizable to areas such as health care, travel, insurance, and retail.
Unlike consulting-heavy AI platforms that require the creation of specific logic per customer, Apollo-1 allows enterprises to define behaviors and tools within a shared symbolic language. This approach supports rapid onboarding and reduces long-term maintenance. According to the team, an enterprise can launch a working agent in less than a day.
Importantly, procedural rules are encoded at the symbolic layer – not learned from examples. This enables deterministic execution for sensitive or regulated tasks.
For example, a system might prevent the cancellation of a basic economy flight not by guessing the user's intent but by applying hard-coded logic to the symbolic representation of the booking class.
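The basic economy example above can be sketched as a single deterministic rule over a symbolic booking record. This is a hypothetical illustration: the field names and the policy table are assumptions, not AUI's actual schema.

```python
# Hypothetical sketch of a hard-coded policy rule applied to a
# symbolic booking representation (field names are assumptions).

NON_CANCELLABLE_FARES = {"basic_economy"}  # assumed policy table

def can_cancel(booking: dict) -> bool:
    """Deterministic check: the answer comes from the rule applied to
    the booking's symbolic fare class, never from an LLM's guess."""
    return booking["fare_class"] not in NON_CANCELLABLE_FARES

decision = can_cancel({"booking_id": "ABC123", "fare_class": "basic_economy"})
```

Because the rule lives at the symbolic layer, changing the cancellation policy means editing one line of logic rather than re-prompting or retraining a model.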
As Elhelo explained to VentureBeat, LLMs are "not a good mechanism when you are looking for certainty. It would be better if you know what you are going to send [to the AI model] and always send it, and you know, always, what is going to come back [to the user] and how to handle it."
Availability and developer access
Apollo-1 is already in active use within Fortune 500 enterprises in a closed beta, and a broader general availability release is expected before the end of 2025, according to a previous report by The Information, which broke the initial news on the startup.
Enterprises can integrate with Apollo-1 through:
- A developer playground, where business users and technical teams jointly configure policies, rules, and behaviors; or
- A standard API using OpenAI-compatible formats.
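Since the company says the API uses OpenAI-compatible formats, a request would presumably look like a standard chat-completions payload. The sketch below builds one; the model identifier `"apollo-1"` and the message contents are placeholders I am assuming for illustration, not published AUI values.

```python
# Sketch of an OpenAI-compatible chat request body. The model name
# and messages are assumptions, not documented Apollo-1 values.
import json

payload = {
    "model": "apollo-1",  # assumed model identifier
    "messages": [
        {"role": "system", "content": "You are a booking assistant."},
        {"role": "user", "content": "Cancel my flight ABC123."},
    ],
}

# Serialized body that would be POSTed to the (unpublished) endpoint.
request_body = json.dumps(payload)
```

Compatibility at this layer matters because it lets enterprises point existing OpenAI-client code at a different base URL rather than rewriting their integration.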
The model supports operations through policy enforcement, rule-based optimization, and guardrails. Symbolic rules let businesses set deterministic behavior, while LLM modules handle open-text interpretation and user interaction.
Enterprise fit: When reliability beats fluency
While LLMs have advanced in general-purpose conversation and creativity, they remain probabilistic – hindering enterprise deployment in finance, healthcare and customer service.
Apollo-1 targets this gap by offering a system where policy adherence and deterministic task completion are first-order design goals.
Elhelo puts it plainly: "If your use case is task-oriented communication, you have to use ours, even if you are ChatGPT."

