Want smart insight into your inbox? Enterprise AI, only what matters to data and security leaders, sign up for our weekly newspapers. Subscribe now
A new startup established by one Initial human rent Today the enterprises have raised $ 15 million to solve one of the most pressured challenges: how to deploy artificial intelligence systems without risking horrific failures that can damage their businesses.
Artificial Intelligence Underwriting Company (AIUC)Which publicly launches publicly, AI agents – connects insurance coverage with strict security standards and independent audit to give confidence to companies in deploying autonomous software systems that can perform complex functions such as customer service, coding and data analysis.
Seed funding round led Nut freedmanCEO of former Github, through its firm NFDGWith participation from Emergence capital, AreaAnd including many notable fairy investors Ben MaanCo-founder of Anthropic, and former Chief Information Security Officer at Google Cloud and MongodB.
“Enterprises are walking tightly,” Run KevistAIUC co-founder and CEO, in an interview. “On the one hand, you can live on the edge and see that you can make your rivals irrelevant, or you can make and put headlines to make your chatbot preaching, or to make your return policy a hallucinations or discriminate against those that you are trying to recruit.”
AI Impact series returns to San Francisco – 5 August
The next phase of AI is here – are you ready? Leaders of Block, GSK and SAP include how autonomous agents are re-shaping the enterprise workflows-from the decision making of time-to-end and automation.
Now secure your location – space is limited:
The company’s approach deal with a fundamental trust gap that is faster as AI capabilities. While AI systems can now do tasks that rival human graduate-level arguments, many enterprises hesitate to deploy unpredictable failures, liability issues and concerns about the prestigious risks.
Safety standards constructing AI speed
Kvist “SOC 2 for AI agents” on creating AIUC’s solution centers – a comprehensive security and risk structure is specifically designed for artificial intelligence systems. Sok 2 There is a widely adopted cyber security standard that enterprises require usually require sellers before sharing sensitive data.
“SoC 2 is a standard for cyber security that specifies all the best practices that you should adopt in adequate detail so that a third party can come and check if a company meets the requirements,” the Kevist explained. “But it says nothing about AI. There are tones of new questions such as: How are you handling my training data? What about hallucinations? What about these tool calls?”
AIUC-1 standard Adds six major categories: security, security, reliability, accountability, data privacy and social risk. In framework, AI companies need to implement specific safety measures, from monitoring systems to event reaction plans, which can be verified independently through rigorous testing.
“We take these agents and test them extensively, using customer aid as an example, because it is easy to get related to it. We try to get the system to say something racist, to give me a refund, I am not worth it, to give me a big comeback, to give me a big comeback, to say something derogatory, or to bite another customer’s data.
Benjamin Franklin’s fire insurance to AI risk management
The insurance-centered approach sets an example of centuries, where private markets proceed rapidly than regulation to enable transforming technologies to safely adopt. Kvist often refers to Benjamin Franklin’s creation of America’s first fire insurance company in 1752, causing Blaze that promotes the rapid development of Philadelphia to create code and fire inspections.
“In the whole history, insurance has been the right model for this, and the reason is that the insurers have an incentive to tell the truth,” the Kevist explained. “If they say that the risk is older than them, then a person is going to sell cheap insurance. If they say that they are younger than that, they will have to pay the bill and go out of business.”
The same pattern emerged with automobiles in the 20th century, when the insured created Institute of insurance And developed the crash test standards that encourage security features such as airbags and seatbelt – before the years the government regulation made them mandatory.
Major AI companies are already using new insurance models
AIUC Already started working with many high-profile AI companies to validate its approach. The company works with unicorn startup ADA (Customer aid) and Cognizance (Coding) Trust to help unlock the interpressed deployment that stops due to concerns.
“ADA, we help them unlock a deal with the top five social media company where we came and run independent tests at the risks that the company cared for, and it helped unlock the deal, basically assure them that it can actually be shown to their customers,” said the keyist.
Startup is also developing partnership with the insurance providers to provide financial support for policies. This is a significant concern about relying on a startup with major liability coverage. “Insurance policies are going to be supported by the balance sheet of large insurers,” the Kevist explained.
Quarterly update vs. year -long regulatory cycle
One of the major innovations of AIUC is designing standards that can keep pace with the speed of breaknake development of AI. While traditional regulatory structures EU AI Act It takes years for developed and implementation, AIUC There are plans to update your standards in the quarter.
“The EU was started back to the AI Act 2021, they are now about to release it, but they are stopping it again because it is too much four years later,” said the Kevist. “This cycle makes it very difficult to achieve the heritage regulatory process to maintain this technique.”
This agility has become rapidly important because there is a competitive difference between our and Chinese AI capabilities. “A year and a half ago, everyone will say, like, we are now two years ahead, which sounds like eight months,” is something, “the Kevist saw.
How AI Insurance really works: Test system for braking point
AIUC’s insurance policies cover a variety of AI failures, from the practice of data violations and discriminatory work to intellectual property violations and wrong automated decisions. The price of the company is coverage based on comprehensive testing, which tries to break the AI system thousands of times in various failure mode.
“For some other things, we think it’s interesting to you. Or do not wait for you. So for example, if you issue a wrong refund, then great, right, its price is clear, the amount that you have returned wrongly,” the Kevist explained.
Startup works with a union of partners PWC (One of the “Big Four” accounting firms), Ryk (A major AI law firm), and academics Stanford And MIT To develop and validate its standards.
Former anthropic executive leaves to solve AI Trust Problem
The founding team brings deep experiences from both AI development and institutional risk management. Before the launch of the KVIST chat, Anthropic had the first product and Go-to-Market Fare in early 2022, and sit on the board board Center for AI SecurityCo -founder Brandon wang There is a theial partner who built consumer underwriting businesses while earlier, while Rajiv Dattani A former McKinse is a partner who led the global insurance work and served as the COO of a non -profit organization, evaluating the major AI model.
“This question that is really interested in me is: how, as a society, are we going to deal with this technique that is washing over us?” The Kevist said about his decision to leave anthropic. “I think the creation of AI, which is anthropic, is very exciting and will do a great job for the world. But the most central question that arises in me in the morning is: how, as a society, we are going to deal with it?”
Race to make AI safe before catching regulation
The launch of AIUC indicates a comprehensive change on how AI contacts risk management because technology leads to mission-mating business applications to experimental deployment. Insurance models provide a way to the enterprises a way amid paralyzed inactivity, waiting for the peak and comprehensive government inspection of careless AI adoption.
Startup approach can prove to be important because AI agents become more competent and wider in industries. Companies by creating financial incentives for responsible development, enabling rapid deployment AIUC Building infrastructure that can determine whether artificial intelligence changes the economy safely or chaotic.
“We are expecting this insurance model, this market-based model, both faster adopting and encouraging investment in security,” said the Kevist. “We have seen it throughout history – that the market can move faster than the law on these issues.”
The bets cannot be too much. Close to human-level argument in more domains as AI systems, the window may be faster for the construction of strong security infrastructure. The condition of AIUC is that as long as the regulators hold the Breakage speed of AI, the market would have already constructed the railing.
After all, Philadelphia’s fire did not wait for the government construction code – and today’s AI Arms Race would not wait even for Washington.

