
ZDNET Highlights
- AI responsibility and security are the top issues for 2026.
- The best defense is to build AI in a sandbox.
- Keep AI development simple and open.
Michael Connelly, author of The Lincoln Lawyer, has turned his attention to the issues behind uncontrolled corporate artificial intelligence. His latest work of fiction, The Proving Ground, is about a lawyer who files a civil suit against an AI company “whose chatbot told a sixteen-year-old boy it was okay for him to hit his ex-girlfriend for her infidelity.”
Also: Your favorite AI tool just barely missed this security review – why that’s a problem
The case, as the book’s description puts it, “explores the largely unregulated and explosive AI business and the lack of training guardrails.”
While this is a work of fiction, and the case presented is extreme, it is an important reminder that AI can go off the ethical or logical track in many ways – through bias, bad advice, or misdirection – with real consequences. At the same time, at least one notable AI voice warns against going too far with regulation and slowing innovation in the process.
The need for balance
As we reported in November, at least six in 10 companies (61%) in a PwC survey say responsible AI is actively integrated into their core operations and decision making.
There needs to be a balance between governance and speed, and this will be the challenge for professionals and their organizations in the coming year.
Andrew Ng, founder of DeepLearning.AI and adjunct professor at Stanford University, says that running AI applications through a sandbox is the most effective way to maintain this balance between speed and responsibility.
Also: The AI leader’s new equilibrium: What changes (and what stays) in the age of algorithms
“Many of the most responsible teams move really, really fast,” he said in a recent industry keynote and follow-up panel discussion. “We test software in a secure sandbox environment to find out what’s wrong before we release it to the wider world.”
At the same time, the recent push toward responsible and governed AI – by both governments and corporations – may actually backfire, he said.
“A lot of businesses set up protective mechanisms. Before you ship something, you need legal approval, marketing approval, brand review, privacy review, and GDPR compliance. An engineer needs to get five VPs to sign off before anything can be done. Everything grinds to a halt,” Ng said.
“It is a best practice to move quickly by building a sandbox early on,” he added. In this scenario, “put in place a set of rules to say ‘no shipping stuff externally under the company brand,’ ‘no sensitive information that can be leaked,’ whatever. It’s only tested on the company’s own employees under NDA, with a budget of only $100,000 in AI tokens. By creating sandboxes that are guaranteed safe, it makes it easier for the product and engineering teams to really get up and running quickly and try things out internally.”
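As an illustration only, here is a minimal Python sketch of how sandbox guardrails like the ones Ng describes might be expressed as a simple pre-release checklist. Every name in it (Experiment, within_sandbox, the specific fields) is a hypothetical example for this article, not Ng’s or any vendor’s actual tooling.

```python
# Hypothetical sketch: Ng-style sandbox guardrails as a pre-release checklist.
# All names and fields are illustrative assumptions, not a real framework.
from dataclasses import dataclass

@dataclass
class Experiment:
    ships_externally: bool      # visible outside the company under its brand?
    uses_sensitive_data: bool   # touches information that must not leak?
    testers_under_nda: bool     # tested only on employees under NDA?
    token_spend_usd: float      # spend on AI tokens so far

def within_sandbox(exp: Experiment, budget_cap_usd: float = 100_000) -> bool:
    """Return True if the experiment stays inside the 'guaranteed safe' sandbox."""
    return (
        not exp.ships_externally
        and not exp.uses_sensitive_data
        and exp.testers_under_nda
        and exp.token_spend_usd <= budget_cap_usd
    )

# Example: an internal prototype that respects every guardrail.
pilot = Experiment(ships_externally=False, uses_sensitive_data=False,
                   testers_under_nda=True, token_spend_usd=25_000)
print(within_sandbox(pilot))  # True -- teams can iterate without extra sign-offs
```

The point of such a checklist is that anything passing it is safe by construction, so product and engineering teams can experiment internally without collecting multiple sign-offs for every iteration.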
Once an AI application is determined to be safe and responsible, “then invest in scalability, security, and reliability to take it to scale,” Ng concluded.
Keep it simple
On the governance side, a keep-it-simple approach can help keep AI clear and open.
“Since every team, including non-technical ones, is now using AI for work, it was important for us to set straightforward, simple rules,” said Michael Krach, chief innovation officer at JobLeads. “Make clear where AI is allowed, where it is not, what company data it can use, and who needs to review high-impact decisions.”
Also: Why complex logic models could make it easier to catch misbehaving AI
“It is important that people trust that AI systems are fair, transparent and accountable,” said Justin Salaman, partner at Radiant Product Development. “Trust starts with clarity: being open about how AI is used, where the data comes from, and how decisions are made. It grows when leaders apply balanced human-in-the-loop decision making, ethical design, and rigorous testing for bias and accuracy.”
Such trust stems from clarity with employees about their company’s intentions with AI. Be clear about ownership, Krach advised. “Every AI feature should have someone accountable for potential failure or success. Test and iterate, and once you feel confident, publish a plain-English AI charter so employees and customers know how AI is used and trust you on this matter.”
Key principles of responsible AI
What are the markers of a responsible AI approach that should be on the radar of executives and professionals in the coming year?
Also: Want real AI ROI for business? This could finally happen in 2026 – here’s why
The eight key principles of responsible AI were recently posted by Dr. Khuloud Almani, founder and CEO of HKB Tech:
- Anti-discrimination: Prevent biased or discriminatory outcomes.
- Transparency and explainability: Make AI decisions clear, traceable, and understandable.
- Robustness and safety: Prevent harm, failures, and unexpected behavior.
- Accountability: Assign clear responsibility for AI decisions and behaviors.
- Privacy and data security: Secure personal data.
- Social impact: Consider long-term impacts on communities and economies.
- Human-centered design: Prioritize human values in every interaction.
- Collaboration and multi-stakeholder engagement: Involve regulators, developers and the public.

