
ZDNET Highlights
- IT, engineering, data, and AI teams now lead responsible AI efforts.
- PwC recommends a three-tier “defense” model.
- Embed responsible AI into everything; don’t bolt it on.
“Responsible AI” is a very hot and important topic these days, and technology managers and professionals have a responsibility to ensure that the artificial intelligence work they are doing builds trust while aligning with business goals.
Fifty-six percent of the 310 executives participating in a new PwC survey say their first-line teams – IT, engineering, data, and AI – now lead their responsible AI efforts. According to the PwC authors, “This shift brings responsibility closer to the teams building AI and moves governance to where decisions are made, shifting the focus of responsible AI from compliance conversations to quality enablement.”
Also: Deloitte survey says consumers more likely to pay for ‘responsible’ AI tools
According to the PwC survey, responsible AI – linked to eliminating bias and ensuring fairness, transparency, accountability, privacy and security – is also relevant to business viability and success. “Responsible AI is becoming a driver of business value, increasing ROI, efficiency and innovation while strengthening trust.”
“Responsible AI is a team sport,” the report’s authors point out. “As AI adoption accelerates, clear roles and strong support are now essential to move forward safely and confidently.” To leverage the benefits of responsible AI, PwC recommends introducing AI applications within an operating structure with three “lines of defense.”
- First line: build and operate responsibly.
- Second line: review and governance.
- Third line: assurance and audit.
The challenge to achieving responsible AI, cited by half of survey respondents, is “translating responsible AI principles into scalable, repeatable processes,” PwC found.
More than six in 10 respondents (61%) to the PwC survey say responsible AI is actively integrated into core operations and decision-making. Roughly one in five (21%) report being in the training phase, focused on developing staff training, governance structures, and practical guidance. The remaining 18% say they are still in the early stages, working to create basic policies and frameworks.
Also: So long, SaaS: Why AI is killing per-seat software licenses – and what comes next
There is debate across the industry about how tight the reins on AI should be to ensure responsible applications. “There are certainly situations where AI can provide great value, but rarely within the risk tolerance of enterprises,” said Jake Williams, a former US National Security Agency hacker and faculty member at IANS Research. “The LLMs that underpin most agents and generative AI solutions do not produce consistent output, leading to unpredictable risks. Enterprises value repeatability, yet most LLM-enabled applications are only close to perfect, most of the time.”
As a result of this uncertainty, “we are seeing more organizations scale back the adoption of AI initiatives as they realize they cannot effectively mitigate the risks, particularly those that introduce regulatory risks,” Williams added. “In some cases, this will result in the re-scoping of applications and use cases to reduce regulatory risk. In other cases, it will result in entire projects being abandoned.”
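Williams’ repeatability point can be made concrete. The sketch below is a minimal illustration (not from PwC or IANS): it calls the same model on the same prompt several times and scores how often the outputs agree. The `flaky_model` stub is hypothetical; in practice you would swap in your provider’s LLM client.

```python
import random
from collections import Counter
from typing import Callable

def repeatability_score(call_model: Callable[[str], str],
                        prompt: str, trials: int = 10) -> float:
    """Fraction of trials matching the most common output.

    1.0 means fully repeatable; lower values quantify the
    output drift Williams warns about.
    """
    outputs = [call_model(prompt) for _ in range(trials)]
    return Counter(outputs).most_common(1)[0][1] / trials

# Demo with a fake, intentionally inconsistent "model" (an assumption
# for illustration) -- replace with a real LLM client call.
def flaky_model(prompt: str) -> str:
    return random.choice(["42", "42", "42", "forty-two"])

if __name__ == "__main__":
    score = repeatability_score(flaky_model, "What is 6 x 7?", trials=20)
    print(f"repeatability: {score:.0%}")  # below 100% = unpredictable output
```

A team could run a check like this in pre-production and set a minimum repeatability threshold as part of its risk tolerance.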
8 expert guidelines for responsible AI
Industry experts offer the following guidelines for building and managing responsible AI:
1. Build responsible AI from end to end: Make responsible AI part of system design and deployment, not an afterthought.
“For technology leaders and managers, ensuring that AI is responsible starts with how it is built,” Rohan Sen, head of cyber, data and technology risk at PwC US and co-author of the survey report, told ZDNET.
“To build trust and safely scale AI, focus on embedding responsible AI at every stage of the AI development lifecycle and include key functions such as cyber, data governance, privacy and regulatory compliance,” Sen said.
Also: 6 essential rules for incorporating AI into your software development process – and the No. 1 risk
2. Give AI a purpose – don’t just deploy AI for AI’s sake: “Too often, leaders and their technology teams treat AI as a tool for experimentation, generating countless bytes of data,” said Danielle Ann, senior software architect at Meta.
“Use technology with taste, discipline, and purpose. Use AI to sharpen human intuition – to test ideas, identify weak points, and accelerate informed decisions. Design systems that enhance human judgment, not replace it.”
3. Underscore the importance of responsible AI from the start: According to Joseph Logan, chief information officer at iManage, responsible AI initiatives “must start with clear policies that define acceptable AI use and clarify what is prohibited.”
“Start with a value statement around ethical use,” Logan said. “From here, prioritize periodic audits and consider a steering committee that spans privacy, security, legal, IT, and procurement. Constant transparency and open communication are paramount, so users know what’s approved, what’s pending, and what’s prohibited. Additionally, investing in training can help strengthen compliance and ethical use.”
4. Make responsible AI part of the job: Responsible AI practices and oversight need to be given the same priority as security and compliance, said Mike Blandina, Snowflake’s chief information officer. “Ensure models are transparent, explainable, and free of harmful bias.”
Also critical to such efforts are governance frameworks that meet the needs of regulators, boards, and customers. “These frameworks need to span the entire AI lifecycle – from data sourcing to model training, deployment, and monitoring.”
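Blandina’s call for models “free of harmful bias” implies measurable checks. One deliberately simple screening metric is demographic parity difference; the sketch below, with made-up field names, shows how a team might compute it over a batch of model decisions. It is one signal among many, not a full fairness audit.

```python
def demographic_parity_difference(decisions: list[dict]) -> float:
    """Gap in positive-outcome rates across groups.

    `decisions` holds records like {"group": "A", "approved": True};
    the field names are illustrative, not a standard schema.
    A value near 0.0 suggests parity on this one metric.
    """
    rates = {}
    for group in {d["group"] for d in decisions}:
        members = [d for d in decisions if d["group"] == group]
        rates[group] = sum(d["approved"] for d in members) / len(members)
    return max(rates.values()) - min(rates.values())

# Toy batch of model decisions (fabricated for illustration only).
sample = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
print(demographic_parity_difference(sample))  # ~0.33 -> worth investigating
```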
Also: The best free AI courses and certifications for upskilling – and I’ve tried them all
5. Keep humans in the loop at all stages: “Continuously discuss how to use AI responsibly to drive value for customers while ensuring both data security and IP concerns are addressed,” said Tony Morgan, senior engineer at Priority Designs.
“Our IT team reviews and vets every AI platform we approve to ensure it meets our standards to protect us and our customers. To respect new and existing IP, we make sure our team is educated on the latest models and methods, so they can implement them responsibly.”
6. Avoid acceleration risk: “Many technology teams feel pressure to put generative AI into production before they can answer every question or account for every risk,” said Andy Zenkevich, founder and CEO of Epic.
“A new AI capability can be so exciting that projects rush it into production. The result is often a great demo. Then things break when real users start relying on it. Maybe the wrong kind of transparency is in place. Maybe it’s not clear who is accountable if the system returns something invalid. Take extra time for risk mapping and model-interpretability checks. The business loss from missing an early deadline is nothing compared to the cost of fixing a broken rollout.”
Also: Everyone thinks AI will change their business – but only 13% do
7. Document, document, document: Ideally, “every decision made by AI should be logged, easy to explain, auditable, and have a clear path for humans to follow,” McGee said. “Any effective and sustainable AI governance program will include a review cycle every 30 to 90 days to appropriately examine assumptions and make necessary adjustments.”
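The “logged, easy to explain, auditable” standard maps naturally onto an append-only decision log. The sketch below is a minimal illustration, not a standard schema: the field names and the `decisions.jsonl` file are assumptions. Each AI decision is recorded with its inputs, output, model version, and a human escalation path.

```python
import json
import time
import uuid

def log_ai_decision(logfile: str, model_version: str, inputs: dict,
                    output: str, explanation: str, reviewer: str) -> str:
    """Append one auditable AI decision as a JSON line.

    Field names are illustrative; the point is that every decision
    carries enough context to be explained and re-reviewed later.
    """
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,  # why the model decided this
        "escalate_to": reviewer,     # the clear path for humans
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

# Hypothetical usage: a credit decision with a traceable rationale.
decision_id = log_ai_decision(
    "decisions.jsonl", "credit-model-v3",
    {"income": 52000, "term_months": 36},
    "approved", "score 0.91 above 0.85 threshold", "risk-team@example.com",
)
```

A 30-to-90-day review cycle then becomes a query over this log rather than a scramble to reconstruct what the model did.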
8. Check your data: “How organizations source training data can have significant security, privacy, and ethical implications,” said Fredrik Nilsson, vice president of the Americas at Axis Communications.
“If an AI model consistently shows signs of bias or has been trained on copyrighted material, customers are likely to think twice before using that model. Businesses should use their own, fully vetted data sets, rather than external sources, when training AI models, to avoid the exfiltration or exposure of sensitive information and data. The more control you have over the data your model is using, the easier it will be to mitigate ethical concerns.”
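The “fully vetted data sets” advice can be enforced mechanically. The sketch below is a hypothetical manifest scheme (not an Axis product or any standard tool): it hashes each candidate training file and admits only those whose digests a data-governance team has already approved.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash used as the file's identity in the manifest."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def vetted_files(data_dir: str, manifest: set[str]) -> list[Path]:
    """Return only training files whose hashes appear in the vetted manifest.

    `manifest` is a set of approved SHA-256 digests maintained by a
    data-governance team (an assumed process, for illustration).
    Unknown files -- possibly scraped or copyrighted -- are skipped.
    """
    approved, rejected = [], []
    for path in Path(data_dir).rglob("*"):
        if path.is_file():
            (approved if sha256_of(path) in manifest else rejected).append(path)
    if rejected:
        print(f"excluded {len(rejected)} unvetted file(s) from training")
    return approved
```

Gating the training pipeline on a check like this keeps control over exactly which data a model sees, which is the leverage Nilsson describes.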

