“We are selectively hiring for AI and machine learning expertise, but we are also investing in our existing talent – training them to understand how AI works, how to validate models, and how to use these tools responsibly,” she says.
Feeling the pressure to move fast
Knesek is concerned about AI’s unknowns, yet she says companies are pushing security teams to build new capabilities quickly so they can say their products have AI embedded in them. Security, she says, “is like the transportation team laying roads and guardrails so that things don’t spin out of control. We are working at breakneck speed in some areas, and the reality is that we don’t know what the threats are. So we are trying to make sure we have the strongest guardrails in place.”

Jill Knesek, CISO, BlackLine
Echoing Olex, Knesek says she feels strongly about using traditional security fundamentals and keeping the right controls in place. Getting the security basics right, she says, will take you a long way.
“Then, as you learn about more sophisticated attacks … we have to build our tooling and capabilities for those risks. For now, the most important thing for us is keeping pace with how quickly the business is moving, and making sure that today [security] is doing what it needs to do from a fundamentals point of view,” she says.
Raising output
As organizations reconsider their approach to security, Olex recommends that CISOs not be “dazzled by the hype,” and remember that AI is a tool, not a strategy. “Treat it like any other technology investment,” he says. “Start with your risk priorities, then decide where AI can actually help.”
This means remembering that AI amplifies both strengths and weaknesses. “If your asset inventory is incomplete, if your IAM controls are loose, or if your patching cadence is poor, AI will not cure those problems; it will accelerate the mess,” says Olex.
It is also important to take a cautious approach to deployment. He recommends piloting AI tools in narrow use cases – such as alert triage, log analysis, and phishing detection – and measuring the results. “Focus on augmenting human decision-making, not replacing it,” he says.
Security teams will also build trust through transparency. “Train your teams to question AI output, and educate your executives and employees on both the benefits and the risks,” says Olex. “The CISO’s job is not only to deploy AI tools, but to ensure the organization understands how they fit into the larger security picture.”
Forming coalitions
AI should be used where it helps reduce risk, improve speed, or strengthen resilience, says DeFiore. “Form partnerships with legal, data, and operations teams,” she says. “Invest in education across the organization and stay grounded in ethics. AI’s decisions have real-world consequences, so organizations should use AI with care, considering how it is used and the potential accountability implications tied to it.”
While AI is a powerful tool, DeFiore says it is people who make it meaningful. “At United, security is our foundation. AI helps us deliver on that promise with more precision and agility – but it is the human judgment behind it that drives trust, impact, and long-term value,” she says.
Olex says AI is nothing to be afraid of, but its unique impact on security should be respected.
Lander emphasizes the need to recognize that AI is not just a new tool but “a new domain” that requires careful governance, thoughtful integration, strategic thinking, and continuous learning. By embedding security from day one, engaging cross-functional stakeholders, confronting AI’s unique risks, and investing in people, he recommends that CISOs plan and prepare for the AI era – ensuring that AI is managed not as a silo, but as a shared responsibility.

