
Follow ZDNET: Add us as a favorite source On Google.
Key takeaways of zdnet
- Cloud AI can now make and edit documents and other files.
- The facility can compromise your sensitive data.
- Monitor each conversation with AI for suspicious behavior.
Most popular generic AI services can work to some extent with your personal or work -related data and files. Reverse? This can save you time and labor, whether it is at home or at a job. negative side? With accessible or confidential information access, AI can be cheated in sharing that data with the wrong people.
Also: Cloud chat can now make PDF, slide and spreadsheet for you
The latest example is cloud AI of anthropic. On Tuesday, the company announced that its AI can now create and edit Word Documents, Excel spreadsheets, powerpoint slides and PDFS directly Word Documents, Excel spreadsheets, powerpoint slides and edit Cloud website And in desktop apps for windows and Macos. Just describe what you want on the signal, and the cloud hope you will distribute the results you want.
For now, the facility is available only for Cloud MaxTeam, and enterprise subscribers. However, Anthropic said it would be available to Pro users in the coming weeks. To access the new file construction feature, go to the settings and select the option for “upgraded file construction and analysis” under the experimental category.
Risk warning
Looks like a useful skill, right? But before you dive, keep in mind that this type of interaction includes risk. In its Tuesday news releaseEven anthropic admitted that “the cloud gives an internet access to create and analyze the feature files, which can put your data at risk.”
Also: AI agents will threaten humans to achieve their goals, anthropic reports.
But A support pageThe company delayed the potential risks more deeply. Keeping some safety in mind, this feature provides clouds with a sandbox environment that uses limited Internet to download and use JavaScript packages for the process.
But even with that limited internet access, an attacker can use early injections and other tricks to add instructions through external files or websites, which trick the cloud to run malicious codes or read sensitive data from the connected source. From there, the code can be programmed to use a sandbox environment to connect the outer network and leak data.
Is security available?
How can you secure yourself and your data from this type of agreement? The only advice that offers anthropic offer is to monitor the cloud when you work with a file construction feature. If you notice it unexpectedly using or using data, stop it. You can also report issues using a thumb-down option.
Also: AI’s free web scrapping days can end, thanks for this new licensing protocol
Well, it is not all very helpful, as it puts the burden on the user for malicious or suspicious attacks. But this is equal to the course for the generous AI industry at this point. Prompt injection is an familiar and notorious way for the attackers to enter malicious codes in an AI prompt, giving them the ability to compromise sensitive data. Nevertheless, AI providers have slowed down to combat such hazards, with users at risk.
In an attempt to combat the dangers, the Anthrics outlined several characteristics for cloud users.
- You have complete control over the file construction feature, so you can turn it on and off at any time.
- You can monitor the progression of the cloud when using the facility and stop its functions whenever you want.
- You are capable of reviewing and auditing the work done by the cloud in the atmosphere with sandbox.
- You can disable public sharing of conversations that include any information from convenience.
- You are capable of limiting the duration of any work completed by the cloud and the amount of time allotted to the single sandbox container. Doing so can help you to avoid loops that can indicate malicious activity.
- Network, containers and storage resources are limited.
- You can set rules or filters to detect quick injection attacks and stop them if they are detected.
Also: Microsoft Words and Excels AI Anthropic for AI, signaling distance from OpenaiI
Maybe the facility is not for you
Anthropic said in its release, “We have done red-teaming and safety tests on the feature.” “We have a continuous process for the ongoing safety testing and red-teaming of this feature. We encourage organizations to evaluate these safety against their specific security requirements when deciding to enable this feature.”
That last sentence can be the best advice of all. If your business or organization sets the file construction of the cloud, you want to assess it against your own safety rescue and see if it passes the muster. If not, the facility is probably not for you. The challenges for home users can be even greater. In general, avoid sharing individual or sensitive data in your signals or conversations, see out for unusual behavior from AI, and update AI software regularly.

