
ZDNET Highlights
- Anthropic announced Skills for Claude on Wednesday.
- Users are able to create their own custom skills.
- Keep in mind that more access to personal data means more risk.
Anthropic on Wednesday launched Skills for Claude, a new feature aimed at making the company’s flagship AI chatbot more customizable for individual users and businesses.
You can think of skills as little digital instruction manuals, each of which Claude can use to guide its behavior on a given task. Anthropic described them in a blog post as “custom onboarding content that lets you package expertise, making Claude the expert that matters most to you.”
How do they work?
Anthropic has developed its own ready-made skills – including skills for generating Word documents, Excel spreadsheets, and PowerPoint presentations – but users can also create and upload their own custom skills. Claude automatically determines which skills to use based on the user’s prompt.
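In Anthropic’s documentation, a skill is essentially a folder containing a SKILL.md file: a short metadata header that tells Claude when the skill applies, followed by plain-language instructions. A minimal sketch of a hypothetical custom skill might look like this (the skill name, description, and rules below are illustrative, not an actual Anthropic example):

```markdown
---
name: brand-guidelines
description: Applies Acme Corp's brand voice and slide conventions when drafting marketing documents and pitch decks.
---

# Brand Guidelines

- Write in a confident, plainspoken voice; avoid jargon.
- Use the approved palette: navy (#1B2A4A) and gold (#D4A947).
- Every pitch deck opens with a one-slide problem statement.
```

Because the description in the header tells Claude what the skill is for, the model can decide on its own to load these instructions whenever a prompt involves, say, drafting a client presentation.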
The main idea behind Skills is to make Claude more customizable: individuals and businesses have their own unique preferences when it comes to using AI chatbots, and Skills are essentially a shortcut to ensure you get the outputs you’re looking for as quickly as possible.
Plus: Claude’s latest model is cheaper and faster than Sonnet 4 – and free
For example, let’s say you’re turning to Claude for help creating a pitch deck for a potential new client. Instead of carefully explaining the nuances of your brand voice and sales approach, Claude can automatically reference the “Brand Guidelines” skill you uploaded earlier and, from there, infer everything it needs to know to generate a PowerPoint presentation tailored to your company. (You can learn more about creating custom skills here.)
The launch of Skills is also a step toward making Claude more agentic – better able to complete complex tasks with minimal user oversight, thanks to an increased ability to analyze the context and goals of particular users.
Skills are now available to Pro, Max, Team, and Enterprise users, and can be enabled through Settings > Capabilities. Administrators of Team and Enterprise accounts must first enable organization-wide access before individual employees can use the feature. Skills work across the Claude apps, Claude Code, and the Claude API, meaning developers only need to build them once before deploying them across their organizations.
From generalists to experts
AI chatbots like ChatGPT, Claude, and Gemini were designed as general-purpose tools, trained on effectively the entire internet and able to respond to prompts covering virtually any topic.
However, tech developers are increasingly turning their AI systems into more specialized tools. An emerging vision now taking hold in Silicon Valley is a future in which each of us has a personal AI assistant (or a collection of assistants) that knows us intimately and is customized to execute tasks according to our particular preferences.
Anthropic’s launch of Skills for Claude on Wednesday is the latest in a long line of similar efforts across the AI industry. Just last week, for example, Microsoft launched new connectors for Copilot, which allow the chatbot to pull user information from Gmail, Google Drive, and other apps. Also last week, Anthropic announced a new Slack integration for Claude.
Security considerations
All of these personalization features also introduce new risks. The more personal data you choose to share with AI tools, the more stringent your security protocols should be.
Plus: Claude now integrates directly with Microsoft 365
While each AI developer has its own data privacy policies, it is generally wise to operate under the assumption that any information you share with a chatbot will be stored by the system and added to its training data, raising the possibility that it could be inadvertently exposed to another user in the future.
The fact that Skills can enable Claude to automatically run code carries its own dangers. “While powerful, this means you should be careful about which skills you use – stick to trusted sources to keep your data safe,” Anthropic wrote in its blog post.

