The social platform X will pilot a feature that allows AI chatbots to generate Community Notes.
Community Notes is a Twitter-era feature that Elon Musk has expanded under his ownership of the service, now called X. Users who are part of this fact-checking program can contribute comments that add context to certain posts, which are then checked by other users before they appear attached to a post. A Community Note may appear, for example, on a post of an AI-generated video that is not clear about its synthetic origins, or as an addendum to a misleading post from a politician.
Notes become public when they achieve consensus between groups that have historically disagreed on past ratings.
Community Notes have been successful enough on X to inspire Meta, TikTok, and YouTube to pursue similar initiatives; Meta eliminated its third-party fact-checking programs altogether in exchange for this low-cost, community-sourced labor.
But it remains to be seen whether the use of AI chatbots as fact-checkers will prove helpful or harmful.
These AI notes can be generated using X's own Grok or other AI tools connected to X via an API. Any note that an AI submits will be treated the same as a note submitted by a person, which means it will go through the same vetting process to encourage accuracy.
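X has not published technical details of this pipeline at the time of writing, so the sketch below is only a minimal illustration of the flow described above; the endpoint, payload fields, and `author_type` flag are assumptions invented for this example, not X's documented API.

```python
# Minimal sketch of submitting an AI-drafted note to a review queue.
# The URL, fields, and auth scheme are hypothetical placeholders.
import requests

API_BASE = "https://api.x.com/2"  # assumed base URL for illustration only

def submit_ai_note(post_id: str, note_text: str, bearer_token: str) -> dict:
    """Submit an AI-generated note on a post; per X, it then enters the
    same rating process as a note written by a human contributor."""
    resp = requests.post(
        f"{API_BASE}/notes",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {bearer_token}"},
        json={"post_id": post_id, "text": note_text, "author_type": "ai"},
        timeout=10,
    )
    resp.raise_for_status()
    # The note is not public at this point; it must still be rated by users.
    return resp.json()
```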
The use of AI in fact-checking seems dubious, given how common it is for AIs to hallucinate, or make up context that is not based in reality.

A paper published this week by researchers working on X Community Notes recommends that humans and LLMs work in tandem. Human feedback can enhance AI note generation through reinforcement learning, with human note raters remaining as a final check before notes are published.
“The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that empowers humans to think more critically and understand the world better,” the paper says. “LLMs and humans can work together in a virtuous loop.”
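In schematic form, that loop might look like the sketch below. Every name here is an invented placeholder, not code from the paper or from X: an LLM drafts a candidate note, humans rate it as they would any note, the ratings double as a reinforcement signal for future drafts, and only notes that bridge historically disagreeing groups are published.

```python
# Schematic of the human-LLM "virtuous loop" described in the paper.
# All functions and fields are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class Rating:
    score: float                 # aggregate helpfulness from human raters
    cross_group_consensus: bool  # agreement across groups that usually disagree

def draft_note(post: str) -> str:
    return f"Context for: {post}"  # stand-in for an LLM call

def collect_ratings(note: str) -> Rating:
    return Rating(score=0.8, cross_group_consensus=True)  # stand-in for raters

def reinforce(note: str, reward: float) -> None:
    pass  # stand-in: human ratings feed back as a reinforcement-learning reward

def run_loop(posts: list[str]) -> list[str]:
    published = []
    for post in posts:
        note = draft_note(post)           # LLM drafts a candidate note
        rating = collect_ratings(note)    # humans rate it, as with any note
        reinforce(note, rating.score)     # feedback improves future drafts
        if rating.cross_group_consensus:  # humans remain the final check
            published.append(note)
    return published
```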
Even with human checks, there is still a risk of relying too heavily on AI, especially since users will be able to embed LLMs from third parties. OpenAI’s ChatGPT, for example, recently ran into issues with a model that was overly sycophantic. If an LLM prioritizes “helpfulness” over accurately completing a fact-check, the AI-generated comments may end up flatly inaccurate.
There is also a concern that human raters will be overloaded by the volume of AI-generated comments, lowering their motivation to adequately complete this volunteer work.
Users shouldn’t expect to see AI-generated Community Notes yet. X plans to test these AI contributions for a few weeks before rolling them out more broadly if they prove successful.
