
Technology standardization has always been something of an elusive holy grail, with new technologies emerging faster than standards groups can keep up. Nevertheless, somehow, things eventually come together – at least for mature systems – and interoperability is achieved, whether it's email networks or developer tools.
Now, a new race against time has come to light, with efforts to standardize one of the fastest-growing technologies of all: artificial intelligence. Can standards groups, with their deliberately slow and highly participatory deliberations, stay ahead of the AI curve? And can they achieve standards for a technology as abstract and inscrutable as AI, which also reinvents itself every few months?
Also: 60% of managers now use AI to make decisions, including who to promote and fire – does yours?
A good case can be made for AI standards, as it is a technology filled with traps: deepfakes, bias, misinformation, and hallucinations. And, unlike the technologies that have come before, AI presents more than a software engineering problem – it is a societal problem.
A coalition of standards bodies wants to take a new approach to AI, recognizing its broad implications. Active efforts are underway to bring in non-technical professionals who will help define AI in the years ahead.
One such collective of standards bodies – the AI and Multimedia Authenticity Standards Collaboration (AMAS) – seeks appropriately trusted AI for a justifiably suspicious world. The initiative was recently announced at the AI for Good Global Summit in Geneva, calling for efforts to address the misuse of AI-generated and AI-altered content. The effort is driven by the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO), and the International Telecommunication Union (ITU).
Also: I used Google’s photo-to-video AI tool on my selfie – and it made me tango
The group hopes to develop standards that will help protect the integrity of information, uphold personal rights, and promote trust in the digital ecosystem. They want to ensure that users can identify the provenance of AI-generated and altered content. Human rights, rarely mentioned in technical standards before, are top of mind for today's standards proponents.
All good stuff, of course. But will major enterprises and technology firms fully buy into AI standards that could become an obstacle to innovation in a rapidly evolving space?
“We are basically saying that the AI space is a bit messy, because the technology goes in all directions,” said Gilles Thonet, deputy secretary-general of the IEC, in a private briefing. “You will not get an answer at the level of the AI application. You need to define what a system is.”
Since AI systems involve interactions at several levels, defining those systems may be essential. For example, Thonet continued: “Consider the systems involved in driving a car: distance keeping, wheel rotation, all the sensors.”
Market incentives drive the adoption of standards, Thonet said. “It is basically trying to understand a series of needs.” In this process, the role of standards organizations such as the IEC is evolving, from efforts limited to engineers to efforts that include wide cross-sections of society.
Also: 5 reasons why I still prefer Perplexity over every other AI chatbot
“This change in mindset is important for us,” Thonet continued. “Previously, if I talked to an engineer and mentioned the words ‘human rights,’ they would answer that ‘this is not our job, we only worry about standards.’ What we have seen in recent years is a change in the makeup of the technical committees and subcommittees.”
The categories of standards under development within AMAS include content provenance, trust and authenticity, asset identifiers, and rights declarations. Such efforts began in earnest about five years ago, with the creation of a foundational standard for trustworthiness in artificial intelligence. That standard provided guidelines for assessing the reliability and integrity of AI systems. Earlier this year, IEC and ISO published the first part of a new JPEG Trust series of international standards for media, including video and audio – a major weapon against the rise of deepfake videos and images.
The standards issued this year under the aegis of AMAS include the following:
- JPEG Trust Part 1: Focuses on trust and authenticity in JPEG images through provenance and fact-checking. It provides a framework for embedding metadata directly in JPEG files as trust indicators (a simplified illustration follows this list).
- Content Credentials: Establishes methods for documenting the provenance of digital content, ensuring that its origins can be traced and its authenticity verified. It specifies the types of metadata that must be included and the format for storing this information.
- CAWG metadata: Provides a framework for expressing detailed information about content, including ownership and authorship.
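To make the embedded-metadata idea concrete, here is a minimal sketch in Python. It is not the actual JPEG Trust format, which builds on JUMBF containers and is far richer; it only shows the underlying mechanism the standard relies on, namely tucking a machine-readable trust payload into an application segment of a JPEG file. The file name and metadata fields are hypothetical.

```python
import json
import struct

def embed_trust_metadata(jpeg_bytes: bytes, metadata: dict) -> bytes:
    """Insert a JSON payload as an APP11 segment right after the SOI marker.

    A simplified stand-in for trust-indicator embedding; real JPEG Trust
    metadata is structured in JUMBF boxes, not bare JSON.
    """
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    payload = json.dumps(metadata).encode("utf-8")
    # The segment length field counts itself (2 bytes) plus the payload.
    segment = b"\xff\xeb" + struct.pack(">H", len(payload) + 2) + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]

# Hypothetical trust indicators attached to a local photo.
metadata = {
    "claim_generator": "example-tool/1.0",
    "assertions": ["created-by-camera", "no-ai-edits"],
}
with open("photo.jpg", "rb") as f:
    stamped = embed_trust_metadata(f.read(), metadata)
with open("photo_stamped.jpg", "wb") as f:
    f.write(stamped)
```

Because the payload rides in a standard application segment, ordinary JPEG decoders ignore it, while trust-aware tools can read it back out – the same basic property the real standard exploits.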
Also: I found 5 AI content detectors that can correctly identify AI text 100% of the time
Many more standards now in the pipeline seek to build confidence in digital media and AI:
- Digital watermarking: Overseen by IEEE, this proposed standard provides methods for evaluating the robustness of digital watermarking. It includes guidelines for creating and maintaining evaluation files, which can be used to document the evaluation of digital assets.
- Basic profile: Includes guidelines for creating and maintaining profiles that capture the content's creator and detailed information about the creation process.
- Trust.txt: Outlines methods for establishing confidence in digital content, including guidelines for creating and maintaining trust.txt files, which can be used to document the trustworthiness of digital assets.
- Use case terminology: A standardized vocabulary of use cases that can be targeted when expressing machine-readable opt-outs related to text and data mining and AI training. It enables parties to declare restrictions or permissions on the use of digital assets.
- Authentication of multimedia content: Specifies a technical solution for verifying the integrity of multimedia content, allowing users to confirm the authenticity of content from its creators. The solution is based on digitally signing the data stream: the content creator (encoder) uses a private key to sign the content, while the recipient (decoder) uses a related public key to verify authenticity (see the sketch after this list).
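The sign-and-verify pattern in that last item is standard public-key cryptography. Here is a minimal sketch using the third-party Python cryptography package, with Ed25519 keys as an arbitrary choice; the standard's actual key types, encodings, and wire format are not specified here, and the media file name is hypothetical.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Creator (encoder) side: sign the raw media data stream with a private key.
private_key = Ed25519PrivateKey.generate()
with open("clip.mp4", "rb") as f:  # hypothetical media file
    media_stream = f.read()
signature = private_key.sign(media_stream)

# The signature and the creator's public key accompany the content.
public_key = private_key.public_key()

# Recipient (decoder) side: verify that the stream is untouched.
try:
    public_key.verify(signature, media_stream)
    print("Authentic: stream matches the creator's signature")
except InvalidSignature:
    print("Tampered: stream no longer matches the signature")
```

Any change to the data stream after signing, even a single flipped bit, causes verification to fail, which is what lets recipients detect tampering without trusting the delivery channel.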
Also: Why ignoring AI ethics is such risky business – and how to do AI right
Want more stories about AI? Sign up for Innovation, our weekly newsletter.