MPAI extends 3 of its standards and develops 2 new standards

Contact Us
5 Cours des Bastions, c/o Me Olivier BRUNISHOLZ
CH-1205 Geneva, Switzerland

Geneva, Switzerland – 26 October 2022. The international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation has concluded its 25th General Assembly (MPAI-25). Among the outcomes is the decision, based on substantial inputs received in response to its Calls for Technologies, to extend three of its existing standards and to initiate the development of two new ones.

The three standards being extended are:
1. AI Framework (MPAI-AIF). AIF is an MPAI-standardised environment where AI Workflows (AIW) composed of AI Modules (AIM) can be executed. Based on substantial industry input, MPAI is in a position to extend the MPAI-AIF specification with a set of APIs that allow a developer to configure a security solution adequate for the intended application.
2. Context-based Audio Enhancement (MPAI-CAE). Currently, MPAI-CAE specifies four use cases: Emotion-Enhanced Speech, Audio Recording Preservation, Speech Restoration Systems, and Enhanced Audioconference Experience. The last use case includes technology to describe the audio scene of an audio/video conference room in a standard way. MPAI-CAE is being extended to support more challenging environments such as human interaction with autonomous vehicles and metaverse applications.
3. Multimodal Conversation (MPAI-MMC). MPAI-MMC V1 specified a robust and extensible emotion description system. In the V2 currently under development, MPAI is generalising the notion of Emotion to cover two more internal statuses, Cognitive State and Attitude, and is specifying a new data format, called Personal Status, that covers all three internal statuses.
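To make the idea of a combined data format concrete, here is a minimal sketch of a Personal Status record covering the three internal statuses named above. The field names, value labels, and intensity scale are illustrative assumptions only; they are not the normative MPAI-MMC V2 format.

```python
# Hypothetical sketch of a Personal Status record combining the three
# internal statuses named in MPAI-MMC V2: Emotion, Cognitive State,
# and Attitude. Field names and values are illustrative assumptions,
# not the normative MPAI data format.
personal_status = {
    "emotion": {"label": "happy", "intensity": 0.8},
    "cognitive_state": {"label": "interested", "intensity": 0.6},
    "attitude": {"label": "friendly", "intensity": 0.7},
}

def dominant_status(ps):
    """Return the name of the internal status with the highest intensity."""
    return max(ps, key=lambda k: ps[k]["intensity"])
```

A consumer of such a record (for instance, an avatar animation module) could use `dominant_status` to pick which internal status to express most prominently.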

The two new standards are:
1. Avatar Representation and Animation (MPAI-ARA). The standard intends to provide technology to enable:
a. A user to generate an avatar model and descriptors for animating it, and an independent user to animate the model using those descriptors.
b. A machine to animate a speaking avatar model expressing the Personal Status that the machine has generated during the conversation with a human (or another avatar).
2. Neural Network Watermarking (MPAI-NNW). The standard specifies methodologies to evaluate neural network watermarking technologies:
a. The impact on the performance of a watermarked neural network (and its inference) relative to the initial, non-watermarked network.
b. The ability of the detector/decoder to detect/decode a payload when the watermarked neural network has been modified.
c. The computational cost of injecting a payload, and of detecting or decoding it from the watermarked network.
Development of these standards is planned to be completed in the early months of 2023.

MPAI-25 has also confirmed its intention to develop a Technical Report (TR) called MPAI Metaverse Model (MPAI-MMM). The TR will cover all aspects underpinning the design, deployment, and operation of a Metaverse Instance, especially interoperability between Metaverse Instances.

So far, MPAI has developed five standards for applications that have AI as the core enabling technology. It is now extending three of those standards, developing two new standards and one Technical Report, and drafting functional requirements for nine future standards. It is thus a good opportunity for legal entities supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data to join MPAI, also considering that memberships starting on or after 1 November 2022 are immediately active and last until 31 December 2023.

Please visit the MPAI web site for more information on MPAI standards.

Contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter, and follow MPAI on social media:
– LinkedIn
– Twitter
– Facebook
– Instagram
– YouTube

Most important: join MPAI, share the fun, build the future.