
This standard defines a set of tools for efficient image coding, including tools for encoding, for decoding, and for encapsulation. Some of the tools are based on trained neural networks and shall perform block partitioning, prediction, transform, quantization, entropy coding, filtering, and so on.
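
As a rough, non-normative illustration of the block-partitioning step named in this abstract (the 16x16 block size and zero-padding policy below are arbitrary assumptions, not taken from the standard), a minimal sketch in Python might look like this:

```python
import numpy as np

def partition_into_blocks(image: np.ndarray, block: int = 16) -> np.ndarray:
    """Split a 2-D image into non-overlapping block x block tiles.

    Edges are zero-padded so every tile has the same shape; a real codec
    would define its own padding and partitioning rules.
    """
    h, w = image.shape
    pad_h = (block - h % block) % block   # rows of padding needed
    pad_w = (block - w % block) % block   # columns of padding needed
    padded = np.pad(image, ((0, pad_h), (0, pad_w)), mode="constant")
    H, W = padded.shape
    # Arrange into a grid of tiles: (block_rows, block_cols, block, block).
    return padded.reshape(H // block, block, W // block, block).swapaxes(1, 2)

# A 100x130 image becomes a 7x9 grid of 16x16 blocks.
tiles = partition_into_blocks(np.zeros((100, 130)))
print(tiles.shape)  # (7, 9, 16, 16)
```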

Price: ¥168 / Discounted price: ¥143

ITU-T H.264.1 Conformance specification for H.264 advanced video coding (Historical). Publication date: 2013-06-01. Implementation date:

Price:


This standard specifies the storage file formats and real-time transport protocol (RTP) payload formats for IEEE 1857 video, IEEE 1857.2 audio, IEEE 1857.4 video, and IEEE 1857.5 mobile speech and audio. The storage of video and audio not only uses the existing capabilities of the ISO base media file format, but also defines extensions to support specific features of the IEEE 1857, IEEE 1857.2, IEEE 1857.4, and IEEE 1857.5 video and audio codecs. The target applications and services include but are not limited to Internet media streaming, IPTV, video conference, video telephony, and video-on-demand.
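
The payload-specific headers for IEEE 1857-family streams are defined in the standard itself and are not reproduced here; purely as a generic illustration of the RTP side, the sketch below packs the 12-byte RTP fixed header from RFC 3550 (the payload type 96 and other field values are hypothetical):

```python
import struct

def rtp_fixed_header(payload_type: int, seq: int, timestamp: int, ssrc: int,
                     marker: bool = False) -> bytes:
    """Pack the 12-byte RTP fixed header (RFC 3550): V=2, P=0, X=0, CC=0.

    An IEEE 1857 payload format would append its own payload header and
    data after this; that part is not shown.
    """
    vpxcc = 2 << 6                                    # version 2, no padding/extension/CSRCs
    m_pt = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", vpxcc, m_pt, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

# Hypothetical dynamic payload type 96 for a video stream.
header = rtp_fixed_header(payload_type=96, seq=1, timestamp=90000, ssrc=0x12345678)
print(header.hex())  # 8060000100015f9012345678
```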

Price:


This standard defines a set of tools for efficient video coding and the corresponding decoding procedure, including intra prediction, inter prediction, transform, quantization, and entropy coding.
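
As a generic, non-normative illustration of the quantization stage listed above (the uniform step size and rounding rule are placeholders, not values or procedures from the standard):

```python
import numpy as np

def quantize(coeffs: np.ndarray, qstep: float) -> np.ndarray:
    """Uniform scalar quantization of transform coefficients."""
    return np.round(coeffs / qstep).astype(np.int32)

def dequantize(levels: np.ndarray, qstep: float) -> np.ndarray:
    """Reconstruct approximate coefficients from quantization levels (lossy)."""
    return levels.astype(np.float64) * qstep

coeffs = np.array([12.7, -3.2, 0.4, 25.0])
levels = quantize(coeffs, qstep=4.0)      # [ 3 -1  0  6]
recon = dequantize(levels, qstep=4.0)     # [12. -4.  0. 24.]
print(levels, recon)
```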

Price: ¥160 / Discounted price: ¥136


This standard defines a set of tools for efficient image coding, including tools for encoding, for decoding, and for encapsulation. All or some of the tools may be based on trained neural networks, and may perform block partitioning, prediction, transform, quantization, entropy coding, filtering, etc.

Price: ¥160 / Discounted price: ¥136


This standard sets forth bar-code label requirements for overhead, pad-mounted, and underground-type distribution transformers and step-voltage regulators. Included herein are requirements for data content, symbology, label layout, print quality, and label life expectancy. This standard assumes the existence of central transformer databases within utility companies so that bar-code labels need only carry basic transformer identification data.
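
The symbology and data-content rules are specified in the standard itself; purely as a generic illustration of a bar-code check character (assuming Code 39, which may or may not be the symbology the standard requires, and a made-up identification string), a modulo-43 check character is computed like this:

```python
# Code 39 character set in value order (values 0..42).
CODE39 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-. $/+%"

def code39_check_char(data: str) -> str:
    """Return the modulo-43 check character for a Code 39 message."""
    total = sum(CODE39.index(c) for c in data)
    return CODE39[total % 43]

label = "TX1234567"   # hypothetical transformer ID, not a format from the standard
print(label + code39_check_char(label))
```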

Price: ¥55 / Discounted price: ¥47


This standard sets forth information code requirements for overhead, pad-mounted, and subsurface distribution transformers and step-voltage regulators. Included are requirements for data content, symbology, layout, and life expectancy. This standard assumes the existence of user databases so information codes need only carry basic identification data.

Price: ¥80 / Discounted price: ¥68


This standard sets forth information code requirements for overhead, pad-mounted, and subsurface distribution transformers and step-voltage regulators. Included are requirements for data content, symbology, layout, and life expectancy. This standard assumes the existence of user databases so information codes need only carry basic identification data.

Price: ¥56 / Discounted price: ¥48


This standard specifies a method to construct two-level low-density parity-check (LDPC) codes and to utilize them as the error correction coding (ECC) scheme in non-volatile memories (NVM). The encoding and decoding methods, as well as the implications for memory and overall system latency, are presented. Simulation results comparing the two-level code construction scheme with the traditional one-level scheme, together with parity-check matrices for several LDPC code rates and lengths, are provided.
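
As a minimal sketch of how a parity-check matrix is applied at the decoder (the tiny H below is only illustrative; it is not one of the matrices provided by the standard, which are far larger and sparser):

```python
import numpy as np

# Toy parity-check matrix over GF(2).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def syndrome(codeword: np.ndarray) -> np.ndarray:
    """Compute H * c^T over GF(2); an all-zero syndrome means every check passes."""
    return (H @ codeword) % 2

c = np.array([1, 0, 1, 1, 1, 0], dtype=np.uint8)
print(syndrome(c))   # [0 0 0] -> codeword satisfies all parity checks
c[2] ^= 1            # flip one bit to mimic an NVM read error
print(syndrome(c))   # non-zero syndrome flags the error for the LDPC decoder
```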

Price: ¥83 / Discounted price: ¥71


Multimodal Conversation (MPAI-MMC) is an MPAI Standard comprising five Use Cases, all sharing the use of artificial intelligence (AI) to enable a form of human-machine conversation that emulates human-human conversation in completeness and intensity:
1. "Conversation with Emotion" (CWE), supporting audio-visual conversation with a machine impersonated by a synthetic voice and an animated face.
2. "Multimodal Question Answering" (MQA), supporting requests for information about a displayed object.
3. Three Use Cases supporting conversational translation applications. In each Use Case, users can specify whether speech or text is used as input and, if it is speech, whether their speech features are preserved in the interpreted speech:
   a. "Unidirectional Speech Translation" (UST).
   b. "Bidirectional Speech Translation" (BST).
   c. "One-to-Many Speech Translation" (MST).

Price: ¥114 / Discounted price: ¥97


Multimodal Conversation (MPAI-MMC) specifies:
1. Data Formats for analysis of text, speech, and other non-verbal components as used in human-machine and machine-machine conversation applications.
2. Use Cases implemented in the AI Framework using Data Formats from MPAI-MMC and other MPAI standards and providing recognized applications in the Multimodal Conversation domain.
This Technical Specification includes the following Use Cases:
1. Conversation with Personal Status (CPS), enabling conversation and question answering with a machine able to extract the inner state of the entity it is conversing with and showing itself as a speaking digital human able to express a Personal Status. By adding or removing minor components to this general Use Case, five Use Cases are spawned:
2. Conversation About a Scene (CAS), where a human converses with a machine, pointing at the objects scattered in a room and displaying Personal Status in their speech, face, and gestures, while the machine responds displaying its Personal Status in speech, face, and gesture.
3. Virtual Secretary for Videoconference (VSV), where an avatar not representing a human in a virtual avatar-based videoconference extracts Personal Status from Text, Speech, Face, and Gesture, displays a summary of what other avatars say, and receives and acts on comments.
4. Human-Connected Autonomous Vehicle Interaction (HCI), where humans converse with a machine displaying Personal Status after having been properly identified by the machine with their speech and face in outdoor and indoor conditions, while the machine responds by displaying its Personal Status in speech, face, and gesture.
5. Conversation with Emotion (CWE), supporting audio-visual conversation with a machine impersonated by a synthetic voice and an animated face.
6. Multimodal Question Answering (MQA), supporting requests for information about a displayed object.
7. Three Use Cases supporting text and speech translation applications. In each Use Case, users can specify whether speech or text is used as input and, if it is speech, whether their speech features are preserved in the interpreted speech:
   7.1. Unidirectional Speech Translation (UST).
   7.2. Bidirectional Speech Translation (BST).
   7.3. One-to-Many Speech Translation (MST).
8. The Personal Status Extraction Composite AIM, which estimates the Personal Status conveyed by the Text, Speech, Face, and Gesture of a real or digital human.
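
The normative Personal Status data format is defined by MPAI-MMC itself; the sketch below is only a hypothetical container whose field names and values are assumptions for illustration, showing how such data might be carried alongside a conversation turn:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PersonalStatus:
    """Hypothetical container; field names and values are illustrative only."""
    emotion: Optional[str] = None           # e.g. "happy", "angry"
    cognitive_state: Optional[str] = None   # e.g. "confused", "attentive"
    social_attitude: Optional[str] = None   # e.g. "polite", "confrontational"

@dataclass
class ConversationTurn:
    text: str
    speech_uri: Optional[str] = None        # reference to audio, if speech was the input
    status: PersonalStatus = field(default_factory=PersonalStatus)

turn = ConversationTurn(text="Where is the red chair?",
                        status=PersonalStatus(emotion="curious"))
print(turn)
```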

Price: ¥103 / Discounted price: ¥88


The Compression and Understanding of Industrial Data (MPAI-CUI) Technical Specification predicts the performance of a Company from its Governance, Financial, and Risk data over a given Prediction Horizon, expressed as Default Probability, Adequacy Index of Organizational Model, and Business Continuity Index.

Price: ¥62 / Discounted price: ¥53


The MPAI AI Framework (MPAI-AIF) Technical Specification specifies the architecture, interfaces, protocols, and Application Programming Interfaces (API) of an AI Framework (AIF), especially designed for the execution of AI-based implementations, but also suitable for mixed AI and traditional data processing workflows. MPAI-AIF possesses the following main features:
- Operating System-independent.
- Component-based modular architecture with standard interfaces.
- Interfaces encapsulate Components to abstract them from the development environment.
- Interface with the MPAI Store enables access to validated Components.
- Components can be implemented as software only (from Micro-Controller Units to High-Performance Computing), hardware only, or hybrid hardware-software.
- Component system features are:
  - Execution in local and distributed Zero-Trust architectures.
  - Possibility to interact with other Implementations operating in proximity.
  - Direct support of Machine Learning functionalities.
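
The normative AIF/AIM APIs are specified by MPAI-AIF; the following is only a schematic sketch (class and method names are invented for illustration) of the component-based idea described above: modules with a uniform processing interface that a workflow can chain and swap without changing the surrounding code.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class AIModule(ABC):
    """Schematic stand-in for an AIM: a component with uniform I/O channels."""

    @abstractmethod
    def process(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        ...

class Workflow:
    """Chains modules in order, feeding each module's outputs to the next."""

    def __init__(self, modules: List[AIModule]):
        self.modules = modules

    def run(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        data = inputs
        for module in self.modules:
            data = module.process(data)
        return data

class Uppercase(AIModule):
    """Trivial example module; a real AIM might wrap an ML model instead."""

    def process(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        return {"text": inputs["text"].upper()}

print(Workflow([Uppercase()]).run({"text": "hello"}))  # {'text': 'HELLO'}
```

Because the interface is fixed, a software-only, hardware-backed, or hybrid implementation of a module could be substituted without touching the workflow, which is the kind of interchangeability the abstract describes.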

Price: ¥88 / Discounted price: ¥75

