H.265/HEVC 8K 60p Main10 Encoder

Product Overview

H.265/HEVC is the ITU-T/ISO/IEC standard for video compression and decompression. H.265/HEVC, which has roughly double the compression performance of the H.264/AVC video compression standard, is expected to be used for high-resolution video distribution such as 4K (3840 x 2160) and 8K (7680 x 4320), as well as video distribution services for mobile devices over limited bandwidth.
We have developed and commercialized a software H.265/HEVC encoder.

Features

  • TMC's original algorithm "DMNA" achieves high image quality, high speed, a small footprint, and low power consumption.
  • Original parallel processing, which minimizes the variation in processing time across cores, achieves 4K (3840 x 2160) 60-frames-per-second real-time processing on a general-purpose PC server.
  • Support for NUMA architectures keeps processing performance from being degraded by memory access costs in multiprocessor environments.
  • Optimized for Intel x86 and ARM Cortex-A (ARMv7 and ARMv8).
  • Supports the 4-slice division encoding method specified for the 8K format in ARIB STD-B32.
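As an illustration of the 4-slice division mentioned above, the sketch below computes CTU-row-aligned slice heights for an 8K picture, assuming 64 x 64 CTUs. The function name and the even-distribution policy are illustrative assumptions, not taken from ARIB STD-B32:

```python
import math

def slice_heights(height: int, num_slices: int, ctu: int = 64) -> list[int]:
    """Split a picture of `height` pixels into horizontal slices whose
    boundaries fall on CTU-row boundaries.

    Illustrative only: the normative slice layout is defined in ARIB STD-B32.
    """
    ctu_rows = math.ceil(height / ctu)          # CTU rows in the picture
    base, extra = divmod(ctu_rows, num_slices)  # distribute rows evenly
    heights = []
    remaining = height
    for i in range(num_slices):
        rows = base + (1 if i < extra else 0)
        h = min(rows * ctu, remaining)          # last slice may be shorter
        heights.append(h)
        remaining -= h
    return heights

# 8K (7680 x 4320) divided into 4 slices with 64 x 64 CTUs
print(slice_heights(4320, 4))  # → [1088, 1088, 1088, 1056]
```

Note that 4320 is not a multiple of 64, so the bottom slice absorbs the partial CTU row and ends up slightly shorter than the other three.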

Specification

Stream Format Byte stream format (Annex B)
Image Format YCbCr 4:2:0/4:2:2/4:4:4/4:0:0 Planar Format
Bit Depth 8 bits/10 bits/12 bits
Resolution 64 x 64 to 7680 x 4320
Operation Mode Five selectable levels trading off processing speed against image quality
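Because the output is an Annex B byte stream, NAL units are delimited by 3-byte (00 00 01) or 4-byte (00 00 00 01) start codes. A minimal sketch of locating NAL unit payload offsets in such a stream (the function name is an illustrative choice, not part of this product's API):

```python
def find_nal_units(stream: bytes) -> list[int]:
    """Return the offsets of NAL unit payloads in an Annex B byte stream.

    Annex B delimits NAL units with a 3-byte (00 00 01) or 4-byte
    (00 00 00 01) start code; the payload begins right after it.
    """
    offsets = []
    i = 0
    while i < len(stream) - 2:
        if stream[i] == 0 and stream[i + 1] == 0 and stream[i + 2] == 1:
            offsets.append(i + 3)
            i += 3
        else:
            i += 1
    return offsets

# Two NAL units: one with a 4-byte and one with a 3-byte start code
data = bytes([0, 0, 0, 1, 0x40, 0x01, 0, 0, 1, 0x42, 0x01])
print(find_nal_units(data))  # → [4, 9]
```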


This IP supports both the YCbCr and YUV color spaces.
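To illustrate how the chroma format and bit depth in the table above affect the memory footprint, the following sketch computes the planar buffer size of one frame. The helper is my own, and it assumes 10- and 12-bit samples are stored in 2 bytes each, which is a common but not universal storage convention:

```python
def frame_size_bytes(width: int, height: int,
                     chroma: str = "4:2:0", bit_depth: int = 8) -> int:
    """Bytes needed to hold one planar YCbCr frame in memory."""
    # Chroma subsampling factors: (width divisor, height divisor)
    sub = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:2:0": (2, 2)}
    bytes_per_sample = 1 if bit_depth == 8 else 2  # assumed 2-byte storage
    luma = width * height
    if chroma == "4:0:0":            # monochrome: no chroma planes
        chroma_total = 0
    else:
        sw, sh = sub[chroma]
        chroma_total = 2 * (width // sw) * (height // sh)
    return (luma + chroma_total) * bytes_per_sample

# One 8K frame, 4:2:0, 10-bit: just under 100 MB
print(frame_size_bytes(7680, 4320, "4:2:0", 10))  # → 99532800
```

At 60 frames per second this is roughly 6 GB/s of raw input, which is why NUMA-aware memory placement (below) matters for this class of encoder.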

Performance


What is NUMA architecture?

NUMA (Non-Uniform Memory Access) is an architecture for shared-memory multiprocessor computer systems in which the cost of accessing main memory is not uniform: it depends on which processor accesses which memory area. The system consists of multiple processor-memory pairs called nodes. From the perspective of a given processor, memory on the same node is called local memory, and memory on other nodes is called remote memory; local memory has lower access latency than remote memory.

By placing data that a processor references frequently in its low-cost local memory, and data that is referenced infrequently in higher-cost remote memory, bus congestion is prevented, because fewer processors share each memory bus.
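The effect of this placement policy can be shown with a toy cost model for a two-node system. The latency figures and access counts below are invented purely for illustration:

```python
# Toy model: total memory-access cost for two data placements on a
# 2-node NUMA system. Latencies are hypothetical, not measured values.
LOCAL_NS, REMOTE_NS = 100, 300   # per-access latency in nanoseconds

hot_accesses = 1_000_000   # frequently referenced data (e.g. frame buffers)
cold_accesses = 10_000     # rarely referenced data

# NUMA-aware placement: hot data on the local node, cold data remote
aware = hot_accesses * LOCAL_NS + cold_accesses * REMOTE_NS
# Naive placement: hot data ends up on a remote node
naive = hot_accesses * REMOTE_NS + cold_accesses * LOCAL_NS

print(aware, naive)  # → 103000000 301000000
```

Under these assumed numbers the NUMA-aware placement cuts total access time to roughly a third, which is the intuition behind the encoder's NUMA support described in the Features section.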