NDI Encoding/Decoding

Compression

NDI uses compression, specifically the discrete cosine transform (DCT), to enable transmission of a large number of video streams across existing infrastructure. The DCT converts video signals into elementary frequency components, and this method of compression is commonly used in encoding formats and mezzanine codecs within the industry.
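As a rough illustration only (not NDI's actual implementation), a naive one-dimensional DCT-II sketch shows how a block of samples is re-expressed as frequency coefficients, concentrating the energy of smooth regions into a few low-frequency terms:

```python
import math

def dct_ii(signal):
    """Naive 1-D DCT-II: expresses a sample block as a weighted sum of
    cosine basis functions of increasing frequency."""
    n = len(signal)
    return [
        sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
            for i, x in enumerate(signal))
        for k in range(n)
    ]

# A flat block of pixels puts all its energy into the DC (k = 0) term;
# the remaining coefficients are zero, which is why DCT-based codecs
# compress smooth image regions so effectively.
coeffs = dct_ii([128.0] * 8)
```

Real codecs apply a 2-D version of this transform to small pixel blocks and then quantize the resulting coefficients.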

The NDI codec is one of the most efficient in existence, achieving significantly better compression than the majority of codecs that have been accepted for professional broadcast use. On a typical, modern Intel-based i7 processor, the codec is able to compress a 1920x1080 video signal at 250 frames per second using a single core.

The peak signal-to-noise ratio (PSNR) of the NDI codec exceeds 70 dB for typical video content. Uniquely and importantly, NDI is the first codec to provide multi-generational stability: once a video signal is compressed, there is no further loss. As a practical example, generation 2 and generation 1000 of a decode-to-encode sequence would be identical. Examples of this concatenation are provided in NewTek’s NDI SDK[1].
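A hypothetical sketch may help make both claims concrete. The quantizer below is purely illustrative (it is not NDI's actual algorithm): a lossy stage that maps already-processed values to themselves incurs loss only on the first generation, and PSNR quantifies how small that loss is:

```python
import math

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel
    sequences; higher is better, identical inputs give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

def toy_encode(samples, step=4):
    """Hypothetical lossy stage: quantize samples to a fixed grid.
    The first pass discards information; every later pass maps
    already-quantized values to themselves, so there is no further loss."""
    return [step * round(s / step) for s in samples]

source = [3, 7, 12, 18, 200, 252]
gen1 = toy_encode(source)   # lossy first generation
gen2 = toy_encode(gen1)     # identical to gen1: multi-generational stability
```

Here `psnr(gen1, gen2)` is infinite because generation 2 equals generation 1, while `psnr(source, gen1)` is finite, reflecting the one-time loss of the first encode.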

The NDI codec is designed to run very fast and is largely implemented in hand-written assembly to ensure that video frames are compressed as quickly as possible. Latency is a factor of both the network connection and the endpoint products. NDI has a technical latency of 16 video scan lines, although in practice most implementations will see one field of latency[2]. Hardware implementations can provide full end-to-end latency within 8 scan lines.
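As a back-of-the-envelope calculation (assuming 1080-line progressive video at 60 fps and counting only active lines, ignoring blanking intervals), those scan-line figures translate into time as follows:

```python
# Rough latency arithmetic; the frame rate and line count are assumptions
# for illustration, not values mandated by NDI.
frame_rate = 60.0        # frames per second
active_lines = 1080.0    # active lines per frame

line_time_us = 1e6 / (frame_rate * active_lines)   # ~15.4 us per scan line

software_latency_us = 16 * line_time_us    # ~247 us (16-line technical latency)
hardware_latency_us = 8 * line_time_us     # ~123 us (8-line hardware end-to-end)
field_latency_us = 1e6 / (2 * frame_rate)  # ~8333 us (one-field case)
```

Even the one-field case is well under 10 ms at these rates, which is why scan-line-level latency matters mainly for tightly synchronized hardware pipelines.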

 

[1] For more information, please visit http://ndi.newtek.com.

[2] Implementations of NDI using SDK versions prior to March 3, 2016 typically provide frame-at-a-time delivery for reasons of compatibility with older systems. In these cases, the minimum latency is likely to be one frame.

 

Formats

NDI fully supports all resolutions and frame rates, and any number of video streams, with and without an alpha channel. In practical terms, resolution and frame rate will be determined by the capabilities of the endpoint devices.

The most common implementations are expected to use 8-bit UYVY and RGBA video; however, support for 10-bit and 16-bit is available. The internal pipeline of the codec is maintained entirely at 16-bit precision or better.
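For illustration, a UYVY macropixel packs two pixels into four bytes: two luma (Y) samples share one chroma (U/V) pair, i.e. 4:2:2 subsampling at 16 bits per pixel. The sketch below unpacks one macropixel using full-range BT.601 coefficients; this choice is an assumption for clarity, as production pipelines commonly use BT.709 matrices and limited-range video levels:

```python
def uyvy_to_rgb_pair(u, y0, v, y1):
    """Unpack one 4-byte UYVY macropixel into two RGB pixels.
    Uses full-range BT.601 conversion for illustration only."""
    def ycbcr_to_rgb(y, cb, cr):
        r = y + 1.402 * (cr - 128)
        g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
        b = y + 1.772 * (cb - 128)
        clamp = lambda c: max(0, min(255, round(c)))
        return (clamp(r), clamp(g), clamp(b))

    # Both output pixels reuse the same shared chroma pair.
    return ycbcr_to_rgb(y0, u, v), ycbcr_to_rgb(y1, u, v)

# Neutral chroma (128, 128) decodes each luma sample to pure grey.
pair = uyvy_to_rgb_pair(128, 64, 128, 192)
```

The shared chroma is what makes UYVY half the size of RGBA at 8 bits per component, which suits video content where the eye is less sensitive to color resolution than to brightness.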

 
