There are many compression standards to choose from:
Image and video compression can be done with either a lossless or a lossy approach. In lossless compression, every pixel is kept unchanged, resulting in an identical image after decompression. The downside is that the compression ratio, i.e. the data reduction, is very limited. Well-known lossless compression formats include GIF (Graphics Interchange Format), PNG (Portable Network Graphics) and TIFF (Tagged Image File Format). Because the compression ratio is so limited, these formats are impractical for network video solutions, where large amounts of images need to be stored and transmitted. Therefore, several lossy compression methods and standards have been developed. The fundamental idea is to discard information that is invisible to the human eye and, by doing so, greatly increase the compression ratio. Compression methods also fall into two categories of standards: still image compression and video compression.
STILL IMAGE COMPRESSION STANDARDS:
All still image compression standards operate on one single picture at a time. The best known and most widespread standard is JPEG.
JPEG is short for Joint Photographic Experts Group, a good and very popular standard for still images that is supported by many modern programs. JPEG images can be decompressed and viewed directly in standard Web browsers.
JPEG compression can be done at different user-defined compression levels, which determine how much an image is to be compressed. The compression level selected is directly related to the image quality requested.
Besides the compression level, the image itself also has an impact on the resulting compression ratio. For example, a white wall may produce a relatively small image file (and a higher compression ratio), while the same compression level applied on a very complex and patterned scene will produce a larger file size, with a lower compression ratio.
Above are JPEG images using different compression ratios.
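The effect of scene content on compressed size is easy to demonstrate. The sketch below uses Python's lossless zlib compressor rather than JPEG, so it is an analogy rather than the real codec, but the principle is the same: at an identical compression level, a flat scene compresses to a fraction of the size of a complex one.

```python
import random
import zlib

# Two equally sized "images" (raw 8-bit grayscale pixels):
# a flat white wall versus a highly detailed, noisy scene.
WIDTH, HEIGHT = 640, 480
flat_scene = bytes([255] * (WIDTH * HEIGHT))

random.seed(0)
busy_scene = bytes(random.randrange(256) for _ in range(WIDTH * HEIGHT))

# The same compression level applied to both scenes yields
# drastically different output sizes.
flat_size = len(zlib.compress(flat_scene, 6))
busy_size = len(zlib.compress(busy_scene, 6))

print(flat_size, busy_size)  # flat_size is tiny; busy_size is close to the input size
```

The flat scene shrinks by several orders of magnitude, while the noisy scene barely compresses at all, mirroring the white-wall-versus-patterned-scene comparison above.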
Another still image compression standard is JPEG2000, which was developed by the same group that developed JPEG. It is mainly targeted at medical applications and still image photography. At low compression ratios it performs similarly to JPEG, but at very high compression ratios it performs slightly better. The downside is that support for JPEG2000 in Web browsers and in image display and processing applications is still very limited.
VIDEO COMPRESSION STANDARDS
Motion JPEG offers video as a sequence of JPEG images. Motion JPEG is the most commonly used standard in network video systems. A network camera, like a digital still picture camera, captures individual images and compresses them into JPEG format. The network camera can capture and compress, for example, 30 such individual images per second (30 fps – frames per second), and then make them available as a continuous flow of images over a network to a viewing station. At a frame rate of about 16 fps and above, the viewer perceives full motion video. We refer to this method as Motion JPEG. As each individual image is a complete JPEG compressed image, they all have the same guaranteed quality, determined by the compression level chosen for the network camera or video server.
Example of a sequence of three complete JPEG images:
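In practice, a network camera typically makes such a sequence available as an HTTP multipart/x-mixed-replace stream, with each complete JPEG image wrapped in its own part. The sketch below shows only this framing; the payloads are placeholder bytes standing in for real JPEG files, and the boundary string "frame" is an arbitrary choice (cameras use various boundary names).

```python
def mjpeg_part(jpeg_bytes: bytes, boundary: str = "frame") -> bytes:
    """Wrap one JPEG image as one part of a multipart/x-mixed-replace
    HTTP stream -- the framing commonly used to serve Motion JPEG."""
    header = (
        f"--{boundary}\r\n"
        f"Content-Type: image/jpeg\r\n"
        f"Content-Length: {len(jpeg_bytes)}\r\n"
        f"\r\n"
    ).encode("ascii")
    return header + jpeg_bytes + b"\r\n"

# Placeholder payloads stand in for real JPEG files (each of which
# would start with the SOI marker 0xFFD8 and end with EOI 0xFFD9).
frames = [b"\xff\xd8<frame %d>\xff\xd9" % i for i in range(3)]
stream = b"".join(mjpeg_part(f) for f in frames)
```

Because every part carries a complete JPEG image, a viewer can join the stream at any point and immediately decode the next frame, which is one reason Motion JPEG is so robust.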
The H.263 compression technique targets fixed bit rate video transmission. The downside of a fixed bit rate is that when an object moves, image quality decreases. H.263 was originally designed for video conferencing applications, not for surveillance, where details are more crucial than a fixed bit rate.
If H-series compression is used, the image of a moving person will become like a mosaic. The normally uninteresting background will, however, retain its good and clear image quality.
One of the best-known audio and video streaming techniques is the standard called MPEG (initiated by the Moving Picture Experts Group in the late 1980s).
MPEG’s basic principle is to compare two compressed images to be transmitted over the network. The first compressed image is used as a reference frame, and only parts of the following images that differ from the reference image are sent. The network viewing station then reconstructs all images based on the reference image and the “difference data”.
Despite higher complexity, applying MPEG video compression leads to lower data volumes being transmitted across the network than is the case with Motion JPEG. This is illustrated below where only information about the differences in the second and third frames is transmitted.
Example of a sequence of three MPEG frames, where only the first is a complete image and the following two contain only difference data:
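The reference-plus-differences principle can be sketched in a few lines of Python. This is a toy illustration, not the actual MPEG algorithm (which operates on blocks with motion estimation): the encoder sends the first frame in full and then, for each later frame, only (index, value) pairs for pixels that differ from the reference.

```python
# Toy difference coder: a frame is a flat list of pixel values.
def encode(frames):
    """Send the first frame in full; for each later frame, send only
    (index, value) pairs for pixels that differ from the reference."""
    reference = frames[0]
    deltas = [
        [(i, v) for i, (r, v) in enumerate(zip(reference, frame)) if v != r]
        for frame in frames[1:]
    ]
    return reference, deltas

def decode(reference, deltas):
    """Rebuild every frame from the reference image plus difference data."""
    out = [list(reference)]
    for delta in deltas:
        frame = list(reference)
        for i, v in delta:
            frame[i] = v
        out.append(frame)
    return out

# A mostly static scene: 12 pixels, only a few change per frame.
f0 = [10] * 12
f1 = [10] * 11 + [99]          # one pixel changed
f2 = [10] * 10 + [87, 99]      # two pixels changed
reference, deltas = encode([f0, f1, f2])
sent = sum(len(d) for d in deltas)  # 3 pairs instead of 24 full pixels
```

For this mostly static scene, the viewer reconstructs all three frames exactly while only three difference pairs cross the network, which is why MPEG transmits so much less data than Motion JPEG.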
Naturally, MPEG is far more complex than indicated, often involving additional techniques or tools for parameters such as prediction of motion in a scene and identifying objects. There are a number of different MPEG standards:
- MPEG-1 was released in 1993 and intended for storing digital video onto CDs. Therefore, most MPEG-1 encoders and decoders are designed for a target bit rate of about 1.5 Mbit/s at CIF resolution. For MPEG-1, the focus is on keeping the bit rate relatively constant at the expense of a varying image quality, typically comparable to VHS video quality. The frame rate in MPEG-1 is locked at 25 (PAL)/30 (NTSC) fps.
- MPEG-2 was approved as a standard in 1994 and was designed for high quality digital video (DVD), digital high-definition TV (HDTV), interactive storage media (ISM), digital broadcast video (DBV) and cable TV (CATV). The MPEG-2 project focused on extending the MPEG-1 compression technique to cover larger pictures and higher quality, at the expense of a lower compression ratio and a higher bit rate. The frame rate is locked at 25 (PAL)/30 (NTSC) fps, just as in MPEG-1.
- MPEG-4 is a major development of MPEG-2. It provides many more tools to lower the bit rate needed to achieve a certain image quality for a certain application or image scene. Furthermore, the frame rate is not locked at 25/30 fps. However, most of the tools used to lower the bit rate are today only relevant for non-real-time applications, because some of them require so much processing power that the total time for encoding and decoding (i.e. the latency) makes them impractical for anything other than studio movie encoding, animated movie encoding and the like. In fact, most of the MPEG-4 tools that can be used in a real-time application are the same tools available in MPEG-1 and MPEG-2.
The latest video compression standard, H.264, is expected to become the video standard of choice in the coming years. It has already been successfully introduced in electronic gadgets such as mobile phones and digital video players. For the video surveillance industry, H.264 offers new possibilities to reduce storage costs and to increase the overall efficiency.
H.264 (sometimes referred to as MPEG-4 Part 10/AVC) is an open, licensed standard that supports the most efficient video compression techniques available today. Without compromising image quality, an H.264 encoder can reduce the size of a digital video file by more than 80% compared with the Motion JPEG format and as much as 50% more than with the traditional MPEG-4 Part 2 standard. It is the magnitude of these numbers that makes H.264 highly relevant for video surveillance applications.
Reduced storage and bandwidth costs:
One immediate benefit of the drastically reduced file sizes is the impact on storage and bandwidth requirements. For the same amount of video data, with the same image quality, a video surveillance system supporting H.264 compression will basically reduce the storage cost and bandwidth occupancy by at least 50% compared to when using conventional compression technologies. As the systems grow larger, and the requirements for high resolution images in combination with high frame rates increase, H.264 will be a key differentiator between various system solutions.
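A back-of-the-envelope calculation illustrates the storage impact. The bit rates below are assumptions chosen for illustration (5 Mbit/s for Motion JPEG versus 1 Mbit/s for H.264 at comparable quality), not measured figures for any particular camera.

```python
# Back-of-the-envelope storage for one camera recording continuously
# for 30 days. Bit rates are illustrative assumptions, not measurements.
def storage_gb(bitrate_mbps: float, days: int) -> float:
    seconds = days * 24 * 3600
    return bitrate_mbps * 1e6 * seconds / 8 / 1e9  # bits -> gigabytes

mjpeg_gb = storage_gb(5.0, 30)  # assumed Motion JPEG bit rate
h264_gb = storage_gb(1.0, 30)   # assumed H.264 bit rate, same quality

print(round(mjpeg_gb), round(h264_gb))  # 1620 324
```

Even with these rough numbers, a month of continuous recording drops from roughly 1.6 TB to a few hundred gigabytes per camera, and the saving scales linearly with the number of cameras in the system.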
Higher resolution and frame rate:
Depending on application needs, there are various ways to benefit from the impressive compression rate of H.264. Today, it is common to choose a limited frame rate or a lower resolution in order to stay within the specified storage limitations of an application. This has a negative impact on the video, which becomes either jerky or less detailed. Introducing video surveillance equipment that supports H.264 compression in such an application enables combinations of increased frame rate and image resolution, thus providing higher image quality.
Bit rate comparison for a 115-second video stream, given the same level of image quality, among different video standards. The H.264 encoder was at least three times more efficient than an MPEG-4 encoder with no motion compensation and at least six times more efficient than Motion JPEG.
Does one compression standard fit all?
When considering this question and when designing a network video application, the following issues should be addressed:
- What frame rate is required?
- Is the same frame rate needed at all times?
- Is recording/monitoring needed at all times, or only on motion/event?
- For how long must the video be stored?
- What resolution is required?
- What image quality is required?
- What level of latency (total time for encoding and decoding) is acceptable?
- How robust/secure must the system be?
- What is the available network bandwidth?
- What is the budget for the system?