Broadcast Engineering Basics


This is mainly about broadcast engineering, and the elements of the system that software developers don't usually see. This is by no means complete, and is nothing more than a sketch to show you some of what goes on in the rest of the system. A typical digital TV transmission setup looks something like this:
[Figure: The components that make up a typical broadcast system.]
This equipment is normally all connected together using high-speed connections like SDI (Serial Digital Interface) or ASI (Asynchronous Serial Interface), which are standard in the TV field. In addition, all of the equipment will be connected via Ethernet to a control system and monitoring equipment to make sure that nothing goes wrong (or that if something does go wrong, the viewer doesn't see it). There will normally be multiple instances of some of these components, including redundant spares in case of problems. A typical head-end will contain many MPEG encoders and multiplexers, for instance. Now that we've seen how it's put together, let's examine each of these components in more detail.
The encoder

The encoder takes an analog signal and converts it to MPEG-2. Encoders are mainly used for live shows - for other shows, we may have a selection of pre-encoded MPEG streams that we can play out from a dedicated playout system. This playout system is usually a highly customized PC or workstation with a large high-speed disk array and a number of digital interfaces for transmitting the data to the rest of the transmission system.
An encoder can generate two types of MPEG stream. Constant bit-rate streams always have the same bit-rate, no matter how complex the scene they contain. If the signal is too complex to be coded at the specified bit-rate, the quality of the encoding will be reduced. If the scene takes less data to code than the specified bit-rate allows, the stream is stuffed with null packets until the correct bit-rate is reached. This makes later stages of processing easier, because a fixed bit-rate is easier to predict, but it does waste bandwidth.
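As a rough illustration, here is a minimal sketch of the null-packet stuffing calculation, assuming the standard 188-byte MPEG-2 transport stream packet; the function name and all the figures are invented for illustration, not taken from any real encoder:

```python
# Sketch: padding one interval of a constant bit-rate stream with null packets.
# All names and numbers here are illustrative assumptions, not a real encoder API.

TS_PACKET_SIZE = 188          # bytes per MPEG-2 transport stream packet
NULL_PID = 0x1FFF             # PID reserved for null (stuffing) packets

def pad_to_cbr(data_packets: int, target_bitrate: int, interval_s: float) -> int:
    """Return how many null packets must be added so that `interval_s`
    seconds of output reaches `target_bitrate` bits per second."""
    target_packets = int(target_bitrate * interval_s // (TS_PACKET_SIZE * 8))
    if data_packets > target_packets:
        # The scene is too complex for the target rate; a real encoder
        # would reduce picture quality here rather than fail.
        raise ValueError("scene too complex for the specified bit-rate")
    return target_packets - data_packets

# A quiet scene needing only 1200 packets in one second of a 4 Mbit/s stream:
nulls = pad_to_cbr(1200, 4_000_000, 1.0)
# nulls -> 1459
```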
Most encoders can now produce variable bit-rate MPEG streams as well. In this case, the bit-rate of the stream can be adjusted dynamically as more or less bandwidth is needed to encode the images at a given picture quality. Since some scenes take significantly more bandwidth to encode than others, this lets the picture quality be maintained throughout a show while the bandwidth changes. Of course, the fact that the bit-rate of the stream can change doesn't mean that it will reach higher levels than a constant bit-rate encoding of the same stream: the operator can usually set the maximum bit-rate that the encoder may use, and the encoder will reduce the quality of the encoded output if necessary to meet it.
Most broadcasters today use variable bit-rate encoding because it offers better quality while using lower bandwidth. In particular, variable bit-rate encoding lets us make maximum use of the available bandwidth at the multiplexing stage.




The multiplexer

One MPEG stream on its own isn't much use to us as a TV broadcast. Even several MPEG streams aren't terribly useful, because we have no way of associating them with each other. What we really need is a single stream containing all the MPEG streams needed for a single service, or ideally multiple services. A transport stream, in other words.
The multiplexer takes one or more MPEG streams and converts them into a single transport stream. The input streams may be individual elementary streams, transport streams or even raw MPEG data - most multiplexers can handle a range of input types.
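The basic interleaving step can be sketched as merging per-stream packet queues. This toy round-robin model (all names invented here) ignores the timing and buffer constraints that a real multiplexer must respect, but it shows the essential operation of combining several inputs into a single output stream:

```python
from collections import deque

# Toy model of a multiplexer interleaving several input streams into one
# transport stream. Real multiplexers schedule by timestamps and buffer
# occupancy; plain round-robin here just illustrates the basic idea.

def multiplex(streams):
    queues = [deque(s) for s in streams]
    out = []
    while any(queues):
        for q in queues:       # visit each input in turn
            if q:
                out.append(q.popleft())
    return out

video = ["V1", "V2", "V3"]
audio = ["A1", "A2"]
tables = ["PAT"]
ts = multiplex([video, audio, tables])
# ts -> ['V1', 'A1', 'PAT', 'V2', 'A2', 'V3']
```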
The multiplexer actually does a number of jobs - multiplexing the data is one of the more complex of these, for a variety of reasons. Each transport stream typically has a fixed bandwidth available to it, which depends on the transmission medium and the way the transmission network is set up. One of the jobs of the multiplexer is to fit a set of services into this bandwidth. The easy way of doing this is to use constant bit-rate MPEG streams, because then the operator knows exactly how much bandwidth each stream will take, and setting up the multiplexer is easy. This is pretty inefficient, though, since some streams may be using less than their share of the bandwidth, while others may need to reduce their picture quality in order to fit into their allocated share. This wasted space is a real problem, since transmission costs are high enough (especially in a satellite environment) that you want to make maximum use of your bandwidth.
The way round this is to use variable bit-rate MPEG streams and a technique known as statistical multiplexing. This system takes advantage of the statistical properties of the multiplexed stream when compared to the properties of the several independent streams. While the bit-rate of each individual stream can vary considerably, these variations are smoothed out when we consider ten or fifteen streams (video plus audio for five to seven services) multiplexed together. Each stream will have different bit-rate needs at each point in time, and these differences will partially cancel one another out at any given time. Some streams will need a higher bit-rate than average at that time, but others will probably need less than average. This makes the bit-rate problems easier to handle, since they are now less severe. By maintaining a separate buffer model for each stream, the multiplexer can decide how to order packets in the most efficient way, while making sure that there are no glitches in any of the services.
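A toy simulation (with made-up bit-rate figures) shows this smoothing effect numerically: the summed bit-rate of many independent streams varies proportionally much less than any single stream does:

```python
import random
import statistics

# Toy demonstration that the relative spread (coefficient of variation) of
# the combined bit-rate of many independent streams is smaller than that of
# a single stream. All figures here are illustrative only.

random.seed(42)
SAMPLES = 1000
STREAMS = 15   # e.g. video plus audio for five to seven services

def relative_spread(values):
    """Standard deviation as a fraction of the mean."""
    return statistics.pstdev(values) / statistics.fmean(values)

# One stream varying between 2 and 6 Mbit/s from moment to moment:
single = [random.uniform(2.0, 6.0) for _ in range(SAMPLES)]

# Fifteen such streams multiplexed together at each moment:
total = [sum(random.uniform(2.0, 6.0) for _ in range(STREAMS))
         for _ in range(SAMPLES)]

# The aggregate varies proportionally far less than one stream does.
print(relative_spread(single), relative_spread(total))
```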
At some points, the streams being multiplexed may have a combined bit-rate that is higher than the available bandwidth. A statistical multiplexer will use another of the statistical features of MPEG streams to handle this situation. Since most MPEG streams only reach their peak bandwidths at fairly wide intervals and for fairly short periods, delaying one or more of the streams will move a peak to a point where the bandwidth is available to accommodate it. This is another reason to maintain a buffer model for each stream - to ensure that these peaks are not moved to a point where they would cause a glitch in the service.
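This peak-shifting can be sketched as a simple carry-over model, in which packets that exceed the channel capacity in one time slot are delayed into later slots. This is a hypothetical simplification; a real multiplexer bounds the delay with its per-stream buffer models so that decoders never underflow or overflow:

```python
# Toy model of peak-shifting in a statistical multiplexer: when the streams
# together demand more than the channel can carry in a time slot, the excess
# packets are buffered and sent in later slots. Names and figures are
# illustrative only.

def smooth_peaks(demand_per_slot, capacity):
    """Return packets actually sent in each slot and any final backlog."""
    sent, backlog = [], 0
    for demand in demand_per_slot:
        pending = backlog + demand           # carried-over plus new packets
        sent.append(min(pending, capacity))  # never exceed channel capacity
        backlog = pending - sent[-1]         # excess is delayed to later slots
    return sent, backlog

# A short burst above the 100-packet/slot capacity gets spread out:
sent, leftover = smooth_peaks([80, 130, 90, 60], capacity=100)
# sent -> [80, 100, 100, 80], leftover -> 0
```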
In some older statistical multiplexing systems, the multiplexer and encoders are connected and can communicate with one another. In particular, the multiplexer can provide feedback to the encoders and set the bit-rate at which they encode their streams. This feedback means that if one stream needs more bandwidth than it's currently getting, the bandwidth for that stream can be increased temporarily at the expense of the others. This isn't true variable bit-rate encoding, since in many cases the streams are actually constant bit-rate streams whose encoding bit-rate is adjusted from time to time.
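A hypothetical sketch of that feedback loop, in which the multiplexer scales the encoders' requested bit-rates down to fit the channel (the function and all figures are invented for illustration; real systems use scene-complexity estimates and buffer state, not a simple proportional split):

```python
# Toy sketch of encoder/multiplexer feedback: the multiplexer divides a fixed
# channel among encoders in proportion to what each one asks for. Purely
# illustrative, not a real rate-control algorithm.

def allocate(requests, channel_bitrate):
    """Return the bit-rate granted to each encoder, in the same units."""
    total = sum(requests)
    if total <= channel_bitrate:
        return list(requests)            # everyone gets what they asked for
    scale = channel_bitrate / total      # otherwise scale everyone down
    return [r * scale for r in requests]

# Three encoders asking for 12, 4 and 4 Mbit/s on a 10 Mbit/s channel:
rates = allocate([12.0, 4.0, 4.0], 10.0)
# rates -> [6.0, 2.0, 2.0]
```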
Despite appearances, this system is less flexible than true statistical multiplexing, because if the total bit-rate of the streams is higher than the available bandwidth, then the quality of one of the streams must be reduced. This isn't necessary with the latest generation of statistical multiplexers, where these peaks can often be shifted slightly to accommodate them. The other place where flexibility is lost is in the need for a connection between the encoder and the multiplexer. In practical terms, this means that the multiplexer and encoder have to be on the same site, or at least that the encoder can feed only one multiplexer at a time. In these days of remote processing, that can cause problems. Without this requirement, a network can handle streams over whose encoders it has no control, such as streams from remote sites, from other networks, or from a playout system. This offers some big advantages in terms of bandwidth savings.