The history of video formats (article plus video)

From the nineteenth century, people began to succeed in creating systems of moving images. The phenakistiscope, the kineograph, the praxinoscope: primitive but successful attempts to show the viewer movement. Throughout the twentieth century and beyond, film and video formats evolved, becoming better, more complex, cheaper and more accessible, eventually leading to the birth of streaming video services, where you can watch an interesting video about all of this. For Geektimes I prepared it in article format as well, and left the video below.

The article has the advantage of links for a closer look at the interesting formats, while the video contains many illustrations.


Before cinema

In 1832, Joseph Plateau created the phenakistiscope. The device had a rotating disk that you looked at through a slot. "Frames" drawn at equal intervals, quickly replacing one another, created the illusion of movement.

The zoetrope is believed to have been invented by William George Horner in 1833, although a similar device is mentioned in Chinese chronicles from around the year 180. Frames were painted on the walls inside a hollow drum.

In 1868, John Barnes Linnett patented the word "kineograph" as the name for a device for quickly flipping pages of paper with drawn frames of a film. Like the ones you drew in the margins of a school notebook.

In 1877, Émile Reynaud patented the praxinoscope. In essence, it is a zoetrope with a mirror prism that shifts the reflected image just enough to compensate for the movement of the "frame" until the next one replaces it.

The idea of a projector dates back to the seventeenth century. Only then it was called a magic lantern, and the light source in the design could be, for example, a candle with a chimney above it.


Among the devices that allowed recording a moving image frame by frame, the kinetograph of 1891 used celluloid film as its storage medium.

In 1895, the Lumière brothers combined a heap of earlier achievements and made the cinematograph, with which they shot the famous Arrival of a Train.

The frame rate was limited to 16: the device used a crude, abrupt claw mechanism that advanced the frames in jerks, and at a higher rate the jerks tore the film. For the same reason it was impossible to use film longer than 17 meters: the heavy roll could not spin up quickly enough, and again the film tore.
In 1897, Woodville Latham solved the problem by patenting his loop: a slack buffer between the continuously rotating film roll and the intermittent mechanism that abruptly changes frames.

Even then the question of standards loomed large over the industry. In the first decade of the twentieth century, the width of the film (the familiar 35 mm) was more or less standardized. It proved harder to find a single standard of perforation, in other words, the holes along the edges of the film that allow it to be advanced exactly one frame at a time. The industry suffered with non-standard holes into the thirties and even forties.

Today, the Society of Motion Picture and Television Engineers (SMPTE) is responsible for standardizing the holes. It was founded in 1916, although filmmakers had already tried to bring perforation to a common denominator in 1908. Attempts by filmmakers to save money by buying unperforated film and punching it somehow on their own machines, or even inside the camera, led not only to incompatibility but also to a jumping image during playback. Proper factory punching solves both problems.

Magnetic tape

In 1944, the Russian émigré Alexander Poniatoff founded the company Ampex. In 1956, the company invented the transverse-scan video recorder, which used magnetic tape on open reels.

In the same year, the BBC was already using the technology to broadcast news other than live. It is hard to overestimate the ability to record broadcasts for television; it was a breakthrough. It is from television that many formats inherited interlaced scanning.
In 1959, Toshiba proposed replacing transverse-scan recording with the helical-scan recording invented by the Japanese engineer Norikazu Sawazaki: tracks laid at an angle to the video head meant that at any given moment the read head covers the tracks needed to output one full frame, which allowed, for example, pausing the video with a still image on screen.
In 1965, Ampex introduced color video recording.

At that time, video was stored on open reels, which meant a recording could easily be ruined by touching the tape with your hands. And threading it into a player was a process that required skill, something like threading a sewing machine. And while sewing machines have not become any easier over the years, the video industry moved the tape into cassettes.

From analog to digital

The industry's first cassette format was U-matic from Sony. The professionals of 1971 were happy: the cassettes lasted longer than open reels, offered a resolution of 400 lines and excellent quality thanks to a wide tape of almost two centimeters (¾ inch) and a high tape speed, plus two-channel sound.
It did not suit home use: the cassettes were huge, limited to 90 minutes, and the recorders were even more huge. So, despite further improvements, Sony's ¾-inch format did not conquer the world.

But JVC did conquer it, launching the Video Home System cassette format in 1976. Or simply VHS, which by 1984 had become the main format of home video.
Cassettes with a tape 12.7 mm (½ inch) wide could store up to six hours of video at a resolution of 240 lines, although more often they held up to three. The tapes had no copy protection, which was already a strong argument against the proprietary Betamax, a competing format from Sony and the heir to U-matic.

VHS players were also cheaper. In addition, Sony miscalculated by forbidding porn to be sold on its cassettes.
In 1983, the famous Soviet video recorder Elektronika VM-12 was released. The one with the pop-up cassette slot, copied from the Panasonic NV-2000.
But even though Betamax lost the war for the consumer market, its Betacam version was actively used in the professional niche, in television for example, because VHS was unsuitable for professional work: with each re-recording the quality dropped and the distortion grew. This is a consequence of composite recording, which accumulates so-called cross-distortion. Betacam recorded a component signal: the video was split into luminance and chrominance channels, which reduced wear and distortion during dubbing.

For professionals it was just as important that Betacam cameras recorded directly to their own cassette, with no need to run wires to a separate recorder. That meant extra convenience and mobility.

Betacam developed in parallel with the other formats, but always remained a professional solution.

Rumor has it that in some places broadcasts are still played out from Betacam cassettes.

In Russia, VHS lived on happily right up until the mass arrival of cheap "home theaters" and DVDs, while in the West new formats were appearing at that time.

Eight years after the release of VHS, Sony released a competitor: Video-8.

The format was compact: the eight is simply the width of the tape in millimeters. It gave slightly better quality than VHS, with a resolution of 250 lines. Not to be confused with Super-8 of 1965, a popular home movie format that used film. The eight did not capture the home video market, although it gained some popularity: these small, convenient cassettes found their niche as the standard for Handycam camcorders. Quite possibly your parents have a cassette of their wedding in a drawer somewhere.

They were replaced by S-VHS and Hi-8. Video quality grew, the principles of signal recording changed, the tape coating improved: the tapes ceased to be oxide and became metal-powder.

S-VHS moved from a composite signal to a two-component one: the luminance and chrominance channels were recorded separately. Resolution increased to 400 lines. The format was called semi-professional, whether with pride or with doubt, and devices for professional editing and broadcasting appeared based on it. The cassettes looked the same as regular VHS, and the recorders were backward compatible.

Hi-8 was the highest-quality of the home analog formats: a resolution of 420 lines, with a cassette that looks like Video-8.

The history of analog formats ends there, but the history of video cassettes does not. From now on, a digital signal was written to the tape.

But first, let's talk about discs, which also stored analog video at first.

The first attempts to record video on a disc were made at the end of the nineteenth century.
The first patent for such a system, capable of storing a little over a minute of video, was registered in 1907.

The twenty-centimeter TeD of the early seventies held from five to ten minutes.
In 1978, the 12-inch (30 cm) vinyl VISC held an hour per side, but did not even allow the video to be paused.

The potentially successful CED was planned back in 1964 and released in 1981, instantly obsolete and a disaster.

The locally famous 30-centimeter LaserDisc of 1978 held up to an hour per side at a resolution of 440 lines. Outside the States and Japan it never caught on anywhere.

The 25-centimeter VHD of 1983 held an hour per side, but never became successful and died three years later.

Digital discs begin with the compact disc. The first adequate format was the Video CD of 1993, which gave VHS quality, but the not particularly economical MPEG-1 codec (more on it a little later) limited recording time to an hour and a quarter. Three years later the DVD came out, and for a long time nothing could compete with it.

Now back to the tapes, which had gone digital.

Even before that, there were digital modules in recorders and VCRs. For example, manipulating the component signal requires digital computation, and therefore a processor (at least in the recorder's circuitry), but the signal itself was still written to the tape in analog form.

Now, instead of luminance and chrominance channels, digital data streams were written to the tape; otherwise everything was similar.

And if for the viewer this meant only a pleasant improvement in quality, for video production professionals it was an incredible simplification of life.

You cannot shuttle an analog cassette at high speed, but a digital one can be run at fifty or even a hundred times normal speed without losing the ability to read the recording. That greatly simplifies editing and critically reduces the time from raw footage to a broadcast-ready recording.

And finally: a digital signal can be copied and re-recorded (almost) as many times as needed with no degradation, because digital is digital.

The first digital format was D1 from Sony, where D means Digital and 1 means it was the first. It appeared in 1986.

Curiously, the cassettes are very similar to those of the very first cassette video format, U-matic: the same ¾-inch-wide tape, and oxide tape at that, not metal-powder. The system provided a data stream of 270 Mbit/s. Interestingly, with modern codecs 8K video looks fine at just 50 Mbit/s, but more on that later.
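That 270 Mbit/s figure can be sanity-checked with a quick back-of-envelope calculation. The sampling rates below are the standard Rec. 601 component-video values that D1 builds on; treating them as the source of the figure is my assumption, not something stated above.

```python
# Rough sanity check of the D1 data rate, assuming the Rec. 601
# sampling structure: luma sampled at 13.5 MHz, each of the two
# chroma channels at 6.75 MHz (the 4:2:2 pattern), 10 bits per sample.
luma_rate = 13_500_000            # Y samples per second
chroma_rate = 2 * 6_750_000       # Cb + Cr samples per second
bits_per_sample = 10

bitrate = (luma_rate + chroma_rate) * bits_per_sample
print(bitrate / 1e6, "Mbit/s")    # → 270.0 Mbit/s
```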

The format encoded the component signal as 4:2:2 and was much loved by professionals for the abundance of convenient editing and processing equipment and for the convenience of the format itself.

The D2 format is credited not to Sony but to Ampex, although the former took part in its development.

The format proved divisive: the cassettes were cheaper and the recorders connected to analog equipment without extra DACs, but the quality was worse, since the format was built around the composite signal. The best one could hear from professionals about D2: "well, it is better than VHS".

D3 compressed the recording in half, making video production cheaper.

D4 never made it to market.

D5 finally gladdened professionals: 10-bit encoding and the absence of compression covered the needs D1 had served. Its HD version allowed a choice between interlaced 1080 and progressive 720 at up to 30 fps.

D6 in 1993 made it possible to record, without compression, a stream of 1.2 Gbit/s, insane by the computer standards of the day. A new error-correction system had to be developed to implement such data density properly. And there the boring D formats end.

In the same 1993, Sony launched Digital Betacam.

The successor outdid D1 and made it cheap enough to produce and process video, forming a modular system of compatible devices. It was also backward compatible with old Betacams. Camera operators and video equipment manufacturers loved it.

In 1995 it got a competitor: Digital-S.

In the boring digital tradition it is also called D9. The cassettes looked like VHS, and a little later an HD version came out. The signal was encoded using the DV system.

DV, or Digital Video, is a whole group of formats developed jointly by Sony, Panasonic, Philips, Hitachi and JVC, which has strongly influenced the market since 1995.

DV cassettes came in different form factors, including small ones, which may well hold your parents' second weddings.

Through DV we move seamlessly from physical media to digital interfaces and computers, and digital video gains the ability to be stored and transmitted as files.

This means that terms such as codec and container appear. And we finally stop measuring resolution in TV "lines" and start talking in computer pixels.

Files and streams

A container is a file or data-stream format in which the data is packaged in a particular way.

A codec is an encoder plus a decoder: the thing that transforms the data. In the case of media, codecs are designed to compress the data stream, and they often do so lossily.

Within the DV formats, the container can be AVI, QuickTime, or the lesser-known MXF. The codecs inside these containers and formats may differ.
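The split can be illustrated with a toy multiplexer (pure illustration, not any real container layout): the codec's output is treated as opaque bytes, and the "container" merely interleaves packets from several streams by timestamp, which is, at heart, what AVI, QuickTime and MXF do.

```python
# Toy illustration of the container/codec division of labour.
# The payloads are opaque byte strings "produced by some codec";
# the container only interleaves them with stream IDs and timestamps.
from dataclasses import dataclass

@dataclass
class Packet:
    stream: str      # "video" or "audio"
    pts: float       # presentation timestamp, seconds
    payload: bytes   # compressed data from some codec

def mux(*streams):
    """Interleave packets from several streams in timestamp order."""
    return sorted((p for s in streams for p in s), key=lambda p: p.pts)

video = [Packet("video", t / 25, b"<frame>") for t in range(3)]
audio = [Packet("audio", t / 50, b"<chunk>") for t in range(6)]

for p in mux(video, audio):
    print(f"{p.pts:.2f}s {p.stream}")
```

Swapping the codec changes only what the payload bytes mean; the container logic is untouched, which is why one container can carry many codecs.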

When it comes to video compression, there is a general rule: the more advanced the encoding, the smaller the data stream or file can be at subjectively equal quality, but the more resources playback requires.

Codecs developed in parallel with the growth of computer performance.

Back in 1988 the H.261 codec appeared. Few have heard of it, yet it introduced reference frames, block-based motion compensation and other techniques that are now used in all popular codecs.
That is, video is not stored as a sequence of full frames, as on film. The encoder analyzes the video, finds a sharp change of picture (the start of a new scene, for example) and saves a frame called a reference frame. Until the next reference frame, it describes only how that frame changes over time, dividing the image into blocks.
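The reference-frame scheme described above can be sketched in a few lines. This is a deliberately naive toy: real codecs add motion vectors, transforms and lossy quantization, while here a "delta" simply lists the 4×4 blocks that changed exactly between consecutive frames.

```python
# Naive inter-frame coding sketch: store the first frame whole
# ("key" / reference frame), then for each later frame store only
# the 4x4 blocks that differ from the previous frame.
BLOCK = 4

def get_block(frame, y, x):
    return [row[x:x + BLOCK] for row in frame[y:y + BLOCK]]

def put_block(frame, y, x, blk):
    for dy, row in enumerate(blk):
        frame[y + dy][x:x + BLOCK] = row

def encode(frames):
    stream = [("key", [row[:] for row in frames[0]])]
    for prev, cur in zip(frames, frames[1:]):
        deltas = [(y, x, get_block(cur, y, x))
                  for y in range(0, len(cur), BLOCK)
                  for x in range(0, len(cur[0]), BLOCK)
                  if get_block(cur, y, x) != get_block(prev, y, x)]
        stream.append(("delta", deltas))
    return stream

def decode(stream):
    frames = []
    for kind, data in stream:
        if kind == "key":
            frames.append([row[:] for row in data])
        else:
            cur = [row[:] for row in frames[-1]]  # start from previous frame
            for y, x, blk in data:
                put_block(cur, y, x, blk)
            frames.append(cur)
    return frames
```

In a mostly static scene almost all blocks match the previous frame, so each delta record is tiny compared with a full frame, which is the whole point of the technique.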

In 1993, the Moving Picture Experts Group (MPEG), formed under the International Organization for Standardization (ISO), developed the MPEG-1 group of compression standards.

Compared with H.261, it became possible to build changes not only from the previous reference frame but also from the following one, and to encode a region independently of the rest.

In 1996, MPEG-2 appeared. DVDs would later be encoded with it, so you can imagine the scale of its spread. Interlaced scanning returned to the game; otherwise, nothing radically new.

DVD-Video deserves a closer look. These discs appeared back in 1996, and by 2003 they had become the main consumer video format.

Movies were recorded at a resolution of 720×576 pixels, which matches the D1 format. At the same time, compression brought the bitrate, that is, the data stream, down to no more than 9.8 Mbit/s, which made it possible to fit movies on 4.7 GB discs. The encoding format is 4:2:0, with the chroma channels stored at reduced resolution: this trick shrinks files without greatly affecting picture quality, because the luminance channel keeps its original resolution.
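Both numbers above are easy to check. A sketch of the arithmetic, assuming PAL geometry and the maximum video bitrate (the averages on real discs are lower):

```python
# 1) 4:2:0 keeps luma at full resolution but stores each chroma
#    channel at half resolution in both directions, so a frame costs
#    half as much as a non-subsampled 4:4:4 frame.
w, h = 720, 576
samples_444 = 3 * w * h                        # Y + Cb + Cr, full size
samples_420 = w * h + 2 * (w // 2) * (h // 2)  # Y full, chroma quartered
print(samples_420 / samples_444)               # → 0.5

# 2) How long does a 4.7 GB disc last at the maximum bitrate?
capacity_bits = 4.7e9 * 8
seconds = capacity_bits / 9.8e6
print(seconds / 60)   # ≈ 64 minutes; full movies fit because the
                      # average bitrate sits well below the 9.8 cap
```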

The third MPEG does not exist separately: all of its features were absorbed by the second. It has no relation to mp3 either. Its development began at about the same time as the second's, aiming at higher bitrates, but in the end all of its tasks were solved within MPEG-2.

1998: hurray, piracy, or MPEG-4.

DVDs were squeezed onto CDs with the help of the proprietary DivX codec, and later its open analog Xvid. The quality was, of course, much worse than a DVD's.
But a film of an hour and a half fit into 700 MB, and the film-piracy boom of the 2000s was tied to these codecs. If there were films on a computer, with rare exceptions they were films in this format.

And with 2003, modernity begins. The Joint Video Team, under the patronage of the omnipresent Moving Picture Experts Group, introduced the H.264 codec, which the video at the bottom of this post is encoded with.

Well, almost: the codec has been refined since then, and YouTube re-encoded my video into VP9 anyway =) For example, in 2007 an extension to H.264 appeared: SVC (Scalable Video Coding). It further complicated decoding for an already heavy codec, but allowed a single stream to store the video at several resolutions, with the higher layers building on the lower ones. You have most likely seen progressive JPEG images on the internet, which load not from top to bottom but first as coarse squares that keep sharpening until fully loaded. This is a similar story, with the advantage that a device that needs to output the video at a lower resolution does not have to waste resources decoding the extra layers.

And the codec really is resource-intensive. It contains many advanced techniques in which I, alas, am not well versed. Nevertheless, today even phones successfully cope with Full HD video in this format, and the top models handle 4K.

At the same time, the bitrate of such video at 1080p hovers around 2 Mbit/s, and without sound it is even lower. The fact that you can reduce the amount of data this much by skillfully increasing the volume and complexity of the computation still amazes me.
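To put that 2 Mbit/s in perspective, here is a rough count of what the same 1080p stream would cost uncompressed. The 8-bit 4:2:0 at 25 fps numbers are my own round assumptions, not figures from the article:

```python
# Uncompressed 1080p, 8-bit 4:2:0, 25 fps: each pixel costs 1.5 bytes
# (one full luma sample plus a quarter of each chroma channel).
width, height, fps = 1920, 1080, 25
bytes_per_frame = width * height * 1.5
raw_bitrate = bytes_per_frame * 8 * fps   # bits per second
print(raw_bitrate / 1e6)   # → 622.08 Mbit/s uncompressed
print(raw_bitrate / 2e6)   # ≈ 311x reduction at a 2 Mbit/s encode
```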

In 2006, Blu-ray discs appeared.

Within two years they ousted their rival, HD DVD, and they are alive to this day. Blu-ray was developed by a whole consortium of large companies. The discs come single-layer and dual-layer, with capacities of 25 and 50 GB respectively. Video for them is encoded in MPEG-2, H.264 (MPEG-4 AVC) and the then-new Microsoft codec VC-1.

HD DVD's capacities were slightly more modest, 15 and 30 GB, but its discs could also be double-sided. The codec set is the same.

Meanwhile, the future is slowly approaching. Many would like to greet it in the form of the free VP9 codec, but most likely it will wear the corporate grin of H.265, also known as HEVC. What can I say: here's to what's coming =)

Seriously, both codecs will find their place. Already today you can find embedded videos on websites in the open WebM format, which uses either VP9 or VP8. And since Google is pushing VP9 hard, YouTube will support both new codecs too.

Neither codec is revolutionary, but this is another turn of the video-technology spiral. Video in H.264, in VP8, in H.265 and in VP9 all look great; the latter two are simply smaller and have a higher ceiling. Another question is how much faster or slower video will encode in the new formats, so that modest content producers, like Sliama, are comfortable too. And there are no real competitors to these codecs, because today it again matters whether devices can decode the video in hardware: your smartphone can play even the open Theora in software, but the battery will drain much faster. So once again we have good and evil, Coca-Cola and Pepsi, Android and iPhone, VP9 and H.265.

