Monday, June 18, 2012

Nonlinear Video Editing Depends on Metadata

A Ciro Guillotine Super-8 film editor
My very first editing system.
At one point in my career as a video editor, I was the corporate editor for Avid Technology, a pioneer in the development of nonlinear editing systems. During that time, I used to collaborate on edits with a very talented editor located in Maryland (I’m located in Massachusetts). In order to work together, we had to devise a workflow where we could both edit the same video project.
Keep in mind that in the early ’90s most people were still using dial-up, and moving large media files over the Internet was completely out of the question.

For those of you unfamiliar with editing video, here is a very brief overview of how it works: nonlinear editing systems (NLEs) edit video in a non-destructive manner, meaning the edited sequences are just a series of pointers back to the original clip media files. During an edit, the original media is never actually changed – it’s all just the manipulation of metadata, and the metadata holds all of the instructions about the edit. If a transition, composite or other effect is created, the metadata holds instructions on how it should look and, if needed, new media is created with a render. When making edits, the metadata of the sequence changes to reflect new timing, new sources and details of the visual effects.
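
Here is that idea in miniature, sketched in Python. The structures and field names are purely illustrative, not Avid’s actual data model, but they show how a cut is nothing more than a change to small metadata records pointing back at untouched media files:

```python
from dataclasses import dataclass
from typing import List

# Illustrative only -- not Avid's real format. The point: the media file
# on disk is never rewritten; edits touch only the metadata.

@dataclass
class Clip:
    media_file: str   # path to the captured media, never modified
    tape_name: str

@dataclass
class Edit:
    clip: Clip
    in_frame: int     # in-point within the clip, in frames
    out_frame: int    # out-point within the clip, in frames

@dataclass
class Sequence:
    name: str
    edits: List[Edit]  # re-cutting just reorders or retrims these records

def duration(seq: Sequence) -> int:
    """Total running time in frames, computed purely from metadata."""
    return sum(e.out_frame - e.in_frame for e in seq.edits)

clip = Clip("media/interview_01.mov", "TAPE001")
seq = Sequence("rough_cut", [Edit(clip, 120, 480)])
seq.edits[0].out_frame = 360   # the "edit": one integer changes in metadata
print(duration(seq))           # 240 -- interview_01.mov itself is untouched
```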

In an old Avid nonlinear editing suite
My old Avid editing suite before systems became portable.
As part of our collaborative workflow, we decided to use this to our advantage. At the start of the project, I would digitize all of the media. The early ’90s also pre-dated widespread adoption of digital video (DV) and, later, tapeless video, so all of our tapes had to be converted from analog component video into motion JPEG files. Once all of the media we would need was on a hard drive, I would copy the hard drive and send the duplicate drive down to Maryland. I would also send a copy of all the bin files, which included all of the metadata about each video clip – the tape names, start and stop timecode, number of tracks, video and audio formats and all the other pertinent information.
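
To give a sense of scale, here is the kind of per-clip metadata a bin carries, sketched in Python. Real Avid bins (.avb files) are a binary format; the JSON and field names here are just for illustration:

```python
import json

# Illustrative sketch -- real Avid bins are binary, not JSON.
bin_contents = {
    "clips": [
        {
            "name": "Interview - CEO",
            "tape": "TAPE001",
            "start_tc": "01:00:10:00",
            "end_tc": "01:04:32:15",
            "tracks": "V1 A1-2",
            "video_format": "Motion JPEG",
            "audio_format": "48 kHz / 16-bit",
        },
    ],
}

# A few kilobytes of records like this describe hours of media on the
# drive, which is why the bins were small enough to email back and forth.
with open("project_bin.json", "w") as f:
    json.dump(bin_contents, f, indent=2)
```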

As we began editing, we’d both be able to work off each other’s edits by emailing the modified bins with sequences (the metadata) back and forth. These bins were generally very small (a couple of KB or so) and were very easy to email. The only time we’d need to send actual media files was when either of us generated a title. The media files associated with a title were still as small as 1 MB and could easily be emailed. Although the title media could easily have been recreated from the metadata in the bin, it was best if both systems always shared exact duplicates of all media, including titles: recreating title media from the metadata in the title effect would create new media and would change the pointers of the title in the sequence to look for that new media.
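
The pointer problem is easy to see in a sketch. Assuming, for illustration, that rendered media gets a fresh unique ID (as it does in one form or another in any NLE), re-rendering the same title on two systems yields two different pieces of media:

```python
import uuid

# Hypothetical sketch: rendered media gets a fresh unique ID, so
# recreating a title produces *new* media even if it looks identical.

def render_title(text: str) -> str:
    """Render title media and return the ID of the newly created file."""
    media_id = uuid.uuid4().hex[:8]
    # ...the rendered frames would be written to "titles/<media_id>.mov"...
    return media_id

sequence = {"title_media_id": render_title("Avid Overview")}

# Re-rendering the same text elsewhere yields a different ID, so the two
# sequences would point at different media files:
print(sequence["title_media_id"] != render_title("Avid Overview"))  # True
```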

Avid bin in text view showing some pertinent metadata.
A snapshot of a current Avid bin.

The trick to this type of collaboration was making sure that we had the most current sequence before making any changes. If one of us were to begin editing a section of the video based on a previous version of the cut, there would be problems – problems we had to deal with on a couple of occasions. Since I would be mastering the final sequence, it usually meant I had to backtrack and rebuild my changes into the sequence I had received. If the timing had changed between the previous and the current version of the sequence, I would have to re-cut the changes from Maryland to make them fit within the current version.

Once we had the picture locked down, I would prepare and master the sequence for the sound edit. Sound prep meant adding a single frame of white to the video exactly two seconds before the start of picture. This flash was called a 2-pop and was used to synchronize the finished sound once it was received.
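
Placing the pop is simple timecode arithmetic. Here’s a small sketch, assuming 30 fps non-drop timecode (NTSC work of that era would really have been 29.97 drop-frame, which complicates the math):

```python
FPS = 30  # assuming 30 fps non-drop timecode for simplicity

def tc_to_frames(tc: str) -> int:
    """Convert HH:MM:SS:FF timecode to an absolute frame count."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(total: int) -> str:
    """Convert an absolute frame count back to HH:MM:SS:FF."""
    f = total % FPS
    s = total // FPS
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{f:02d}"

picture_start = "01:00:00:00"
two_pop = frames_to_tc(tc_to_frames(picture_start) - 2 * FPS)
print(two_pop)  # 00:59:58:00 -- the single white frame goes here
```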

Mastering meant taking the sequence (which up until this point had just been metadata) and converting it to a single QuickTime movie file. The music and sound tools worked the same way as the video editing tools – manipulating small metadata files until the project was complete. When the sound was complete, they would add a sound blip to correspond with the 2-pop flash on the video, master to a final media file and send it back. The final step in completing the edit was to lock the picture and sound together using the 2-pop.
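
Conceptually, the final sync is just lining up the two pops. A toy example (the frame numbers here are made up):

```python
# Toy example of the final sync -- the frame numbers are made up.
video_pop_frame = 60   # where the white flash sits in the mastered movie
audio_pop_frame = 75   # where the beep sits in the delivered sound mix

offset = video_pop_frame - audio_pop_frame

# Sliding the audio by `offset` frames puts the beep on the same frame as
# the flash; since both run at the same rate, the rest of the program
# stays in sync from there.
print(f"slide audio by {offset} frames")  # slide audio by -15 frames
```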

Without metadata this workflow would not have been possible. The data that describes the media we work with has always been an essential part of post-production. As technology progresses, more parts of the production process will depend on new information in the metadata.

Below is the video, titled "Avid Overview"; it was the first video we produced using this long-distance collaborative workflow.
