This list of terms and explanations was all made up by me personally unless I have specifically referred to someone else's work. No responsibility is taken for ensuring the information is accurate in any way or that the jokes are funny.
Any accident or damage to persons or personal property resulting from the use of this glossary is not only absurdly unlikely but was probably also meant to happen in a sort of fatalistic-type inescapable doom kind of way and I cannot be held responsible. If in doubt - in fact, whether in doubt or not - ask an expert. You have been warned.
Corrections are welcome, as are suggestions for additions. In fact, if you have a suggested addition, feel free to write the explanation and I will credit you appropriately.
The information here is © Copyright Maxim Jago 2004. If you would like to reproduce any of this text, please ask and obtain permission first. Thank you.
This is simply the width of an image relative to its height. Regular Standard Definition video is 4:3, meaning it is 4 units wide for every 3 units tall. All regular TV sets have an approximation of this aspect ratio.
Widescreen Televisions usually have a screen ratio of 16:9. Full blown super duper widescreen film is 2.85:1. This is so wide, you can fall off your chair trying to see the edges.
There are also a number of surround imaging systems, projected onto a domed screen so your entire peripheral vision is filled with the image. You'll tend to only get these at special theme parks and unfortunately, as with the marvellous 3D IMAX system, they tend not to show anything worth watching except for the spectacle.
Although most people use Aspect Ratio to refer to the shape of the picture, pixels also have an aspect ratio and you can get into a pickle when combining formats unless you are prepared for mis-matches in the pixel domain. Computer graphics tend to use square pixels - 1:1 - but regular PAL Standard Definition video uses 1.063:1 pixels. Widescreen PAL video has the same number of pixels as Standard Definition video but they are wider.
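The arithmetic behind this is simple enough to sketch in a few lines of Python (the pixel aspect ratio values here are approximate, and the function name is mine, purely for illustration):

```python
# Rough sketch: how pixel aspect ratio (PAR) turns stored pixels into a
# displayed picture shape. The PAR values below are approximate.
def display_aspect(width_px, height_px, par):
    """Return the displayed width:height ratio as a float."""
    return (width_px * par) / height_px

# PAL SD stores 720 x 576 pixels whether it is 4:3 or 16:9 --
# only the pixel shape changes.
narrow = display_aspect(720, 576, 1.063)   # roughly 4:3  (about 1.33)
wide   = display_aspect(720, 576, 1.422)   # roughly 16:9 (about 1.78)
```

The point to take away: the same 720 x 576 pixel grid gives you two completely different picture shapes depending on how wide each pixel is.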
Sound recording formats will vary but you can generally reduce them to just two key issues - how robust the recording medium is (will it break if you drop it) and the sound signal equivalent of dots-per-inch. A well-used older sound recording system was the Nagra. This system is still in use as a standalone sound recording system for motion picture film but has been superseded by cheaper, lighter and generally better systems. It used magnetic tape ¼” wide and you could choose whether to run the tape past the recording head fast or slow. The faster the tape moved, the more surface area there was to store the sound signal (which was put onto the tape as magnetic charges by the record-head) and that meant the signal became clearer. The principle of using up more of your storage medium per second to get a better recording is consistent across all recording and transmission (i.e., transmitting a radio signal through the air) media.
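The same principle applies to digital recording: the more data you lay down per second, the better the potential quality. A quick illustrative calculation (the figures are standard ones; the function is just a sketch of mine):

```python
# More storage used per second = better potential recording.
def bytes_per_second(sample_rate, bit_depth, channels):
    """How much storage one second of uncompressed digital audio eats."""
    return sample_rate * (bit_depth // 8) * channels

cd_quality = bytes_per_second(44100, 16, 2)   # 176,400 bytes/s
voice_memo = bytes_per_second(8000, 8, 1)     #   8,000 bytes/s
```

CD quality uses over twenty times the storage per second of a basic voice recording, which is exactly the fast-tape-versus-slow-tape trade-off in digital clothing.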
For video productions, it is common to record sound directly onto the video tape while you are filming. You must admit, it's a bit simpler and means there is very little chance of your sound losing synchronisation. The microphones built into cameras are usually no good except for recording a reference. DAT is a tape format which can record high quality sound on very small tapes. It's very popular with the independent short film production world. Minidisk is OK, although the sound quality is really not that great if you want to record anything other than voice. If you don't know what these formats are, you are probably going to need to ask someone about it. However, rest assured that regardless of the format you choose for your sound recording, the principles of recording good quality sound do not change. There's no space to explain them here, of course.
Avid uses the Audio Mix Tool to make adjustments to the volume of a whole clip or whole track on your timeline. It is important to see the word 'whole' in this description. The Automation Gain Tool is used afterwards to set adjustments in the sound level which can change from moment to moment. If you want to, you can simply use the Automation Gain Tool and do away with the Audio Mix Tool but the simplicity of this tool keeps it useful.
Most people use the word Audio to refer to any kind of sound but actually it can be usefully broken down into several categories. Firstly, you have different types of sound your audience will hear. These can be 'sync-source', which means it was actually recorded at the time your subject was making a sound (i.e., talking); Wild Track (the general background atmospheric sound that was going on while your performers were busy performing. Top notch sound recordists will usually insist on recording a piece of wild track for each location to save the editor when something needs to be covered over with it); Music; Spot Effects (perhaps you didn't record the sound of a door closing on location and have to add it later, or even introduce proper 'Foley' sound, which is a process of creating a complete world of sound for your action in post-production rather than recording it on location - see the separate entry for this); Voice over and more.
Adobe bought a lovely piece of software called Cool Edit Pro and changed the name to Audition. The first version of Audition was almost exactly the same program but had a very, very nice noise reduction tool (for getting rid of things like background hiss) and some nice sound samples. There are probably some other differences too but I haven't played with it much yet.
Audition gives you a wide range of tools for producing and manipulating sounds. You can make music with it from samples or you can use it as a software sound recording studio. Very flexible and fun to use. It takes a short time to get to grips with the basics in this application, and then quite a lot of playing around (I mean carefully conducted research) to master it.
Avid uses the Automation Gain Tool to make changes to the level of volume on a particular clip or track that vary over time. The sound can get louder or quieter over time and this change can affect more than one track at a time if you choose. The Audio Mix Tool is used for making broader adjustments to audio for whole clips or tracks. The Automation Gain Tool makes use of key-frames which are extremely handy little guys that make your life easier, in spite of appearing to be complicated.
Avid is the most used non-linear editing system in the world. It is also widely considered to be the best and the most professional. Whether or not it is the best, there is strong recognition of this system in the media industries and you may not be taken seriously unless you use it. There are lower-cost versions available now which make it more accessible than before if you want to learn how it works.
This is just a plug and socket design which looks vaguely like an elongated Phono plug with springy bits of metal on all sides to ensure a good quality contact. It is actually an excellent plug design but is increasingly rarely used. I only included it on this list because I like the name.
Bit-rates are often described in much the same way that you might measure the amount of water flowing through a pipe. Rather than gallons per minute though, it's bits or kilobits or kilobytes per second. The speed at which you connect to the Internet, for example, is measured in kilobits per second. So a 56k modem is supposed to be able to download up to 56,000 bits per second. This is, in fact, never true and you are more likely to get somewhere between 40,000 and 50,000 bits per second. Be a little careful though, because there is a keen difference between a bit and a byte. A byte is 8 bits, used to define one character. Every letter on this screen is an expression of an 8 bit combination (one byte). However, there are extra bits included with bytes very often - particularly a stop bit (which says when a byte has finished, like a full stop) and an error checking bit (which is effectively odd or even depending on the combined value of the other bits in a byte). Are you with me so far? The bottom line is, count on each byte being 10 bits, when it comes to the Internet, and you'll be fine.
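If you want to see the 10-bits-per-byte rule of thumb in action, here is a sketch (the 45,000 bits per second figure is simply a plausible real-world modem speed, not a spec):

```python
# Back-of-envelope download time using the 10-bits-per-byte rule of thumb
# (8 data bits plus roughly 2 bits of start/stop and error-checking overhead).
def seconds_to_download(size_bytes, line_speed_bits_per_sec):
    BITS_PER_BYTE_ON_THE_WIRE = 10
    return (size_bytes * BITS_PER_BYTE_ON_THE_WIRE) / line_speed_bits_per_sec

# A 1 MB file over a 56k modem running at a realistic 45,000 bits/s:
t = seconds_to_download(1_000_000, 45_000)   # a little over 222 seconds
```

In other words, nearly four minutes for a single megabyte - which is why bitrate matters so much for the Internet.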
When you are editing video, you would normally expect it to have contiguous Timecode. If you do not know what this is, go immediately to the Timecode reference on this glossary and give yourself a good spanking.
BetaCam SP or Beta SP is the prevalent 'Standard Definition' video recording system used for broadcast video in the UK. It has been superseded by Digital Betacam or Digi Beta which is unto Beta SP what DV is to Hi-8. Incomparable.
If you are making a movie using Digital Betacam, you don't need this glossary, you already have a job.
Non-Linear editing systems tend to organise your various bits of media in containers called Bins. This harks back to the old days of celluloid film editing. The various strips of film for each shot (shots are broken down into individual takes which are called clips) are hung on many hooks over a large bin which is lined with cloth. The film strips simply dangle in the bin and because of the way celluloid lies, it tends not to tangle with other strips of film. If you look closely, the icon for a closed bin inside Avid looks very much like some strips of film hanging up.
This is a format of picture file. It is not terribly efficient in terms of storage space but tends to be compatible with lots of systems and does not degrade the picture because it simply records the information for every pixel as it appears.
This is a measurement of the amount of bits (ones or zeros) used per second to transfer information. It can be used to describe all sorts of things but is most often referred to when discussing streaming media file formats such as Windows Media Video, QuickTime or Real Media. The bitrate of your video is absolutely crucial when producing streaming media as internet connection speeds can vary enormously.
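This matters because bitrate multiplied by duration gives you the size of the file your viewer has to receive. A rough calculator, purely for illustration:

```python
# Bitrate x duration = file size. A rough streaming-media calculator.
def streaming_file_size_mb(bitrate_kbits, duration_seconds):
    """Approximate file size in megabytes for a given encoding bitrate."""
    return (bitrate_kbits * 1000 * duration_seconds) / 8 / 1_000_000

# A 5-minute clip encoded at 300 kbit/s:
size = streaming_file_size_mb(300, 5 * 60)   # 11.25 MB
```

Run the same sum with your target audience's connection speed in mind and you will quickly see why streaming bitrates have to be chosen carefully.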
Compositing systems have different ways of allowing one layer of picture to interact with another layer. The simplest example is probably opacity. The more opaque the layer in front, the less of the background layer you can see. However, there are some immensely sexy blend modes available in software such as PhotoShop (for still images) and After Effects (for moving video). Both are made by Adobe.
This is the system for making a particular colour or a range of colours transparent so that an image behind your foreground image becomes visible. The classic example of this is weather presenters, although this is increasingly achieved with very big screens behind the presenter instead. Normally, the colour you want to make transparent is behind a presenter or an object in the foreground. Blue is often used because it rarely appears in nature, so you can easily dress your presenter so they don't become partially transparent themselves.
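A real chroma keyer is a sophisticated beast, but the core idea can be sketched per pixel (the threshold and the blue-dominance test here are deliberately crude, and entirely my own illustration):

```python
# A crude per-pixel keying sketch. Real keyers are far more subtle,
# handling colour spill, soft edges and semi-transparency.
def blue_key_alpha(r, g, b, threshold=100):
    """Return 0 (transparent) if the pixel is dominantly blue, else 255."""
    if b > threshold and b > r and b > g:
        return 0      # part of the blue backdrop -- knock it out
    return 255        # foreground -- keep it opaque

# The presenter's red jacket stays; the blue backdrop disappears:
jacket   = blue_key_alpha(200, 40, 30)    # 255 (opaque)
backdrop = blue_key_alpha(20, 30, 220)    # 0 (transparent)
```

You can see from this why the presenter must not wear blue: any blue-dominant pixel in the foreground would be knocked out along with the backdrop.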
This is a single press AND let go of the left mouse button (assuming you have your mouse set up as a right-handed person). On a Macintosh computer, you may have only one button, or no button at all - you just press down on your mouse. It is important to note the 'let go' part of this description because you usually have to click on some particular icon or button. If you move the mouse pointer before letting go of the button, you will probably not get the result you want.
This is the general term used to refer to any one piece of video or sound you have organised somehow inside your Non-Linear Editing system. You edit your Clips into your finished film, adding effects and artistic judgement as you go.
Clip, or clipping is also used to describe the effect of recording sound too loud. When you go beyond the highest level your recording medium can take, you start to simply lose signal information and bits of your sound just are not recorded. Clipping is obviously a bad thing and since digital recording media are so good at not adding very much background noise, there is no excuse for setting your recording level too high.
Adobe Audition software includes an option to attempt to restore clipped audio. It does a reasonable job if you are only losing the odd brief moment of sound through clipping.
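Digitally, clipping is nothing more than the recorder flattening anything beyond its numeric range. A sketch of a 16-bit recorder (the function is my own illustration):

```python
# Sketch of hard clipping: samples beyond the medium's ceiling are
# simply flattened, and that information is gone for good.
def record(samples, ceiling=32767, floor=-32768):
    """Simulate a 16-bit recorder: anything outside the range clips."""
    return [max(floor, min(ceiling, s)) for s in samples]

loud = [0, 20000, 40000, -50000]
taped = record(loud)   # [0, 20000, 32767, -32768] -- the peaks are lost
```

Notice the original peak values cannot be recovered from the taped list; tools that "restore" clipped audio can only guess at the lost waveform shape.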
Compressors are sometimes referred to as Codecs. This is an abbreviation of compressor decompressor. You can broadly compare the different digital video and sound codecs, or formats, with the various real-world video and sound recording formats. Each format - such as Cassette Tape, CDROM, Reel to Reel Tape and so on - has pros and cons, strengths and weaknesses and potential compatibility issues. Exactly the same is true of digital codecs.
Choosing the right codec is vital but thankfully, most systems automatically choose a default compressor that works when you record. Outputting via Export is a little more complicated - for example, do you want Real Media, QuickTime or Windows Media Video.
In fact, it might be helpful to think first of formats - such as AVI files or MOV files - and within them, the various codecs in the same way as languages might have dialects. The various Chinese dialects are so different, people sometimes have to resort to English to understand each other, even if the root language is the same. This is exactly how codecs work. Even if you are using a QuickTime Player to play a QuickTime movie file, if it uses a codec that is not installed on your computer, it won't work.
On the Internet, codecs are increasingly downloaded and installed automatically. You might have seen Windows Media Player, for example, telling you it is downloading a codec. It does this automatically based on information embedded in the media file it is trying to play.
Non-Linear Editing systems like Avid and Premiere allow you to have multiple tracks with different bits of video and sound on them. Any track that occurs simultaneously on the timeline will be played simultaneously. Sometimes, particularly with complicated Title sequences, you can end up with so many layers of video, your timeline becomes too messy to enable you to enjoy editing and find satori on your edit suite. Collapsing Tracks combines any layers of video (not sound) that you want into just one layer, making it tidier looking (as well as more efficient for rendering). You can still get at the individual layers of video and work on them separately by stepping into your collapsed track.
Colour Correction is the process of changing the appearance of the colour and brightness of the pixels in your video.
Being human, we tend to think about the video we are working with in terms of complete pictures in which time appears to pass. Actually, you probably know the image is really made up of a series of individual frames which change over time, giving the illusion of movement. That illusion works because of persistence of vision (the way it takes time for the light registered in your eye to fade away).
Further than that, PAL and NTSC video is actually normally divided into fields, which are separate groups of lines which appear alternately to even further aid the illusion of the frames connecting together and to give a television time to keep up. Fields are not used in Progressive Video, which is new and generally quite lovely, containing complete frames of picture information. However, for the purposes of understanding colour correction, we want to think about individual pixels.
Depending on the complexity and flexibility of your colour correction tools, you should find you are able to make some pretty sophisticated adjustments to your picture, including some familiar options like Brightness and Contrast, Hue, Saturation, Black level, White Level, Curves, Gain and more. Avid has an immensely flexible colour correction tool. There isn't space here to fully explain these terms but we hope to include some walk-through information as downloadable documents shortly.
This starts out seeming like a simple concept - you need VHS tapes if you want to use them in a VHS player and Hi-8 ones for a Hi-8 player. But in the world of computers compatibility is much more of a headache.
Many people will be familiar with the Mac vs PC debate. These two giants have eventually reached a point where, with careful planning, you can get information from one to the other. However, there are also possible compatibility problems between two separate software applications on one computer - or even between two pieces of hardware, different drivers or, in fact, anything whatsoever that you can try to get to work together on your computer.
The only solution is frequent back-ups and never, NEVER updating your computer and/or making software changes when you are in the middle of an edit. Certainly never when you are close to a deadline.
This is fair warning now, based on bitter and painful experience: Anything you add to your computer has the real possibility of causing it to completely stop functioning. The cure may require you to lose the use of your computer for a week or more.
Quite simply, there are too many hardware and software manufacturers for them to get together and ensure all that they produce is fully compatible. Plus, the way that computers work means that different software applications and drivers tend to step on each other's toes a lot. They have to share files to work.
This is a system for transmitting a video signal through a cable - well, actually three or four cables. The video signal is broken down into colour components - i.e., RGB (Red, Green and Blue) - and sent through the cable. This way, the individual parts of the video signal don't interfere with each other in transit, resulting in a cleaner signal at the other end. Actually, RGB is rarely used - more often it is YUV which is a component system that uses a special calculation to get an even better quality picture. This system separates out a signal with the luminance (brightness of each pixel), Red colour of each pixel with the luminance removed and Blue colour of each pixel with the luminance removed. The green is worked out from what is missing. Often, you will need a composite video signal as well, which is called a reference signal. The receiver will make use of this to aid in rebuilding the complete picture from the component parts.
As you can tell, component video seems a little complicated but the good news is that the cables and sockets are usually very clearly marked so unless there is a fault with a cable, you'll be hard pushed to go wrong with it. Component Video is the highest quality ANALOGUE video cable transmission system.
Component tends to be used by BetaCam SP Video Recorders and Players. A good deal of broadcast video uses this connecting system. Some DVC Pro 50 recorders / players can only provide component video output as well.
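The 'green worked out from what is missing' trick can be sketched with the standard luminance weightings (this is a simplified illustration of the maths; real component signals scale the colour-difference channels and work with voltages, not 0-255 numbers):

```python
# BT.601-style component separation: luminance plus two colour-difference
# signals, with green reconstructed at the receiving end.
def to_components(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    return y, r - y, b - y                   # Y, R-Y, B-Y

def from_components(y, ry, by):
    r = ry + y
    b = by + y
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # green worked out from what's missing
    return r, g, b

y, ry, by = to_components(200, 100, 50)
r, g, b = from_components(y, ry, by)         # round-trips back to 200, 100, 50
```

Only three signals travel down the cables, yet all three colours come out the other end intact.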
Composite video is a system for transmitting picture information along a cable where all the parts of the picture are combined into one signal. The result is that you only need one cable and the picture is as poor as it gets when coming through a cable. This is a last resort for video connections.
VHS and 8mm tape will only normally provide a composite video signal.
Combining multiple layers of video, graphics and so on, is usually called compositing. The resulting picture is a composite of the different parts.
Composition is a word most often used to describe the overall contents of your image. For example, if you have a tree on the left and a bucket on the right, that's the composition. Good composition is not easy to achieve. Even if you have a well designed set and lighting, it takes a good eye to position that camera just right.
You know how when you squeeze a big sponge into a suitcase and then leave it for a little while, then get it back out again, it doesn't unsqueeze completely? Well, that is pretty much the story with compression. Don't you put sponges in suitcases very often?
When you compress a signal, whether it is video, sound or whatever, it is made smaller by removing information that is either irrelevant or possible to recreate. After that, a kind of intelligent selective prioritising goes on, where real information to do with the signal is systematically dropped. The result is that, depending on how heavily you compress your signal, what you get out the other side, when you decompress it, will be more or less like the original signal.
The most common form of compression is probably RLE - Run Length Encoding. With this system the compressor just uses a better system for recording the information in the signal - much like shorthand - so it takes up less space. It loses no information and completely restores the signal but is not as efficient as some other systems available.
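A toy run-length encoder shows the shorthand idea (this sketches the general technique, not any particular system's file format):

```python
# A minimal run-length encoder/decoder -- lossless 'shorthand' for runs.
def rle_encode(data):
    """Turn 'aaaabbb' into [('a', 4), ('b', 3)]."""
    out = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        out.append((data[i], run))
        i += run
    return out

def rle_decode(pairs):
    """Expand the shorthand back into the original data, losing nothing."""
    return ''.join(ch * n for ch, n in pairs)

packed = rle_encode("aaaabbbcca")   # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
```

Because decoding perfectly rebuilds the original, this is lossless compression - the sponge comes out of the suitcase fully unsqueezed.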
There are many books on the subject of compression and many different compression systems available. DV is a form of compression, DVDs use a system called MPEG2, video for the internet will usually be encoded as Windows Media Video, QuickTime or Real Media (most often all three).
It is worth spending some time finding out a little about compression but if you keep in mind the suitcase and the sponge, you'll have a good guide. The smaller you make it, the lower the quality will probably be when you open it up again.
Still images use compression just like anything else. JPEG is a common compression system used for pictures on the web. It is very effective and compatible with just about everything.
At its simplest, continuity is the apparent consistency between one shot and another. For example, if your lead character has black hair in one shot, it should not suddenly be bright pink in the next.
However, it can be much more than this; you have continuity of action, where people walking from right to left in one shot should not suddenly be moving from left to right in the next; you have continuity in the colours at a particular location or in your film as a whole (you can positively identify particular looks in some feature films); you need continuity of sound quality and ambience and so on and so on. Continuity is so vital to the finish of a film but is often left till last by many short film makers.
When the picture in your video changes instantly from one image to another, that's a cut. This term is also often used to describe things that were removed.
If you are showing someone talking in answer to an interviewer's questions and you want to join together two different answers without showing the join, the easiest solution is to make a cut, show a shot of something completely different but connected and make another cut to go back to the subject. Often, you'll see a shot of the interviewer nodding while listening to an answer. This is actually recorded after the subject has left and is called a noddy shot. Really.
Cutaways are an editor's best friend. Without them you tie their hands, making it extremely difficult for them to work their magic, turning the timing of your presenters into perfection and obscuring continuity problems. It's a good idea to record cutaways whenever you aren't busy doing something else and get AT LEAST one cutaway for each location. It can be a shot of a clock on a mantelpiece, the subject's hands, the view out of the window - anything, as long as it is not bound up in the continuity of the action.
This is the term Avid uses to describe the process of putting your finished sequence onto a Master Tape. You can either just splosh the finished piece on to a tape at the beginning or, much more frequently, have the position on the tape match the position on your timeline so you can quickly identify any changes you want to make.
The second most common type of transition (if you count a cut as a transition, which you should). The contents of the first shot fade away to reveal the contents of the next shot.
This is where you click twice in quick succession on the left mouse button. If you have a Macintosh computer, you probably don't have two mouse buttons. In fact, you may not have any, in which case you can simply lean meaningfully on the mouse in general, twice, in quick succession.
DV, or Digital Video is a very popular video recording system. One of the most significant features of this system is that the compression it uses occurs inside the camera at the time of recording. It records at 3.5MB per second using approximately 5:1 compression. That means, the recorded information is about five times smaller than the fully uncompressed signal. In fact, the amount of compression can vary depending on how complex the image is and it is worth noting that DV is particularly well suited to recording Caucasian skin tones and very poor at recording complex green images, like grass.
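A back-of-envelope calculation shows where the 'roughly 5:1' figure comes from (I am assuming PAL frame sizes, colour-reduced 8-bit sampling at 1.5 bytes per pixel, and DV's 25 megabit per second video stream - the tape carries a bit more than that once audio and housekeeping data are added):

```python
# Where DV's 'roughly 5:1' compression figure comes from (PAL numbers).
uncompressed = 720 * 576 * 1.5 * 25   # bytes/s: pixels x 1.5 bytes x 25 frames
dv_video     = 25_000_000 / 8         # DV video is about 25 Mbit/s

ratio = uncompressed / dv_video       # just under 5
```

So the camera is throwing away roughly four-fifths of the raw picture information before it ever reaches the tape, which is why complex detail like grass suffers.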
Because compression happens on the tape, and for other more esoteric reasons, it is fairly easy for a computer to work with the video information created on these cameras.
Digital Video is actually a very broad term referring to any video signal that is recorded digitally but the abbreviation to 'DV' is pretty much always the name for a particular digital format.
The quality of picture you can expect from DV can vary greatly depending on the quality of the camera. Because of limitations in the way that DV registers picture information, it may be worth having a proper test shoot with a new camera before trusting it with your important film.
The quality of picture produced by a DV camera is widely accepted as being broadcastable. This is not necessarily the same as Broadcast Quality, which is a much more obscure and difficult to pin down definition. DV uses 'unlocked audio'.
DV Tapes come in regular and Mini-DV varieties.
This is a more 'Pro' version of DV. The picture information recorded is exactly the same but the tape runs through the camera faster, losing one third of the duration, meaning there should be a lower risk of signal drop-out (gaps in the information recorded on the tape, most likely because of a bad patch of tape). Also, the sound is recorded in individual packets for each frame of the video, unlike regular DV. DVCam sound is referred to as 'locked audio'. DVCam tapes are also supposed to be a little more robust than regular DV ones. Like DV, DVCam tapes come in regular and Mini-DVCam sizes.
Many people feel the signal recorded onto DVCam tapes is better than that recorded onto regular DV tapes. In fact, this is untrue. The difference is that DVCam cameras tend to be more expensive than DV ones, with a better lens and image-registering system. This means there is more and better picture information available for compression and you get a better signal on the tape.
You all know what this is. But did you know there are a number of formats available? There are (at the time of writing) no domestic or even vaguely affordable DVD writers that will produce the type of discs you buy in the shops. For this, you'll need a 'glass master' (read this as 'paying someone else to do it'). This is not very expensive if you want 1000 copies made.
The most common format with the best compatibility with DVD Players, which you can burn on a computer is probably DVD-R. The disks are single-sided and will hold a little under 4.5 GB of data on them. The disks are not that expensive.
A very good data storage format is called DVD-RAM and you can get writers that will burn both DVD-R and DVD-RAM disks if you want.
The format of video a DVD player needs on the disk is based on MPEG2. However, it is not actually written onto the disk as MPEG2. Instead, it is converted into a VOB file. This conversion is done pretty much automatically when you press the 'make my DVD' button inside your DVD Authoring software.
Anything that changes the look of your video, or the quality of your sound, is an effect when applied in post production. Be very careful when applying effects in an off-line non-linear editing system as the effects MUST be present on the on-line system as well if you want them to work. Even if they are present, I would want to do a test run with a number of effects before being confident about using them because you cannot always be sure about compatibility between different versions of the same effects.
The effect pallet is the window containing any special effects you have available to apply to your video. Avid's effect pallet has the effect types in a list on the left, with the individual effects themselves listed on the right, once you have selected a type. Adobe Premiere has separate pallets for video, audio and transition effects. These can be 'docked' together, to make one pallet with multiple tabs. The way in which you apply an effect to a clip varies from system to system but most often involves dragging the effect icon with the left mouse button onto the clip.
Export functions are used most often to convert the video or sound format you have inside your editing software into something else. A fine example would be turning your finished, edited film into a single MPEG 2 file to use in the making of a DVD. There is no reason why you can't export into a single file which is the same format. This is sometimes a useful technique for turning a complicated section of your timeline into a single, easily managed clip. Avid gets around exporting in this circumstance by allowing you to do a video 'Mix-down' into a new clip in a bin which you can then add to the timeline again, effectively shortcutting the export and re-import process. Avid also allows you to 'collapse tracks' into a container inside the editing timeline. This is very handy and quick to do.
Since exporting is so often used to convert formats, it might be worth explaining a bit about the options here. Most media production and post-production makes most sense if you prepare for it by planning for completion and working your way back. For example, if you were intending to produce a streaming media file on a website, you will need to compress your video to a format which maximises the available bandwidth, giving the best possible quality video. People often believe that you don't need to worry about the quality of the original video if you are intending to reduce it to a much lower quality picture in the end by using compression. In fact, the reverse is true and the finer the quality of the original, the better the results you can expect from your compression.
When you export from your timeline, you will have the choice of either exporting the whole sequence or just a selected area. Avid does this quite surreptitiously, by giving you a 'use marks' tick box. If it's ticked, you'll only get the section of your timeline between the In and Out mark. Frankly, you are almost always going to want to use this option, just to be sure.
If you are exporting to a compressed format, you may want to reduce the picture resolution in order to get the most out of your bitrate. Standard Definition video is 720 x 576 pixels. Because video compression systems (such as Windows Media Video, QuickTime and Real Media) tend to work out their maths in blocks of pixels that are 4 x 4, making 16 pixel squares, it is a good idea to try and make sure your final resolution is a) sized as a multiple of 4 (or preferably 16) and b) has the same aspect ratio as it originally did (4:3 or 16:9, for example). There are good reasons for this.
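A small helper sketches the idea (the rounding strategy here is my own illustration, not any particular encoder's behaviour):

```python
# Hedged helper: scale a frame down and snap both dimensions to
# multiples of 16, keeping the aspect ratio as close as possible.
def snap_resolution(width, height, scale, block=16):
    w = round(width * scale / block) * block
    h = round(w * height / width / block) * block
    return w, h

half_pal = snap_resolution(720, 576, 0.5)   # (352, 288) -- both divisible by 16
```

Note that snapping to 16-pixel blocks inevitably nudges the aspect ratio a little; the compromise is worth it because the compressor's maths works so much more efficiently on whole blocks.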
Used to describe either video gradually fading to black or sound gradually fading to silence. You can fade up or down. Fading up means things appear and fading down means they disappear. A cross-fade is where two separate pieces of sound fade over each other. Effectively, the sounds mix together and (hopefully) maintain one volume when combined rather than jumping or dipping in sound level at the cross-over point. The development of so-called 'Constant-power' or 'logarithmic' audio fades means cross-fades now produce the effect of a constant sound level as far as the human ear is concerned. Technically, it isn't constant at all but our ears are more sensitive to some frequencies than others.
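Constant-power fades are usually built from sine and cosine curves, because the two gains then always add up to one in power terms. A sketch (the function is my own illustration of the principle):

```python
import math

# Constant-power crossfade gains: at every point the two gains satisfy
# g1^2 + g2^2 = 1, so perceived loudness stays steady through the join.
def crossfade_gains(position):
    """position runs from 0.0 (all sound A) to 1.0 (all sound B)."""
    angle = position * math.pi / 2
    return math.cos(angle), math.sin(angle)

a, b = crossfade_gains(0.5)   # both about 0.707 at the midpoint
```

Compare this with a naive linear fade, where both gains would be 0.5 at the midpoint and the combined power would dip noticeably - the dreaded 'hole in the middle' of a cross-fade.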
Nothing to do with grass in this example. Interlaced video (such as regular Standard Definition PAL or NTSC video) is not made up of whole frames of picture information like film is. Instead, it is made of a series of lines which combine to make the whole picture. In order to add to the illusion of movement and make it easier for televisions to display the image, these lines are drawn alternately - odd, even, odd, even and so on. Each whole set of lines (odd or even) is called a field. Every PAL or NTSC frame is made up of two fields; it is the colour sequence that differs, repeating over four fields in NTSC and eight in PAL.
Windows-based computers use containers called Folders to store almost everything. They are the virtual equivalent of folders in your real-world filing system, except that they work like Doctor Who's Tardis. That is, they have no fixed size of their own and will take files of any size inside them. You can have folders inside folders and shortcuts to other folders.
Non-linear editing systems often refer to their folders as Bins. This is historical: it is the name film editors used for the containers they stored their film clips in. There are some subtle differences when it comes to the files sitting on your computer's hard drive, but inside the editing software they work almost exactly like regular computer folders. Command-line systems often refer to these folders as directories, which is the more grown-up term for the same thing.
Once upon a time there was a man called Mr. Foley. He realised that the intimacy of the camera meant we would expect to hear a lot more of the sound going on in a scene than the regular microphones picked up on-set. As a result, we were missing out, as an audience, on a lot of important textural detail in the action - that is, the sound of shoes walking on gravel, clothes rustling or bones crunching. He started working to produce these sounds and adding them in post-production to give films more of a sense of physicality.
Producing good quality Foley sound is a fine art and although it might sound a little silly, have a good listen to ANY feature film that wasn't shot on a shoe-string. The Foley artists are at work all the way through.
Moving images appear to be continuously changing as time passes but in fact, they are usually made up of individual still frames which flash past your vision so quickly they appear to be one changing image. This is because of the latent image effect, also known as persistence of vision. For just a fraction of a second, your eye retains a kind of left-over version of everything you see. In the case of a film frame, you see one image, then black, then the next image so quickly that your eye hasn't had time to clear the last image fully. The result is that these images seem to blur into one another and you have the illusion of movement. Most animators use a minimum of 16 frames per second to give the illusion (and it is always an illusion) of fluid movement but for really comfortable viewing, films tend to be shot and played at 24 frames per second (25 for PAL television).
Video material is made of frames just like film, but those frames are usually divided into fields as well. The number of frames per second is determined by the format you are using. For example, PAL Standard Definition video is 25 frames per second. Always. NTSC is 30 frames per second OR 29.97 frames per second, depending on how you are working the timecode.
The principle of maintaining the illusion of movement with frames hasn't really changed since the old children's spinning toys (zoetropes) with pictures of horses running in them. What has changed, though, is the quality of the tools available. So now we have super crystal clear pictures.
One small development, is 3D IMAX. This system (gorgeous lovely, mmmmm, IMAX we love you) can be shot at 50 Frames Per Second, splitting the frames alternately between a lens for your left eye and a lens for your right eye. The result is that you get full frame, full colour, beautiful pictures. Especially since it is shot on 65mm film, rather than 35mm.
How often do you have a bath? Usually a certain number of times a year, a month or a week. That's your bathing frequency. Frequency, when talked about in terms of editing video or sound, is normally measured in cycles per second. The unit is named after Heinrich Hertz, a pioneer in the study of electromagnetic waves, which is why sound frequencies are measured in Hertz.
Your ears, if you are human, will probably be able to detect sounds with frequencies between around 20 and 20,000 Hertz. Hertz is often abbreviated to 'Hz', and 1,000 to 'k'. So 15kHz means 15,000 Hertz. Excuse me if I'm stating the obvious here.
Now, many, MANY things are measured in Hertz. Pretty much any frequency is described in this way, including audible sound, light, the electrical pulses travelling down the cable to your speakers and even our electricity supply. In the UK, our supply is 50 Hz alternating current. That does not mean it takes turns with a sultana. Instead, it means the voltage alternates polarity 50 times a second. In the US, the power supply is 60 Hz. Since 25 divides into 50 so much more neatly than 24 does, 25 was chosen as the frame rate for PAL video. The same goes for NTSC, the US video format, being 30 frames a second to match the 60 Hz supply.
Actually, 30 frames a second worked fine for black and white NTSC. It was the addition of colour that caused problems: the new colour signal interfered with the sound carrier, so the whole thing was slowed down fractionally to 29.97 frames a second instead. But that's another story.
The faster a speaker beats the air, the higher the audible pitch. The further it beats the air, the louder it is. Below a certain frequency, you can begin to hear the individual beats of the speaker.
For reasons to do with sampling theory (the Nyquist theorem), digital recording systems need to use a sampling rate that is at least double the highest frequency being recorded. So, in order to record a signal that covers the whole of the human hearing range, CD audio has a sample rate of 44.1kHz. The result is an effective recording range of up to 22.05kHz, which should cover any normal human being's hearing. If it doesn't, they should think themselves lucky and not tell anyone about it.
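The rule of thumb, and what happens when you break it, can be sketched like this. This is a simplified model; real converters filter out offending frequencies before sampling.

```python
# The Nyquist rule in miniature: a digital system can only capture
# frequencies up to half its sample rate. Anything higher 'folds
# back' and masquerades as a lower frequency (aliasing).

def nyquist_limit(sample_rate_hz):
    return sample_rate_hz / 2

def alias_frequency(f_hz, sample_rate_hz):
    """The frequency a pure tone appears at after sampling."""
    f = f_hz % sample_rate_hz
    return min(f, sample_rate_hz - f)

print(nyquist_limit(44100))           # 22050.0 for CD audio
print(alias_frequency(30000, 44100))  # a 30kHz tone shows up at 14100
```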
The way frequency relates to multimedia and media production is at once terribly boring and immensely fascinating. My advice is to listen well when someone seems to know about the subject, but otherwise just remember some simple rules.
For DV production, set your camera, and in fact all of your equipment, to 16-bit, 48kHz sound.
For CD music, compress your audio as 16-bit, 44.1kHz sound. All will be well in the world.
In digital terms, a graphic is really pretty much anything that doesn't move. Any still image, in whatever format it comes (be it a JPEG, bitmap or TIFF), gets treated the same way in applications like Avid or Premiere. Photoshop, of course, is made to work with graphics but there are other applications out there too, such as Illustrator, which specialises in vector drawing.
To briefly summarise, there are two main types of graphic - Vector and Rasterized. It took me ages to find anyone who could explain the difference to me - so here it is.
Vector graphics are really a kind of virtual blueprint for a drawing. Rather than have a drawing of a triangle, you have the recipe for a triangle, including details such as colour and shape based on relative units of size: rather than a triangle with a hypotenuse 5cm long, you have a triangle with a hypotenuse 5 units long, relative to the units used for the other two sides. The result is that you can apply ANY RESOLUTION YOU WANT to this triangle. You decide at any time how long 'one unit' is, and that is applied to the image when you use it - for printing purposes or video, for example.
A rasterized triangle, by contrast, is a series of dots laid out in such a way as to look like a triangle. If you make it bigger, the computer just makes the dots bigger, or makes a best guess about what the additional dots would look like. If you make it smaller, the computer simply discards some of the dots you had before.
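Here is a toy illustration in Python of why enlarging a rasterized image cannot add detail. The 'pixels' are just characters and the triangle is as crude as they come.

```python
# Enlarging a rasterized image just repeats the dots; no new detail
# appears. Here the 'image' is rows of characters and scaling up
# simply duplicates each character and each row.

def scale_up(rows, factor):
    out = []
    for row in rows:
        wide = "".join(ch * factor for ch in row)
        out.extend([wide] * factor)
    return out

triangle = ["#...", "##..", "###."]
for row in scale_up(triangle, 2):
    print(row)
```

A vector version would instead re-draw the triangle from its recipe at the new size, producing a smooth edge rather than bigger dots.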
You might be tempted to think that vector graphics are simply better in every way but in fact each system suits different occasions. That's why both technologies are going strong.
Avid makes titles which are effectively vector graphics - this means it renders the resulting image only when you need it and you can transfer titles from one system to another very easily. Vector graphics are resolution-independent (as you can see above), so this is a good system for off-lining and on-lining.
Photoshop supports certain types of vector graphic and normally, if you have a PostScript printer, prints those graphics automatically at the highest resolution the printer supports.
The common term for the storage medium used inside most modern-day computers. Inside a hard drive, the mechanism looks almost exactly like an old-style record player. In fact, apart from being ultra high-resolution and magnetic, rather than based on pits on the surface, and having meaningful data on the surface, and being re-writeable, and being made of metal rather than plastic, they are almost exactly the same idea. Well, they're not exactly the same…
When it comes to computers and storage media, there are really three main considerations.
Size, speed and ease of access, and cost.
With DV Quality video, you get about 4 mins 30 secs of video per Gigabyte of storage space on a hard disk drive. It's not very much. Fully uncompressed video is about 1 minute per Gigabyte. Hard drives give a good balance of the three main factors, allowing for fast enough access to the information stored on them, plenty of space and reasonable cost (insofar as you can attribute value to material things, Mr. Socrates). You can easily get hours of material on them now, inside your computer. You can also get external versions, connected to the computer via a firewire cable. These are still hard disk drives, they just connect to the computer in a different way.
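The arithmetic is easy enough to sketch. The data rates below are rough, real-world figures (DV works out at about 3.6 MB/s including audio; uncompressed SD at roughly 21 MB/s), not exact specifications.

```python
# Back-of-the-envelope storage sums: how many minutes of video
# fit in one gigabyte at a given data rate.

def minutes_per_gigabyte(megabytes_per_second):
    seconds = 1024 / megabytes_per_second   # treating 1 GB as 1024 MB
    return seconds / 60

print(round(minutes_per_gigabyte(3.6), 1))   # DV: about 4.7 minutes
print(round(minutes_per_gigabyte(21.0), 1))  # uncompressed: about 0.8
```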
HD is a new video format that is meant, eventually, to replace current Standard Definition. Standard Definition video, in PAL, is 720 pixels wide by 576 pixels tall. NTSC is 720 pixels wide by 480 pixels tall. In fact, these standards are often described in terms of lines, meaning how many horizontal lines make up the complete signal (some of those lines carry no visible picture). NTSC is 525 lines; PAL video is 625 lines.
There are well over 30 recognised High Definition formats but there are two image resolutions that are most commonly used. These are 1280 pixels wide by 720 pixels tall and 1920 pixels wide by 1080 pixels tall. Clearly, these two image resolutions are substantially different but there are other factors that entice people to use one or the other.
This is not the place to discuss the differences between the many HD formats in detail. Quite simply, high definition means sharper pictures and (usually) better colour and luminance.
To give an example of how much better HD can be at recording colour, consider this: regular Standard Definition television is usually recorded as 8-bit. This means a grey scale with 256 steps in it. High Definition can be recorded (with the right equipment) as 10-bit, which has a grey scale with 1024 steps. This more subtle recording system creates (in theory) a more natural rendering of the subject.
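The step counts come straight from powers of two, as a quick sketch shows:

```python
# Each extra bit doubles the number of steps in the grey scale.

def grey_steps(bits):
    return 2 ** bits

print(grey_steps(8))   # 8-bit SD video: 256 steps
print(grey_steps(10))  # 10-bit HD: 1024 steps
```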
In computer terms, an icon is a small graphic symbol that represents something. Usually the thing is a file, but it could be an option or a button. It's important to appreciate the difference between an icon and the name usually shown next to it. For example, on your computer desktop, each icon also has a name. Both can be changed, but in Avid, if you want to select an item rather than rename it, you MUST click on the icon rather than the name. Avid is so ultra efficient, it will assume you want to rename or re-enter any text item you click on. If it's possible to change it, the letters will automatically be selected for replacement. You can do the same thing in Windows generally, by clicking to select the item, pausing, and clicking on the name of it.
Icons are the sort of idea that makes computers a little more human-friendly. There are icon editors available, and downloadable icons on the internet.
As you might imagine, this simply means bringing something into an application (like Photoshop, Premiere or Avid) so you can include it, or part of it, in your work. However, there's one additional detail here. Many software applications automatically convert items you import into a format compatible with the rest of your project. This is usually a good thing, because it saves risking the appearance of items changing later, but it can also introduce an element of Russian roulette. Avid lets you define the way in which it will interpret items in advance. For example, if you make a graphic with RGB colours (the sort of colour system computer graphics use, as opposed to the YUV system video uses) and square pixels (rather than the rather kooky non-square pixels video uses), you need to make sure your editing system knows this in order to make the appropriate changes while importing. If you don't get the import process right, you can end up with colours that are either bland or off the scale, and the image appearing squashed or stretched.
You don't need to be an expert to get importing right. You just need to know the format the item is now, and the format you want it to become. The rest is in the options.
Oh, and some systems can automatically detect sequential file names - for example, house001.tga, house002.tga, house003.tga and so on. Some animation programs create each frame of an animation as an individual picture. Software that can identify sequential images can automatically turn them into a single stretch of video, making it much easier to manage them on your timeline.
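A sketch of how such detection might work, using the made-up filenames above. Real applications will have their own, stricter rules.

```python
import re

# Split each filename into prefix + number + extension and group
# names that share the same prefix and extension into a sequence.

def group_sequences(filenames):
    groups = {}
    for name in sorted(filenames):
        m = re.match(r"(.*?)(\d+)(\.\w+)$", name)
        if m:
            key = (m.group(1), m.group(3))
            groups.setdefault(key, []).append(int(m.group(2)))
        # names without a trailing number are simply ignored here
    return groups

files = ["house001.tga", "house002.tga", "house003.tga", "notes.txt"]
print(group_sequences(files))  # {('house', '.tga'): [1, 2, 3]}
```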
The interface, or Graphical User Interface, simply means the design of your software in terms of how you interact with it. Where the buttons are, what colours they are, how big the text is - all the human interaction elements of the software are described as the interface. Interface design has really lagged behind developments in computer hardware (in terms of brute force processing power) and we are now seeing some much more elegant solutions for allowing organic human beings to control inorganic computers. Of course, there is always the suspicion of the Ghost in the Shell - can spirits inhabit your computer as easily as they might inhabit a body? Anyone who has ever worked in technical support KNOWS this is true. Gremlins, nasty little viruses and illegal operations… Makes you shiver.
Standard Definition PAL and NTSC video are both interlaced formats. This means that the picture is divided into odd and even lines and these lines are displayed alternately - all the odd lines, then all the even ones, and so on. Interlaced video was a necessity because early television sets were just not fast enough to display a whole picture in one go. To aid the illusion of movement and make the job easier for televisions, PAL and NTSC video formats, while referred to as 25 or 30 frames per second, are actually made of individual fields. PAL is 50 fields per second. If you pause a video and get shuddering movement in the still image, that is because the movement in the video is faster than a 50th of a second, so the two fields you see repeated in the paused frame contain different picture information.
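In digital terms, pulling the two fields out of a stored frame is just a matter of taking alternate lines. A sketch, treating a frame as a simple list of lines; which field comes first varies by format, so the names here are purely illustrative.

```python
# A frame as a list of lines; the two fields are simply the
# alternating halves.

def split_fields(frame_lines):
    upper = frame_lines[0::2]  # lines 0, 2, 4, ... (one field)
    lower = frame_lines[1::2]  # lines 1, 3, 5, ... (the other)
    return upper, lower

frame = ["line0", "line1", "line2", "line3"]
print(split_fields(frame))  # (['line0', 'line2'], ['line1', 'line3'])
```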
There are two jack plug designs - 3.5mm (small, like a walkman headphone plug) and ¼ inch. Both operate exactly the same way and, depending on the design, will support either a stereo or a mono signal. Effectively, this just means the cable has more wires in it in the stereo version, to carry the extra signal. You can also get what are sometimes referred to as multimedia cables. These carry three signals and will not usually work in a regular jack socket. They are often used by camcorders to carry a composite video signal plus left and right audio.
Jack plugs are not especially exciting but it is worth noting they are not 'balanced' audio connections. XLR cables have three pins: two carry the signal (one of them an inverted copy) and one is an earth. Because any noise picked up along the cable appears equally on both signal wires, it cancels out when the inverted copy is flipped back at the receiving end. As a result, XLR cables can usually be much longer while retaining a reasonable signal-to-noise ratio. Jack plugs are not especially robust and are not really 'Pro' in the way that XLR cables are.
Joint Photographic Experts Group. JPEG is a very popular format for digital images. There is a moving image compression system with a related name, MPEG (Moving Picture Experts Group). The principle behind JPEG is that there is no point recording every single pixel in an image as a separate piece of information when many of them are exactly the same in areas of flat colour. For example, if you have a picture with a vivid blue sky in it, chances are that a large area of that sky will have pixels that are exactly the same colour. To cut a long story short, JPEG uses a system of notation that says 'one hundred pixels like this one' and '50 pixels like this one' and so on. JPEG compression is also 'lossy' (it loses detail from the original image), allowing you to compress an image more and more, softening it in order to make that compression less obtrusive, resulting in a smaller file.
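That 'pixels like this one' idea is run-length encoding, which can be sketched in a few lines. Real JPEG is far more involved (it works on transformed 8 x 8 blocks), but the intuition carries over.

```python
# Run-length encoding in miniature: collapse runs of identical
# pixels into [count, value] pairs.

def run_length_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1][0] += 1       # extend the current run
        else:
            runs.append([1, p])    # start a new run
    return runs

sky = ["blue"] * 5 + ["white"] * 2 + ["blue"] * 3
print(run_length_encode(sky))  # [[5, 'blue'], [2, 'white'], [3, 'blue']]
```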
Because of its effectiveness at lossy compression, JPEG is used all over the Internet, where the size of every file is vital to the accessibility of a website. On the Internet, size is important, just the other way around.
Key-framing is fundamental to the way computers do work for us in a design environment. It works like this:
I want my music to start at zero volume and, at 10 seconds, to be at full volume. Rather than tell the computer the volume I want for every individual millisecond (some applications will actually let you do this if you are mad enough to want to), I just tell it those two points in time. The computer works out the in-between values and, as a result, I get a lovely smooth fade from silence up to full volume exactly on the dot at 10 seconds.
This principle applies to all keyframable effects. If you are working with a visual effect, you are adjusting other parameters than sound volume, but still just doing it for each key moment. A good visual example would be key-framing a superimposed layer of video. You may want it to fade in and out in time with the music. All you would need to do is set key-frames for the moments when you definitely want the picture to be visible and invisible. Job done.
More advanced compositing systems allow you to specify how the keyframes relate to time in terms of acceleration. This is pretty difficult without a drawing, but imagine the difference between a graph with straight lines on it and one with nice curves. Or even a graph with no direct lines between the marked points, just straight steps at each one. These are the kinds of options available. The smoother the curves, the less obtrusive the gradual changes will appear. This is usually, but not always, desirable. After Effects and Avid both support bezier key-frames, which give individual control over the angle of the curves connecting to them. Lovely.
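A minimal sketch of key-frame interpolation, using a smoothstep curve as a stand-in for proper bezier handles (which would need full control points):

```python
# Given two key-framed values, the computer fills in everything
# between. 'linear' draws a straight line; 'smooth' eases in and
# out, like the gentle curves described above.

def interpolate(v0, v1, t, curve="linear"):
    if curve == "smooth":
        t = t * t * (3 - 2 * t)  # smoothstep easing
    return v0 + (v1 - v0) * t

# A volume fade from 0 to 100, sampled halfway and a quarter in:
print(interpolate(0, 100, 0.5))             # 50.0
print(interpolate(0, 100, 0.25, "smooth"))  # 15.625 (gentler start)
```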
Not damaging cars in some dodgy part of Manchester. Keying is the process of making part of a picture transparent. The computer uses the 'key' to identify which pixels should be opaque, which transparent and which somewhere in-between. A classic example of keying would be Chromakey, where the computer can make pixels of a particular colour transparent. Weather reporters always used (and often still do use) an advanced Chromakeying system.
Another option would be Lumakey, where the brightness of the image is used.
Another system for making pixels transparent is to use a grey-scale image as a kind of map, showing which pixels should be visible, partly visible or transparent. That map is called a Matte.
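Reduced to its essentials, a luma-based key is a threshold over brightness. This is a deliberately hard-edged sketch; a real keyer would soften the transition between opaque and transparent.

```python
# Build a matte (grey-scale transparency map) from pixel brightness
# values (0-255). Bright pixels become opaque (255 in the matte),
# dark ones transparent (0).

def luma_matte(luma_values, threshold=128):
    return [255 if v >= threshold else 0 for v in luma_values]

row = [10, 90, 130, 200, 250]
print(luma_matte(row))  # [0, 0, 255, 255, 255]
```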
Your eye, video and film each have a particular latitude: the greatest difference between light and dark areas of a picture that can be detected at the same time. For example, if you are in a room with someone who has a window behind them on a bright sunny day, chances are you will be able to see the person AND the view outside at the same time. On video, you would probably have to make do with EITHER seeing the person OR the view. If you expose for the person, the view becomes completely bright and all the detail disappears. If you expose for the view, the person becomes a silhouette.
The edge of a medium's latitude is the point at which detail begins to disappear. As you get close to the edge, detail gradually starts to disappear anyway, and this can be used to great effect if done deliberately.
Film has a much wider latitude than video, and the human eye is the best of all.
Cel animation works by drawing different parts of the image on different layers of transparent acetate (cels), then laying them on top of each other to give the appearance of those layers being one complete image. So, the hills in the background might be one layer, a tree in the middle distance on another layer, and the subject dancing around on a third layer (often there are many more layers than this).
Compositing on a computer or adding multiple video filters works very much the same way, except that the computer has to give you a graphical representation of the individual layers, to allow you to work on them independently. To know Photoshop is to know layers.
Video is often described as being made of a certain number of lines. 625 for PAL and 525 for NTSC.
The process of identifying certain pixels as transparent based on how bright they are.
When you record video material onto a computer, it is stored on a hard drive in an appropriate format. In the case of Adobe Premiere, this is an AVI file; in the case of Avid, it is usually an OMF file. It is important to understand the distinction between this file (which is about 4 mins 30 seconds per gigabyte of storage space on your hard drive at DV resolution - enormous) and the item you see in the bin inside your editing software.
The item inside your bin, is nothing more than a shortcut to the original media file. In fact, Adobe sometimes refers to these as shortcuts but Avid calls them Master Clips. The interesting choice of name carries more significance than you might think. A Master Clip inside Avid contains not only a pointer to the media file on the hard drive, but also information about the timecode and tape name from which that media originates. This means you can always ask Avid to re-record the exact same piece of video from your source tape in the future. In fact, the OMF file also includes this information (including the name of the project it was originally recorded into) so with either the Master Clip or the OMF Media file, you can reclaim the video material from the tape automatically.
This process of automatically restoring media is extremely important for non-linear editing systems: it is exactly what allows collaboration between multiple designers, and it means you can back up your whole programme with nothing more than a CD-ROM and the original source tapes. Very useful, should you ever need to re-edit or make small alterations such as title changes.
The Master Tape is your final, finished work on tape. This is the tape you are aiming to produce with your non-linear editing system - in fact with any editing system. This should not be confused with a Source Tape, which is the original material you used in the making of your programme. Source Tapes are more important than Master Tapes because new Master Tapes can always be produced, as long as you have the original source material. However, the Master Tape is the golden egg your client is waiting for, so it must be perfect.
There are standards for the way a Master Tape should be laid out and these vary from broadcaster to broadcaster. It is common to include some colour bars at the start of the tape, to allow any alignment issues to be resolved before transmission. It is also common to include a reference tone, which is simply a 1kHz sound at a particular volume. Unfortunately, the volume differs from company to company and sometimes the only way to find out what reference tone level they want is to ask. It is important to mark the decibel level of your reference tone on the tape, so there is no confusion when it is being accessed. For example, if they are expecting a -16dB reference tone and you actually give them -18dB, your entire programme will be played back 2dB louder than it should be. This can lead to clipping the audio, making it unacceptable. They might send the tape back to you saying you are unprofessional and walk funny.
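Why does a mere 2dB matter? Decibels are logarithmic, and a level change of d dB multiplies the signal amplitude by 10 to the power of (d / 20):

```python
# Convert a decibel change to a linear amplitude gain.

def db_to_gain(db):
    return 10 ** (db / 20)

print(round(db_to_gain(2), 2))    # a 2dB boost is a gain of about 1.26
print(round(db_to_gain(-18), 3))  # a -18dB tone sits at ~0.126 of full scale
```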
A Matte is simply a black and white image that is used to define the areas of a picture that should be transparent or opaque. Grey pixels are partially opaque. You can use all sorts of things as an original for a matte and some truly profound effects can be achieved with careful use. Mattes are often referred to as Matte Keys, to avoid confusion. A Moving Matte is simply one that changes over time. You could, for example, create an animated image that provides the basis for the visible areas of a picture.
Some keying applications allow you to display the key instead of the transformed video. The result is a grey-scale image that you can use as the basis for a matte on other bits of video. For example, you could take an image of someone's face, use Lumakey to make portions of it transparent and then use the resulting matte to combine two completely different layers of video. The result would be the impression of a face made out of something else - such as fire over some kind of background, such as the sea.
This is the actual video or sound material on your hard drive, as distinct from the Master Clips or Shortcuts inside your editing software. Most non-linear editing systems (including sound systems such as Adobe Audition) make a distinction between reference clips and Media Files. This allows you to make any changes you want non-destructively, so you can change your mind later without damaging the original files.
Media files are generally very big, particularly for video, while the reference shortcuts inside your editing application are often small enough to put all together on a single floppy disk. Or even to e-mail to people.
Without the media files, you cannot make your programme because they ARE the programme. The reference shortcuts just refer to them.
If you've got your Mojo baby, you know you're swinging.
Avid came out with this spectacularly named piece of hardware to coincide with the release of Avid Xpress Pro and it provides some extremely cooltastic features. It works as a transcoder, converting analogue video and sound signals to and from a digital signal for the computer. It gives you Composite Video, S-Video, and Component Video. What is especially clever about it, though, is that it takes a fully uncompressed digital signal from the computer and does the analogue conversion from that, rather than from a regular DV signal. Apart from providing excellent quality video this way, it also means all the real-time effects Avid can do play out to the Mojo as well, meaning little or no rendering while you work.
This is particularly useful for setting up colour correction because you will need a regular TV-style monitor to check the colours and without the Mojo, you have to either render, or drag through the video looking at one rendered frame at a time. It does this quickly and smoothly but it isn't the same as just pressing play.
The Mojo will also work with a laptop, allowing real-time analogue output on a type of system that until now has really only been able to offer non-real-time DV out. If your system is powerful enough to support it, the Mojo will also allow you to record and play out fully uncompressed video. Although the effects all need rendering with uncompressed video, this does theoretically mean it will allow you to produce full quality, broadcast-standard component video on a relatively low-cost system.
With a sales pitch like this, I should work for them.
Adobe Premiere refers to effects involving moving the video around the screen as a motion effect. Avid refers to slow motion and fast motion effects as a motion effect. With the new Avid Xpress Pro version, Avid also have a 'Timewarp' function, that allows you to stretch time on the timeline.
MP3 is a very efficient system for compressing sound. It can reduce files by about 10:1 compared with a regular Wave file. This means a whole music CD would take about 60MB or 65MB as MP3. The perceptual quality is pretty close to CD, but it uses a compression approach called psychoacoustics (I kid you not) which works out the sounds you would not be able to hear anyway, because of limitations in human hearing, and just doesn't bother recording them. This is great as a finishing format but because it is technically 'lossy', you would generally not choose it as a source format. There is a newer MP3 format, called MP3 Pro, which is supposed to be much better as a source format.
MiniDisc recorders record in a lossy compressed format of their own, called ATRAC (similar in principle to MP3), which is worth remembering if you are considering using a MiniDisc recorder to obtain sound on location. It is generally fine just for recording speech, but you should seriously consider either using a DAT recorder (which is much higher quality) or plugging good quality microphones into your video camera. DV cameras record sound at the same quality as DAT, although you should consider the likelihood of additional background noise. Generally, recording straight into the camera is the most efficient, effective and generally lovely option.
MPEG 1 is a compression system for low data-rate video. It was meant to provide near VHS quality video via the limited output speed of a regular CD-ROM drive. Since it was one of the first highly compressed formats created, it really took off and it's now easily the most compatible video format you will ever find. Just about EVERYONE can play MPEG1, Mac or PC, Windows 95 or Mac OS X. There are some special computer servers that can stream MPEG1 video and this is sometimes used by very large companies to distribute training materials across local area networks.
MPEG 1 video is usually half the dimensions of full Standard Definition video. A good resolution for PAL MPEG1 video is 352 pixels by 288 pixels. This retains the right aspect ratio.
MPEG1 video includes MPEG audio, which has a similar level of reduction in size to MP3 audio. MP3 audio is actually based on MPEG audio. The two combined mean that you can get between 45 minutes and an hour of near VHS quality video on a standard writeable CDROM. Given that VHS is close to as bad as it gets for picture quality, MPEG1 video is nothing to be proud about in the best compressor stakes. Compatibility is where it remains king.
There is a specific standard for Video CDs, which can be read by certain DVD players - you can even get dedicated Video CD players. These tend to be much more popular in the East than the West, but many DVD authoring applications allow you to produce Video CDs just as easily as DVDs. The interface of a Video CD isn't as flexible as a DVD's but it is fundamentally the same principle.
MPEG2 is a Standard Definition compression system for video that is much more widely used than you might imagine. MPEG2 is the basic video format that DVDs require and consequently, good quality MPEG2 compressors are hotly in demand. You may not know, though, that Sky Digital is broadcast using a form of MPEG2 combined with a modulation scheme, QAM (quadrature amplitude modulation), which packs the digital data onto the broadcast signal. MPEG2 is a lossy format, which means you would not generally choose it as a mastering system. If you shoot on DV with a view to producing DVDs, always produce a DV master tape as well, as this is theoretically as good quality as the original source tapes. In fact, you can go about 6 generations with DV before you start to see a visible loss of quality (according to Richard Payne of Digital Video Computing).
There is such a thing as an S-VCD, a Super Video CD, which uses MPEG2 video but is recorded onto a CD-ROM rather than a DVD.
MPEG2 supports MPEG audio, which is a highly efficient compression system similar to MP3. When a DVD is authored, the MPEG2 video is multiplexed with its audio and subtitle streams into container files with the extension 'VOB'. As such, it is very difficult to reclaim video from a DVD other than by simply recording the output of your DVD player. In fact, at the time of writing, the licence agreement does not permit decrypting the copy-protected content of a DVD at all. I'm not sure if this makes it actually illegal in terms of criminality, but the owners of the relevant patents would certainly look at you funny if you tried.
This is Avid's system for combining effects on a single clip. In order to allow you to control the order in which effects are applied and to access them and make changes later, effects are treated much like containers. The clip is contained by an effect, that effect might be contained by another effect, and so on until you are at the timeline level. It's a little like Russian Dolls. The beauty of this system is that you can actually replace a clip inside a bunch of 'effect containers' without having to remake and set up the effects again from scratch.
Another way in which Avid uses Nesting is to allow you to 'collapse tracks' into one single 'container' which can be opened up into its own timeline. A good example would be if you had a complicated title sequence with 5 or 10 layers of video in it. Rather than have the timeline cluttered up with all these extra video tracks, you can collapse them all into one container. This keeps the timeline neat - which means more to you on complicated edits than you might imagine - and allows you to apply an effect to the whole collection of tracks very simply. If you expand them, referred to as 'stepping in', you see a timeline with nothing but these individual tracks on them. Alternatively, you can expand the separate tracks into a mini-timeline inside your main timeline. It all sounds very bizarre but it makes perfect sense when you see it.
Usually used to refer to unwanted background sounds. This idea is normally illustrated by way of a party. If you were having a conversation with someone at a party, the signal would be the words you are trying to communicate to each other. The background noise would be all the other people talking and music going on around you. The further apart these two are in volume, the easier it is to understand each other.
In terms of video and audio signals, the signal is the same - the bit that you want - but the background noise is usually electromagnetic interference, resistance in the cables, earth leakage or just poor quality electronics, which introduce background hiss on their own. Good quality sound work is a constant battle against the limitations of production (not being able to place microphones as close to the action as you would like, for example) and background noise. The more you turn up the gain on a microphone (effectively the volume), the more noise is introduced. This is why sound professionals constantly 'ride' the audio level to try and reduce the gain wherever possible.
Radio mics make a big difference, and shotgun mics do too, but there's nothing like filming in a soundproof studio.
Video noise is often seen as tiny flecks of white or even fuzzy streaks across the picture. Apart from painting it out by hand, it is virtually unfixable. For this reason, it is worth using virgin tapes when you film something important and carefully maintaining your camera to make sure the heads are clean.
If sound cables run alongside power cables, the electromagnetic field emitted from the power cable can cause a hum on the recording. Some non-linear editing systems, and certainly professional sound editing systems, include a hum-remover to solve the problem. Of course, prevention is the best cure and if you must cross power cables with sound cables, make them cross at 90 degrees. This should prevent any hum in the first place.
Celluloid film is a non-linear editing system but editing video tape directly is not. The difference is that film will allow you to cut into a shot, add another shot and just carry on, thereby extending the total duration of your programme. Video does not allow you to do this. If you cut a shot into the middle of another shot, you inevitably have to record over the rest of the first shot. This means going back to that point in your programme and re-editing from then on. The duration is fixed because you can't cut the tape (well you can but it's messy) and join it together in a different way - something you could do ever so easily with film.
Word processing is non-linear typing for the same reasons. Compare word processing with typing a letter on an old-style mechanical typewriter. Now, you can move paragraphs of text around, make the text automatically flow around images and so on.
Non-linear video editing systems are much like word processing. They offer more flexibility than film editing and a lot less grief. They are also non-destructive, unlike film. You can chop shots up as much as you like and then change your mind and go back to an original un-modified version. This just isn't possible when editing film, unless you went through the very costly process of having duplicate prints produced. Consider the additional complications in managing your source material if you did that.
Nowadays, most film projects are edited using non-linear editing systems like Avid Film Composer or Lightworks. The information about the edit produced by the computer is used by the negative cutter in the laboratory to cut the original film negative.
National Television Standards Committee. This is the video format used primarily in America. It is often comically referred to as Never The Same Colour because it tends towards brown. NTSC has a different shape, a different frame rate and a different number of lines to PAL video. It even has two kinds of timecode you can use - drop-frame and non-drop frame. The difference between these two is probably far too boring to explain here. It is important to note, though, that PAL and NTSC are so different that you can't simply play one and record on the other. You have to go through some kind of conversion process to copy from one to the other. You are better off picking a format in the first place and sticking to it if you possibly can.
NTSC DVDs, by the by, only support AC3 audio, which is a Dolby system of compressed sound. It is very nice quality sound but tends to be costly compared with MPEG audio.
This just means anything other than your finished master tape. Any editing decisions, and any video material that is not ultimately going into your finished On-line, are counted as off-line. You will often hear references to Off-Line resolutions. That simply means the picture quality is not good enough to broadcast. Often, to save space on hard drives, video is recorded at a lower quality (consequently producing smaller files) until you are ready for the full quality on-line. At that point, you can dispose of all the material you don't need and just re-record the video that is used on the on-line.
Your finished quality video or sound. An on-line edit is one where you apply all the finishing touches to your programme ready for the Master Tape. On-line editing systems usually have support for fully uncompressed video, external monitoring equipment to check colour levels and sound, and a handful of very nice special effects. As off-line quality editing systems become more and more sophisticated, the amount of time spent in the on-line edit is getting shorter and shorter. There are off-line quality systems available now with perfectly good colour correction tools (one of the prime functions of the on-line editing system), for example.
On-line quality is whatever quality you are working to. If your master tape is DV, then your on-line is DV quality. If your master tape is Digital BetaCam, then that is your on-line quality.
Phase Alternating Line. This is the video system used in the UK. Totally different to NTSC in that it actually looks quite good.
Windows that float around inside software such as Photoshop, Avid, or Adobe Premiere, are referred to as palettes. Palettes can often be 'docked' together into a layered palette. Each palette will tend to have a particular type of tool or option in it. For example, Avid has an effects window with the various effects in it and an effects editor palette to change the settings of your effects.
1. The measure of how much an audio signal is routed to the Right channel or the Left channel. A single audio signal panned all the way to the left, will only output to the first, left channel. A mono signal panned in the centre will play equally in the first and second channels.
Stereo audio is really just two mono signals. The first is routed to a speaker or headphone on the left and the second to a speaker or headphone on the right. I have yet to see a system where the numbering and positioning of speakers is reversed. A stereo microphone is simply two microphones stuck together. The microphones act like your ears, being a small distance apart, so that a particular sound source will arrive at one slightly before the other. We detect stereo positioning based on a combination of volume and timing. Both are easily emulated by controlling the way sounds arrive at a playback speaker. Controlling the volume of a sound is very easy but controlling the timing takes special hardware or software. Adobe Audition includes filters to control the timing as well as the volume when positioning a sound in 3D space.
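Volume-based panning can be sketched with a constant-power pan law, which keeps the overall loudness roughly even as a mono signal moves between the two channels. This is an illustrative sketch, not any particular product's implementation.

```python
import math

def pan_gains(pan):
    """Constant-power gains for a mono signal.

    pan: -1.0 = fully left, 0.0 = centre, +1.0 = fully right.
    Returns (left_gain, right_gain).
    """
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# Panned centrally, each channel gets about 0.707 of the signal,
# so the combined power (left^2 + right^2) stays at 1.0.
left, right = pan_gains(0.0)
```

A simple linear fade (left = 1 - pan) would dip in loudness at the centre, which is why the cosine/sine law is the usual choice.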
A sound signal also has a phase - the result of the positive and negative charge on the microphone alternating to produce a current. Both speakers should be 'in phase' meaning they beat the air at the same moment. If they are out of phase, they can negate each other. You can produce some interesting effects by playing with audio phase, cycling through cancelling out parts of the sound.
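That cancellation is easy to demonstrate numerically: two identical sine waves, one shifted by half a cycle (180 degrees), sum to silence. A minimal sketch:

```python
import math

SAMPLE_RATE = 48000

def sine(freq_hz, sample_index, phase=0.0):
    """One sample of a sine wave at the given frequency and phase offset."""
    t = sample_index / SAMPLE_RATE
    return math.sin(2.0 * math.pi * freq_hz * t + phase)

# A 440Hz tone mixed with a copy that is 180 degrees out of phase.
mixed = [sine(440, n) + sine(440, n, math.pi) for n in range(1000)]
peak = max(abs(s) for s in mixed)   # effectively zero: the waves cancel
```

With any phase offset other than exactly half a cycle, the cancellation is partial, which is where the "interesting effects" mentioned above come from.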
2. A camera movement where you move the angle of the camera along a horizontal plane. A classic example would be filming cars driving past. You put the camera on a tripod and follow the cars one after another, turning the camera to follow the action. It is generally a bad idea to make a video edit during a pan. For this reason, always plan to start and stop a pan on a worthwhile shot when filming. If you don't, you may never understand why your excellent panning shots are never used. The same goes for tilting and zooming too.
A type of plug and socket that has a central pin and an outer ring. They are extremely widely used, particularly for audio signals but also for composite video. Unlike XLR audio, Phono cables are unbalanced - meaning the signal travels on a single conductor, with no inverted copy to cancel out interference picked up along the way. This makes the signal very much the same as a Jack plug or cable. The difference is that Phono plugs (sometimes also called RCA plugs) are better designed and less prone to damage through wear and tear.
Adobe's very popular image manipulation software. Imagine anything you could do with a still image in the digital domain. It's already been done, it's a feature inside Photoshop. Fantastically flexible and rather fun. Used by graphic designers the world over and supports a range of kooky plug-ins.
Photoshop integrates well with a wide range of other applications. It has become such a standard in the media industry that other manufacturers strive to make things compatible with it. This is similar to the way Creative Labs managed to become the standard by which other domestic sound cards are measured.
The individual building block of digital images. In the world of images, pixels are the letters with which poems are written. While we tend to think of video or graphics in terms of whole images, computers largely organise them in terms of individual pixels that happen to have a relative position.
Understanding this difference can help to explain an awful lot about the way design works in the digital world. Graphics are often described in terms of their bit depth or number of supported channels.
This means, roughly, the subtlety of the gradation the individual pixels have - how many steps between completely dark and completely light - and whether the format supports only Red, Green and Blue channels or also an Alpha channel - see-through-ness.
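To make that concrete: the number of steps a channel can hold is 2 raised to the bit depth, and a common arrangement packs four 8-bit channels (Red, Green, Blue and Alpha) into a single 32-bit value. The function names here are invented for illustration.

```python
def channel_levels(bit_depth):
    """Number of distinct steps from completely dark to completely light."""
    return 2 ** bit_depth

def pack_rgba(r, g, b, a):
    """Pack four 8-bit channel values into one 32-bit pixel."""
    for value in (r, g, b, a):
        assert 0 <= value <= 255, "8-bit channels run from 0 to 255"
    return (r << 24) | (g << 16) | (b << 8) | a

# 8 bits per channel gives 256 steps; a fully opaque, pure red pixel:
red = pack_rgba(255, 0, 0, 255)
```

Higher bit depths (10-bit video, 16-bit stills) simply give more steps per channel, which is why subtle gradients band less at higher depths.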
Most software applications work on their own. They don't depend on anything other than appropriate hardware (i.e. you need a sound card to work with sound software) and an operating system, i.e. Microsoft Windows or the Mac OS. However, many of the more advanced software applications such as Avid, Premiere and Photoshop, support Plug-in software, to be used inside of them.
Plug-ins can't work independently, they only work inside other software. Most commonly, plug-ins are effects you can use in addition to the ones already available inside your software. Plug-ins can be expensive but they usually give you the more profoundly wonderful tools to work with. There are many manufacturers working on plug-ins and some even come as standard. For example, Premiere comes with some additional video filters which were originally part of After Effects and Avid Xpress Pro now includes the Illusion FX plug-ins as standard.
There is a new breed of plug-in coming, which might be referred to as 'universal'. These plug-ins are not made for a particular application but can be used by any compatible software available. The first I have seen are DirectX Audio filters, which are supported by applications like Adobe Premiere or Audition. However, some filters are being developed for video applications as well. This is truly a good thing because it allows the plug-in manufacturers to spend all their time developing cool filter effects rather than worrying about compatibility issues.
Standardising the way computer software interacts is a fairly recent development and it can only be a good thing for users. Think what the world would be like if every single electrical device we used had a different shaped plug on it… I'd like to see all computer applications sharing a universal language to allow much more choice for users.
When video editing software renders an effect, it produces an alternative video clip to play back that looks like your original source video combined with any effects you have applied. The result is less work for the computer when playing back. 'Precompute' is the name Avid uses to describe that altered video clip. This process is non-destructive and the original file remains available if you decide to undo your changes.
Avid does not automatically delete Precompute files, as it assumes you may want to go back to an earlier version of your edit at any time. Therefore, you need to habitually delete unnecessary Precomputes from time to time to free up space on your hard drives. Assuming each gigabyte of storage space only gives you about four and a half minutes of video, it is surprising how much space is saved by regularly getting rid of these files.
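The storage figure is easy to sanity-check. DV runs at roughly 25 Mbit/s for video, which with audio and overhead comes to about 3.6 million bytes per second; this rough sketch shows where "four and a half minutes per gigabyte" comes from. The 3.6 MB/s figure is an approximation, not an exact specification.

```python
def minutes_per_gigabyte(bytes_per_second=3_600_000):
    """Minutes of video stored in one gigabyte at a given data rate.

    3.6 MB/s is an approximate figure for DV including audio and overhead.
    """
    seconds = 1_000_000_000 / bytes_per_second
    return seconds / 60

# About 4.6 minutes of DV fits in a gigabyte.
print(round(minutes_per_gigabyte(), 1))
```

Run the same sum with an uncompressed Standard Definition rate (around 20 MB/s) and you can see why on-line storage fills so much faster.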
Adobe Premiere, like most software applications, just calls these Temporary Files. Many applications use these temporary files, including Photoshop, After Effects and Audition.
Adobe's popular and flexible editing system which came to power when hardware manufacturers like Canopus and Matrox started producing specialist video capture cards to work with it. The result was powerful real-time effects in a user-friendly interface.
When you have configured a group of settings, such as a video effect or layout for your various windows, you may want to store these settings so you can re-use them automatically. This is a Preset. A good example of a preset would be setting up a standard colour correction to correct an orange colour cast on a second camera. Then, every time you use a shot from that camera, you can apply the same colour correction to balance it out, without having to manually go through all the settings again.
Unlike Interlaced video, progressive video is made of complete frames, rather than individual fields. Film, as a medium, always produces full frame images and the newer video formats are finally able to copy it in this way. Progressive video is generally lovely and should be approved of by all and sundry.
The work you produce inside your non-linear editing software. There is a distinction between the project file and the sequence you are producing. The Sequence is the theoretical end product of your endeavour while the Project is the environment in which you work. In Avid, the Project is as much a folder or container as it is a file and in a very real sense, it contains the references to all of the media you use in your final product.
The Project contains the important references to your media and it is what truly contains the fruits of your efforts - the creative choices you have made. Because most non-linear editing systems are non-destructive, what you are working on is effectively a list of creative choices which are combined to produce your work. This is in contrast to linear editing systems where you are working directly on the finished master tape. Think of the distinction between what you see on-screen in word processing software, and the finished piece of paper you produce from it. That is the distinction between a sequence inside your project and your finished Master Tape.
This is an alternative name for Phono leads / plugs. Very widely used and described further under 'Phono' in this glossary.
Real Networks were probably the first to come up with a really usable audio codec for streaming from the Internet. Since then, they have developed one of the major systems for streaming video and sound. Like all streaming and heavily compressed video and sound formats, Real Media allows for very low bit-rates while maintaining a perceptually acceptable picture and sound quality. There is a good deal of debate about which is the best format but in general, you are safer producing versions of your films in at least Real, Windows Media Video and QuickTime for distribution on a website. This will ensure that pretty much everyone will be able to access your media, regardless of their preference for player.
Until Realtime systems came along, any effects you applied to your video or sound on a non-linear editing system had to be rendered - effectively making a new piece of video or sound that included the effect, as if it had been originally recorded that way. Realtime systems are able to apply effects live, without rendering, playing back the original source video and adding an effect to it afterwards. This is supremely flexible as an editing solution because it allows you to experiment without suffering a delay while the computer renders before you can look at the results.
Some systems allow realtime effects only on the computer screen, others allow full quality output so you can view your work on a proper client monitor (television). This is very important because you can't really assess the picture content or colour rendering on a computer monitor.
Some systems include an option to use the RAM on your computer as a buffer for realtime effects, which it fills when playing a very simple section of video and eats into when dealing with something more complex. The net effect is that you can often achieve more in realtime with a computer of the same power.
When a computer renders an effect, it is creating a new piece of media that looks exactly like the old piece of media but with the effect applied to it. The result is that the computer no-longer needs to worry about creating the effect when playing back that part of your movie. Instead, it just plays back the newly created media that looks affected. Because this system leaves the original media untouched, you can remove the effect and go back to the clean original without fear. Nothing is lost, except your time. Rendering is fine but Realtime is the way forward, even if you can only get realtime previews on the computer screen - this is a significant advantage.
Rendered media may or may not be removed automatically from your hard drive when it stops being required. For this reason, it is worth keeping an eye on rendered files because they can quickly take up enormous amounts of space. Five minutes' worth of rendered video is over a gigabyte of storage space.
This is clicking once with the right-hand mouse button. Mac users have to hold down the Control key to get the same effect, which is to bring up a contextual menu - that is, a menu of options which are relevant to the thing you clicked on. For example, in Avid, you are given the option to add video or audio tracks if you right-click on the timeline. Right-Clicking tends to bring up options you are likely to want to access often and if you want to achieve something in any software, it's a good idea to try right-clicking before you look for the relevant option in a menu.
Right clicking often allows you to access the properties of a particular item, which is very useful.
Actually, this is not as easy to define as you might imagine. A scene in a film is a passage of time that seems connected by continuity. So a good example would be the duration of a conversation in one room - this is a clearly identifiable scene - but equally, you could have a montage sequence where a character travels over a significant distance and interacts with a variety of strange people on the way. Depending on how it was shot, this could also be one scene.
'Scene' is often used to refer to a particular location.
Any series of video clips. Some video editing applications allow you to create multiple sequences inside the same project and to combine them in various ways. You can copy and paste from one into another, for example. The sequence you produce in non-linear editing software is like the screen you look at when working in word processing software. The finished master tape or compressed media (such as a QuickTime movie for the Web) is the equivalent of the printed paper after the word processing has finished.
Sequences don't actually contain any media - just as the project file contains no media. Instead, a sequence contains copies of your master clips or media shortcuts, which tell the computer which bit of original media to play next.
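The idea of a sequence as a list of pointers into the media can be sketched as a simple data structure. The names here are invented for illustration; real editing systems store far more per edit (effects, audio levels, track layout and so on).

```python
class ClipRef:
    """A reference into a media file: play from in_frame up to out_frame."""
    def __init__(self, media_file, in_frame, out_frame):
        self.media_file = media_file
        self.in_frame = in_frame
        self.out_frame = out_frame

    def duration(self):
        return self.out_frame - self.in_frame

def sequence_duration(refs):
    """Total length of the sequence. The media itself is never copied:
    the sequence just says which bit of which file to play next."""
    return sum(ref.duration() for ref in refs)

# Two shots cut together from different source tapes' media files.
edit = [ClipRef("tape01.dv", 100, 350), ClipRef("tape02.dv", 40, 140)]
```

Deleting a ClipRef from the list removes nothing from the hard drive, which is exactly why non-linear editing is non-destructive.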
A shortcut is usually an icon that, when you double-click on it, activates some other item. Normally icons represent the original item. For example, if you produce a Word document, that document will have its own icon and if you double-click on it, the document will open. However, there is nothing to stop you creating a shortcut to the original document item instead. That way, when you double-click on the shortcut, it will be like double-clicking on the original document and it will open.
You can create shortcuts to most things and in a sense, the FX templates in Avid are like shortcuts to a group of settings.
The value of shortcuts is that they are tiny compared with the original file, so it's OK to have them on your computer desktop (too much information stored on your desktop can slow the computer down) and they make organisation easier. You can store all of your documents or pictures in folders on your storage hard drive and just have shortcuts to those folders on your desktop, for example.
The Start Menu on a Windows-based computer is effectively a series of shortcuts in folders that you access via a menu. You can add and remove shortcuts to the Start menu, allowing you even more control over the way the computer works.
Any particular camera angle on a particular section of action. For example, if I film an actor in a head and shoulder shot saying a particular line, that is a shot. If I record the same shot and the same line 20 times (because I've somehow forgotten how to record video properly) each successive recording is called a separate take. So a shot is the angle and the action, a take is a particular go at that angle and action.
Sometimes, people refer to a whole camera setup as a shot, where no matter how many different parts of the action are recorded, it is still just one shot. These are silly people though and shouldn't be listened to. Because when you get to the edit suite and need to know how many takes you have covering a particular piece of action, you need to know which shots cover which lines. Otherwise you'll spend all sorts of amounts of time just finding out what media you have. Either that or your assistant editor will… A thankless job.
The information you want when you are transmitting information from one place to another or recording information, as distinct from 'noise' which is the background hiss, or electromagnetic interference, or picture interference of one kind or another. Good transmission is a constant battle to achieve the greatest possible signal to noise ratio. That is, the difference between the intensity of the signal and the intensity of the noise. If we take a sound signal as an example, this would be the difference between the background hiss on your tape and the volume of the recorded voices of your actors. Background hiss can often be reduced with filtering but sounds like an echo or certain background source noises cannot be removed if they are the same frequency and volume as the signal.
Filtering systems can filter out sounds based on volume (so anything below a certain volume is just cut out - often this is safe to do because background hiss is very quiet - using a noise gate) or based on frequency (so anything higher or lower in pitch, or anything with a particular pitch, is removed or reduced - again this often works with background hiss because it tends to be much higher pitched than speech). However, echoes can be the same approximate volume and the same pitch, making them nearly impossible to remove. With a significant amount of effort (usually only out of desperation and more money than sense) you can attempt to edit the waveform manually at the sample level to remove the echo, but it is worth remembering that echo is probably the worst enemy of your sound recordist.
If necessary, it is worth pinning cushions and foam to the walls, floor and ceiling to avoid echo.
Equally, if a car engine is running nearby, depending on the size of the engine etc, this can often be the same approximate volume and pitch as the speech coming from your actors. Highly directional microphones will often help with this.
There are some pretty powerful systems available now to allow you to subtract the background noise from your audio but it is harder with video. Adobe Audition has a lovely noise reduction plug-in.
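The volume-based filtering described above is a noise gate in its simplest form: anything quieter than a threshold is silenced. This sketch ignores the attack and release envelopes a real gate uses to avoid audible clicks.

```python
def noise_gate(samples, threshold):
    """Silence any sample whose level is below the threshold.

    samples: audio sample values in the range -1.0 to 1.0.
    threshold: levels below this are assumed to be background hiss.
    """
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Loud speech survives; the quiet hiss between words is removed.
cleaned = noise_gate([0.5, 0.01, -0.3, 0.004], threshold=0.05)
```

Set the threshold too high and you start chopping the quiet ends off words, which is why gating is safest when the hiss really is much quieter than the signal.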
Making your source video play back slowly can be difficult. The technology is improving, allowing systems to apply very clever field interpolation to work out what would be in the extra frames if they had been filmed.
Here's the basic issue with producing slow motion with video: When you want to produce slow motion with celluloid film, you simply (well not that simply but the idea is simple) speed up the recording. If you record at 50 frames a second rather than 25, everything will appear to go at half speed when you play back at 25 frames a second. Because the film camera is recording 50 whole frames of information per second and then simply playing those frames back at the normal speed, you get all the beautiful detail you would expect if it was recording at a normal speed originally. You can go faster and faster with film, up to thousands of frames a second if you want to record bullets flying through the air (although these cameras are extremely expensive - they have no shutter, just a prism that rotates at the right speed to allow the light to hit the film surface just at the right moment). All you need to do is play back at the normal speed and bingo, you have slow motion.
There are very few video cameras that can record fast though. No-one has been able to explain to me why there is a problem with recording at higher frame rates so we'll just have to accept it - there's a problem. This means the only way you can get reasonable slow motion from a video camera is by asking a computer to work out what it would look like if it was actually moving at half the speed. The computer just makes up the in-between frames based on a comparison of the before and after frames. Sometimes this looks just beautiful, especially if your original material did not have much motion in it - making it easier to work out which pixels are connected together. However, if your original material involves a lot of fast motion, particularly motion that is picked up by only one field and not the other, you can get some pretty dodgy looking slo-mo.
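The crudest way of making up an in-between frame is frame blending: average the pixel values of the frames either side, weighted by where the new frame falls in time. Real systems use motion-compensated field interpolation, which tracks which pixels belong together; this sketch only shows the blending idea, with frames simplified to flat lists of pixel values.

```python
def blend_frames(frame_a, frame_b, t=0.5):
    """Make up an in-between frame by weighted averaging.

    frame_a, frame_b: flat lists of pixel values from consecutive frames.
    t: position of the new frame between them (0.0 = frame_a, 1.0 = frame_b).
    """
    return [a * (1.0 - t) + b * t for a, b in zip(frame_a, frame_b)]

# The made-up frame halfway between a darker frame and a brighter one.
middle = blend_frames([0, 100, 200], [100, 200, 250], t=0.5)
```

On fast motion, blending produces a ghostly double image rather than a new position, which is exactly the "dodgy looking slo-mo" described above.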
There is no way around it, video slow motion can look awful, but well planned and with the right source, it can be a life saver.
One technique I am particularly fond of, both with Avid and Adobe Premiere, is using timeline-based stretch options to extend or contract action by fairly small amounts. Because the change occurs on the timeline, you can make the duration of a shot stretch to match other shots, or fill a gap, and if the change is only up to about 10%, you'll often find it undetectable. This can be a real life-saver if your source material is just a touch too short to hit the beat in your music or allow for natural continuity of your action.
Steadycam is a system for carrying a camera which is much more stable than just putting it on your shoulder. It involves a kind of body harness with an articulated arm attached, at the end of which sits the camera. Steadycam is truly the most lovely thing and allows for much greater flexibility when positioning and moving the camera.
There are different sizes (and prices) for Steadycams, ranging from very small ones for lightweight DV cameras right up to full-blown 35mm items. Incidentally, for the film 'Aliens', every gun carried by the soldiers was a specially modified Steadycam. Each one was specially 'broken' to attach the gun to it.
Operating a Steadycam can be very dangerous and can result in spinal injuries, as the wrong posture can put an enormous amount of pressure on just the wrong spot of your back. If you don't know how to use one, don't bother hiring one. Hire an operator with their own Steadycam. They will know their tool inside and out (hopefully) and you can concentrate on the shot, not the hospital.
Steadycam is so stable it can replace a tripod without distracting the audience with wobbly movement. The armature dampens movement (a little like a fluid head on a tripod) and ensures the camera maintains its upright position.
Steadycam is lovely, mmmm, Steadycam, I want you…
A series of pictures meant to represent each camera angle in a scene, much like a comic strip. Storyboards are often the most efficient way to communicate the contents of a scene because simply identifying the camera position for each section of the script tells the various departments of the crew what they need to contend with when doing their job. For example, the Sound Crew need to know how close they will be able to get their mics.
Another useful tool is blocking diagrams made with simple stick figures, or even 'X marks the spot' for the various components of a scene, where the camera is going to sit, where the lights can go and so on. Storyboards are often forgotten on low budget films, which is a pity because the one thing you can't afford on a low budget is poor communication. It's one of the few things that comes free but costs a lot if you leave it behind.
Each individual length of video recorded onto a non-linear editing system is a clip. A clip consists of a big media file sitting on one of your video hard drives and a Master Clip or Shortcut to it inside a bin in your non-linear editing software. A Subclip is a shorter section of the master clip which still points to the same media file. Subclips are immensely useful because they allow you, the human being, to get a better grip on the source media available to you. The extra feature of Subclips is that because they point to the whole media file, rather than just a part of it, you can Trim out beyond the ends of the Subclip, right to the ends of the original file. For example, if your Subclip just represents the middle ten seconds of a one-minute clip, you can trim right out to the ends of the one-minute clip on your timeline, without having to root out the full master clip. The net effect is that it is perfectly safe to edit your sequence entirely with Subclips, if that helps you to organise yourself, because you lose none of the flexibility and choice you had in the first place.
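That trimming behaviour can be sketched as a subclip which stores its own in and out points but clamps any trim to the extents of the master media it points at. The class and field names here are invented for illustration.

```python
class Subclip:
    """A window onto a master media file, measured in frames."""
    def __init__(self, master_frames, in_frame, out_frame):
        self.master_frames = master_frames   # length of the whole media file
        self.in_frame = in_frame
        self.out_frame = out_frame

    def trim(self, new_in, new_out):
        """Move the in/out points, clamped to the master's extents.

        Because the subclip points at the whole file, trimming can reach
        right out to the ends of the original recording - but no further.
        """
        self.in_frame = max(0, new_in)
        self.out_frame = min(self.master_frames, new_out)

# Ten seconds out of a one-minute master (25 fps), trimmed out to the ends.
clip = Subclip(master_frames=1500, in_frame=625, out_frame=875)
clip.trim(-50, 9999)
```

However hard you trim, you can never reach beyond the frames that were actually captured, which is why a subclip loses you nothing.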
Subclips remain connected with the Master clip inside Avid and can be located based on the Master Clips they are associated with. This can be handy for organising yourself.
As you may notice, a great many of the features non-linear editing tools have to offer don't appear on the surface to be especially important but when it's two in the morning and you just need to find that one clip you were using earlier and you can't remember where it was…. Believe me, they are VALUABLE.
S-Video is the next standard up from Composite Video for transmitting a video signal. Composite video combines all the parts of the video signal into one for transmission, resulting in the colour information interfering with the picture information, which should be illegal. S-Video, also referred to as Y/C video, splits the picture information (the luminance) from the colour information and transmits them separately. The result is a much cleaner signal at the other end. The S-VHS and Hi-8 video formats support S-Video natively. You will find that the higher you go in terms of recording medium, the more of these connections become available, and it is worth knowing the difference when choosing a cable to plug into the back of your recording deck. The difference between the signal coming through a Composite cable and an S-Video cable is significant. The next analogue video standard up is RGB or YUV component video, which splits the video into individual colours, or luminance and individual colours.
These transmission systems (for sending video down a cable) are distinct from digital transmission systems which deal primarily with bandwidth - how much information is used to record the picture and sound information. Since Digital Systems deal with binary (ones and zeros), they don't generally suffer from the type of signal degradation that analogue systems have to contend with. Without going into too much detail, this is because a weak '1' is still a '1' and a weak '0' is still a '0'. So the computer receiving the signal has little doubt about the signal it is receiving. However, analogue signals are continuous by design, so the receiver has to assume that a weaker signal is the whole signal and can't rebuild anything from it. All it can do is boost what it receives and attempt to filter things out.
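The 'weak 1 is still a 1' idea can be shown with a tiny sketch (not from the glossary - just a toy illustration): a digital receiver only has to decide which side of a threshold each incoming level falls on, so moderate attenuation and noise do no harm. The `receive_bit` function and the 0.5 threshold here are hypothetical.

```python
# Toy illustration: a digital receiver decides which side of a threshold
# each received level falls on, so a weakened signal still decodes cleanly.

def receive_bit(level, threshold=0.5):
    """Decide whether a received voltage level represents a 1 or a 0."""
    return 1 if level >= threshold else 0

print(receive_bit(1.0))   # 1 - a clean, full-strength '1'
print(receive_bit(0.8))   # 1 - a badly weakened '1' still decodes as '1'
print(receive_bit(0.1))   # 0 - a slightly noisy '0' still decodes as '0'
```

An analogue receiver, by contrast, has no threshold to fall back on: it must treat the weakened 0.8 as if it were the whole signal.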
If you record the same shot with the same action three times, each new recording is one take. For this reason, shot logs should refer to shot numbers and take numbers. A shot number is an individual camera angle with particular action in it. So 11/3 would be shot 11, take 3.
Many would argue that lighting is an art in its own right, but there are some very simple rules to follow when setting up a scene and a little research can make a big difference to the effects you can achieve. A standard lighting set-up is called Three Point Lighting. According to this system, you have one 'Key Light' which is the primary source of light in the scene. It is usually behind and to one side of the camera, and above the subject to try to look as natural as possible. The second light is the 'Fill Light', which is usually from a lower angle than the 'Key Light' and from another direction (usually the other side of the camera), filling in and softening some of the shadows produced by the 'Key-Light'. On set, this may not look entirely natural but on camera it helps to produce a look that is more like our own eyesight. The third light is the 'Back Light', which is usually from high-up over the head of the subject and aimed from behind them. This adds a gentle glow or outline around the subject, enhancing the illusion of depth on the screen.
Obviously there is enormous variation even within this simple set-up, let alone introducing other lights such as 'Practicals', 'Reflectors' or 'Pin-Lights'.
Tagged Image File Format.
Tiff is a graphics format which is used a lot in printing. However, it is also handy for Video because it can support an Alpha Channel (an extra layer of information which defines which pixels should be transparent, or partially transparent). As with all image formats, this one has strengths and weaknesses but it is very widely used and is compatible with both Mac and PC systems.
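To make the Alpha Channel idea concrete, here is a hypothetical sketch (not part of the original glossary) of how an alpha value is used when compositing: each alpha value, from 0.0 (transparent) to 1.0 (opaque), controls how much of the foreground pixel is blended over the background pixel.

```python
# Hypothetical sketch of alpha compositing for one 8-bit channel value:
# out = foreground * alpha + background * (1 - alpha)

def composite(fg, bg, alpha):
    """Blend a foreground channel value over a background one using alpha."""
    return round(fg * alpha + bg * (1 - alpha))

print(composite(200, 50, 1.0))   # 200 - fully opaque: foreground wins
print(composite(200, 50, 0.0))   # 50  - fully transparent: background shows
print(composite(200, 50, 0.5))   # 125 - half transparent: an even mix
```

In a real TIFF with an alpha channel, this blend happens per pixel, per colour channel.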
If the camera moves so that it looks up or down, as if you turned your head up to look at the sky or down to look at the floor, that is a tilt. Nothing to do with Pinball in this Glossary.
Film has sprocket holes which the camera uses to keep hold of it and ensure it moves smoothly through the mechanism. The sprocket holes are very carefully positioned and allow the claws in the camera to hold it very steadily.
Video has a similar feature but rather than having holes in the tape, it has magnetic 'blips' called control track. Control track serves the same purpose as sprocket holes in film. However, it is produced when you record video. The distinction between an assembly edit and an insert edit is that assembly edits re-write the control track while insert edits only re-write the picture and sound information (and sometimes the Timecode information).
Computers are not able to identify the contents of a picture and so they need some other system for identifying the different parts of your film. What computers can work with very well is any form of number system. Timecode is a numbering system for video, allowing computers to control the movement of a video tape and manage time effectively for you, rather than just counting the number of individual frames.
Incidentally, many film labs now work purely by frame counts rather than using a timecode system and edge numbers… if you know what they are.
Timecode measures the frames of video in terms of hours, minutes, seconds and frames. This eight digit number is pretty much universally standard. The number of frames per second can vary from system to system. For example, regular PAL Standard Definition video is 25 frames per second and NTSC is either 30 frames per second or 29.97 frames per second. The reason for these different NTSC frame rates is so silly, I don't think I'll go into it. Timecode that compensates for the 29.97 frames per second rate is called Drop Frame timecode though.
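Since timecode is just a numbering system, converting a raw frame count into the eight-digit hours:minutes:seconds:frames form is simple arithmetic. This is a rough sketch of my own (not from any particular editing system), using plain non-drop-frame counting at PAL's 25 frames per second:

```python
# Rough sketch: convert a raw frame count into hh:mm:ss:ff timecode,
# counting plainly (non-drop-frame) at PAL's 25 frames per second.

def frames_to_timecode(total_frames, fps=25):
    frames = total_frames % fps
    seconds = (total_frames // fps) % 60
    minutes = (total_frames // (fps * 60)) % 60
    hours = total_frames // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(frames_to_timecode(0))      # 00:00:00:00
print(frames_to_timecode(26))     # 00:00:01:01 - one second and one frame in
print(frames_to_timecode(90000))  # 01:00:00:00 - 25 * 3600 frames = one hour
```

Drop Frame timecode is fiddlier: it periodically skips timecode numbers (not actual frames) so the displayed time keeps pace with the 29.97 fps clock.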
In order for computers to find their way around your source tape properly, you MUST have continual, unbroken timecode on it. You create timecode by simply pressing record on most professional and DV cameras. However, if you turn the camera off and on, it will often start recording at zero again, and this is bad because the computer will get lost.
If you ever want a computer to be able to automatically locate anything on a source tape, it must have solid timecode to work with. There are ways around timecode problems but all of them are best avoided if possible.
This is the window in which you produce your sequence in non-linear editing software such as Adobe Premiere or Avid or your audio session in multi-track sound editing software such as Adobe Audition. You build your project with sections of video and sound looking like building blocks.
Time is indicated passing from left to right, so the longer your project is horizontally, the longer it will take to play. Non-linear editing systems use different systems to indicate the way that multiple layers of video will interact - which ones will be in front and how they will be blended together. Most systems are designed so that the higher the video track on the timeline, the more in front it is. So Titles would normally go on the highest tracks you are using. Most systems have more video tracks available than you would be likely to ever need. Compositing software often includes options for applying 'Blend Modes' to your video layers, much like Photoshop. This can produce some truly fantastic results that could never be achieved with simple transparency.
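As a small illustration of what a Blend Mode actually does, here is a sketch (my own, not tied to any particular package) of two common Photoshop-style modes applied per channel to 8-bit values, where 'top' is the upper layer and 'bottom' the layer beneath it:

```python
# Illustrative sketch of two common blend modes on 8-bit channel values.

def multiply(top, bottom):
    """Multiply mode: always darkens (pure white is neutral)."""
    return top * bottom // 255

def screen(top, bottom):
    """Screen mode: always lightens (pure black is neutral)."""
    return 255 - (255 - top) * (255 - bottom) // 255

print(multiply(128, 128))  # 64  - two mid-greys darken each other
print(screen(128, 128))    # 192 - the same two greys lighten each other
```

Simple transparency can only mix the two layers; blend modes like these let the layers interact mathematically, which is where the fancier looks come from.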
Generally referred to as words coming up on screen to indicate what you are looking at but can also mean just about any graphic providing information.
Computer software is designed to work like real-world tools to make it more accessible for human beings. To this end, systems make reference to a number of Tools. Avid is designed in quite a modular fashion, allowing you to access a different tool for working with audio, or titles or even accessing rare controls. Avid even includes 'Toolsets' which is much like a Workspace, including a batch of tools for a purpose. There is an FX toolset that gives you access to the FX tools and automatically lays out your workspace in a convenient way. The idea, I suppose, is that these toolsets should be like a plumber's bag, or an electrician's bag with all the appropriate tools available and easily reconfigurable. Adobe Premiere also uses tools on the timeline in a similar way to Photoshop, changing the way your mouse pointer works depending on the tool you select.
A set of tools and a particular layout which you can select all at once inside Avid. A prime example would be the Correction Toolset. In fact, you can only access the full Colour Correction tool by using the appropriate toolset - unlike all the other tools which are accessible individually. This is probably a good thing, because colour correction is always going to require a whole group of tools at once. By choosing a particular Toolset, you are asking the system to close whichever tools and windows you have open (except for bins) and then open whichever tools are part of the Toolset, then lay them out in a particular way to make them more accessible.
You can connect Toolsets with other settings, so, using the Interface Settings, the interface colours could all become muted while you work on colour correction, to avoid distraction.
An individual layer of video or sound. Each video track interacts with the other video tracks by sitting in front of or behind them in the stack. Upper tracks are in front of lower tracks.
Sound tracks always play all together, although the power of your computer system may limit the number of tracks you can play simultaneously, and you can sometimes prioritise certain tracks.
In Avid, each individual sound track is mono and can be panned (directed) left or right. So odd numbers are panned left by default and even numbers are panned right. This can be changed very easily on a clip by clip or a track by track basis. In Adobe Premiere, up to version 6.5, individual audio tracks are actually stereo pairs, and you control the left and right pan for both channels (a channel is just one audio signal). In Premiere Pro, an audio track can be Mono, Stereo or 5.1 Surround. By setting the audio track type before you use it, Premiere can give you the right controls for making your adjustments.
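Panning a mono track is, under the hood, just a matter of deciding how much gain goes to each output channel. Here is a hypothetical sketch (the function and the simple linear pan law are my own, not Avid's or Premiere's) where a pan position runs from -1.0 (hard left) to +1.0 (hard right):

```python
# Hypothetical sketch: a simple linear pan law for a mono track.
# pan runs from -1.0 (hard left) through 0.0 (centre) to +1.0 (hard right).

def pan_gains(pan):
    """Return (left_gain, right_gain) for a pan position in -1.0..1.0."""
    left = (1.0 - pan) / 2.0
    right = (1.0 + pan) / 2.0
    return left, right

print(pan_gains(-1.0))  # (1.0, 0.0) - hard left, like Avid's odd tracks by default
print(pan_gains(1.0))   # (0.0, 1.0) - hard right, like the even tracks
print(pan_gains(0.0))   # (0.5, 0.5) - centred: an even split
```

Real mixers usually use a constant-power pan law rather than this linear one, so the level doesn't dip as a sound moves through the centre, but the principle is the same.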
Individual tracks can be switched on or off easily.
A piece of hardware, usually with its own electricity supply, which you use to convert video from one format to another. There are many transcoders available for converting analogue video signals (Composite, S-Video and Component Video) to and from standard DV via firewire or SDI (the much higher-end equivalent which Digital BetaCam uses to transmit information).
The quality of a transcoder can vary enormously. Avid have an item called a Mojo, which provides transcoder functions and, more excitingly, realtime output.
If you should be working on an editing system which only has a firewire connection, you are likely to want some kind of analogue converter to see your work on a regular television monitor before you produce your master tape. This is because, apart from the size of the picture making it easier to assess things, the colours are displayed differently on a television screen (it uses the YUV colour space and computers use RGB, for what it's worth) and the picture is cropped at the edges while the computer screen shows the whole signal.
You might discover you can get away with a boom microphone just edging slightly into the frame if you look on a television screen because of the masking effect at the edges. On a computer screen, you'd always see the microphone - unless you applied an effect to zoom in slightly, which will usually soften the image.
When your video or sound changes from one thing to another, it's a transition. The most common transition is a cut and the next most common is a dissolve. From there on, it gets sillier and sillier until you are making your video curl up into a butterfly, do a little dance and fly off into the background of the next clip.
Making effective use of transitions is important because it is one of the few ways the editor connects directly with the audience. You can opt for 'invisible' editing, where you try to avoid attracting the audience's attention to the construction of the programme, or you can be blatant and punch them on the nose with your cuts and spins. There is no right or wrong, of course, just creation creation creation.
Once you have put your rough cut together, placing your individual clips in the order they should appear on the timeline, you can begin to make fine adjustments to the beginnings and ends of your clips. Trimming effectively means changing your mind about the point at which you want your clip to begin or end, either in terms of its position on the timeline or in terms of its own start and end. It is so fabulously easy to trim once you have got the gist.
The easiest way to picture trimming is probably to imagine every clip you have on your timeline is a partially unrolled length of carpet. It is rolled up slightly at both ends so if you want to you can unroll a bit more or roll some back up. The way you roll the ends will change the amount you see of the flat part and also which bit of the pattern. If you want to keep it the same length but see a different part of the pattern, unroll one end and roll the same amount up at the other end. This is exactly what you do when trimming. In fact, Avid even calls the ends 'Rollers' while you are in the trimming mode.
Both Avid and Adobe Premiere allow you to simply click on the end of a clip and drag to adjust the trimming position, or if you prefer you can use keys or buttons to adjust the ends.
In film editing, a trim is a short end that has been cut off and hung up in case it is needed again. Film edit suites have a great many hooks on which to hang (from one of the sprocket holes) bits of trim, with numbers to identify them. If you decide you want to use a trim again in a film edit, you have to splice it back onto the original clip, which is a pain. With non-linear editing, you just have to drag a mouse - how cool is that?
Video Compact Disc. Video CDs are similar to Video DVDs except that they use a flavour of MPEG1 video as the source (roughly comparable to VHS video quality) and will only play in certain players. They also have a simpler specification than DVDs, so you can't include as much interactivity and funky menus. Video CDs are very popular in some countries and virtually unheard of in others. With the prevalence of DVDs now, they are probably starting to slide into oblivion.
Video CDs should not be confused with multi-media CDs. Multimedia CDs do not meet the White Book Standard to work in a regular Video CD Player. Instead, they will generally only work on a PC or Mac. Multimedia CDs are completely interactive (if you design them to be) and work very much like websites without the bandwidth problems that go with them. You can have VHS quality full screen video play from a CD-ROM with ease. Popular software for producing Multimedia CDs is Macromedia Director. This is the real master of the world of Multimedia but there are other, lower cost options which work pretty well. A good example is Matchware Mediator.
Your eye does a pretty good job of automatically adjusting the way it receives colour by using pigment to filter the colours it receives. So does your other eye. Cameras don't do such a good job.
The most common colour cast problems you are likely to encounter are connected with filming both indoors and outdoors. Because of the colour of the filaments in standard light fittings, indoor lighting gives off a slightly orange light. Daylight, by contrast, has much more blue in it.
To compensate for this, you can buy indoor or outdoor film stock, which is designed with the appropriate adjustment built in. Video cameras have a White Balance feature, to allow you to compensate in the camera instead (you don't buy different tapes for indoor and outdoor filming). Cameras usually have an automatic White Balance feature, which does an OK job of detecting the colour cast in the air and compensating for it. Much better though, is to use the manual White Balance option (which not all domestic cameras include). To use this, simply fill the view with something white (a piece of paper for example) and press the appropriate button. The camera can then make the adjustments necessary to ensure that white looks white in that environment. The reason white is used as the reference is that it contains all the other colours. So if it looks right, the other colours should also look right.
When you are filming on set, the environment has a sound all of its own. This is the atmospheric sound of a location which is a product of both the space you are in and the total effect of what's in it. That is, your bodies, your equipment, the furnishings and the direction of the wind all have an impact on the ambient sound of the location.
As an editor, you'll find there are often times where you need to remove a bit of sound - perhaps a noise off-camera - and to do it, you will need to fill the gap with background noise. You can do this by searching for a pause in the action and copying the sound from there but the best option of all is to have a minute of original clean atmos, or wild track.
Wild track can be just the atmosphere of a room or it could be a particular on-going sound. Not so often things like a gunshot or door slam - these would usually be referred to as Spot Effects - but perhaps the sound of rushing water in a stream, trees rustling and so on.
It is very important that wild track is recorded with everything in the same place as when the action is being recorded so it is perfectly right and proper for the Sound Engineer on set to request that everyone stands exactly where they are and waits for one minute to get a recording. Everyone must be SILENT when this happens. The Sound Engineer is likely to crank up the record level a little to give the editor a reasonable quality pick-up of the sound and anyone murmuring or sniffing will be much more audible than usual.
Windows Media is a compression system produced by Microsoft primarily for video and sound distributed on the Internet. It is a pretty efficient system. The latest version of Windows Media Video includes frame interpolation. The idea of this is that although you want the viewer to see 25 frames per second, say, you can only send 20 frames per second to the computer. The computer then uses redundant CPU cycles (the processing power of the computer) to work out what the missing 5 frames per second would look like. The result is much smoother playback than the frame rate you actually supplied. Clever.
The three main compression systems for distribution on the internet are Windows Media, QuickTime and Real Systems. Each has strengths and weaknesses.
A workspace is a particular layout of your various windows and tools inside software. Both Avid and Adobe Premiere support multiple workspaces, so you can set up a workspace for doing your effects, a different one for working on sound and so on. The benefit of this is that it allows you to get the perfect layout for the way you like to work, with exactly the tools you want. It's a little like a button for positioning your rear-view mirror exactly where you want it.
XLR cables, also called Cannons, are three-pin cables that carry the signal on two pins (positive and negative) with an earth connection on the third pin. The benefit of the earth connection is that electromagnetic interference will tend to be drawn to the earth wire rather than getting into the important signal-bearing wires. The result is that you can make XLR cables much longer than Phono or Jack cables (which lack the earth connection) with less fear of a degraded signal to noise ratio (more background hiss).
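The noise rejection of a balanced connection like XLR can be shown with a little arithmetic (a toy sketch of my own, with made-up numbers): the signal travels as a positive copy and an inverted copy, any interference hits both wires roughly equally, and subtracting one wire from the other cancels the noise while doubling the signal.

```python
# Toy sketch of balanced (differential) signalling with made-up levels.

signal = 0.3
noise = 0.1  # interference picked up along the cable run

hot = signal + noise      # the positive pin carries signal plus noise
cold = -signal + noise    # the negative pin carries inverted signal plus noise

received = hot - cold     # the differential input subtracts one from the other
print(round(received, 2)) # 0.6 - twice the signal, and the noise has cancelled
```

An unbalanced Phono or Jack cable has no inverted copy to subtract, so any noise it picks up stays in the signal.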
XLRs are also very robust plugs, which should not be underestimated, because film and video equipment gets treated pretty contemptuously sometimes.
All plugs are referred to as Male or Female. The male plugs have an outy bit and the female ones have an inny bit. If you don't know why, you need to speak to someone. I don't know who but you need to speak to them soon and find out a whole bunch of stuff. And while you're at it, slap yourself.