Some people still use old audio or video recorders, whether out of sentiment or because they have simply not gotten around to purchasing modern equipment. Nevertheless, before any analogue video or audio can be posted to the internet or sent by email, it must be transferred to a digital format. There are two ways to do this: professionally, or yourself. Either way you choose, you should first know some things about how the process works.
Analogue to Digital Audio
Analogue audio is sound that, when picked up by a microphone, is represented as a continuous electrical signal, a wave whose amplitude varies as a function of time. This analogue signal, such as one recorded onto a cassette tape, can be transferred to a digital format, like a CD or a PC, and it is easy enough to do. When you digitize audio, you should keep some things in mind. For example, the digital version will not sound exactly like the original, because of the way the transfer samples the signal.
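The sampling process described above can be sketched in a few lines of Python. This is a minimal illustration, not a real recording pipeline: the 440 Hz tone stands in for the analogue input, and each measurement is quantized to one of 256 levels (8 bits), which is the step that makes the digital copy differ slightly from the original.

```python
import math

# Sketch of digitizing: measure a hypothetical 440 Hz tone's amplitude
# at evenly spaced instants, then quantize each measurement to 8 bits.
sample_rate = 8000   # samples taken per second
freq = 440.0         # the analogue input tone (concert A)

samples = []
for n in range(sample_rate // 100):               # capture 10 ms of audio
    t = n / sample_rate                           # time of this sample
    amplitude = math.sin(2 * math.pi * freq * t)  # analogue value in [-1, 1]
    quantized = round((amplitude + 1) / 2 * 255)  # map to 0..255 (8 bits)
    samples.append(quantized)

print(len(samples))  # 80 samples represent 10 ms of sound
```

The rounding step is where information is lost: any amplitude between two of the 256 levels is snapped to the nearest one, which is why the reconstruction is close to, but not identical with, the original wave.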
According to the Nyquist theorem, a signal whose highest frequency is f must be sampled at a minimum of 2f times per second to be captured accurately. Someone's voice, applying the same theorem, would use a sampling rate of 8,000 samples per second at 8 bits per sample, which results in a digital stream of 64 Kb/s (kilobits per second). Music uses a sampling rate of 44,100 samples per second at up to 16 bits per sample, which means its digital counterpart will be 705.6 Kb/s in mono and about 1.4 Mb/s (megabits per second) in stereo.
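The arithmetic behind those figures is just sample rate times bit depth times channel count. A short sketch makes the three numbers above easy to verify:

```python
# Bit rate of uncompressed digital audio:
# sample rate x bits per sample x number of channels.
def bit_rate_kbps(sample_rate_hz, bits_per_sample, channels=1):
    """Return the data rate in kilobits per second."""
    return sample_rate_hz * bits_per_sample * channels / 1000

voice = bit_rate_kbps(8_000, 8)           # telephone-quality speech
cd_mono = bit_rate_kbps(44_100, 16)       # CD-quality, one channel
cd_stereo = bit_rate_kbps(44_100, 16, 2)  # CD-quality, two channels

print(voice)      # 64.0 kilobits per second
print(cd_mono)    # 705.6 kilobits per second
print(cd_stereo)  # 1411.2 kilobits per second, about 1.4 Mb/s
```

Doubling the channel count simply doubles the rate, which is why stereo CD audio lands at roughly 1.4 Mb/s.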
Analogue to Digital Video
While digitizing video uses the same principle, it is done differently because video is a different analogue format than audio. When people shoot analogue or digital video, they are taking many still shots per second, called "frames," much like the drawings in a flip-book animation. If the still pictures move fast enough, you see a motion picture. The more frames a video contains per second, the more information it carries and the better the quality you watch, and frame rates vary between systems. A typical rate is about 30 frames per second on consumer-quality recording devices.
Something to keep in mind is "flickering." This is what happens when a frame in a video needs refreshing, and is why television systems refresh the screen 50 or 60 times per second, so that the image can refresh without visible loss of quality. Each frame is divided into a grid of picture elements, typically called pixels. These pixels are equal in size, and the more pixels there are per square inch, the better the resolution.
On a black-and-white television, each pixel uses 8 bits, enough to represent 256 different shades of grey from black to white. Colour televisions use pixels that are 24 bits in size, 8 bits for each of the three primary colours. A modest-resolution frame might be 1024 pixels wide by 768 pixels high, a landscape orientation.
Using this pixel setting, a refresh rate of 50 per second (2 times 25 frames) at a 1024 by 768 pixel resolution and 24 bits per pixel works out to a data rate of roughly 944 Mb/s (megabits per second) for uncompressed video. Carrying such a high data rate would require high-capacity links such as SONET, which is the reason video gets compressed.
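The video figure follows the same multiply-everything-out pattern as the audio rates. A quick sketch, using the frame size, bit depth, and refresh rate from above:

```python
# Data rate of uncompressed video:
# width x height x bits per pixel x refreshes per second.
def video_rate_mbps(width, height, bits_per_pixel, refreshes_per_second):
    """Return the data rate in megabits per second."""
    return width * height * bits_per_pixel * refreshes_per_second / 1_000_000

# 1024 x 768 frame, 24-bit colour, refreshed 50 times per second (2 x 25):
rate = video_rate_mbps(1024, 768, 24, 50)
print(round(rate, 1))  # 943.7 megabits per second
```

At nearly a gigabit per second for a single modest-resolution stream, it is easy to see why compression is not optional for video.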