
Audio information processing and playback systems. Sound system features

The sound system of a personal computer serves to reproduce sound effects and speech that accompany the video being played, and includes:

  • recording / reproducing module;
  • synthesizer;
  • interface module;
  • mixer;
  • speaker system.

The components of the sound system (excluding the speaker system) are implemented either as a separate sound card or partially as chips on the computer's motherboard.

Typically, the input and output signals of the recording/playback module are analog, but the sound signals are processed digitally. The main functions of the recording/playback module therefore reduce to analog-to-digital and digital-to-analog conversion.

For this, the input analog signal is subjected to pulse-code modulation (PCM), the essence of which is sampling in time: the amplitudes of the analog signal are measured at discrete instants and represented as binary numbers. The sampling rate and bit depth must be chosen so that the accuracy of the analog-to-digital conversion meets the requirements for sound-reproduction quality.

According to Kotelnikov's theorem, if the time step separating adjacent samples (measured amplitudes) does not exceed half the period of the highest-frequency component in the spectrum of the converted signal, then time sampling introduces no distortion and loses no information. If reproducing a spectrum 20 kHz wide is sufficient for high-quality sound, the sampling rate should be at least 40 kHz. In personal computer (PC) sound systems the sampling rate is usually 44.1 or 48 kHz.

The limited bit width of the binary numbers representing the signal amplitudes means the signal values are quantized. In most sound cards 16-bit binary numbers are used, which corresponds to 2^16 quantization levels, or a dynamic range of about 96 dB. Sometimes 20- or even 24-bit A/D conversion is used.
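The 96 dB figure follows from the ratio of the full-scale amplitude to one quantization step, roughly 6 dB per bit. A quick check (an illustrative sketch; the function name is mine, not part of any sound-card API):

```python
import math

def dynamic_range_db(bits: int) -> float:
    # Ratio of full scale to one quantization step, in decibels:
    # 20 * log10(2^bits), i.e. about 6.02 dB per bit.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB for 16-bit samples
print(round(dynamic_range_db(24), 1))  # ~144.5 dB for 24-bit conversion
```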

Obviously, improving sound quality by increasing the sampling frequency f and the number k of quantization levels leads to a significant increase in the volume S of the resulting digital data, since

S = f · t · log2 k / 8,

where t is the duration of the sound fragment, and S, f and t are measured in MB, MHz and seconds, respectively. For stereo sound the amount of data doubles. Thus, at a frequency of 44.1 kHz and 2^16 quantization levels, representing a one-minute stereo audio fragment takes about 10.6 MB. To reduce both the memory capacity needed to store audio information and the bandwidth of data-transmission channels, compression is used.
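The formula can be checked numerically; the sketch below (the helper name is my own) reproduces the 10.6 MB figure for one minute of CD-quality stereo:

```python
import math

def audio_size_mb(f_hz: float, t_s: float, levels: int, channels: int = 1) -> float:
    """Uncompressed PCM data volume: S = f * t * log2(k) / 8 bytes per channel."""
    return f_hz * t_s * math.log2(levels) / 8 * channels / 1e6

# One minute of 44.1 kHz / 2^16-level stereo, as in the text:
print(round(audio_size_mb(44_100, 60, 2 ** 16, channels=2), 1))  # ~10.6 MB
```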

The interface module transfers digitized audio information to other PC devices (memory, speaker system) over the computer buses. The bandwidth of the ISA bus is, as a rule, insufficient, so other buses are used: PCI, the special MIDI interface for musical instruments, or others.

Using the mixer, you can mix sound signals, creating polyphonic sound, add musical accompaniment to speech accompanying multimedia fragments, etc.

The synthesizer is designed to generate sound signals, most often to simulate the sound of various musical instruments. Frequency modulation, wave tables, and mathematical modeling are used for synthesis. Input data for synthesizers (note codes and instrument types) are usually presented in MIDI format (the MID extension in file names). With the frequency-modulation method, the frequency and amplitude of the summed signals from the main generator and the overtone generator are controlled. With the wave-table method, the resulting signal is obtained by combining digitized sound samples of real musical instruments. In the mathematical-modeling method, mathematical models of sounds are used instead of experimentally obtained samples.

Professional sound cards allow performing complex sound processing, provide stereo sound, have their own ROM with hundreds of sounds of various musical instruments stored in it. Sound files are usually very large. Thus, a three-minute audio file with stereo sound takes about 30 MB of memory. Therefore, Sound Blaster cards provide automatic file compression in addition to their core functionality.

Board components

The sound card of a personal computer contains several hardware subsystems for producing and capturing audio data; the two main ones are responsible for digital audio capture and for music synthesis and playback. Historically, the music synthesis and playback subsystem has generated sound waves in one of two ways:

  • through the internal FM synthesizer (FM synthesizer);
  • playing sampled sound.

The digital audio recording section of the sound card includes a pair of 16-bit converters, digital-to-analog (DAC) and analog-to-digital (ADC), and a programmable sample-rate generator that clocks the converters and is controlled by the central processor. The computer transfers the digitized audio data to and from the converters. The conversion frequency is usually a multiple (or a fraction) of 44.1 kHz.

Most boards use one or more direct memory access (DMA) channels; some boards also provide direct digital output via an optical or coaxial S/PDIF (Sony/Philips Digital Interface) connection.

The onboard sound generator uses a digital signal processor (DSP) that plays the desired musical notes by reading samples from different regions of the wave table at different rates to produce the desired pitch. The maximum number of simultaneously available notes is determined by the processing power of the DSP and is called the board's "polyphony".
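The idea of reading the same table at different rates can be sketched in a few lines. This is a simplified model (the names and table size are my own choices), not the firmware of any real DSP:

```python
import math

def make_wavetable(size: int = 1024) -> list[float]:
    # One cycle of a sine wave stored as the "sample" table.
    return [math.sin(2 * math.pi * i / size) for i in range(size)]

def render(table: list[float], freq_hz: float,
           sample_rate: int = 44_100, n_samples: int = 5) -> list[float]:
    """Read the table with a phase increment proportional to the target pitch."""
    out, phase = [], 0.0
    step = len(table) * freq_hz / sample_rate  # table positions per output sample
    for _ in range(n_samples):
        out.append(table[int(phase) % len(table)])
        phase += step
    return out

table = make_wavetable()
a3 = render(table, 220.0)  # small increment: low pitch
a4 = render(table, 440.0)  # doubled increment: one octave higher
```

Doubling the read increment sweeps through the table twice as fast, which is exactly how one stored sample yields many pitches.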

DSPs use sophisticated algorithms to create effects such as reverb, chorus and delay. Reverb gives the impression that the instruments are playing in large concert halls. The chorus is used to give the impression that several instruments are playing together, when in fact there is only one. Adding a delay to a guitar part, for example, can give the effect of space and stereo sound.

Frequency modulation

The first widespread technology used in sound cards was frequency modulation (FM), developed in the early 1970s by J. Chowning (Stanford University). An FM synthesizer produces sound by generating a pure sine wave (the carrier) and mixing it with a second signal (the modulator). When the two waveforms are close in frequency, a complex waveform is created. By controlling the carrier and the modulator, you can create different voices, or instruments.

Each FM synthesizer voice requires a minimum of two signal generators, commonly referred to as "operators". Different designs of the FM synthesizer have different degrees of control over the operator parameters. Complex FM systems can use four or six operators per voice, and operators can have adjustable parameters that allow you to adjust the fade-in and fade-out rates.
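A minimal two-operator voice of this kind can be sketched as follows (the parameter values are arbitrary illustrations, not presets of any real synthesizer):

```python
import math

def fm_voice(f_carrier: float, f_mod: float, mod_index: float,
             sample_rate: int = 44_100, n: int = 4) -> list[float]:
    """Two-operator FM: the modulator operator shifts the carrier's phase."""
    out = []
    for i in range(n):
        t = i / sample_rate
        modulator = mod_index * math.sin(2 * math.pi * f_mod * t)
        out.append(math.sin(2 * math.pi * f_carrier * t + modulator))
    return out

# A bell-like setup: non-integer carrier-to-modulator ratio, moderate index.
samples = fm_voice(f_carrier=440.0, f_mod=617.0, mod_index=2.0)
```

Varying the modulation index over time (the "adjustable parameters" mentioned above) is what shapes the attack and decay of the voice.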

Yamaha was the first company to invest in research on Chowning's theory, which led to the development of the legendary DX7 synthesizer. Yamaha soon realized that mixing a wider range of carriers and modulators would create more complex voices, resulting in realistic-sounding instruments.

Although FM systems were implemented in analog form on early keyboard synthesizers, later FM synthesis was performed digitally. FM synthesis techniques are very useful for creating expressive new sounds. However, if the goal of a synthesizing system is to reproduce the sound of an existing instrument, this is best done digitally from sampled signals, as in wave-table (WaveTable) synthesis.

WaveTable synthesis

To create sound, wave-table synthesis uses not carriers and modulators but samples of sounds from real instruments. A sample is a digital representation of the waveform produced by the instrument. ISA boards usually store samples in ROM, although newer PCI products use the main RAM of the personal computer: the samples are loaded when the operating system (for example, Windows) starts up and can include new sounds.

While all FM sound cards sound similar, wave-table cards vary considerably in quality. The sound quality of the instruments depends on several factors:

  • the quality of the original recording;
  • the frequency at which the samples were recorded;
  • the number of samples used for each instrument;
  • the compression methods used to store the samples.

Most instrument samples are recorded in the standard 16-bit / 44.1 kHz format, but many manufacturers compress the data so that more samples or instruments fit into a limited amount of memory. However, compression often results in a loss of dynamic range or quality.

When a cassette tape is played too quickly or too slowly, its pitch changes, and the same is true of digital sound recordings. Playing a sample faster than its original rate produces a higher-pitched sound, allowing an instrument to span more than a few octaves. However, if some voices are played back much too fast, they sound weak and thin; likewise, a sample played too slowly sounds dark and unnatural. To overcome these effects, manufacturers divide the keyboard into multiple regions and use appropriate instrument samples in each region.
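The pitch/speed relationship is exponential: each equal-tempered semitone of transposition corresponds to a factor of 2^(1/12) in playback rate, which is why a sample stretched across a wide key range drifts audibly from natural. A small sketch (the helper name is mine):

```python
def playback_rate(semitones: float) -> float:
    # Doubling the playback rate raises the pitch by one octave (12 semitones).
    return 2 ** (semitones / 12)

print(playback_rate(12))           # 2.0: one octave up
print(playback_rate(-12))          # 0.5: one octave down
print(round(playback_rate(7), 3))  # ~1.498: a fifth up
```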

Each instrument sounds with a different timbre, depending on your playing style. For example, when playing softly on the piano, you cannot hear the sound of hammers hitting the strings. As you play more intensely, not only does the sound become more obvious, but changes in tone can also be noticed.

For each instrument, many samples and their variations must be recorded for the synthesizer to reproduce this range of sound accurately, and this inevitably requires more memory. A typical sound card can hold up to 700 instrument samples within 4 MB of ROM. Accurate reproduction of a solo piano, however, requires 6 to 10 MB of data, which is why synthesized sound still cannot compare with the real thing.

Updating your sound table does not always mean you have to buy a new sound card. Most 16-bit sound cards have a connector that can connect to an optional daughterboard. The sound quality of the instruments these boards provide varies greatly, and this usually depends on how much memory is on the board. Most boards contain 1 to 4 MB of samples and offer a range of digital sound effects.

Sound card connectors

In 1998, Creative Technology released the highly successful Sound Blaster Live! sound card, which later became the de facto standard.

The Platinum 5.1 version of the Creative Sound Blaster Live! card, which appeared toward the end of 2000, had the following jacks and connectors:

  • analog/digital output: either a compressed six-channel Dolby AC-3 signal in SPDIF format for connecting external digital devices or digital speaker systems, or an analog signal for a 5.1 speaker system;
  • line input - connects to an external device such as a cassette deck, digital tape recorder, player, and the like;
  • microphone jack - connects to external microphone for voice input;
  • line out - connects to speakers or external amplifier for audio output or headphones;
  • Joystick / MlDI connector - connects to a joystick or MIDI device and can be configured to connect to both at the same time;
  • CD / SPDIF connector - connects to the SPDIF (digital audio) pin located on the DVD or CD-ROM drive;
  • additional audio input - connects to internal audio sources such as tuner, MPEG or other similar cards;
  • Audio CD connector - connects to the analog audio output on a CD-ROM or DVD-ROM using an audio CD cable;
  • answering machine connector - Provides monaural communication with a standard voice modem and transfers microphone signals to the modem.

(Figure: a - sound card; b - Live! Drive unit.)

Audio expansion (digital I/O) - connects to a digital I/O board (installed in a free 5.25-inch drive bay and accessible from the front of the computer), sometimes called Live! Drive. Provides the following connections:

  • RCA SPDIF jack - connects to digital audio recorders such as digital tape and mini-discs;
  • headphone jack - connects to a pair of high quality headphones, mutes speaker output;
  • headphone level control - controls the volume of the headphone signal;
  • second input (line / mic) - connects to a high quality dynamic microphone or audio source (electric guitar, digital audio, or minidisc);
  • switch for the second input (line / microphone);
  • MIDI connectors - connect to MIDI devices via Mini DIN-Standard DIN cable;
  • infrared port (sensor) - allows you to organize remote control of a personal computer;
  • auxiliary RCA jacks - connect to consumer electronics equipment (VCR, TV, or CD player);
  • optical input-output SPDIF - connects to digital audio recorders such as digital tape or mini-discs.

Modern audio cards also support a number of standard audio modeling, generation and processing capabilities:

  • DirectX - a system of commands proposed by Microsoft for controlling the positioning of a virtual sound source (modifications: DirectX 3, 5, 6);
  • A3D - a standard developed in 1997 by NASA (National Aeronautics and Space Administration) together with Aureal for use in flight simulators; it generates effects such as thick fog or underwater sound. A3D2 can simulate the configuration of the room in which sounds are heard and propagate, calculating up to 60 sound reflections (in anything from a hangar to a well);
  • EAX (Environmental Audio Extensions) - a model proposed by Creative Technology in 1998 that adds reverberation to A3D, taking sound obstacles and sound absorption into account;
  • MIDI (Musical Instrument Digital Interface) - developed in the 1980s; commands are transmitted over the standard interface according to the MIDI protocol. A MIDI message does not contain a recording of the music as such but references to notes. When a sound card receives such a message, it is decoded (which notes of which instruments should sound) and processed in the synthesizer. In turn, the personal computer can control various "interactive" instruments via the MIDI interface. On Windows, MIDI files can be played by the dedicated MIDI Sequencer software. This area of sound synthesis also has its own standards. The main one is MT-32, developed by Roland and named after its sound-generation module. This standard also covers LAPC sound cards and defines the basic means for controlling the placement of instruments and voices and for dividing them into instrument groups (keyboards, drums, and so on).
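The note-reference nature of MIDI is easy to see at the byte level: a Note On message is just three bytes, a status byte (0x90 plus the channel number) followed by the key number and the velocity. A minimal sketch:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI Note On message: status byte, key number, velocity."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

# Middle C (key 60) on channel 0 at a moderate velocity:
msg = note_on(0, 60, 100)
print(msg.hex())  # '903c64' - no audio data, just a reference to the note
```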

Audio compression format MP3

Based on the original MPEG-1, the MP3 standard (short for MPEG audio Layer 3) is one of three coding schemes (Layer 1, Layer 2, and Layer 3) for compressing audio signals. The general structure of the coding process is the same at all levels, but each level has its own bitstream format and its own decoding algorithm. The MPEG algorithms are based on studied properties of how the human auditory system perceives sound (that is, encoding uses a so-called "psychoacoustic model"). Since human hearing is not perfect and its sensitivity differs at different frequencies, the psychoacoustic model takes this into account, determining which sounds and frequencies can be excluded without audible harm to the listener.

The input digital signal is first decomposed into the frequency components of its spectrum. The MP3 standard divides the frequency spectrum into 576 frequency bands and compresses each band independently. The spectrum is then cleared of clearly inaudible components, low-frequency noise and the highest harmonics, that is, it is filtered. At the next stage, a much more complex psychoacoustic analysis of the audible spectrum is performed. Among other things, this identifies and removes "masked" frequencies (frequencies the ear does not perceive because they are drowned out by other frequencies). If two sounds occur at the same time, MP3 records only the one that will actually be perceived. A quiet sound immediately after a loud one can also be removed, since the ear adapts to the loudness. If the signal is identical on both stereo channels, it is stored once but played on both channels when the MP3 file is decoded and played back.
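The "identical on both channels, stored once" trick is essentially a sum/difference (mid/side) representation. A simplified sketch of the idea (not the actual MP3 joint-stereo code):

```python
def to_mid_side(left, right):
    """Identical L and R collapse into the mid signal; the side becomes zero."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def from_mid_side(mid, side):
    """Exact inverse: L = M + S, R = M - S."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

channel = [0.1, 0.5, -0.3]
mid, side = to_mid_side(channel, channel)  # identical stereo channels
assert side == [0.0, 0.0, 0.0]             # only the mid signal needs storing
assert from_mid_side(mid, side) == (channel, channel)
```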

Then, depending on the complexity of the algorithm used, an analysis of the signal's predictability may also be performed. Finally, the resulting bit stream is compressed by a simplified analogue of the Huffman algorithm, which further reduces the volume occupied by the stream.

As mentioned above, the MPEG-1 standard has three layers (Layer 1, 2, and 3). These layers differ in the compression ratio they provide and in the sound quality of the resulting streams. Layer 1 stores 44.1 kHz / 16-bit signals without noticeable quality loss at a 384 Kbps stream rate, a roughly 4-fold saving in space; Layer 2 provides the same quality at 192 Kbps, and Layer 3 at 128 Kbps. Layer 3 has an obvious advantage, but its compression speed is the lowest (though at modern processor speeds this limitation is no longer noticeable).
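These bit rates translate directly into compression ratios relative to the raw CD-quality PCM stream; a quick check (plain arithmetic, no codec involved; the function names are mine):

```python
def pcm_bitrate_kbps(rate_hz: int = 44_100, bits: int = 16, channels: int = 2) -> float:
    # Raw CD-quality PCM stream rate: 44.1 kHz * 16 bits * 2 channels.
    return rate_hz * bits * channels / 1000

def compression_ratio(stream_kbps: float) -> float:
    return pcm_bitrate_kbps() / stream_kbps

print(pcm_bitrate_kbps())                # 1411.2 kbps uncompressed
print(round(compression_ratio(384), 1))  # Layer 1: ~3.7x
print(round(compression_ratio(192), 1))  # Layer 2: ~7.4x
print(round(compression_ratio(128), 1))  # Layer 3: ~11x
```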

Surround sound reproduction systems

The reproduction of a sound environment began with stereo recordings and VHF FM radio. Tape recorders and FM stereo tuners with high-quality two-channel sound were widely used. In theaters, audiences could enjoy Dolby Stereo Optical sound. The first videotapes carried only monophonic sound of mediocre quality, but cassettes with two-channel sound soon began to be replicated: at first simply as separate audio tracks, then using Hi-Fi technology. Laser discs were produced from the very beginning with high-quality two-channel stereo sound. Soon most broadcast television standards were adapted to transmit video with a two-channel soundtrack over the air and by cable. Thus the popular two-channel audio format became a standard option for home video. The first products on the market were simple Dolby Surround decoders, which made it possible to extract and listen to a third, spatial channel, the surround channel, on home equipment. Later a more intelligent decoder, Dolby Surround Pro Logic, was developed, which also extracted a center channel. The result is the "home theater": a set of equipment for high-quality sound and video playback with a Dolby Pro Logic Surround Sound decoder.

Unlike quadraphonic equipment, Dolby Surround equipment has been and is being produced on a massive scale and is constantly being improved. Firstly, Dolby Pro Logic technology successfully combines the optimal configuration of spatial channels (R, L, C, S) with the two physical channels that almost all consumer equipment can record and transmit. Secondly, the capabilities and quality of Dolby Pro Logic meet the requirements of the modern user. And thirdly, uniform standards for hardware and software are used.

The Dolby Surround encoder is not designed to transmit four independent sound signals, each of which is to be listened to separately (for example, the sound of one TV program in different languages). In that case, the isolation between any two channels would have to be maximal, and the amplitudes and phases of the signals could be completely unrelated to each other. On the contrary, the task of Dolby Surround is to transmit four channels of a soundtrack that will be listened to simultaneously and together recreate a spatial sound picture (sound field) in the listener's mind. This picture is composed of several sound images: sounds that the listener associates with visual images on the screen. A sound image is characterized not only by the content and power of the sound, but also by its direction in space.

At the input of the Dolby Surround encoder are the signals of four channels, L, C, R, and S; at the outputs are two channels, Lt (left total) and Rt (right total). The word "total" means that the channels contain not only their own signal (left or right) but also the encoded signals of the other channels, C and S. The functional diagram of the encoder is shown in the figure.

The L and R channel signals are sent to the Lt and Rt outputs without any modification. The C channel signal is divided equally and added to the L and R signals; it is first attenuated by 3 dB (to keep the acoustic power of the signal unchanged after its "halves" are summed in the decoder matrix). The S channel signal is also attenuated by 3 dB, but in addition, before being added to the L and R signals, it undergoes the following transformations:

  • bandwidth limited by a bandpass filter (BPF) from 100 Hz to 7 kHz;
  • the signal is processed by a noise suppressor - a Dolby B-type Noise Reduction processor;
  • the S signal is shifted in phase by +90° and -90°, so that the components of the S signal to be added to L and R are in antiphase with each other.

It is quite clear that the L and R signals do not influence each other; they are completely independent. Less obvious, but equally true, is that the isolation between the C and S signals is also theoretically ideal. Indeed, in the decoder the S signal is obtained as the difference between the Lt and Rt signals; these contain exactly the same components of the C signal, which cancel each other out when subtracted. Conversely, the C signal is extracted by the decoder as the sum of Lt and Rt. Since the components of the S signal present in them are in antiphase, they too cancel each other out when added.

Such coding makes it possible to transmit the S and C signals with a high degree of isolation under one condition: the amplitude and phase characteristics of the physical channels carrying the Lt and Rt signals must be absolutely identical. If there is some imbalance between the channels, the isolation is reduced. For example, if the components of the C signal in the two channels differ because of differing transmission-channel characteristics, part of the C signal will leak into the S channel.
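The sum/difference cancellation argument can be demonstrated numerically. The sketch below is a deliberately simplified model: the ±90° phase shifts are replaced by plain sign inversion (+s into Lt, -s into Rt), which preserves the antiphase property the decoder relies on, and the band-limiting and Dolby B processing are omitted:

```python
import math

ATT = 1 / math.sqrt(2)  # the -3 dB attenuation applied to C and S

def encode(l, r, c, s):
    """4:2 matrix: Lt = L + 0.707*C + 0.707*S, Rt = R + 0.707*C - 0.707*S."""
    lt = [li + ATT * ci + ATT * si for li, ci, si in zip(l, c, s)]
    rt = [ri + ATT * ci - ATT * si for ri, ci, si in zip(r, c, s)]
    return lt, rt

def decode_center_surround(lt, rt):
    # The sum cancels the antiphase S components; the difference cancels C.
    center = [a + b for a, b in zip(lt, rt)]
    surround = [a - b for a, b in zip(lt, rt)]
    return center, surround

# Feed in a center-only signal: nothing leaks into the decoded surround.
lt, rt = encode([0.0], [0.0], [1.0], [0.0])
center, surround = decode_center_surround(lt, rt)
print(round(center[0], 3), round(surround[0], 3))  # 1.414 0.0
```

In this idealized model the C-to-S isolation is perfect; scaling one of the two "transmission channels" by even a few percent immediately produces the crosstalk described above.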

The PC sound system, in the form of a sound card, appeared in 1989 and significantly expanded the capabilities of the PC as an information-technology tool.

The PC sound system is a complex of software and hardware that performs the following functions:

· Recording audio signals from external sources, such as a microphone or tape recorder, by converting the input analog audio signals into digital ones and then saving them to the hard disk;

· Playback of recorded audio data through an external speaker system or headphones;

· Playback of audio CDs;

· Mixing of signals from several sources during recording or playback;

· Simultaneous recording and playback of sound signals (Full Duplex mode);

· Processing of sound signals: editing, combining or dividing signal fragments, filtering, changing its level;

· Processing of a sound signal in accordance with surround-sound (three-dimensional, 3D Sound) algorithms;

· Generation, by means of a synthesizer, of the sounds of musical instruments, as well as human speech and other sounds;

· Control of the work of external electronic musical instruments through a special MIDI interface.

Structurally, the PC sound system consists of a sound card, either installed in a motherboard slot or integrated on the motherboard or on an expansion card of another PC subsystem, plus devices for recording and reproducing audio (the speaker system). Individual functional modules of the sound system may be implemented as daughterboards installed in the corresponding connectors of the sound card.

A classic sound system, as shown in Fig. 4.23, contains:

Sound recording and playback module;

Synthesizer module;

Interface module;

Mixer module;

Acoustic system.

Fig. 4.23. The structure of the PC sound system.

The first four modules are usually installed on a sound card, although there are sound cards without a synthesizer module or without a digital recording/playback module. Each module can be implemented either as a separate chip or as part of a multifunctional chip. Thus, the chipset of a sound system may consist of several chips or of just one.

The design of the PC sound system is undergoing significant changes; there are motherboards with a sound-processing chipset installed directly on them.

However, the purpose and functions of the modules of a modern sound system do not depend on its design. When discussing the functional modules, the terms "PC sound system" and "sound card" are commonly used interchangeably.

Sound devices are becoming an integral part of every personal computer. In the course of market competition, a universal, widely supported standard for sound software and hardware took shape. Audio devices have evolved from expensive exotic add-ons into a familiar part of a system in almost any configuration.

In modern computers, hardware support for sound is implemented in one of the following forms:

  • an audio adapter that fits into a PCI or ISA bus connector;
  • a microcircuit on a motherboard manufactured by Crystal, Analog Devices, Sigmatel, ESS, and others;
  • audio devices integrated into the motherboard chipset, as in the most advanced chipsets from Intel, SiS, and VIA Technologies designed for low-cost computers.

In addition to the main audio device, there are many additional audio devices: speakers, microphone, etc. This chapter describes the functionality and features of all components of the computer audio system.

The first sound cards, produced by AdLib, Roland, and Creative Labs, appeared in the late 1980s and were used only for games. In 1989, Creative Labs released the Game Blaster stereo sound card; later, the Sound Blaster Pro board was introduced.

For the stable functioning of the board, certain software (MS DOS, Windows) and hardware resources (IRQ, DMA and I / O port addresses) were required.

Because of problems arising with sound cards not compatible with Sound Blaster Pro, in December 1995 Microsoft released a new development, DirectX, a series of Application Programming Interfaces (APIs) for interacting directly with hardware devices.

Almost every computer today is equipped with some type of audio adapter and a CD-ROM or CD-ROM-compatible drive. After the adoption of the MPC-1 through MPC-3 standards, which define the classification of computers, systems equipped with a sound card and a CD-ROM-compatible drive came to be called Multimedia PCs. The first standard, MPC-1, was introduced in 1990; the MPC-3 standard, which replaced it in June 1995, defined the following minimum requirements for hardware and software:

  • processor - Pentium, 75 MHz;
  • RAM - 8 MB;
  • hard drive - 540 MB;
  • CD-ROM drive - four-speed (4x);
  • VGA resolution - 640 x 480;
  • color depth - 65,536 colors (16-bit color);
  • the minimum operating system is Windows 3.1.

Any computer built after 1996 that contains a sound adapter and a CD-ROM-compatible drive fully meets the requirements of the MPC-3 standard.

Currently, the criteria for a computer to belong to the multimedia class have changed somewhat due to technical advances in this area:

  • processor - Pentium III, Celeron, Athlon, Duron or any other Pentium-class 600 MHz processor;
  • RAM - 64 MB;
  • hard drive - 3.2 GB;
  • floppy disk drive - 1.44 MB (3.5" high-density disk);
  • CD-ROM drive - 24-speed (24x);
  • audio sampling depth - 16-bit;
  • VGA resolution - 1024 x 768;
  • color depth - 16.8 million colors (24-bit color);
  • input-output devices - parallel, serial, MIDI, game port;
  • the minimum operating system is Windows 98 or Windows Me.

Although speakers or headphones are not technically part of the MPC specification or the above list, they are required for sound reproduction. In addition, a microphone is required to enter voice information used for sound recording or computer speech control. Systems equipped with a sound adapter usually also contain inexpensive passive or active speakers (they can be replaced by headphones providing the required quality and frequency characteristics of the reproduced sound).

A multimedia computer equipped with speakers and a microphone has a number of capabilities and provides:

  • adding stereo sound to entertainment (game) programs;
  • increasing the effectiveness of educational programs (for young children);
  • adding sound effects to demos and tutorials;
  • making music using MIDI hardware and software tools;
  • adding sound comments to files;
  • implementation of sound network conferences;
  • adding sound effects to operating system events;
  • sound reproduction of text;
  • playing audio CDs;
  • playing files in .mp3 format;
  • playing video clips;
  • playback of DVD movies;
  • support for voice control.

Audio system components. When choosing an audio system, you must take into account the parameters of its components.

Sound card connectors. Most sound cards have the same miniature (1/8") connectors: they feed signals from the board to the speakers, headphones, and stereo inputs, and the same connector types accept signals from a microphone, CD player, or tape recorder. Figure 5.4 shows the four types of connectors that, at a minimum, should be present on a sound card. The color coding of each connector type is defined in the PC99 Design Guide but varies among sound adapters.

Fig. 5.4.

Let's list the most common connectors:

  • line output of the board. The signal from this connector is fed to external devices - acoustic systems, headphones or to the input of a stereo amplifier, with which the signal is amplified to the required level;
  • line-in board. Used when mixing or recording an audio signal from an external audio system to a hard drive;
  • speaker and headphone jack. Not available in all boards. Speaker signals are fed from the same jack (line out) as the stereo amplifier input;
  • microphone input, or mono signal input. Used to connect a microphone; microphone recording is monaural. The input signal level is kept constant and optimal for conversion. For recording, it is best to use an electrodynamic or condenser microphone rated for a load impedance of 600 Ω to 10 kΩ. Some cheap sound cards connect the microphone to the line-in;
  • joystick connector (MIDI port) - a 15-pin D-shaped connector. Two of its pins can be used to control a MIDI device such as a keyboard synthesizer; in this case a Y-shaped cable is required;
  • MIDI connector. Plugs into the joystick port, has two round 5-pin DIN connectors used for connecting MIDI devices, and a joystick connector;
  • internal pin connector - A dedicated connector for connecting to an internal CD-ROM drive. Allows you to play sound from CDs through speakers connected to the sound card. This connector differs from the connector for connecting a CD-ROM controller to a sound card, since data is not transferred through it to the computer bus.

Additional connectors. Most modern audio adapters support DVD playback, audio processing, etc., and therefore have several additional connectors, the features of which are given below:

  • MIDI input and output. This connector, not mated to the game port, allows you to use both a joystick and external MIDI devices at the same time;
  • SPDIF input and output (Sony / Philips Digital Interface - SP / DIF). The connector is used to transfer digital audio signals between devices without converting them to analog form. SPDIF is sometimes referred to as Dolby Digital;
  • CD SPDIF. The connector is designed to connect a CD-ROM drive to a sound card using the SPDIF interface;
  • TAD input. A connector for attaching modems with answering-machine (Telephone Answering Device) support to the sound card;
  • digital output DIN. The connector is designed to connect multichannel digital speaker systems;
  • Aux input. Connects other signal sources, such as a TV tuner, to the sound card;
  • I2S input. Allows you to connect the digital output of external sources such as DVD to your sound card.

Additional connectors are usually located directly on the sound card or connected to an external unit or daughter card. For example, Sound Blaster Live! Platinum 5.1 is a two-piece device. The audio adapter itself is connected via a PCI slot, and additional connectors are connected to the external LiveDrive IR breakout block, which is installed in an unused drive bay.

Volume control. Some sound cards provide a manual volume control; on more complex boards, the volume is controlled in software, using key combinations, directly during a game, in Windows, or in an application.

Synthesizers. Currently, all manufactured boards are stereo, supporting the MIDI standard.

Stereo sound cards simultaneously play (and record) several signals from two different channels. The more voices the adapter provides, the more natural the sound. The synthesizer chip on the board, most often from Yamaha, produces 11 voices (the YM3812, or OPL2, chip) or more. To provide more than 20 voices (the YMF262, or OPL3, chip), one or two frequency-synthesizer chips are installed.

Wavetable sound cards use digital recordings of real instruments and sound effects instead of the synthesized sounds generated by an FM chip. For example, when such an audio adapter plays a trumpet sound, a real trumpet is heard, not its imitation. The first sound cards supporting this function contained up to 1 MB of sound samples stored in the adapter's memory chips. With the arrival of the high-speed PCI bus and the growth of computer RAM, however, most sound cards now use the so-called programmable wavetable method, which loads 2-8 MB of short samples of various musical instruments into the computer's RAM.

In modern computer games MIDI sound is practically not used; despite this, the audio changes made in DirectX 8 make it an acceptable option for game soundtracks.

Data compression. Most boards provide CD-like sound quality at a 44.1 kHz sampling rate, but every minute of recording, even of an ordinary voice, consumes about 11 MB of disk space. To reduce the size of audio files, many cards use data compression. For example, the Sound Blaster ASP 16 card compresses audio in real time (while recording) at a compression ratio of 2:1, 3:1, or 4:1.

Because audio requires a lot of disk space, it is compressed using Adaptive Differential Pulse Code Modulation (ADPCM), which can reduce file size by about 50%. However, this degrades the sound quality.
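The disk-space figures above are easy to reproduce from the format parameters alone. A minimal sketch (Python, purely illustrative; the `pcm_bytes` helper is our own name, not part of any card's software):

```python
# Rough size of an uncompressed PCM recording, and the effect of the
# 2:1 .. 4:1 real-time compression ratios quoted for the Sound Blaster ASP 16.
def pcm_bytes(seconds, rate_hz=44_100, bits=16, channels=2):
    """Uncompressed PCM size in bytes for the given format."""
    return seconds * rate_hz * (bits // 8) * channels

one_minute = pcm_bytes(60)                                   # CD-quality stereo
voice_minute = pcm_bytes(60, rate_hz=11_025, bits=8, channels=1)
compressed = [one_minute // ratio for ratio in (2, 3, 4)]    # ASP 16 ratios
```

One minute of CD-quality stereo comes to about 10.6 million bytes, in line with the "about 11 MB per minute" quoted above; a 2:1 ratio halves that, 4:1 quarters it.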

Multifunctional signal processors. Many sound cards use Digital Signal Processors (DSPs). Thanks to them, the boards have become more "intelligent" and freed the computer's central processing unit from performing such time-consuming tasks as cleaning signals from noise and compressing data in real time.

Processors are found in many general purpose sound cards. For example, the EMU10K1 programmable digital signal processor (DSP) of the Sound Blaster Live! compresses data, converts text to speech and synthesizes so-called three-dimensional sound, creating the effect of sound reflection and choral accompaniment. With such a processor, the sound card turns into a multifunctional device. For example, in the IBM WindSurfer communications card, the digital processor acts as a modem, fax, and digital answering machine.

Sound card drivers. Most boards ship with generic drivers for DOS and Windows applications. Windows 9x and Windows NT already have drivers for popular sound cards; drivers for other boards can be purchased separately.

DOS applications usually do not have a wide selection of drivers, but PC games do support Sound Blaster Pro adapters.

In recent years, the demands on audio devices have increased significantly, which in turn has led to more powerful hardware. Standard integrated multimedia hardware alone cannot be considered a complete multimedia system, which is characterized by the following features:

  • realistic surround sound in computer games;
  • high quality sound in DVD movies;
  • speech recognition and voice control;
  • creation and recording of audio files in MIDI, MP3, WAV and CD-Audio formats.

Additional hardware and software requirements for achieving the above characteristics are presented in Table 5.3.

Table 5.3. Additional features and properties of audio adapters

Purpose                 | Required capabilities                                   | Additional hardware                             | Additional software
3D games                | Game port; three-dimensional sound; audio acceleration | Game controller; rear speakers                  | —
DVD movies              | Dolby 5.1 decoding                                      | Dolby 5.1-compatible audio adapter and speakers | MPEG decoding software
Speech recognition      | Software-compatible audio adapter                       | Microphone                                      | Dictation software
Creating MIDI files     | Audio adapter with MIDI input                           | MIDI-compatible musical keyboard                | Software for creating MIDI files
Creating MP3 files      | Digitizing audio files                                  | CD-R or CD-RW drive                             | Program for creating MP3 files
Creating WAV files      | —                                                       | Microphone                                      | Sound recording program
Creating CD-Audio files | —                                                       | External sound source                           | WAV or MP3 to CD-Audio converter

Minimum requirements for sound cards.

Replacing the previous Sound Blaster Pro ISA audio adapter with a PCI sound card has significantly improved system performance, but it is advisable to use all the features of the sound cards, which in particular include:

  • 3D audio support implemented in the chipset. The expression "three-dimensional sound" means that sounds corresponding to what is happening on the screen are heard further or closer, behind the back or somewhere to the side. The Microsoft DirectX 8.0 interface includes support for 3D audio, but for this it is better to use an audio adapter with hardware-based 3D audio support;
  • support for DirectX 8.0 as well as other 3D audio APIs, such as Creative's EAX, Sensaura's 3D Positional Audio, and the now-defunct Aureal A3D technology;
  • 3D sound acceleration. Sound cards with chipsets that support this feature place a fairly low load on the CPU, resulting in an overall increase in gaming performance. For best results, use chipsets that can accelerate the largest number of 3D streams; otherwise, the central processor has to process the three-dimensional sound itself, which ultimately affects the speed of the game;
  • game ports that support force feedback game controllers.

There are many mid-range sound cards available today that support at least two of these features, with retail prices not exceeding $50-100. New 3D audio chipsets from various manufacturers allow fans of 3D computer games to upgrade their systems to suit their wishes.

DVD movies on your computer screen. To watch DVD movies on your computer, you need the following components:

  • digital disc playback software that supports Dolby Digital 5.1 output. One of the more acceptable options is PowerDVD;
  • an audio adapter that accepts Dolby Digital input from the DVD drive and outputs data to Dolby Digital 5.1-compatible audio hardware. In the absence of the appropriate hardware, the Dolby 5.1 input is downmixed for four speakers; in addition, an S/PDIF AC-3 (Dolby Surround) input can be added for four-speaker systems;
  • Dolby Digital 5.1 compatible receiver and speakers. Most high quality sound cards that support Dolby Digital 5.1 are connected to a dedicated analog input receiver, but others, such as the Creative Labs Sound Blaster Live! Platinum, support speakers with digital input by adding an additional Digital DIN connector to the board.

Speech recognition. Speech recognition technology is still imperfect, but today there are programs that allow you to give the computer voice commands, launch the necessary applications, open files and dialog boxes, and even dictate text that previously had to be typed.

For the typical user, applications of this type are useless. For example, Compaq for some time supplied computers with a microphone and a voice-control application at very low cost. While it was fun to watch many office users talking to their computers, productivity did not actually improve: a lot of time was wasted as users experimented with the software, and the office became very noisy.

However, for users with disabilities, this type of software may be of some interest, so speech recognition technology is constantly evolving.

As stated above, there is another type of speech recognition software that allows you to convert speech to text. This is an unusually difficult task, primarily because of the differences in speech patterns of different people, so almost all software, including some applications for issuing voice commands, includes a stage of "learning" the technology of voice recognition of a particular user. In the process of such training, the user reads the text (or words) running on the computer screen. Since the text is programmed, the computer quickly adapts to the speaker's manner of speech.

As a result of the experiments, it turned out that the quality of recognition depends on the individual characteristics of speech. In addition, some users are able to dictate entire pages of text without touching the keyboard, while others get tired of it.

There are many parameters that affect the quality of speech recognition. Let's list the main ones:

  • discrete and continuous speech recognition programs. Continuous (connected) speech, which allows a more natural "dialogue" with the computer, is now standard, but achieving acceptable recognition accuracy with it still poses a number of unsolved problems;
  • trainable and non-trainable programs. "Training" the program for correct speech recognition gives good results even in those applications that allow you to skip this stage;
  • large active and general vocabularies. Programs with a large active vocabulary respond much faster to spoken language, and programs with a larger general vocabulary allow you to maintain a specialized vocabulary of your own;
  • computer hardware performance. The increase in the speed of processors and the amount of RAM leads to a tangible increase in the speed and accuracy of speech recognition programs, and also allows developers to introduce additional features into new versions of applications;
  • a high-quality sound card and microphone. A headset with a built-in microphone is intended not for recording music or sound effects but specifically for speech recognition.

Sound files. There are two main types of files for storing audio recordings on a personal computer. Files of the first type, called regular sound files, use the .wav, .voc, .au, and .aiff formats. A sound file contains waveform data, that is, a digital recording of analog audio signals suitable for storage on a computer. Windows 9x and Windows Me define three levels of sound recording quality, as well as a level with characteristics of 48 kHz, 16-bit stereo, and 188 KB/s; this level is designed to support playback of sound from sources such as DVD and Dolby AC-3.
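The 188 KB/s figure follows directly from the format parameters. A quick check (Python, illustrative only):

```python
# Byte rate of the 48 kHz / 16-bit / stereo quality level mentioned above.
bytes_per_second = 48_000 * (16 // 8) * 2   # sample rate x sample size x channels
kb_per_second = bytes_per_second / 1024     # 187.5, i.e. the quoted "188 KB/s"
```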

To achieve a compromise between high sound quality and small file size, you can convert .wav files to .mp3 format.

Compression of audio data. There are two main areas in which audio compression is applied:

  • the use of sound bites on websites;
  • reducing the volume of high quality music files.

Dedicated audio editing programs such as RealProducer from Real or Microsoft Windows Media Encoder 7, allow you to reduce the volume of sound bites with minimal loss of quality.

The most popular audio file format is .mp3. These files are close to CD sound quality and are much smaller than regular .wav files. For example, a 5-minute .wav sound file with CD quality is about 50 Mb, while the same .mp3 sound file is about 4 Mb.

The only drawback of .mp3 files is the lack of protection against unauthorized use: anyone can freely download such a file from the Internet (there are a great many websites offering these "pirated" recordings). Despite this shortcoming, the format has become quite widespread and has led to the mass production of MP3 players.

MIDI files. A MIDI audio file differs from a .wav format in the same way a vector graphic differs from a raster. MIDI files have the extension .mid or .rmi and are completely digital, not containing a sound recording, but rather the commands used by the audio equipment to create it. Just as video adapters use commands to create images of 3D objects, MIDI sound cards work with MIDI files to synthesize music.

MIDI is a powerful programming language that became widespread in the 1980s. and is specially designed for electronic musical instruments. The MIDI standard has become a new word in the field of electronic music. With MIDI, you can create, record, edit, and play music files on your personal computer or on a MIDI-compatible electronic musical instrument connected to your computer.

MIDI files, unlike other types of audio files, require a relatively small amount of disk space. Recording 1 hour of stereo music stored in MIDI format requires less than 500KB. Many games use MIDI recording rather than sampled analog recording.

A MIDI file is actually a digital representation of a musical score made up of several dedicated channels, each of which represents a different musical instrument or type of sound. The frequencies and durations of the notes are defined in each channel; as a result, a MIDI file for, say, a string quartet contains four channels, representing two violins, a viola, and a cello.

All three MPC specifications as well as PC9x provide support for MIDI in all sound cards. The General MIDI standard for most sound cards allows up to 16 channels in a single MIDI file, but this does not necessarily limit the sound to 16 instruments. One channel is capable of representing the sound of a group of instruments; therefore a full orchestra can be synthesized.
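The channel mechanism described above is visible in the wire format itself: a Note On event is three bytes, and the low nibble of the status byte selects one of the 16 channels. A sketch (Python; the `note_on` helper is our own illustrative name):

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message.

    channel is 0-15 (shown to users as 1-16); note and velocity are 0-127.
    The status byte is 0x90 with the channel number in the low four bits.
    """
    if not (0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128):
        raise ValueError("value out of range")
    return bytes([0x90 | channel, note, velocity])

msg = note_on(0, 60, 100)   # middle C on channel 1, moderate velocity
```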

Since a MIDI file is composed of digital commands, editing is much easier than a .wav audio file. The corresponding software allows you to select any MIDI channel, record notes, and add effects. Certain software packages are designed to record music to a MIDI file using the standard music notation system. As a result, the composer writes the music directly to the computer, edits it as needed, and then prints out the sheet music for the performers. This is very convenient for professional musicians who have to spend a lot of time rewriting notes.

Playing MIDI files. Running a MIDI file on a personal computer does not mean playing back the recording. The computer actually creates music according to the recorded commands: the system reads the MIDI file, the synthesizer generates sounds for each channel in accordance with the commands in the file in order to give the desired tone and duration to the sound of the notes. To produce the sound of a specific musical instrument, the synthesizer uses a predefined pattern, that is, a set of commands that creates a sound similar to that played by a specific instrument.

A synthesizer on a sound card is similar to an electronic keyboard synthesizer, but with limited capabilities. According to the MPC specification, the sound card must have a frequency synthesizer that can simultaneously play at least six melodic notes and two drums.

Frequency synthesis. Most sound cards generate sounds using a frequency synthesizer; this technology was developed back in 1976. By using one sine wave to alter another, a frequency synthesizer creates an artificial sound that resembles a specific instrument. The MIDI standard defines a set of preprogrammed sounds that can be played with most instruments.

Some frequency synthesizers use four waves, and the sounds reproduced are quite normal, albeit somewhat artificial. For example, the synthesized sound of a trumpet is undoubtedly similar to its sound, but no one will ever recognize it as the sound of a real trumpet.
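The idea of "one sine wave altering another" can be written down directly. A minimal two-operator sketch (Python, illustrative; real FM chips such as the OPL2/OPL3 add envelopes and multiple operators on top of this):

```python
import math

def fm_sample(t, fc=440.0, fm=220.0, index=2.0):
    """One sample of two-operator FM synthesis at time t (seconds).

    The carrier at fc is phase-modulated by a sine at fm; the modulation
    index controls how harmonically rich ("bright") the result sounds.
    """
    return math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

rate = 44_100
tone = [fm_sample(n / rate) for n in range(rate)]   # one second of the timbre
```

Changing `fm` and `index` reshapes the spectrum, which is why one small circuit can imitate many instruments, if only approximately.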

Wavetable synthesis. The peculiarity of frequency synthesis is that the reproduced sound, even at best, does not completely match the real sound of a musical instrument. An inexpensive technology for more natural sounding was developed by Ensoniq in 1984. It allows the sound of any instrument (including piano, violin, guitar, flute, trumpet, and drums) to be recorded and the digitized sound stored in a special table. The table is written either to ROM chips or to disk, and the sound card can extract the digitized sound of the required instrument from it.

With a wavetable synthesizer, you can select an instrument, sound a single note and, if necessary, change its frequency (that is, play the given note in the corresponding octave). Some adapters use several samples of the same instrument to improve reproduction: the highest note on a piano differs from the lowest, so for a more natural sound the sample closest in pitch to the synthesized note is selected.

Thus, the quality and variety of sounds that the synthesizer can reproduce largely depends on the size of the table. The best quality wavetable adapters usually have several megabytes of memory on board for storing samples. Some of them provide the ability to connect additional cards to install additional memory and record sound samples into a table.
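The "play a stored sample at a different pitch" step can be sketched in a few lines: the synthesizer reads the stored table at a variable step, and the step determines the note. A toy version (Python; the one-cycle sine `TABLE` stands in for a real instrument sample, and `play` is our own illustrative helper):

```python
import math

# A one-cycle "wavetable" standing in for a stored instrument sample.
TABLE = [math.sin(2 * math.pi * i / 256) for i in range(256)]

def play(table, freq_hz, rate_hz=44_100, n_samples=1_000):
    """Read the stored cycle at a variable step: the step sets the pitch."""
    step = len(table) * freq_hz / rate_hz
    out, pos = [], 0.0
    for _ in range(n_samples):
        out.append(table[int(pos) % len(table)])
        pos += step
    return out

a4 = play(TABLE, 440.0)    # the same stored sample, replayed as the note A4
a5 = play(TABLE, 880.0)    # double the step -> one octave higher
```

Shifting a sample far from its recorded pitch distorts its timbre, which is exactly why better adapters store several samples per instrument, as noted above.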

Connect other devices to the MIDI connector. The sound card's MIDI interface is also used to connect electronic instruments, sound generators, drums, and other MIDI devices to your computer. As a result, MIDI files are played by a high-quality musical synthesizer rather than a sound card synthesizer, and you can create your own MIDI files by playing notes on a dedicated keyboard. The right software will allow you to compose a symphony on a PC by recording the notes of each instrument separately into its own channel, and then allowing all channels to sound simultaneously. Many professional musicians and composers use MIDI devices to compose music directly on their computers, without traditional instruments.

There are also high quality MIDI cards that operate in bi-directional mode, that is, they play back pre-recorded soundtracks while recording a new track to the same MIDI file. A few years ago, this could only be done in a studio using professional equipment that cost hundreds of thousands of dollars.

MIDI devices connect to the audio adapter's two round 5-pin DIN connectors used for input (MIDI-IN) and output (MIDI-OUT) signals. Many devices also have a MIDI-THRU port that transmits signals from a device's input directly to its output, but sound cards usually don't. Interestingly, according to the MIDI standard, data is transmitted only through pins 1 and 3 of the connectors. Pin 2 is shielded and pins 4 and 5 are not used.

The main function of the sound card's MIDI interface is to convert the byte stream (i.e., 8 bits in parallel) transmitted over the computer's system bus into a serial data stream in MIDI format. MIDI devices are equipped with asynchronous serial ports operating at 31.25 kbaud. Under the MIDI standard, each byte is sent as eight data bits framed by one start bit and one stop bit, so serial transmission of 1 byte takes 320 µs.
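The per-byte time follows from the frame format: ten bits on the wire at 31,250 bits per second. A quick check (Python, illustrative only):

```python
BAUD = 31_250        # MIDI serial rate, bits per second
FRAME_BITS = 10      # 8 data bits + 1 start bit + 1 stop bit

byte_time_us = FRAME_BITS / BAUD * 1_000_000   # one byte: 320 microseconds
note_on_time_us = 3 * byte_time_us             # a 3-byte Note On: 960 microseconds
```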

In accordance with the MIDI standard, signals are transmitted over a special unshielded twisted-pair cable up to 15 m long (although most cables sold are 3 or 6 m long). Multiple MIDI devices can also be daisy-chained to combine their capabilities. The total length of a MIDI device chain is not limited, but each individual cable must not exceed 15 m.

Legacy-free systems do not have a game port (MIDI port) connector - all devices are connected to a USB type bus.

Software for MIDI devices. The Windows 9x, Windows Me and Windows 2000 operating systems come with the Media Player software, which plays MIDI files. In order to use all the possibilities of MIDI, it is recommended to purchase specialized software for performing various operations for editing MIDI files (setting the tempo of playback, cutting, and inserting various pre-recorded music).

A number of sound cards come with software that provides editing capabilities for MIDI files. In addition, many freeware and shareware tools (programs) are freely distributed over the Internet, but the really powerful software that allows you to create and edit MIDI files has to be purchased separately.

Recording. Almost all sound cards are equipped with an input connector; by connecting a microphone to it you can record your voice. Using the Sound Recorder software in Windows, you can play, edit and record a sound file in the special .wav format.

The following are the main uses for .wav files:

  • maintenance of certain events in the Windows system. To do this, use the Sounds option in the Windows Control Panel;
  • adding speech annotations using Windows controls OLE and ActiveX for documents of various types;
  • entering accompanying narration in presentations created using PowerPoint, Freelance Graphics, Corel Presentations, and the like.

In order to reduce the size and further use on the Internet, .wav files are converted into .mp3 or .wma files.

Audio CDs. Using a CD-ROM drive, you can listen to audio CDs not only through speakers but also through headphones, while working with other programs. A number of sound cards come with programs for playing CDs, and such programs are often downloaded from the Internet for free. These programs usually have a visual display that simulates the front panel of a CD player, controlled by keyboard or mouse.

Sound mixer (mixer). If you have multiple sound sources and only one speaker, you must use a sound mixer. Most sound cards are equipped with a built-in audio mixer (mixer) that allows you to mix sound from audio, MIDI and WAV sources, line-in and CD-player, playing it on a single line-out. Typically, on-screen audio mixing software interfaces look like a standard audio mixer panel. This makes it easy to control the sound volume of each source.

Sound cards: basic concepts and terms. In order to understand what sound cards are, you first need to understand the terms. Sound is vibrations (waves) propagating in air or other medium from a source of vibrations in all directions. When the waves reach the ear, the sensing elements located in it perceive the vibration and sound is heard.

Each sound is characterized by frequency and intensity (loudness).

Frequency is the number of sound vibrations per second; it is measured in hertz (Hz). One cycle (period) is one complete movement of the vibration source (back and forth). The higher the frequency, the higher the tone.

The human ear perceives only a small range of frequencies. Very few people hear sounds below 16 Hz and above 20 kHz (1 kHz = 1000 Hz). The lowest note on a grand piano is 27 Hz and the highest is just over 4 kHz. The highest sound frequency that FM broadcasters can transmit is 15 kHz.

The loudness of a sound is determined by the amplitude of the vibrations, which depends primarily on the power of the sound source. For example, a piano string sounds soft when struck lightly, because its range of vibration is small. If the key is struck harder, the vibration amplitude of the string increases. Loudness is measured in decibels (dB). The rustle of leaves, for example, has a loudness of about 20 dB, ordinary street noise about 70 dB, and a nearby thunderclap about 120 dB.

Assessment of the quality of the sound adapter. Three parameters are used to assess the quality of a sound adapter:

  • frequency range;
  • coefficient of nonlinear distortion;
  • signal-to-noise ratio.

The frequency response defines the frequency range in which the level of the recorded and reproduced amplitudes remains constant. For most sound cards, the range is 30 Hz to 20 kHz. The wider this range, the better the board.

The coefficient of nonlinear distortion characterizes the nonlinearity of the sound card, that is, the difference between the real curve of the frequency response and the ideal straight line, or, more simply, the coefficient characterizes the purity of sound reproduction. Every non-linear element causes distortion. The lower this ratio, the higher the sound quality.

Higher signal-to-noise ratios (in decibels) result in better sound reproduction.
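The decibel figure is a logarithmic ratio of signal amplitude to noise amplitude. A small sketch (Python; the `snr_db` helper is our own illustrative name):

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in decibels from two RMS amplitudes."""
    return 20 * math.log10(signal_rms / noise_rms)

# A card whose noise floor sits 10,000 times below the signal has an 80 dB SNR.
quiet_card = snr_db(1.0, 0.0001)
```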

Sampling. If the computer has a sound card, it is possible to record sound in digital (also called discrete) form, in which case the computer is used as a recording device. The sound card includes a small microcircuit - an analog-to-digital converter, or ADC (Analog-to-Digital Converter - ADC), which, when recording, converts an analog signal into a digital form that a computer can understand. Likewise, when played back, a Digital-to-Analog Converter (DAC) converts the audio into sound that our ears can hear.

The process of converting the original audio signal into digital form (Fig. 5.5), in which it is stored for subsequent playback, is called sampling, or digitization. The instantaneous values of the sound signal are stored at discrete points in time, called samples. The more often samples are taken, the more closely the digital copy of the sound matches the original.

Fig. 5.5. Scheme of audio-to-digital conversion

The first MPC standard provided for 8-bit audio. The bit depth characterizes the number of bits used to digitally represent each sample.

Eight bits define 256 discrete levels of the audio signal, and if you use 16 bits, then their number reaches 65,536 (of course, the sound quality is significantly improved). An 8-bit representation is sufficient for voice recording and playback, while 16-bit representation is required for music. Most older boards only support 8-bit audio, all modern boards support 16-bit or more.
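The level counts, and the roughly 6 dB of dynamic range gained per bit, are simple to compute. A sketch (Python, illustrative only):

```python
def quantization_levels(bits):
    """Number of discrete amplitude levels an n-bit sample can represent."""
    return 2 ** bits

def dynamic_range_db(bits):
    """Approximate dynamic range: about 6.02 dB per bit of resolution."""
    return 6.02 * bits
```

Here `quantization_levels(8)` gives 256 and `quantization_levels(16)` gives 65,536, while `dynamic_range_db(16)` comes to roughly 96 dB, the figure usually quoted for 16-bit (CD-quality) audio.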

The quality of the recorded and reproduced sound, along with the resolution, is determined by the sampling rate (the number of samples per second). In theory, it should be twice the maximum signal frequency (the upper frequency limit) plus a margin of about 10%. The upper limit of human hearing is 20 kHz; CD recordings use a 44.1 kHz sampling rate.

Audio sampled at 11 kHz (11,000 samples per second) sounds noticeably muddier than audio sampled at 22 kHz. Recording 16-bit stereo audio at a 44.1 kHz sampling rate consumes 10.5 MB of disk space per minute. With an 8-bit representation, monaural sound, and an 11 kHz sampling rate, the required disk space is reduced by a factor of 16. You can check this with the Sound Recorder program: record a sound fragment at different sampling rates and compare the sizes of the resulting files.
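Both the 44.1 kHz choice and the 16x space reduction can be verified with a few lines of arithmetic (Python, illustrative only):

```python
f_max_hz = 20_000                 # upper limit of human hearing
fs_hz = 2 * f_max_hz * 1.10       # twice the top frequency plus a ~10% margin

cd_minute = 44_100 * (16 // 8) * 2 * 60      # 16-bit stereo at 44.1 kHz
voice_minute = 11_025 * (8 // 8) * 1 * 60    # 8-bit mono at ~11 kHz
reduction = cd_minute / voice_minute         # exactly 16x smaller
```

The margin calculation lands at 44 kHz, right next to the 44.1 kHz actually standardized for CDs, and one minute of CD-quality stereo comes to about 10.6 million bytes, matching the 10.5 MB quoted above.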

Three-dimensional sound. One of the most challenging challenges for sound cards in a gaming system is handling 3D audio tasks. There are several factors that complicate the solution of problems of this kind:

  • different sound positioning standards;
  • hardware and software used to process 3D audio;
  • DirectX support issues.

Positional sound. Sound positioning is a technology common to all 3D sound cards; it involves adjusting parameters such as reverberation (reflection), equalization (balance), and the apparent "location" of the sound source. Together these components create the illusion of sounds coming from in front of, to the right or left of, or even behind the listener. The most important element of positional sound is the Head Related Transfer Function (HRTF), which describes how sound perception changes with the shape of the ear and the angle of the listener's head. Its parameters describe the conditions under which "realistic" sound is perceived quite differently when the listener's head is turned one way or the other. Multi-speaker systems that surround the user from all directions, as well as sophisticated algorithms that add controlled reverberation to the reproduced sound, make computer-synthesized sound even more realistic.

3D audio processing. An important factor in high-quality sound is the various ways of processing three-dimensional sound in sound cards, in particular:

  • centralized (the central processor is used to process three-dimensional sound, which leads to a decrease in the overall performance of the system);
  • processing in the sound card (3D acceleration), where a powerful digital signal processor (DSP) in the card itself handles the three-dimensional sound.

Sound cards that provide centralized 3D audio processing can be a major reason for the lower frame rate (the number of animation frames per second) when using 3D audio. In sound cards with a built-in audio processor, the frame rate remains almost unchanged when 3D audio is turned on or off.

As practice shows, the average frame rate of a realistic computer game should be at least 30 frames per second (fps). With a fast processor, for example a Pentium III 800 MHz, and any modern 3D sound card, this rate is achieved quite easily. With a slower processor, say a Celeron 300A running at 300 MHz, and a board with centralized 3D audio processing, the frame rate will be well below 30 fps. To see how 3D audio processing affects the speed of PC games, use the frame rate counter built into most games. Frame rate is directly related to processor utilization: the higher the demands on the processor, the lower the frame rate.

Technologies of three-dimensional sound and three-dimensional video images are of the greatest interest primarily for developers of computer games, but their use in a commercial environment is also not far off.

Connecting a stereo system to a sound card. The process of connecting a stereo system to a sound card is to connect them with a cable. If the sound card has an output for a speaker system or headphones and a stereo line-out, then it is better to use the latter to connect a stereo system. In this case, a better sound is obtained, since the signal arrives at the line output without going through the amplification circuit, and therefore is practically not subject to distortion, and only the stereo system will amplify the signal.

Connect this output to the auxiliary input of your stereo system. If your stereo does not have auxiliary inputs, you should use others, such as the input for a CD player. The stereo amplifier and the computer do not have to be placed side by side, so the connecting cable can be several meters long.

A number of stereo and radio receivers have a rear panel connector for a tuner, tape recorder and CD player. Using this connector, as well as the line in and out of the sound card, you can listen to the sound coming from the computer, as well as the radio broadcasts through the stereo speaker system.


Laboratory work. Learning how sound processing devices work

7.1 Purpose of work

Study the block diagram of the PC sound system and the modules that make up the sound system.

7.2 Work progress:

1) Get acquainted with the block diagram of the PC sound system.

2) Study the main components (modules) of the sound system.

3) Get acquainted with the principle of operation of the synthesizer module.

4) Get acquainted with the principle of operation of the interface module.

5) Become familiar with the principle of operation of the mixer module.

7.3 Report contents:

1) Topic, purpose, progress of work;

2) Formulation and description of the individual task;

7.4 Test questions

1) What are the main modules of the classic sound system?

2) What is the essence of sound synthesis?

3) Name the phases of the audio signal.

4) What methods of sound synthesis do you know?

5) List the modern interfaces of audio devices.

7.5 Methodical instructions

7.5.1 PC sound system structure

Structurally, the PC sound system is a sound card: either installed in a motherboard slot, or integrated on the motherboard or on an expansion card of another PC subsystem.

A classic sound system, as shown in Figure 23, contains:

1. module for recording and playing sound;

2. synthesizer module;

3. interface module;

4. mixer module;

5. speaker system.

Figure 23 - The structure of the PC sound system

7.5.2 Synthesizer module

The electronic music digital synthesizer of the sound system allows you to generate almost any sound, including the sound of real musical instruments. The principle of operation of the synthesizer is illustrated in Figure 24.

Synthesis is the process of recreating the structure of a musical tone (note). The sound signal of any musical instrument passes through several phases in time. Figure 24a shows the phases of the signal produced when a grand piano key is pressed. The exact shape of the signal is unique to each instrument, but three phases can be distinguished in it: attack, support and decay. The combination of these phases is called the amplitude envelope, whose shape depends on the type of musical instrument. The duration of the attack varies from a few milliseconds to several tens or even hundreds of milliseconds, depending on the instrument. In the support phase the signal amplitude remains almost unchanged, and the pitch of the musical tone is formed during this phase. The final phase, decay, corresponds to a section of fairly rapid decrease in signal amplitude.
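The three phases can be sketched as a simple piecewise-linear amplitude envelope. This is only an illustrative model: the phase durations and shapes below are assumed values, not measurements of any real instrument.

```python
import math

def amplitude_envelope(t, attack=0.02, support=0.3, decay=0.5):
    """Piecewise-linear amplitude envelope with the three phases named
    above: attack (rapid rise), support (amplitude almost unchanged)
    and decay (fairly rapid fall). Times are in seconds; the durations
    are illustrative only."""
    if t < 0.0:
        return 0.0
    if t < attack:                       # attack: rise from 0 to full level
        return t / attack
    if t < attack + support:             # support: hold the level
        return 1.0
    if t < attack + support + decay:     # decay: fall back to 0
        return 1.0 - (t - attack - support) / decay
    return 0.0

def tone(t, freq=440.0):
    """A tone is an excitation signal (here a plain sine) scaled by the envelope."""
    return amplitude_envelope(t) * math.sin(2 * math.pi * freq * t)
```

Scaling the excitation signal by such an envelope is the role the envelope input plays in the synthesizer scheme of Figure 24b.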

In modern synthesizers, sound is created as follows. A digital device, using one of the synthesis methods, generates a so-called excitation signal of a given pitch (note), whose spectral characteristics should be as close as possible to those of the simulated musical instrument in the support phase, as shown in Figure 24b. The excitation signal is then fed to a filter that models the frequency response of the real instrument; the amplitude-envelope signal of the same instrument is fed to the filter's other input. The resulting signal is then processed to obtain special sound effects, for example echo (reverberation) or chorus. Finally, digital-to-analog conversion is performed and the signal is filtered with a low-pass filter (LPF).

Key features of the synthesizer module:

Sound synthesis method;

Memory;

Possibility of hardware signal processing to create sound effects;

Polyphony - the maximum number of simultaneously reproduced sound elements.

The sound synthesis method used in a PC sound system determines not only the sound quality but also the composition of the system. In practice, sound cards use synthesizers that generate sound by the following methods.

Figure 24 - The principle of operation of a modern synthesizer: a - phases of the audio signal; b - synthesizer circuit

The FM synthesis method (Frequency Modulation Synthesis) uses at least two signal generators of complex waveforms to generate the voice of a musical instrument. The carrier-frequency generator produces the fundamental-tone signal, which is frequency-modulated by a signal of additional harmonics (overtones) that determine the timbre of a particular instrument. An envelope generator controls the amplitude of the resulting signal. FM synthesis provides acceptable sound quality and is inexpensive, but it offers no sound effects, so sound cards using this method are not recommended by the PC99 standard.
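The two-generator scheme can be illustrated in a few lines: a carrier whose phase is modulated by a second oscillator, with the modulation index controlling how many overtones (and hence what timbre) appear. All parameter values here are illustrative, not taken from any particular FM chip.

```python
import math

SAMPLE_RATE = 44100  # Hz, a typical sound-card rate

def fm_sample(n, carrier_hz=440.0, mod_hz=220.0, mod_index=2.0):
    """One output sample of a two-operator FM voice: the carrier's
    phase is shifted by the modulator signal, producing overtones."""
    t = n / SAMPLE_RATE
    modulator = math.sin(2 * math.pi * mod_hz * t)
    return math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator)

# Render 0.1 s of the voice (an envelope generator would then scale it)
voice = [fm_sample(n) for n in range(SAMPLE_RATE // 10)]
```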

Wave-table synthesis (Wave Table Synthesis, WT synthesis) uses pre-digitized samples of real musical instruments and other sounds, stored in a special ROM implemented as a memory chip or integrated into the WT generator's memory chip. A WT synthesizer provides high-quality sound generation, and this synthesis method is implemented in modern sound cards.
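A minimal sketch of the wave-table idea, with a sine wave standing in for a digitized instrument sample stored in ROM. A real WT generator would interpolate between table cells rather than use the nearest-neighbour lookup shown here.

```python
import math

# One stored period of a "sample" (a sine stands in for a real recording)
TABLE_SIZE = 256
wave_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wt_note(freq, n_samples, sample_rate=44100):
    """Replay the stored table at pitch `freq` by stepping through it
    at the matching rate (nearest-neighbour lookup)."""
    step = freq * TABLE_SIZE / sample_rate  # table cells per output sample
    phase = 0.0
    out = []
    for _ in range(n_samples):
        out.append(wave_table[int(phase) % TABLE_SIZE])
        phase += step
    return out

note = wt_note(440.0, 1000)  # a 440 Hz note replayed from the stored sample
```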

The memory of sound cards with a WT synthesizer can be expanded by installing additional memory elements (ROM) for storing banks of instruments.

Sound effects are formed using a special effect processor, which can either be an independent element (microcircuit), or be integrated into the WT synthesizer. For the vast majority of cards with WT synthesis, reverb and chorus effects have become standard.

Sound synthesis based on physical modeling uses mathematical models of sound production in real musical instruments to generate the signal in digital form, which is then converted into an audio signal by a DAC. Sound cards using physical modeling are not yet widespread, since they require a powerful PC.

7.5.3 Interface module

The interface module provides data exchange between the sound system and other external and internal devices.

The ISA interface was supplanted in sound cards by the PCI interface in 1998.

The PCI interface provides a wide bandwidth (for example, more than 260 MB/s in version 2.1), which allows audio streams to be transmitted in parallel. Using the PCI bus improves sound quality, providing a signal-to-noise ratio of more than 90 dB. In addition, the PCI bus allows cooperative processing of audio data, in which processing and transmission tasks are shared between the sound system and the CPU.
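To see why this bandwidth comfortably accommodates parallel audio streams, the raw bit rate of an uncompressed PCM stream is easy to compute. This is a rough back-of-the-envelope check that ignores bus overhead.

```python
def stream_bandwidth_mbps(sample_rate_hz, bits_per_sample, channels):
    """Raw bit rate of an uncompressed PCM stream, in Mbit/s."""
    return sample_rate_hz * bits_per_sample * channels / 1e6

cd_quality = stream_bandwidth_mbps(44100, 16, 2)   # stereo, 16 bit, 44.1 kHz
six_channel = stream_bandwidth_mbps(48000, 16, 6)  # 5.1, 16 bit, 48 kHz
# Even a 5.1 stream needs only a few Mbit/s, a small fraction of the bus.
```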

MIDI (Musical Instrument Digital Interface) is regulated by a special standard containing specifications for the hardware interface (the types of channels, cables and ports through which MIDI devices are connected to one another) as well as a description of the order of data exchange: a protocol for exchanging information between MIDI devices. Using MIDI commands, one can, for example, control lighting and video equipment during a band's stage performance. Devices with a MIDI interface are connected in series, forming a kind of MIDI network. It includes a controller (the control device, which can be either a PC or a musical keyboard synthesizer) and slave devices (receivers), which transmit information to the controller at its request. The total length of the MIDI chain is not limited, but the cable between two MIDI devices should be no longer than 15 meters.
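The data exchanged over this network consists of short byte messages. As an illustration of the protocol side of the standard, a Note On message is three bytes: a status byte carrying the message type and channel, then two 7-bit data bytes.

```python
def note_on(channel, note, velocity):
    """Build a standard 3-byte MIDI Note On message: status byte
    0x90 combined with the channel number (0-15), followed by the
    7-bit note number and velocity."""
    if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("MIDI fields out of range")
    return bytes([0x90 | channel, note, velocity])

msg = note_on(0, 60, 100)  # middle C on channel 1, moderate velocity
```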

Connecting a PC to a MIDI network is carried out using a special MIDI adapter, which has three MIDI ports: input, output and pass-through, as well as two connectors for connecting joysticks.

The sound card includes an interface for connecting CD-ROM drives.

7.5.4 Mixer module

The sound card mixer module performs:

Switching (connecting / disconnecting) sources and receivers of sound signals, as well as regulating their level;

Mixing (blending) several audio signals and adjusting the level of the resulting signal.
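These two functions, per-signal level control and mixing into one output, can be sketched as follows. This is a simplified software model; a real mixer module works on analog or streaming digital signals.

```python
def mix(signals, levels, master=1.0):
    """Mix equal-length sample lists: each output sample is the
    level-weighted sum of the inputs, scaled by a master level and
    clipped to [-1, 1], as a real mixer limits its output."""
    out = []
    for samples in zip(*signals):
        s = master * sum(level * x for level, x in zip(levels, samples))
        out.append(max(-1.0, min(1.0, s)))
    return out

# Two short signals mixed at different levels, master level at 0.8
mixed = mix([[0.5, 0.5], [0.25, -0.25]], levels=[1.0, 0.5], master=0.8)
```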

The main features of the mixer module are:

The number of mixed signals on the playback channel;

Regulation of the signal level in each mixed signal;

Regulation of the total signal level;

Amplifier output power;

The presence of connectors for connecting external and internal receivers / sources of audio signals.

The mixer module connects sound-signal sources and receivers through external or internal connectors. External audio connectors are usually located on the back panel of the system unit: Joystick/MIDI, for connecting a joystick or MIDI adapter; Mic In, for connecting a microphone; Line In, a line input for connecting any audio-signal sources; Line Out, a line output for connecting any audio-signal receivers; Speaker, for connecting headphones (earphones) or a passive speaker system.

Software control of the mixer is carried out either by means of Windows or using the mixer program supplied with the sound card's software.

Compatibility of the sound system with one of the sound card standards means that it will provide high-quality sound reproduction. Compatibility issues are especially important for DOS applications: each such application contains a list of the sound cards it is designed to work with.

The Sound Blaster standard is supported by DOS game applications whose soundtracks are programmed for the Sound Blaster family of sound cards.

Microsoft's Windows Sound System (WSS) standard includes a sound card and a software package focused primarily on business applications.

Examples of individual assignments

Model 1 - SB PCI CMI 8738 Sound Card

Figure 25 - External view of the sound card SB PCI CMI 8738

Description: Sound card with the ability to play sound in 5.1 format

Equipment type: Multimedia sound card

Chip: C-Media 8738

Analog Inputs: 2

Analog Outputs: 3

Connectors: external: Line In, Mic In, Front Speaker Out, Rear Speaker Out, Center/Subwoofer Out; internal: line-in, CD-in

Ability to connect 4 speakers: Yes

Dolby Digital 5.1 Support: Yes

EAX support: EAX 1.0 and 2.0

Interface: PCI

Ability to connect 6 speakers: Yes


Model 2 - SB PCI Terratec Aureon 5.1 PCI Sound Card

Figure 26 - External view of the SB PCI Terratec Aureon 5.1 PCI sound card

Description: 6-channel sound card.

3D sound: EAX 1.0, EAX 2.0, Sensaura, Aureal A3D 1.0, Environment FX, Multi Drive, Zoom FX, I3DL2, DirectSound 3D

Chip: C-Media CMI8738 / PCI-6ch-MX

DAC: 16 bit / 48 kHz

ADC: 16 bit / 48 kHz

Number of speakers: 5.1

Analog inputs: 1x unbalanced miniJack connector, miniJack microphone input, internal connectors: AUX, CD-in.

Analog outputs: miniJack audio outputs for connecting 5.1 speakers (front-out, rear-out, sub/center-out).

S/PDIF: 16 bit / 48 kHz

Digital I / O: Optical (TOSLINK) output, Optical (TOSLINK) input.

Sampling rate: 44.1, 48 kHz

System requirements (minimum): Intel Pentium III or AMD K6-III 500 MHz, 64 MB of memory

Interface: PCI 2.1, 2.2