Saturday, April 15, 2006

 

Verification environment moves up to Stratix III

Aldec has announced System Verification Environment (SVE) support for Altera Corporation's new high-end Stratix III FPGA device family.

SVE supports all aspects of system-level design development and verification. It includes an industry-leading common kernel HDL simulator, a set of online debuggers, code coverage, cross-probing tools and an industry-first integrated simulator server farm manager (SFM) for automatic verification of ultra-large system-level designs.

'Aldec and Altera engineering teams are working together to ensure Aldec's verification solutions are validated for Stratix III device support'.

'The integration of Altera's Quartus II design environment with Aldec's mixed-language verification solutions provides customers with a seamless migration path for validating Stratix III designs', stated Dr Stanley M Hyduke, CEO of Aldec.

'The relationship between our companies continues to grow and we look forward to supporting our mutual customers on the next generation of Stratix designs'.

'Engineers designing with Stratix III devices have a broad range of system-level requirements, including intellectual property integration and multilanguage support'.

'In addition to meeting these requirements, Stratix III devices can accommodate multiple processors, memories and peripheral devices', said Danny Biran, Vice President of Product and Corporate Marketing at Altera.

'Engineers can then use the Aldec SVE solution to accelerate the verification cycle for their Stratix III designs'.

To speed verification and debugging of Stratix III designs, SVE can also handle the OVA, PSL and SVA (SystemVerilog Assertions) assertion languages.

Language templates and predefined test suites ease testing requirements for system-level designs.

The newest trend in design automation is the use of code coverage driven intelligent test benches.

However, such test benches require a considerably larger number of simulators than the traditional test benches.

To handle a large number of test vectors and simulation results, Aldec has developed a server farm manager for Stratix III FPGAs capable of handling thousands of simulators in a highly efficient manner over corporate networks.

The SFM performs numerous operations and functions on design files such as running complex flows on multiple machines, storing, managing and comparing verification results, providing error reports and statistical summaries, optimising license utilisation, automatic network reconfiguration in case of failed nodes, and optimising the usage of corporate computer power.

The SFM option runs on 64bit Linux simulation server farms and handles mixed designs and test benches written in VHDL, Verilog, SystemVerilog and SystemC.

'We pay special attention to handling large designs in the most economical way, to meet the needs of designers using Stratix III devices'.

'This is why we developed full automation of the design verification process based on a powerful simulation server farm manager', commented Dr Hyduke.

SVE cosimulates EDIF netlist blocks with HDL RTL blocks, which allows legacy modules to be used with Stratix III devices.

Such capability is unique to Aldec's common kernel HDL verification environment and allows unlimited switching of legacy FPGA designs to the newest silicon from Altera.

SVE will be available in January 2007, and will incorporate system-level verification products to facilitate validation of high-end Stratix III devices.

Friday, April 14, 2006

 

Partnership unites development environments

Vast Systems has joined the Wind River Systems partnership programme.

Wind River is the global leader in device software optimisation (DSO), allowing companies to develop, run and manage device software faster, better, at lower cost and more reliably. Wind River's world-class partner ecosystem assures tight integration between Wind River's core technologies and Vast Systems' virtual prototyping solutions.

Vast's ultra-fast and cycle-accurate processor, bus and peripheral models - coupled with Vast's Comet and Meteor tools - provide a systems engineering environment for the design of embedded systems and system-on-chip (SoC).

With Vast solutions, users create, then simulate, analyse and optimise a model of their SoC in software, which results in an optimised architecture and a fast and accurate platform for software developers to use months before the actual hardware is available.

'System-level design touches all aspects of the design flow, from architecture to hardware design and software development'.

'Design flow-focused integrations enable our partners and customers to enjoy greater productivity for the entire design team', said Jeff Roane, Vice President of Marketing at Vast Systems.

'Wind River and Vast have had an evolving relationship for some time'.

'The integration of our technologies creates a complete pre- to post-silicon development environment covering architecture analysis and optimisation, firmware, middleware and application development'.

'Vast looks forward to aligning with Wind River's sales teams to create enhanced value for our mutual customers and our respective companies', said Erin Marshall, Vice President of North American Sales for Vast Systems.

Thursday, April 13, 2006

 

Distortion Measurement

1 Introduction


Distortion, in audio systems, is usually understood as meaning ‘non-linear distortion’, which is heard as a roughness and confusion of the sound. Non-linearity often refers to a lack of proper correspondence between instantaneous input signal voltage and instantaneous output signal voltage. In a linear system (that is, one with a linear transfer function), twice the input voltage produces twice the output voltage, but in practical systems this may not be the case.

2 THD (total harmonic distortion)

Plotting output versus input to determine the transfer function is not a useful method for determining distortion in audio systems though, for two reasons. Firstly, audio systems usually respond only to changing signals (not DC) in the audio frequency range. Secondly, they may suffer varying time delays (phase shift) at different frequencies. Both of these effects would produce a non-linear response to, say, a voltage ramp, even in a system free from non-linear distortion. Testing with a sine-wave input conveniently avoids these problems, while allowing the distortion to be quantified in terms of ‘harmonics’ (new components appearing at the output with frequencies that are multiples of the input sine-wave frequency). The term ‘Total Harmonic Distortion’ refers to the sum of all these components, measured rms (root mean square), as a percentage of the total (rms) output.

THD is commonly measured by using a notch filter to remove the input frequency from the output, allowing what is left to be measured. In this case the measurement should strictly be referred to as THD+noise, since it includes any random noise on the output.
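As a rough illustration of the definition (a spectrum-based calculation rather than the notch method just described), THD can be computed from an FFT. Everything here is hypothetical: a 1kHz tone with 1% 3rd and 0.5% 5th harmonic added, standing in for a device's output.

```python
import numpy as np

fs = 48000                 # sample rate, Hz
f0 = 1000                  # 1kHz test tone
n = fs                     # one second, so FFT bins land on exact 1Hz steps
t = np.arange(n) / fs

# Hypothetical device output: fundamental plus 1% 3rd and 0.5% 5th harmonic
y = (np.sin(2*np.pi*f0*t)
     + 0.010*np.sin(2*np.pi*3*f0*t)
     + 0.005*np.sin(2*np.pi*5*f0*t))

# Single-sided amplitude spectrum: bin k holds the amplitude of the k Hz component
spectrum = np.abs(np.fft.rfft(y)) / (n/2)

# Sum the harmonics rms and express as a fraction of the total rms output
harm_amps = [spectrum[k*f0] for k in range(2, 10)]
harm_rms = np.sqrt(sum(a*a/2 for a in harm_amps))
total_rms = np.sqrt(np.mean(y**2))
thd = harm_rms / total_rms
print(f"THD = {100*thd:.3f}%")   # about 1.118%, i.e. sqrt(1.0^2 + 0.5^2) percent
```

Note that the harmonics add on a root-sum-of-squares basis, which is why the 1% and 0.5% components give 1.118% rather than 1.5%.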

3 Crossover Distortion – how THD fails as a measure of audibility

In the early days of audio, one form of non-linearity dominated in all systems. Valves (tubes), tape recordings and transformers all tended to produce less output at high levels of input, in a symmetrical manner (affecting positive and negative sides of the signal equally). This ‘soft limiting’ squashes the peaks of the sine-wave, producing a form of distortion known as ‘odd-order’, which contains only odd harmonics of the input (3rd, 5th etc). Processes that cause asymmetric distortion generate only even harmonics (2nd, 4th etc) and are rarer. Odd-order distortion products are sometimes considered more objectionable than even-order, since they are not musically related in the way that even-order products are (by octave intervals).
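The symmetry argument above is easy to verify numerically. A minimal sketch, assuming numpy and using tanh to stand in for any symmetrical soft limiter:

```python
import numpy as np

fs, f0, n = 48000, 1000, 48000
t = np.arange(n) / fs
clean = np.sin(2*np.pi*f0*t)

# tanh squashes positive and negative peaks equally: symmetrical soft limiting
clipped = np.tanh(2.0 * clean)

spectrum = np.abs(np.fft.rfft(clipped)) / (n/2)
odd_harmonics  = [spectrum[k*f0] for k in (3, 5, 7)]
even_harmonics = [spectrum[k*f0] for k in (2, 4, 6)]

print("odd: ", [f"{a:.4f}" for a in odd_harmonics])   # clearly present
print("even:", [f"{a:.4f}" for a in even_harmonics])  # numerical noise only
```

Because tanh is an odd function, the output is an odd function of the input sine, and only odd harmonics appear; the even bins come out at floating-point noise level.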

With the introduction of transistor amplifiers though, a new form of distortion arose, known as ‘crossover distortion’ and caused by a kink in the transfer characteristic as the sine-wave crosses zero. This is a form of ‘high-order’ distortion, producing odd harmonics (3rd, 5th, 7th, 9th etc) which extend right up the frequency range with little reduction in amplitude. Because the ear analyses sounds in terms of frequency components, and is most sensitive to frequencies in the 2kHz-8kHz region, it turns out that we are particularly sensitive to even small amounts of crossover distortion, compared with the ‘low-order’ distortion of valves and tape, which generate mostly 3rd harmonic.

While early experiments had determined that 0.1% THD (-60dB) was the very minimum that could be heard, it soon became apparent that this was not the case for transistor power amplifiers, where 0.01% or less could be heard as harshness on the sound. Nevertheless, THD measurements continued to be quoted, and audio measurement itself got a bad reputation among the ‘hi-fi’ fraternity, who turned to listening tests as the only way to assess audio equipment.

4 Attempts at Weighted Distortion measurement

What was really needed was a subjectively valid method of measurement. To simply assume that measurements of noise, distortion or anything else are meaningful in themselves is to miss the point. Audio measurement, if it is to give useful results, should never be about quantifying a system in purely engineering terms. It must be about using methods that have already been shown to correspond to subjective effect, and such methods must rely in the first instance on listening tests. So why not just listen? There are many good reasons for measuring, rather than listening. Our ears vary greatly, between individuals and from day to day (depending on what levels of sound we have been exposed to), so they are not reliable tools. They also have difficulty distinguishing one form of corruption (such as distortion) in the presence of another (such as noise, or flutter, or the reverberation of the listening room). Then there is the problem of the long signal chain, in which recordings pass through numerous items, for example from microphone to mixer to tape recorder to transmitter to broadcast receiver to loudspeaker. If we are to guarantee that what we hear will be indistinguishable from the original, then each part of the chain must contribute a lower level of distortion or noise than our ears could possibly detect, if their sum total is to be inaudible.

Many researchers have tried over the years to devise schemes for measuring distortion that gave subjectively valid results, yet they mostly seem to have got it wrong! A commonly quoted method, for example, involves multiplying the level of each harmonic according to its number, yet such schemes are clearly flawed, as they take no account of the fact that our hearing responds less and less to frequencies above 10kHz and hardly at all to 15 or 20kHz, above which few people hear anything at all. They are further flawed by their continued use of rms measurement, which greatly underestimates crossover distortion (and digital distortions), as will be explained.

5 Distortion Residue – a meaningful measurement at last

There is in fact a relatively simple way to weight distortion, which works remarkably well across all devices from tape to power amps and digital systems, and has been incorporated into a standard (IEC ). Lindos Electronics recommends this, and has chosen to call the result ‘Distortion Residue’ for easy identification as distinct from THD. It involves nulling out the fundamental of a 1kHz test tone, and then measuring what remains just as if it were noise, using the ITU-R 468 weighted measurement method (which Lindos Electronics refers to as simply 468-weighted). This emphasises high-order harmonics around 6kHz by 12dB but attenuates those above 10kHz, and ignores those above 20kHz. Because it uses a Quasi-Peak rectifier, the method also gives proper temporal weighting to brief bursts that would be largely ignored by the averaging inherent in rms measurement.
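A minimal sketch of the nulling step only, assuming numpy and made-up 5th and 7th harmonic levels; the 468 weighting filter and Quasi-Peak rectifier that make the measurement meaningful are omitted here, so the figure below is an unweighted rms residue, not a true Distortion Residue:

```python
import numpy as np

fs, f0, n = 48000, 1000, 48000
t = np.arange(n) / fs

# Hypothetical amplifier output: 1kHz tone with high-order crossover-style harmonics
y = (np.sin(2*np.pi*f0*t)
     + 0.003*np.sin(2*np.pi*5*f0*t)
     + 0.002*np.sin(2*np.pi*7*f0*t))

# Null the fundamental - here by zeroing its FFT bin; a real meter uses a notch filter
spec = np.fft.rfft(y)
spec[f0] = 0
residue = np.fft.irfft(spec, n)

# Measure what remains, relative to the total output, as a dB figure
res_db = 20*np.log10(np.sqrt(np.mean(residue**2)) / np.sqrt(np.mean(y**2)))
print(f"unweighted residue = {res_db:.1f} dB")   # about -48.9 dB for these levels
```

A real 468-weighted measurement would then boost the components near 6kHz by up to 12dB before detection, which is why Distortion Residue figures come out higher than THD.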

6 Why the Distortion Residue method works

When we listen to speech and music, the content is mostly in the 300Hz to 3kHz region. Because real sounds are complex and constantly changing, any low-order distortion products from this region will appear more as random noise than as individual harmonic tones, falling predominantly in the 900Hz to 9kHz part of the spectrum (3rd harmonics mostly). If they were tones, we could fairly assess them using the A-weighting curve, derived from equal-loudness contours for our hearing process, but for noise such weighting is not valid. Rather than being most sensitive in the 2kHz region, as the A-weighting curve would suggest, our ears are much more sensitive in the 6kHz region, for reasons to do with spectral density that are explained elsewhere. This is why the 468-weighting curve, which was designed to reflect our sensitivity to noise rather than tones, should be used, with its 12.2dB of emphasis at 6.3kHz.

If we now consider speech or music subject to crossover distortion in a power amplifier, another very important consideration arises. Typical speech and music waveforms do not cross zero very often. There may be violins or cymbals contributing significant high frequencies to the waveform, but most of the time these will be riding on top of bass notes such that zero-crossings are much less frequent than they would be for a pure tone. Where a relatively pure tone does arise from a violin note, it will not have many audible harmonics, whatever the distortion mechanism, because the 3rd harmonic of a 3kHz tone is at 9kHz, and the 5th is at 15kHz which will not be heard by most listeners. Most people are surprised to find that they can tell absolutely no difference between a 6kHz square wave and a 6kHz sine wave, but of course the 3rd harmonic is at 18kHz, and even listeners who can hear to 18kHz will usually have greatly reduced sensitivity at this frequency.

The distortion residue from a typical speech or music signal passed through a power amplifier is therefore better imagined as a series of ‘clicks’ occurring every time the waveform passes through zero. Their total contribution to a Fourier analysis over a period of seconds, as shown on a spectrum analyser, is very low, hence the low THD figures traditionally obtained from ‘bad’ amplifiers, but our ears do not analyse over seconds. Each hair-cell in the cochlea responds as a narrow-band filter over a period of milliseconds (the higher the frequency, the shorter the response time) and so it can give a brief, comparatively high-level response, heard as a click, to each crossover event. Whether we hear a given event depends on this short-term response, not some average over a relatively long period, as is implicit in any rms (root mean square) measurement.

The Quasi-Peak rectifier used for 468-weighted measurement was based on listening tests in which subjects were asked to rate the loudness of various clicks, tone bursts, and other sounds of various duration, and with various repetition rates, against a reference tone. It therefore reflects pretty well the temporal aspects of hearing, and does not diminish the effect of each click by averaging, though it gives less weight to very short bursts, which our ears do not have time to respond to fully, or to bursts with a low repetition rate, which our brains give less importance to.

7 Distortion Residue Measurement in practice

Distortion Residue measurement works, and its use is to be encouraged (hence its adoption as the sole method of distortion measurement in the Lindos MS10). It will always give a result higher than the corresponding THD measurement, but this is an advantage, making for easier measurement. On tape machines for example, where distortion is mostly 3rd-harmonic it will give a result some 8dB worse than a corresponding THD measurement, and if significant modulation noise is present, as on compact cassette, then this too will be properly emphasised leading to an even higher figure. On power amplifiers, the result may be 10 or 20dB worse than a corresponding THD measurement, depending on the harshness (order) of the crossover characteristic, giving quite a reliable indication of audible performance. Where once it used to be said that 0.1% THD (-60dB) represented the threshold of audibility for distortion, except for power amplifiers, it is probably now fair to say that 0.3% Distortion Residue (-50dB) represents the threshold of audibility regardless of what equipment is being measured. Digital systems normally produce distortion that is more like noise than harmonics, and faulty digital convertors are likely to produce repeating ‘clicks’ rather than tones, in much the same way as crossover distortion does, so the method is very well suited to digital measurements.

It should be noted that attempting to measure distortion residue at frequencies other than 1kHz would not be useful. At low frequencies (100Hz) the effect of the weighting filter would be to severely attenuate all the low order harmonics. Whilst weighted measurement of low frequency distortion might be a useful concept, emphasising the enhanced audibility of harmonics over their fundamental, the weighting curve would have to be normalised to a low frequency (such as 100Hz) rather than 1kHz, and should probably be closer to A-weighting, since low frequency distortion is commonly noticed on sustained bass notes.

8 Other Types of Distortion Measurement

Intermodulation distortion measurement is a valid way of measuring the interaction between components of different frequencies, and three main methods have been used, as explained elsewhere. The quoting of figures for intermodulation distortion never seems to have caught on though, possibly because the results from tape machines for example are atrociously high. This, coupled with the fact that tape recordings can sound very good, lends weight to the assertion that intermodulation distortion is not in itself as significant as some would have us believe.

The same is true of ‘Transient distortion measurement’, a topic that gained popularity for a while in audiophile circles. Provided that recordings are limited to 20kHz, as they should be, all that is needed is for every part of the signal path to be able to handle 20kHz at maximum level, a statement that again gains support from the fact that some of the best recordings were made on analog tape recorders, which struggle to manage this, and certainly cannot record square waves!

This is a complex topic, but Lindos suggests that distortion residue measurement is generally all that is required to indicate whether a signal path is likely to give audible distortion on programme material.

 

Understanding Decibels (dB)


The decibel is widely used as a measure of the loudness of sound, but it is actually only a convenient way of specifying the ratio between two quantities, so a 6' 7" man could be said to be 1dB taller than a 6' man! An engineer might use it for such things in jest, but in general the dB is commonly used to express a wide variety of measurements in acoustics and electronics, where it helps in giving a manageable view of things that can cover a huge range of values, where only geometric increase or decrease is important. It is especially useful in audio because our ears follow a roughly logarithmic law, referred to by psychologists as Weber's law. The smallest change in loudness that we can detect is about 1dB, or roughly a ten percent change, regardless of whether the sound is very quiet or very loud; in other words we hear loudness 'geometrically' rather than 'arithmetically'.

The decibel is a dimensionless unit like percent and only has meaning when levels are specified relative to a reference level. In audio, the reference level is often specified with a suffix after the dB notation and a brief summary of the most commonly used standards follows:

dBm

dBm specifies a power level on a line, referenced to 1mW. In the early days of audio, signals were passed along 'matched' transmission lines, meaning that both the source impedance and the load impedance were the same as the 'characteristic impedance' of the line, usually 600 ohms. This ensured that all energy sent down the line ended up in the load resistance, with none reflected back to cause frequency response anomalies or echoes.

dBu

dBu specifies that the voltage amplitude of an audio signal is referenced to 0.775 volts rms. This is the same voltage as would be needed to dissipate 1mW into a 600 ohm resistor, and is kept for historical reasons, though 600 ohm matched transmission is rarely used today. The standard for studio interconnection today uses low source impedance (<50 ohms) and high input impedance (10 or 20k ohm). The use of a 'solid' drive ensures more accurate levels, and reflections are not a problem over short paths (say <50m). The u is thought to stand for 'universal' or maybe 'unloaded'. 0dBu is the universal 'Alignment level' within many broadcast organisations and recording studios, with signal levels allowed into the 'headroom' region which may be 24dB, 18dB (EBU recommendation for programme interchange) or 8dB (EBU standard for radio and TV broadcasts and paths).
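The 0.775V figure follows directly from the 1mW/600 ohm definition, and makes a handy sanity check:

```python
import math

# V = sqrt(P * R): the voltage that dissipates 1mW in 600 ohms
v_ref = math.sqrt(0.001 * 600)
print(f"0dBu reference = {v_ref:.4f} V")    # 0.7746 V, quoted as 0.775 V

def dbu(volts_rms):
    """Express an rms voltage in dBu (relative to 0.775 V)."""
    return 20 * math.log10(volts_rms / v_ref)

print(f"1 V rms = {dbu(1.0):+.2f} dBu")     # about +2.2 dBu
```

The +2.2dB offset between dBu and dBV (1V reference, below) is worth remembering when comparing professional and consumer level specifications.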

dBV

dBV specifies that the reference amplitude is 1V. This is sometimes used for consumer levels, but dBu are to be preferred because most audio instruments are calibrated in dBu.


dB SPL

This is a measure of sound pressure level, relative to 20 micropascals (µPa = 1×10^-6 Pa), an arbitrary figure chosen as being about the quietest sound a human can hear. This is roughly the sound of a mosquito flying 3 metres away!

dB FS

This means relative to full-scale or the point at which clipping occurs in a system. It is frequently used in referring to digital signal levels which do not in themselves correspond to any particular voltage, until they are converted using a D-A convertor.

dB AL (Recommended by Lindos)

This means dB relative to Alignment Level, which is a reference level or anchor point against which all else in an audio system is measured. Alignment level may correspond to different voltage levels in various parts of the system or signal path (though commonly it will be 0dBu), and in a digital recording it may correspond with varying digital values, though commonly it will be -18dB FS (or -24 or -12) as defined in EBU recommendations. Importantly it is the reference at any point in a system above which Headroom is defined and relative to which noise is measured.

The great merit of this system is that a noise measurement made relative to Alignment Level will always be a reasonable guide to the true intrusiveness of the noise onto typical material, regardless of the headroom. Where less headroom is available for transmission, compression or limiting will usually be used to reduce brief peaks, but this has little effect on overall perceived loudness. A recording may start life with 24dB of headroom, and end up with only 9dB of headroom at the listener’s radio or television. Its ‘dynamic range’ has indeed been reduced, but it will not sound any noisier, assuming that it is the peaks that have been limited. Instead it will have lost ‘sparkle’ and impact. Specifying noise and headroom separately reflects these different qualities properly.

Decibels in Audio Recording and Reproduction

The dynamic range of the human ear is phenomenal: through a complex gain-adjustment system it covers a range of around 140dB. Accurately capturing and reproducing the quietest and loudest sounds audible by humans is a formidable task.

The diagram below shows the range of levels handled by various stages in a typical audio chain, from live sound to loudspeaker. Note how most devices cannot cope with the full range without the use of compression or limiting.

Useful Conversion Tables

It is useful to remember that 6dB is approximately a factor of 2, 10dB approximately a factor of 3, and 20dB exactly 10 times, when referring to voltage levels. When levels are multiplied, dB are added. Thus 26dB represents 20 times.

Power levels, being proportional to the square of the voltage, have different ratios, so that 3dB is twice the power, 6dB is 4 times the power, and 10dB is ten times the power. Usually, these days, it is voltage ratios that are relevant, and the following table will be found useful to commit to memory:


0dB x 1
+1dB x 1.1
+3dB x 1.414 (root 2)
+6dB x 2
+10dB x 3
+20dB x 10
+30dB x 30
+40dB x 100
+50dB x 300
+60dB x 1000 etc

and:

-1dB x 0.9
-3dB x 0.707
-6dB x 0.5
-10dB x 0.3
-20dB x 0.1
-40dB x 0.01
-60dB x 0.001 etc
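These tables follow from the 20*log10 definition for voltage ratios (10*log10 for power), and can be checked in a few lines:

```python
import math

def voltage_db(ratio):
    """dB for a voltage (amplitude) ratio."""
    return 20 * math.log10(ratio)

def power_db(ratio):
    """dB for a power ratio."""
    return 10 * math.log10(ratio)

print(f"{voltage_db(2):.2f}")      # 6.02  -> '6dB is approximately x2'
print(f"{voltage_db(10):.2f}")     # 20.00 -> '20dB is exactly x10'
print(f"{voltage_db(20):.2f}")     # 26.02 -> x20 = x2 then x10, and the dB add: 6+20
print(f"{power_db(2):.2f}")        # 3.01  -> 3dB is twice the power
print(f"{voltage_db(0.707):.2f}")  # -3.01 -> the -3dB x 0.707 table entry
```

The x3 entries (10dB, 30dB, 50dB) are of course really x3.16, rounded for easy mental arithmetic.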

 

Noise Measurement

Introduction to noise measurement

Noise in general refers to unwanted sound, often loud, but in audio systems it is the low-level hiss or buzz that intrudes on quiet passages that is of most interest. All recordings will contain some background noise that was picked up by microphones, such as the rumble of air conditioning, or the shuffling of an audience, but in addition to this every piece of equipment which the recorded signal subsequently passes through will add a certain amount of electronic noise, which ideally should be so low as to contribute insignificantly to what is heard.

Microphones, amplifiers and recording systems all add some electronic noise to the signals passing through them, generally described as hum, buzz or hiss. All buildings have low-level magnetic and electrostatic fields in and around them emanating from mains supply wiring, and these can induce hum, commonly at 50 or 100Hz, into signal paths. Screened cables help to prevent this, and on professional equipment, where longer interconnections are common, balanced signal connections (most often with XLR connectors) are usually employed. Hiss is the result of random signals, often arising from the random motion of electrons in transistors and other electronic components, or the random distribution of oxide particles on analog magnetic tape. It is predominantly heard at high frequencies, sounding like steam or compressed air.

Attempts to measure noise in audio equipment as rms voltage, using a simple level meter or voltmeter, do not produce useful results; a special noise-measuring instrument is required. This is because noise contains energy spread over a wide range of frequencies and levels, and different sources of noise have different spectral content. For measurements to allow fair comparison of different systems they must be made using a measuring instrument that responds in a way that corresponds to how we hear sounds. From this, three requirements follow. Firstly, it is important that frequencies above or below those that can be heard by even the best ears are filtered out and ignored, by bandwidth limiting (usually 22Hz to 22kHz). Secondly, the measuring instrument should give varying emphasis to different frequency components of the noise, in the same way that our ears do; a process referred to as ‘weighting’. Thirdly, the rectifier, or detector, which is used to convert the varying alternating noise signal into a steady positive representation of level, should take time to respond fully to brief peaks to the same extent that our ears do; it should have the correct ‘dynamics’.
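Of the three requirements, the first (bandwidth limiting) is the easiest to illustrate. A sketch assuming numpy and an idealised brick-wall limit (a real meter uses analogue or IIR filters, not FFT masking):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 96000, 96000
noise = rng.standard_normal(n)        # white noise, flat out to fs/2 = 48kHz

# Band-limit to the audio measurement band (22Hz-22kHz) by zeroing FFT bins
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(n, 1/fs)
spec[(freqs < 22) | (freqs > 22000)] = 0
limited = np.fft.irfft(spec, n)

rms = lambda x: np.sqrt(np.mean(x**2))
print(f"before: {rms(noise):.3f}  after: {rms(limited):.3f}")
# White noise power is proportional to bandwidth, so only 22k/48k of the
# power (about 68% of the rms voltage) survives the band limiting
```

This is why quoting a noise voltage without stating the measurement bandwidth is meaningless: a wider-band instrument will simply read more of the same noise.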

The proper measurement of noise therefore requires the use of a specified method, with defined measurement bandwidth, weighting curve and rectifier dynamics, and two main methods defined by standards are currently in common use: A-weighting, and ITU-R 468, formerly known as CCIR weighting.

A-weighting

A-weighting uses a weighting curve based on ‘equal loudness contours’ that describe our hearing sensitivity to pure tones, but it turns out that the assumption that such contours would be valid for noise components was wrong. While the A-weighting curve peaks by about 2dB around 2kHz, it turns out that our sensitivity to noise peaks by some 12dB at 6kHz. Another weakness of A-weighting is that it is usually combined with an rms (root mean square) rectifier, which measures mean power, with no attempt made to account for proper hearing dynamics.

ITU-R 468 weighting

When measurements started to be used in reviews of consumer equipment in the late 1960s it became apparent that they did not always correlate with what was heard. In particular, the introduction of Dolby B noise-reduction on cassette recorders was found to make them sound a full 10dB less noisy, yet they did not measure 10dB better. Various new methods were then devised, including one which used a harsher weighting filter and a quasi-peak rectifier, defined as part of the German DIN 45 500 ‘Hi-Fi’ standard. This standard, no longer in use, attempted to lay down minimum performance requirements in all areas for ‘High Fidelity’ reproduction.

The introduction of FM radio, which also generates predominantly high-frequency hiss, also showed up the unsatisfactory nature of A-weighting, and the BBC Research Department undertook a research project to determine which of several weighting filter and rectifier characteristics gave results that were most in line with the judgment of a panel of listeners, using a wide variety of different types of noise. BBC Research Department Report EL-17 formed the basis of what became known as CCIR recommendation 468, which specified both a new weighting curve and a quasi-peak rectifier. This became the standard of choice for broadcasters worldwide, and it was also adopted by Dolby, for measurements on its noise-reduction systems which were rapidly becoming the standard in cinema sound, as well as in recording studios and the home.

Though they represent what we truly hear, CCIR weighted noise figures are typically some 11dB worse than A-weighted, a fact that brought resistance from marketing departments reluctant to put worse specifications on their equipment than the public had been used to. Dolby tried to get round this by introducing a version of their own called CCIR-Dolby which incorporated a 6dB shift into the result (and a cheaper average reading rectifier), but this only confused matters, and was very much disapproved of by the CCIR.

With the demise of the CCIR, the 468 standard is now maintained as ITU-R 468 by the International Telecommunication Union, and forms part of many national and international standards, in particular those of the IEC (International Electrotechnical Commission) and the BSI (British Standards Institution). It is by far the best way to measure noise, and the only way that allows fair comparisons; and yet the flawed A-weighting has made a comeback in the consumer field recently, for the simple reason that it gives the lower figures that are considered more impressive by marketing departments.

Signal to noise ratio and Dynamic range

Hi-fi equipment specifications tend to include the terms ‘signal to noise ratio’ and ‘dynamic range’, both of which are confusing and best avoided. Noise has to be measured with reference to something, but this should be ‘alignment level’. Signal to noise ratio has no real meaning, as audio signals are constantly changing, so there is no such thing as ‘signal level’. Dynamic range used to mean the difference between maximum level and noise level, but maximum level is often hard to define, for example on analog tape recordings, and the term has become corrupted by a tendency to refer to the dynamic range of CD players as meaning the noise level on a blank recording with no dither, in other words just the analog noise content at the output. This is not particularly useful, especially since many CD players incorporate automatic muting in the absence of signal to make them appear even quieter!

Professionals measure noise in dB below alignment level, which is a reference point above which ‘headroom’ exists up to maximum permitted level. Professionals often allow 18dB of headroom, as recommended by the EBU (European Broadcasting Union), so a noise level of –60dB ITU-R 468 would represent a dynamic range of 78dB, which if measured A-weighted might come out 11dB better at 89dB. A noise level of -60dB AL would be considered reasonably good by professionals, with –68dB representing the best attainable from 16-bit digital audio (noise shaped), and more than good enough for most purposes.

Audiophiles may talk in terms of 96 to 120dB dynamic range, but they often fail to refer to any measurement standard, making the figures meaningless. Attempts to calculate the dynamic range of digital audio on the basis that 16 bits represents a ratio of 65536:1 or 96dB are invalidated by the fact that the full digital count represents the peak possible level, rather than the rms equivalent of the maximum possible sinewave, while the minimum count of one has little to do with the noise level, which depends on the type of dither (or noise-shaping) used. They also fail to take any account of weighting for subjective validity.
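The arithmetic behind the invalid 96dB claim, and the first of the corrections just mentioned (the full count is a peak value, while the largest undistorted sine has an rms 3dB lower), can be made explicit; the noise floor itself still depends on the dither used, so neither figure is a true dynamic range:

```python
import math

bits = 16

# Naive claim: full count over a count of one, treated as a voltage ratio
naive_db = 20 * math.log10(2**bits)
print(f"naive 2^16 ratio: {naive_db:.1f} dB")          # 96.3 dB

# The full count is a *peak* value; the largest undistorted sine has
# rms = peak/sqrt(2), so its rms sits about 3dB below the naive figure
sine_rms_db = naive_db - 20 * math.log10(math.sqrt(2))
print(f"full-scale sine rms: {sine_rms_db:.1f} dB")    # 93.3 dB
```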
