For all these huge power-sucking multichannel amplifiers the supply of choice seems to be a (typically toroidal) back-breaking line-frequency transformer (or two), followed by a simple diode rectifier and huge storage caps. Pure 1960s technology. As any engineer knows, the resulting input current is nowhere near a sine wave; it comes as highly distorted pulses, and the output voltage is subject to line dips and sags. Much of the time the diodes are blocking and no power is taken from the powerline, hence the huge caps. In technical terms the power factor is much less than 1, typically around 0.6 here. Think of it as 40% of the circuit's power capability being wasted! On a 15A / 1800W house circuit only about 1080 watts makes it into the amp; at 70% amplifier efficiency (see http://sound.westhost.com/efficiency.htm) you can get about 750 watts out of it, if you're lucky and the line doesn't sag.
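
To make that arithmetic concrete, here's a back-of-the-envelope sketch in Python. The 120 V, 15 A, 0.6 power factor, and 70% efficiency figures are the same assumptions as above, not measurements of any particular amp:

```python
# Rough power budget for a conventional supply (line-frequency transformer,
# diode rectifier, big caps) on a 15 A / 120 V house circuit.
# All numbers are illustrative assumptions.

CIRCUIT_VA = 120 * 15      # 1800 VA available from the wall
POWER_FACTOR = 0.6         # typical for a simple rectifier feeding big caps
AMP_EFFICIENCY = 0.70      # rough amplifier efficiency figure

real_power_in = CIRCUIT_VA * POWER_FACTOR       # watts actually reaching the amp
audio_power_out = real_power_in * AMP_EFFICIENCY

print(f"Power into the amp: {real_power_in:.0f} W")    # ~1080 W
print(f"Audio power out:    {audio_power_out:.0f} W")  # ~756 W
```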

It's common in the electronics industry (and a European requirement for most electronic items) that high-powered devices be "power factor corrected" or otherwise designed to reduce input current distortion. Current is then drawn from the line for nearly 100% of each cycle, and the filter capacitors can be made somewhat smaller without affecting performance. With a 0.99 power factor you could get about 1780 watts into an amplifier, and nearly 1250 watts out under the same conditions. That's a whopping 500-watt difference on the same 15A circuit!
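
Extending the same toy calculation to compare the two front ends (again, the 0.99 power factor and 70% efficiency are just my assumed numbers):

```python
# Same illustrative budget, comparing a conventional supply with a
# power-factor-corrected one on the same 15 A / 120 V circuit.
CIRCUIT_VA = 120 * 15       # 1800 VA from the wall
AMP_EFFICIENCY = 0.70

for pf in (0.6, 0.99):      # conventional vs. PFC front end
    p_in = CIRCUIT_VA * pf
    p_out = p_in * AMP_EFFICIENCY
    print(f"PF {pf}: {p_in:.0f} W into the amp, {p_out:.0f} W of audio out")

# PF 0.6:  1080 W in,  756 W out
# PF 0.99: 1782 W in, 1247 W out  -> roughly 500 W more on the same circuit
```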

So I ask, why don't sound purists insist on low distortion on the powerline input? And when will the US catch up with Europe and insist on low distortion powerline currents?
_________________________
RPL