I enjoy a good deal of music in a surround mode.
I think the earliest and simplest of the surround sound ‘encoding/decoding’ schemes from a two-channel source was merely this: if a particular part of the two signals appears only in the left or right channel, leave it alone; if a part of the signal exists in-phase in both the left and right channels, remove it from the left and right outputs and put it in a center channel; and if a part of the signal exists in both channels at or near 180° out of phase, remove it from the left and right outputs and place it in the surround channel(s).
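For the curious, the core of that idea is just a sum/difference matrix. Here is a minimal Python/numpy sketch (a toy illustration only; real decoders such as DPLII add active steering logic, band-limiting, and a delay on the surround feed):

```python
import numpy as np

def passive_matrix_decode(left, right):
    """Toy sum/difference decode: in-phase content reinforces in the
    center output; anti-phase content cancels there and survives in
    the surround output."""
    center = (left + right) / 2.0
    surround = (left - right) / 2.0
    return center, surround

fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)

# The same signal in-phase in both channels: all center, no surround.
c, s = passive_matrix_decode(tone, tone)
assert np.allclose(c, tone) and np.allclose(s, 0)

# One channel inverted (180 degrees out): all surround, no center.
c, s = passive_matrix_decode(tone, -tone)
assert np.allclose(c, 0) and np.allclose(s, tone)
```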
If a sound recording engineer is recording a ‘live’ event with a simple* two or three microphone rig, sounds from the audience, like applause, arrive at the microphones in a mix of in- and out-of-phase relationships, depending on where each individual clap originates. When this ‘live’ recording, even without specific ‘encoding,’ is played back through DPLII or a similar mode, the ‘decoding’ still takes place. The result is not to aurally place each clap at the same point relative to the microphones as in the original environment, but listening in a surround sound mode places the audience sounds ‘all around,’ similar to, though not exactly the same as, attending the performance live.
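A back-of-the-envelope calculation shows why a single broadband clap ends up in that mix. Assume (a made-up but plausible figure) that a clap from one side reaches the far microphone 0.5 ms after the near one, roughly 17 cm of extra path at about 343 m/s:

```python
tau = 0.5e-3  # assumed inter-channel arrival delay, seconds

# For a fixed delay, the inter-channel phase difference grows with frequency.
for f in (250, 500, 1000, 2000):
    phase_deg = (360.0 * f * tau) % 360.0
    print(f"{f:>4} Hz: {phase_deg:5.1f} degrees between channels")

# 1000 Hz lands at 180 degrees (steered toward surround), while
# 2000 Hz wraps back to 0 (steered toward center): one clap gets
# smeared across the whole decoding matrix.
```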
The same in- and out-of-phase decoding of a live recording affects how a surround sound system plays back the reverberation and other aural cues originating within the venue. Will it sound exactly the same at home? No. Will it add to the feel of a ‘live’ event? Yes.
What about multiple-source, mixed-down studio recordings? Even in the early days of stereo, engineers knew that if a voice were placed in both channels in-phase, it would appear in the center of the playback soundstage. If purposely placed in both channels out-of-phase, the voice would instead seem to come from all around rather than from a localized point. Producers and engineers were artistically shaping the playback of stereo material for many years before surround sound by adjusting the relative level and phase of the various signals within the mix. If the engineer is aware of what happens in surround sound processing, the mix can be refined further so that certain sounds are ‘dispersed’ when heard in stereo and ‘spread’ around, or steered to specific locations, when heard in a surround sound mode. This can sound gimmicky if applied crudely, or enhance the listening experience if applied with finesse.
The recordings that, to my mind, may sound better in one of the stereo modes are those in which everything is in-phase and simply spread left and right across the soundstage by relative level, commonly called ‘panning’ the sound to some degree left or right.
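To make the contrast concrete, here is a toy numpy comparison (the constant-power pan law, the 220 Hz tone, and the pan angle are all just illustrative choices) of a level-panned voice versus the phase-inversion trick, run through the same sum/difference decode sketched earlier:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 220 * t)

# Level-only placement: both channels stay in-phase; only the
# relative level differs. Constant-power pan: 0 is hard left,
# pi/2 is hard right.
theta = np.pi / 3  # somewhat right of center
left, right = voice * np.cos(theta), voice * np.sin(theta)
center, surround = (left + right) / 2, (left - right) / 2
# Small ratio: the panned voice stays up front after decoding.
print("panned, surround/center energy:",
      np.sum(surround**2) / np.sum(center**2))

# The studio phase trick: equal level, one channel inverted.
left, right = voice / np.sqrt(2), -voice / np.sqrt(2)
center, surround = (left + right) / 2, (left - right) / 2
# Center is silent; the voice steers entirely to the surround feed.
print("phase-flipped, center energy:", np.sum(center**2))
```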
Of course one of the stereo modes may also work better, no matter what type of two-channel recording, if you are going to fill your listening area with sound, softly or loudly, for many people in a ‘party’ atmosphere rather than for one or a few people just ‘listening closely.’
*I don't mean to say that such recording is a 'no-brainer' task, only that, relatively speaking, there are harder and more complex ways to do a live recording. Often the 'simple' way works better for certain goals.