i would rather carry a fatter ipod that holds a 2.5 inch hard drive than the smaller version.
but people who don't care about higher quality audio files would start to see the bigger capacity as an 'unlimited' amount of space, which could be said to 'urge' them to buy more music to fill up the available memory.
since they don't care about the higher quality versions, they don't form the right view of the fatter ipod.
some people will say 'why would i buy a bigger one when this smaller one has done the job time after time?'
and that doesn't look good for the company if/when fatter ipods sell in equal numbers to smaller ipods,
as a fatter ipod would confuse customers who have been constantly told 'small.. slim.. portable'.
and they couldn't market the fatter ipod as made specifically for 'audiophiles', because that would make the rest of the older ipods look bad, and people would feel like their ipods are not audiophile quality.
if such an attempt was made, they would have to reveal the truth about any imperfections in their previous products,
and those imperfections might anger the loyal customers.
audiophiles will have to use an alternative until solid state disks (or the regular hard drives that fit) get larger.
we still have an awkward split between 'cd quality' and 'dvd quality', and a low number of artists releasing music for the 'dvd quality' market sector.
there are thousands of home theater enthusiasts.. and these people don't necessarily dive deep into the reasons why the audio sounds better.
some of us see the sample rate and bit depth as the reason, but other people simply go by the dolby digital or dts format upgrade.
when the surround sound format says things like 'master audio' or 'truehd'.. that is enough for some people to sit back and relax as they enjoy 'high definition' audio.
but even then, that doesn't stop people from listening to these new audio formats and knowing good and well that the audio sounds better than it did in the past.
phasing out cds should happen,
as it would revolutionize the entire car audio industry by forcing them to install dvd players in all of their vehicles.
maybe some wouldn't have a screen to watch movies.. but the option would be there.
i wonder what it would mean for the radio broadcast industry.. as i don't know if they can stream dvd quality audio.
if they could, i would imagine the reception distance would shrink.
but i really see two things that could happen:
1. the radio stream isn't full dvd quality, to keep the broadcast wide enough for a lot of listeners.
2. the artists and publishers might not want their audio played in such a low quality version,
because if the audio quality isn't as high as 'usual'.. the licensing fee paid for playing the music should be less,
and that might again anger the artist or publisher.
it could and very well should happen.. because ever since cds came out, the same song on the radio has always been lower quality.
and if the gap between radio and dvd audio grew bigger.. that would be one more reason to go out and buy your favorite music.
since dvd audio has more samples than cd audio.. those extra samples would mean more data in the air from the broadcast towers.
i don't think bluray will be the be-all-end-all format.
if you think about it, we had floppy, then zip drive.. then cds.. then dvds.. and now bluray.
that is four upgrades across five formats.
since cd quality was the first mainstream digital product, that puts it right there in place of the floppy disk.
technically, the floppy was shrunk down in size along the way, which you could call its own dvd-style step.. and again with bluray.
but the comparison isn't apples to apples, since i haven't said (and i don't know) when the physical size went down at the same time the amount of data stored went up.
an entire album could be put on a memory chip and loaded into a music player that reads from the chip.
but that might not happen if they shrink data storage smaller than bluray discs first.
sample rates and bit depths would go up a whole bunch again.
and i believe super audio cd has shown us that when the sample rate gets 'fine' enough.. the bit depth can actually go down.
if they can already fit all of those samples onto a standard sized disc for super audio cd.. then we should be good to go while we wait for the lower sample rates to be phased out (meaning the cd standard will be defunct).
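a rough sketch of that trade-off, assuming the published sacd numbers (a 1-bit stream at 2.8224 MHz, which is 64x the cd sample rate):

```python
def stream_bitrate(sample_rate_hz, bit_depth, channels=2):
    """Raw bits per second for a digital audio stream."""
    return sample_rate_hz * bit_depth * channels

cd = stream_bitrate(44_100, 16)      # cd: 16 bits at 44.1 kHz
dsd = stream_bitrate(2_822_400, 1)   # sacd: 1 bit at 2.8224 MHz (64x the cd rate)

print(cd, dsd, dsd // cd)  # the 1-bit stream still carries 4x the raw data
```

the bit depth drops to a single bit, but the sample rate is so 'fine' that the stream still carries four times the raw data of a cd.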
i'm not all that sad about it.. because they made tapes defunct too (and some still use tapes, since they were in the crowd with superior quality tapes to begin with).
but the reason people forgot about tapes is that they didn't use a properly functional song skip feature.
i think the 'consumer' grade song skip function kept the tape head on the tape to find a pause, and a pause = new song.
but running the head on the tape always wears the tape down slowly (probably a matter of the tape quality itself.. but be fair to everybody).
a photo optic system could have been used to detect a moment of silence, and that would eliminate the excessive wear on the tape.
we even saw these options on consumer quality tape decks towards the end.
the problem with them was that the programmed moment of silence was too short, and a pause inside a song was enough to make the seek function stop mid-song.
a standard 4 second gap between songs would have proven superior for everybody, with 3 seconds of silence required to trip the seek function and stop the fast forwarding.
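the 3-second rule is easy to sketch in code. this is a toy silence detector (the threshold and the toy sample rate are made-up numbers, not anything from a real deck), showing why a long required gap keeps an in-song pause from tripping the seek:

```python
def find_song_breaks(samples, sample_rate, silence_threshold=0.01, min_gap_s=3.0):
    """Toy silence-based seek: a 'new song' is flagged only after at least
    min_gap_s seconds of near-silence, so a short pause inside a song
    never trips the seek by mistake."""
    breaks, run = [], 0
    needed = int(min_gap_s * sample_rate)
    for i, s in enumerate(samples):
        if abs(s) < silence_threshold:
            run += 1
            if run == needed:   # trip exactly once per long gap
                breaks.append(i)
        else:
            run = 0
    return breaks

# toy track at 10 samples/sec: song, 1 s pause (no trip), song, 4 s gap (trip)
rate = 10
audio = [0.5] * 20 + [0.0] * 10 + [0.5] * 20 + [0.0] * 40 + [0.5] * 10
print(find_song_breaks(audio, rate))  # one break, found inside the 4 s gap
```

with a short programmed gap (say 0.5 s) the same code would stop at the 1 second pause, which is exactly the failure those consumer decks had.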
but let's face it..
not everybody knew about tapes with audio quality closer to bluray than dvd quality.
not everybody could afford them.
the usual tapes at the stores were junk compared to some of the superior options.
and the tapes could still get chewed up and eaten (a cleaning required, though it took much longer to happen again on professional quality tape players).
the only reason people held onto tape was to keep from being stuck with the inferior quality of the cd.
for the rich or well blessed, it might also have been to avoid the quality on dvds.. but now that we have bluray there is no need for that, and there isn't really anything that could persuade me back to tapes other than a quality increase that allows something like super audio cd quality at a fraction of the price.
being out in the field with a tape that gets chewed up, the entire testing session is probably ruined.
not only do you need to stop everything and clean the tape deck.. it might have been clean and gotten dirty because of the environment,
meaning special attention is needed to keep the tape player clean.
dvd and bluray recorders already have a sealed compartment.
and in order for tapes to have comparable seek times, the tape needs to be wider.
you might have some serious motors giving solid speed.. but those motors cost money.
i haven't heard of a lot of people who are really upset about transferring tapes to a digital format.
sure, the analog to digital chip might have been poor quality,
and the sample rate might have made the digital version lower quality than the tape.
but they phased out vhs for video, and they phased out tapes for audio (considering the consumer market here).
we need to hear the benefits of the new technology to accept and appreciate the phasing out of cd quality.
i haven't heard bluray quality yet because i don't own a bluray player.
but with sample rates three times as high, and the bit depth also increasing.. i fail to see how the improvement wouldn't be heard, unless the microphone used to record the audio was junk.
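the textbook numbers behind that claim can be sketched quickly. linear pcm gives roughly 6.02 dB of dynamic range per bit, and the highest frequency you can represent is half the sample rate (the nyquist limit):

```python
import math

def dynamic_range_db(bit_depth):
    """Theoretical dynamic range of linear PCM: 20*log10(2**bits), ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bit_depth)

def nyquist_khz(sample_rate_hz):
    """Highest representable frequency (Nyquist limit), in kHz."""
    return sample_rate_hz / 2 / 1000

print(f"cd 16/44.1: {dynamic_range_db(16):.1f} dB, {nyquist_khz(44_100):.2f} kHz")
print(f"hd 24/96:   {dynamic_range_db(24):.1f} dB, {nyquist_khz(96_000):.2f} kHz")
```

whether the extra headroom is audible is a separate argument, but the raw specs jump from about 96 dB / 22.05 kHz to about 144 dB / 48 kHz.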
with all of that said,
i don't know if recording studios are being pushed to use individual microphones.
they should be, but that might mean a loss of jobs for the extra spectral analysis people.
if there was a team doing all of the panning, they might not need the entire team anymore.
yes, it would be faster.. because as i mentioned above, the data needs to be viewed to properly cut and paste the up close and personal microphones.
some cutting and removal of unwanted/unmatched sounds would also need to happen.
so for some studios, maybe this means hiring some help.
i don't know which takes more time and skill.. sitting and panning the entire length of the movie, where each sound needs to be reviewed and panned..
compared to..
reviewing the entire length of the movie with less panning, but where the cutting and pasting needs to happen, along with the removal of some sounds.
if the lesser amount of panning is enough, then the cutting and pasting could be much the same as cutting and pasting each individual track (microphone).
and if there was ever any removing of sounds before, then it would be about the same.
talking to the right person leads to knowing the potential, and reason enough to give it a try.
those video people who do the 3d depth videos should know all about the importance of multiple cameras.
anybody in the field of stereoscopy knows the importance of two cameras.
mirrors and lenses don't always provide it.
when a single lens allows one sensor to perform as if there were two cameras 2 ft apart.. you might find that perfect for capturing a photo or video of a pair of dice on a table.
but that isn't opening doors to bigger objects.
if you wanted to record an entire side of a building, more cameras would need to be involved.
i really don't know if a mirror is acceptable for giving a sense of depth.
if it is, i would think mirrors are something extra to carry around, and calibrating them takes more time than setting up a second or third camera.
it is a matter of the simple basics being far superior to trying to cut corners.
one camera might be able to take multiple photos to be joined together later for the illusion of depth from a single picture.
but if you try to record video, the camera can't be in two places at once.
what i really wanted to say about the individual microphones is that the listening point of the example might prove confusing.
the listening point always needs to be from the perspective of the video camera for perfection.
but
that doesn't mean you can't have microphones in the trees beyond what the camera can see to record audio for the front channels,
and then microphones on some trees (or stands) to the side of (or behind) the camera for the rear channels.
the result would be a demo.. and depending on your use of the word 'localised', the demo is or isn't localised.
as the vehicle draws closer to the camera, people have come to expect the sound to grow louder from the front two speakers and especially the center channel,
since the mic on the camera represents the center channel and a microphone off to either side of the camera represents the front left and right.
this is perfection from the perspective of the camera.
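placing a sound across that front stage is usually done with a pan law. this is a minimal sketch of the common equal-power version (the -1..+1 position scale is just my own convention here, not any standard's):

```python
import math

def equal_power_pan(position):
    """Equal-power pan law: position -1 (hard left) .. 0 (center) .. +1 (hard right).
    Keeps perceived loudness roughly constant as a sound moves across the front."""
    angle = (position + 1) * math.pi / 4        # map [-1, 1] -> [0, pi/2]
    return math.cos(angle), math.sin(angle)     # (left gain, right gain)

l, r = equal_power_pan(0)
print(round(l, 3), round(r, 3))  # 0.707 0.707 -- center sends -3 dB to each side
print(equal_power_pan(-1))       # (1.0, 0.0)  -- hard left
```

with individually miked tracks, a mixer slides this position value over time to follow the vehicle across the screen.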
but not all movies give the perspective of the camera full surround capabilities.
sometimes they use the actor as the listening point.
they really shouldn't ever do this, as it is unprofessional: the visual perspective doesn't match the audio perspective.
only when the camera switches to a first person perspective can the surround be tailored to the actor's position.
this has been done for video games, since many games have a first person perspective.
and that raises a solid question.. just what is going on behind the camera that we care to hear?
if an actor begins talking behind the camera and walks forward to eventually be seen on camera, then yes.. the rear speakers would mimic what the camera sees and hears.
the four microphones and a vehicle romping around in a field is not the typical sound field people are used to,
as the front speakers should never really get loud.
if they did, it would throw off the camera's perspective.
instead, the microphones are supposed to pick up ambient sounds.
when the vehicle gets close to the camera, the rear speakers get louder while the front speakers simply provide ambient fill (if any can even be heard at that moment).
BUT
the real trick, and the excitement that makes the situation worth a try, is tricking the brain into thinking you are standing where the camera is.
as the vehicle draws closer to the camera, you begin to think you might get run over.
the hard part is getting the rear speakers to sound like they are not directly behind you,
because it is a severe conflict to see a roaring engine in front of you and hear the engine noises from behind you.
and all you have to do is use some reverb to place the sounds of the rear speakers in front of you.. then continue to use the front speakers as ambient fill.
that brings the engine noises in front of you, and that gives you the ultimate amount of ambient fill from a quadraphonic speaker setup.
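a toy illustration of that reverb trick: mixing in a single delayed, attenuated copy of the rear-channel signal, like a reflection off the front wall. a real reverb uses many such reflections; the delay and gain numbers here are made up:

```python
def add_reflection(samples, sample_rate, delay_ms=15.0, gain=0.5):
    """Mix a delayed, attenuated copy of a signal back in, mimicking one
    early reflection (the building block of the 'pull it forward' reverb)."""
    d = int(sample_rate * delay_ms / 1000)
    out = list(samples) + [0.0] * d
    for i, s in enumerate(samples):
        out[i + d] += s * gain
    return out

# a single impulse at a toy 200 Hz sample rate, with a 10 ms reflection
wet = add_reflection([1.0, 0.0, 0.0, 0.0], sample_rate=200, delay_ms=10)
print(wet)  # impulse plus a half-strength echo 2 samples later
```

stacking dozens of these reflections with the right timing is what lets the brain re-place the rear speakers somewhere else in the room.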
if you attach a microphone directly onto the camera, that provides a center channel, and it should really bring the whole ambience into synchronization.
you might be thinking: why on earth would i want those far away microphones clear on the other side of the field playing sounds from the front two speakers?
well, i will admit the whole thing sounds a bit crazy at first.
and you have to grasp how many speakers are in the room, and how to put them to good use.
think about it: if you have rear speakers and you are recording the vehicle romping around in the field.. you would need microphones way back behind the camera, and they wouldn't capture anything useful without a wall to reflect the sounds (or a serious breeze making noise, which is not really important because you can't see or feel the breeze).
so, since the rear speakers are duds without much of anything to do.. you use them for the up close and personal sounds.
that leaves the two front speakers wide open to be used for something.
well.. surround sound is all about ambience here, not localization.
once the rear speakers have been calibrated to be heard in front of you and no longer behind you, you can use them as front speakers.
and since your ears face forward rather than backward, the sounds captured by the far away microphones across the field in front of you will be coming from the direction your ears pick up information from.
those sounds can't be loud at all from those microphones.
you are essentially capturing the echo of the soundwave as it travels further away from you.
because those soundwaves can still be heard as they travel across the ground, it is wise to include them, since your ears have satellite dishes pointed in that direction.
and here is the absolutely beautiful part..
the soundwaves from the rear speakers come into contact with the soundwaves from the front speakers.
together they can recreate the size of the field in any sized room.
all you need is the mental power to forget about the walls and focus on the video.
the rear speakers carry the up close and personal sounds, and the front speakers are distance speakers for all 'distant' sounds in front of you.
you might be thinking.. well, why not let the two microphones on each side of the camera pick up the distant sounds in front of you?
that is because the two microphones aren't the same distance apart as your ears.
therefore you need to grab the far away sounds and add them into the audio track for a more solid inclusion.
the whole technique is a form of inclusion and making efficient use of the speaker setup.. also known as ambience.
loud volumes from afar will ruin everything quickly.
but
it is the same principle as using a clip-on microphone to capture the vocals.. and then a boom microphone above the camera to capture the room's sound.
the difference between these two is complex, but they are extremely similar.
since the boom mic is above the camera, you capture the room echo close to your ears.
but we know good and well that there are room reflections on the wall close to the actors (the opposite corner/wall, to be exact).
if those reflections aren't captured, you are listening to one corner of the room and being robbed of the directivity of your ears' satellite dishes.
for perfect ambience, you want a microphone in each of the four corners to triangulate the audio.
of course, the microphones far away in the opposite corner are going to be output low from the speakers (think 35 dB or less).
but some of us know how important it is to triangulate.
stereoscopy isn't ambience.
triangulation isn't ambience.
stereoscopy is a factor of two.
triangulation is a factor of three.
ambience is a factor of at least four,
because the camera deserves its own microphone for perfect perspective, and that microphone is the center channel.
high resolution of the audio quality is directly related to the recording.. meaning if the audio quality is low, you are going to hear the imperfections whether the microphone is inches away from the person's mouth or 8 ft away.
high resolution of the ambient quality is all about the quality of the microphone (and at least two of them are needed).
but we know microphones aren't as sensitive and dynamic as our ears connected to our brain.
it just doesn't happen often enough that a microphone records something in mono, and we listen to the audio from one speaker and hear everything as perfectly as if we were standing there ourselves.
since that lack of fidelity exists, more microphones can help capture the ambience we use to process distance and angle.
this isn't fantasy or imagination.
you simply cannot have one microphone pick up every detail of sounds 30 ft away.
microphones have limitations, and one of the main limitations is the distance at which sound can be picked up and recorded.
picking the sound up at all is hard enough.. picking it up and recording it with perfect detail is another matter.
it isn't possible, and that is why they should place a microphone in the problem area to capture and record those voids loud and clear.
then we can add them to the audio track with pristine clarity and attenuate the sound until it matches the decibel level it would have been if we had measured it at the camera lens.
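that 'record close, attenuate later' step has simple math behind it. assuming free-field inverse-square falloff (roughly 6 dB per doubling of distance, ignoring reflections and air absorption), a sketch:

```python
import math

def attenuate_to_camera(close_mic_db, mic_to_source_m, camera_to_source_m):
    """Scale a close-miked level down to what a listener at the camera
    would have heard, using inverse-square (free-field) falloff:
    the level drops ~6 dB per doubling of distance."""
    return close_mic_db - 20 * math.log10(camera_to_source_m / mic_to_source_m)

# mic 1 m from the source reads 80 dB; the camera stands 8 m away
print(round(attenuate_to_camera(80, 1, 8), 1))  # three doublings, ~18 dB down
```

so the close mic keeps the pristine detail, and the attenuation puts the sound back at the level the camera position would have heard.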
doing it this way should be considered 'parallax listening',
because the word nearly contains 'parallel' inside of it.
we need to know where the 'lax' part comes in.
i believe the two front speakers are parallel to each other,
the two rear speakers are parallel to each other,
and then it goes cross-divided between front left and rear right (and again from front right to rear left).
all of these combined create the 'lax' portion of the word.
once this project is complete, whether you found it difficult or easy..
you then have to realize that there are sometimes sounds from behind the camera, and the above describes a two point chamber (one point far away in front, and one up close and personal to the side).
another point needs to be added to the chamber: the one far away behind you.
it would look like this:
.__________________.
.                  .
.__________________.
the above is for standing in the middle of the room. the camera is usually toward the bottom wall, so we remove the two middle dots and let the two bottom dots provide all that is needed.
since your back is to the wall, nothing is going to come from behind you except some room reflections, which should be easily captured at each corner.
all you have to do then is know the decibel drop from the microphone to the camera lens.
but
when there is plenty of room behind you, the two middle dots are used for the up close and personal sounds.
those two dots can move closer to either wall.. and this works perfectly until the distance between the middle dots and the 'wall dots' becomes too great.
think of a warehouse with one wall equipped with microphones, and the middle dots there for your up close and personal sounds.
if the other two dots are like 50 ft away, the noise might not register on a decibel meter at the camera lens, and that is the threshold where the rear dots become too far away and worthless/pointless.
if you continued to use them anyway.. picture a person walking into the warehouse and closing the door behind them as they come in.
you would hear the sound because of the microphones close to the door.. but based on your position in the warehouse, you would be thinking 'what did i just hear? do i have super human hearing today??!'
and yes, you would have super human hearing.. more like what a dog can hear.
it is a matter of being out of range, which can be quickly determined with a decibel meter at the camera lens.
as long as the decibel meter isn't much more sensitive than what the average person can hear, the whole scheme works quite well.
and the hardest part of the audio mastering job is determining how sensitive the listener's hearing is.
even if the sounds are loud enough to hear in the video playback.. that person simply might not be able to hear that well if they were actually standing in the same spot as the camera.
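that 'decibel meter at the camera lens' rule can be sketched as a simple include/exclude check. the 20 dB hearing floor here is my own placeholder threshold, and the falloff is idealized inverse-square:

```python
import math

def audible_at_camera(source_db_at_1m, distance_m, hearing_floor_db=20.0):
    """Keep a close-miked sound only if, after inverse-square falloff,
    it would still register above a rough hearing floor at the camera."""
    level = source_db_at_1m - 20 * math.log10(max(distance_m, 1.0))
    return level >= hearing_floor_db

print(audible_at_camera(60, 3))    # True  -- ~50.5 dB at the camera, keep it
print(audible_at_camera(40, 15))   # False -- ~16.5 dB, below the floor, drop it
```

the door-closing mic 50 ft across the warehouse fails this check, which is exactly why including it would hand the listener 'super human hearing'.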
with the dot example above, it should become clear that the microphone arrays reach a point where they simply exist, and another single dot can move anywhere among those microphones and produce a localized sound.
it's like video game programming, or those 3d demos where an object bounces around the speakers and the room is supposed to sound exactly like that point source moving around.
the beauty of this is when the surround sound format (or the studio itself) doesn't have the effect pre-programmed for the same distance and angle.
you don't have to rely on those presets anymore, and you don't have to take the closest one and manipulate it until it sounds about right.
the only problem i could think of would be sounds not accepted by the surround sound encoder, as if to say the sounds are 'out of range'.
that can happen when a 32 bit depth is needed and you are stuck using 24 bit (or you are using a 64 bit depth and need something higher).
it is basically a problem of two things:
1. the microphone isn't as dynamic as our ears, meaning it can't capture distance and position as well as our ears can (some mics can do it, but they don't go the distance our ears can).
2. the bit depth isn't as dynamic as our ears, again lacking the information of distance and position our ears naturally capture.
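the 'out of range' case can be pictured with plain integer clipping. this is a simplified model (real encoders and dither behave differently), but the headroom idea is the same:

```python
def clip_to_bit_depth(sample, bit_depth):
    """A signed integer PCM sample that exceeds the range of its
    bit depth gets clipped, flattening the peak and losing detail."""
    lo, hi = -(2 ** (bit_depth - 1)), 2 ** (bit_depth - 1) - 1
    return max(lo, min(hi, sample))

# a peak that fits comfortably in 32-bit gets flattened at 24-bit
peak = 10_000_000
print(clip_to_bit_depth(peak, 32))  # preserved
print(clip_to_bit_depth(peak, 24))  # clipped to the 24-bit maximum
```

anything above the 24-bit ceiling comes out as the same flat maximum value, which is the encoder's version of refusing the sound.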
that is why all of the scenes in movies are small.
or
the scene might be big, but you don't dare hear anything far away.
if you do hear something far away, what happens?
you guessed it: the sound appears much more up close and personal than it really was.
the new trick with this setup is pretty clear.
the rear speakers being used as front speakers would also need the reverb revoked for sounds that are meant to be heard from behind.
since the side speakers in a 7.1 setup don't even begin to point forward, there is no (easy) way for them to use reverb and be heard as front speakers.
so with that said, yes.. it would be easier to use the rear wall surround speakers to play forward sounds and act as front speakers.. then use the side speakers behind you for the surround/rear sounds.
but
this proves to be highly complex, as the side speakers would blast soundwaves that collide with the rear speakers trying to do their job as front speakers (heard only as the reflection off the front wall, with no actual sound perceived as coming from behind you).
that collision would need to be mastered and 'interlaced' to clear up the collisions.
digital sound processing would be more complex, though professional installers who do reverb would already be able to handle this.
however, consumer receivers wouldn't have the processing power needed to make it easy.. and that should be looked at.
ALL of this depends on the lack of microphone quality.
but
you could also say that capturing distant audio with a microphone up close to the sound records a higher quality representation, which can then be attenuated to fit into the audio track with higher clarity.
not resolution by sample rate.. but fidelity, because you were close enough to capture the details.
it all boils down to helping people hear better in their listening room than they could if they were standing at the camera's spot while it was all happening.
not everybody can see with stereoscopic depth.. each person is different.
same thing with being able to see well in the dark.
film producers have always helped those who can't see in the dark by forcing good vision in the dark and recording it with a camera.
people with abnormally sensitive ears could share their experience with people who have a harder time hearing.
does it bring people of different hearing abilities together for a moment? yes.
is creating movies with 'above normal' hearing abilities helpful or annoying?
i think if we took everything and gave it a boost, people who can't (and never could) use their senses to that degree of precision would benefit by viewing a world they might never have known existed.
i think one could go too far with the 'enhancements', as it would break free from the average and feel really unnatural.
nobody wants perfect vision in the dark, but we should be thankful we can see in the dark when watching the video.
i haven't seen many movies filmed in the dark where i thought to myself 'i would never ever be able to see that well in the dark.. what were they thinking?'
i might have done it once or twice, but usually i think to myself 'if i can't see that well in the dark, i know there is SOMEBODY who can.'
and the same should be applied to the audio.
it takes a dedicated person to learn the shortcomings of a microphone,
especially if the microphone's accuracy is only solid within a 15 ft radius.
it takes some experimenting to benchmark the attributes, but it is the attributes they lack that create the need to fill in the voids.
if we watched film after film with super human hearing, and it was done right, we would go outside and think we have the same super human ability to hear all the things in the film.
it would be a good gag/joke.. but it is really why the 'enhancement' shouldn't be too much.
but then again, i have seen some movies shot in the dark, and i know i can't see that well in the dark.
if i watched those movies often enough, i would probably go outside and quickly become disappointed with my inferior vision in the dark.
and viewing ability in the dark is easier to gauge than hearing.
like, you will know right away that you are no competition in the dark.
hearing could take quite some time.
it all depends on how far away the noise is, and how many examples of that distance we get to compare.
a whole lot of writing, but i am really excited about it.
i kinda think i would make this stuff a career.
i hope the original poster isn't too upset, as the information has everything to do with 5.1 or better speaker setups and how recordings are made to create the sound from each speaker.
we are talking about how they actually do it, and the improvements in recording techniques that make the sound from a 5.1 speaker setup more realistic (and all the more reason to buy some 5.1 speakers !!).
i think i have gone into detail about why 5.1 shouldn't be the industry standard anymore.. and quite a bit of what it would take to make a new industry standard happen.
i know it might seem far-fetched to expect any speaker to use reverb and play sounds from the opposite location.
but this is a clear and fair warning to digital sound processing programmers (and home theater receivers alike) to come to terms with reality and where things need to go.
if you want the casual consumer to participate and enjoy real fidelity.. this is what it is going to take.
is it all here?
talking about microphone placement and the quality of the microphones.. completely aside from sample rates and bit depths,
should be a relaxing alternative to the many discussions of the limitations inherent in sample rate and bit depth,
as it clears up some of the confusion about why anything more than a 5.1 speaker setup should be on the market at all.
if you are hearing something better from the 7.1 or 9.1 speaker setups.. this conversation covers how the experience can go above and beyond what is already available.
i don't think any enthusiast would have a problem with an upgrade when all they have to do is calibrate their receiver and pop in the movie.
it would be the same distance measurements.. but would likely include the distance from each speaker to the wall.
one more measurement isn't going to kill anybody.. but it is a considerable amount of work for digital sound processing programmers.
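the existing part of that calibration is simple enough. a sketch of the distance-to-delay math receivers already do (343 m/s is the usual room-temperature speed of sound; the distances are example numbers):

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C

def calibration_delay_ms(speaker_to_listener_m, farthest_speaker_m):
    """Delay the closer speakers so every channel's sound arrives
    at the listening position at the same moment."""
    gap = farthest_speaker_m - speaker_to_listener_m
    return gap / SPEED_OF_SOUND_M_S * 1000

# rear speaker 2 m away, front speakers 4 m away: delay the rear ~5.8 ms
print(round(calibration_delay_ms(2.0, 4.0), 1))
```

adding a speaker-to-wall measurement would feed the reverb/reflection model the same way, which is the extra work the dsp programmers would inherit.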
although, they should have expected something of this complexity to eventually come.