they said ray tracing already happens with the new directX 11
but maybe i am mixing two things together..
i'm off to have a look
**okay back**
it appears that there is no ray tracing specifically listed for directX 11.
but
going on my own reasoning.. i know this is how it goes.
back in the older versions of directX .. the wires themselves had a limit imposed on them.
obviously this limit is to keep the graphics card from running at low frames per second.
and in reality, the limit could be raised as long as the graphics card can support it.
you just cant ask the graphics card to do more than it can.
now, if the wires are that choppy.. it means the closest thing to ray tracing would be the shadows.
since all shadows are like a blanket of pixels laid over the wires.
this too has a limit imposed to prevent the graphics card from running at low frames per second.
we have seen the many shadows from directX 9 games.
and based on the looks of those shadows, it is safe to say that the limit of the shadow 'space' in pixels is quite the same as the limit of wires.
then.. the actual pixels within the textures, these can be a totally separate 'function' from the wires or the shadows.
what some people may not realize..
the entire world is full of pixels.. no matter how much they try to argue that it isnt.
you could take a map and give it four points, but then you would have to give each space between any of those four points a coordinate.
when you say the map has 1024x1024 pixels.. that is for the up and down / the left and right.
and people forget that there is also front to back pixels.. meaning 1024x1024x1024
if you wanted to get crazy.. you could do up/down .. left/right .. front/back .. sideways/sideways
and it would look like an X with a line through the middle (probably known as 4D)
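the coordinate talk above can be sketched in a few lines. this is just an illustration using my own made-up names (SIZE, to_index, from_index are not from any real API): a 'map' with front-to-back pixels is a 3D grid, and every point in it gets one flat index.

```python
# sketch of the idea above: a "map" is not just left/right and up/down
# pixels -- adding front/back gives x, y, z coordinates.
# names here (SIZE, to_index, from_index) are illustrative only.

SIZE = 1024  # pixels per axis, as in the 1024x1024x1024 example

def to_index(x, y, z, size=SIZE):
    """Flatten an (x, y, z) pixel coordinate into one array index."""
    assert 0 <= x < size and 0 <= y < size and 0 <= z < size
    return (z * size + y) * size + x

def from_index(i, size=SIZE):
    """Recover the (x, y, z) coordinate from a flat index."""
    x = i % size
    y = (i // size) % size
    z = i // (size * size)
    return x, y, z

total = SIZE ** 3  # every point in the volume gets a coordinate
```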
anyways..
this is the special part..
when the edges of the world are brought in closer to each other, that means the wires can get more complex.. the shadows can get more complex (however, locked to the amount of shades of black and grey available).
this means a step closer to ray tracing, since packing more pixels into each square inch gets you closer to the pixel density that ray tracing works at.
i dont know if ray tracing works with x,y,z coordinates or if it uses an X with a line through it.. or even an X with a line through the middle both ways.
maybe it is an X with two lines through the middle.. and then a line through the middle of all of those.
doesnt matter much to me now.
but..
directX 11 added tessellation.. and tessellation is really the hardware subdividing the wire frames into more, smaller triangles (meaning more vectors easily processed)
and that could mean the shadows are also upgraded with more vectors (pixels) available.
and again, maybe more pixels for the 'world' coordinates.
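to make the tessellation idea concrete, here is a toy sketch of one subdivision scheme (splitting each triangle at its edge midpoints). this is for illustration only.. it is NOT the actual direct3d 11 hull/domain shader pipeline, just the general idea of 'more wires' appearing from fewer.

```python
# toy sketch of what tessellation does: split each triangle into four
# smaller ones at the edge midpoints. NOT the real direct3d 11 pipeline,
# just the general idea of "more wires" from fewer.

def midpoint(a, b):
    return tuple((ai + bi) / 2 for ai, bi in zip(a, b))

def subdivide(triangles):
    """One tessellation pass: every triangle becomes four."""
    out = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

mesh = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
for _ in range(3):          # three passes
    mesh = subdivide(mesh)
# triangle count grows 4x per pass: 1 -> 4 -> 16 -> 64
```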
first lets say this, a pixel being rendered as a pixel is not the same thing as a pixel being rendered as a vector.
if you have 2048x2048x2048 pixels for the x,y,z coordinates.. there doesnt have to be that many vectors available.
there is a limit imposed by the graphics processor as to how many of those pixels can be turned into vectors.
directX 11 has increased that number.
since we see directX 9 games with bigger maps.. that means there were more wires made available, but more wires does not mean more vectors (not really, it doesnt.. )
maybe some more vectors were unlocked with the newer graphics cards.. or maybe they were always unlocked, but the current graphics cards would not render them fast enough (say not until the late 7900 or 8800 nvidia graphics cards came out)
so, now with more vectors available for rendering.. the amount of vectors available is massive compared to the older directX version.
maybe there is a limit imposed to the space between each vector.. and if there is, you could minimize the edges of the 'world' and squeeze those vectors closer together to get a results much more closer to ray tracing.
ray tracing is about the number of vectors, but could also be about the SIZE of the vectors.
i dont know if directX 11 changed the vector size or not.
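the 'squeeze the world' idea above is simple arithmetic: with a fixed vector budget, pulling the edges of the world in raises the density. a minimal sketch, with made-up numbers:

```python
# sketch of the "squeeze the world" idea: with a fixed vector budget,
# shrinking the world's edges raises the density (vectors per unit area).
# the budget and edge lengths below are made up for illustration.

def density(vector_budget, world_edge):
    """Vectors per square unit for a square world of the given edge."""
    return vector_budget / (world_edge ** 2)

budget = 1_000_000
coarse = density(budget, 1000.0)   # big world: 1 vector per square unit
dense = density(budget, 100.0)     # edges pulled in 10x: 100x denser
```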
but
to a skilled person, all of these changes are possible.. only to be confronted by a 'policy' or some other junk that tries to get into the way.
again with the pixels and vectors and shadow vectors/pixels being separate 'functions'
maybe any combination of those are combined together.
but
i wouldnt think twice about lowering one to raise another.
this gets you a 'custom engine' stamped onto your marketing advertisement, and the game would look better than those who didnt decrease something to increase something else.
as an artist, those choices are like choosing a color.
lots of colors and color combinations to choose from.
that is a baseline principle to the design of the foundation the game is supposed to run on.
as the content of the video game is taken from the imagination, the actual mechanics are a game of design (or architecture)
say you give me the chance to build a car.. but you wont let me choose what engine, or what transmission, or the suspension.
that team effort is really going to put out the flame for a dedicated designer who wants (or thought) that they were going to design a vehicle.
no sense in telling me that i am going to design a vehicle and only shape the body without any say about anything else.
anybody can take a box van and beat it into shape with a hammer and some clay.
that is like saying you want to make your own music.. but you use music presets that are loops of pre-made sounds (found in ejay music software).
you really dont have any control unless you take the time to shorten each loop to a single sound.. and even then, you wont have every sound available.. not even if you use the special effects module to warp or manipulate the sound.
to say you would get close is an understatement.. as the real amount of audio sounds would be far far away from what is actually available.
and then what.. you get a sound and use a special effects module to manipulate the sound to something you want.. and then the sound card says 'i will gladly play the sound.. but i am having difficulty playing the sound with the manipulation added onto it'
that just doesnt fly.. the special effect module would be a tease, taunt, and a tormenting/abusive joke (a sick one at that).
say you use the special effects module to change all the sounds and create a bunch of new ones you want.. and then you find out the audio player wont play the song back without skipping and stuttering.
that is enough to make any sane person quit the project entirely.
just how many vectors are available with ray tracing, and how small those vectors are, and how big the edges of the 'world' can be.. these all play a factor to the comparison of trying to re-create it fast enough to keep the frames per second up.
many people consider a circle to be 360 degrees.. but you can add or subtract points whenever you feel it necessary, since that is your human right.
to say that ray tracing uses 360 angles for each of the x,y,z coordinates.. it really means adding more coordinates.
an X with a line through the middle would be 6 angles.. and that can be used for ONLY the front to back coordinate space.
if you flip the world around, you could add the 'sideways' for the other two coordinates.
when each coordinate moves left/right.. you could always add a diagonal X .. then add a diagonal X to that .. and again and again until there are 360 'points' on the X (since the X only has four)
no reason to stop at 360?
who came up with 360 degree angles.. leonardo da vinci? (it actually traces back to the babylonians and their base-60 counting)
did somebody say 360 is enough for video, or for measurements?
you HAVE TO have those degree angles for ray tracing to work its magic.. more than 360 would be even better.
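the point about 360 being just a convention can be shown directly: you can sample a circle at any number of equally spaced angles.. four for an 'X', 360 for degrees, or more. a small sketch (the names here are mine):

```python
# sketch: 360 degrees is just a convention -- you can sample a circle
# at any number of equally spaced angles. the "X" with its four points
# is simply n = 4 here.
import math

def circle_points(n, radius=1.0):
    """n equally spaced points (angles) around a circle."""
    step = 2 * math.pi / n
    return [(radius * math.cos(i * step), radius * math.sin(i * step))
            for i in range(n)]

four = circle_points(4)      # an "X" has four points
full = circle_points(360)    # the conventional degree spacing
finer = circle_points(720)   # nothing stops you going past 360
```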
that is how you really snap a photo and pull out the bump map from the image.
same with snapping a photo and allowing a program to bend and fold the picture until it is 3D
adding a second camera lens to record stereoscopic would then give you the data for 4D images.
since the diagonal view from the lenses would provide data for the diagonal coordinates of the front to back coordinate space.
you wouldnt need more lenses for more data.. sure you could do it that way and get solid results, but you could also specify the distance between each lens and give it a coordinate.
no different than taking two points and adding (really dividing) the space in between into sub points.
each sub point would reflect the sub coordinate (the diagonal coordinate for front to back pixels)
they would be a combination of front to back and left to right since they are diagonal.
so say you take the two lenses and add five subsections.. then you add five diagonals to the front to back coordinate space.
in the end.. you would see the same depth that is highly sought after.
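the sub-point idea above is plain linear interpolation between the two lens positions. a hedged sketch (real multi-view capture is more involved than this):

```python
# sketch of the "sub points" idea: take the two lens positions and
# divide the space in between into evenly spaced virtual viewpoints.
# pure linear interpolation; real multi-view capture is more involved.

def sub_points(left, right, n):
    """n evenly spaced points strictly between two lens positions."""
    (lx, ly), (rx, ry) = left, right
    return [(lx + (rx - lx) * i / (n + 1),
             ly + (ry - ly) * i / (n + 1))
            for i in range(1, n + 1)]

views = sub_points((0.0, 0.0), (6.0, 0.0), 5)
# five virtual lenses at x = 1, 2, 3, 4, 5 between the real pair
```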
if your LCD television cant reproduce it all.. an old CRT probably could with ease.
the depth would be so amazing that it might give a casual person a heart attack (even if they do remember seeing it before).
it is very very possible to make each graphics card do one coordinate space.. then tie all of the diagonal spaces in together by connecting the graphics cards together (sli bridge or whatever).
the situation would HAVE to be multi-threaded, and that generally means it would be best to be multi-cored.. to keep the processing time down.
no matter what, if you ask one processor to do two things.. there are going to be halts interlaced between the two requests.
the only way to remove the halts of the second request is to double the speed of the processor.
this means, if you have a 50% overclock (1.5x the clock) on your dual-core CPU .. you could leave the threads at two, and see the process finish in two-thirds of the time.. or you could increase the threads to three and see a 50% increase in productivity.
now.. that 50% increased productivity might be lower if the processor gets confused as to which core will do the third thread.
it would have to be hard coded into the functioning of the processor to interlace the third thread across both cores equally.
what does this mean for pixels and a television working with zero pixels?
well it talks about how pixel rendering is done and where it can go.
i faithfully believe if you increase your productivity 50% .. that has got to be like quantum computing (or whatever the word is)
because a 50% decrease in the amount of time to finish the process actually works out to doubling your productivity.. even more than a 50% increase.
i begin to wonder if that is how directX 11 is adding more tessellation.
to say that there is only one graphics processor and it has been made faster, then to overclock that processor 50% and add another thread.. you would get 50% more done in the same amount of time !!!
this is an astonishing fact, and it could pave the way for people who develop video games.. because it means their games do 50% more and look much better than the other games being released the same year.
adding a third thread would be awesome.. but doing the coding to make a new thread possible, that might downright make my head spin.
kinda crazy to think a dual-core processor at 4ghz can do the exact same amount of work that a 2ghz quad core can do.. and they both finish in the exact same amount of time (assuming the work splits evenly across all the cores).
overclocking should have a new respect if you didnt know this already.
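the arithmetic behind the dual-core 4ghz vs quad-core 2ghz claim can be written out. this is the idealized model only.. it assumes the work splits perfectly and ignores memory, caches, and scheduling:

```python
# back-of-envelope model: if work splits perfectly, total throughput is
# just cores x clock. idealized arithmetic, not a benchmark -- it
# ignores memory, caches, and scheduling.

def throughput(cores, ghz):
    return cores * ghz  # "ghz-cores" of work per second, idealized

dual_4ghz = throughput(2, 4.0)   # 8.0
quad_2ghz = throughput(4, 2.0)   # 8.0 -- same total, as the text says

def time_to_finish(work, cores, ghz):
    return work / throughput(cores, ghz)

base = time_to_finish(100, 2, 2.0)     # dual core at a stock 2ghz
boosted = time_to_finish(100, 2, 3.0)  # same chip at 1.5x the clock
# boosted finishes in two-thirds the time: +50% clock = +50% throughput
```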
every atom IS a pixel.. but to say there are smaller things than an atom, it would be true.
maybe the line that connects one atom to another is smaller than the atom itself.
and it is a real solid way of looking at things, to know that every atom is a pixel.. or that you could put 4 pixels on a single atom.
it totally would amount to the pixels becoming smaller and smaller, until the human eye can no longer detect a single pixel without a microscope.
that kind of effort would be the closest thing to having a television appear as though there were no pixels at all.
see what i remember is this.. if you want 1920x1080 pixels ... if the screen is smaller and those pixels really exist in the number of 1920x1080 .. then those pixels would be smaller, making the video look more realistic.
that is why you could see a lot of 19 inch CRT televisions that actually look better than a 32 inch CRT television.
the pixels are bigger with the bigger screen.. and the only way to shrink those pixels is to step backwards away from the screen until your eyes see each pixel the same size as the 19 inch television.
kinda like holding a tape measure out with your arms fully extended.. you measure how wide the 19 inch screen is with the tape measure from your hands to your eyeballs.
then get in front of the 32 inch television and move forwards or backwards until the width is the same.
this is why a television looks better if you are further away.
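the step-back trick above can be put in numbers: a pixel 'looks' the same size when its angular size at your eye is the same, so the matching distance scales with the screen width. a sketch, treating the quoted sizes as widths for simplicity (an assumption, since '19 inch' normally means the diagonal):

```python
# sketch of the distance trick: a pixel "looks" the same size when its
# angular size at your eye is the same, so the matching distance for a
# bigger screen scales by the width ratio. sizes below are treated as
# screen WIDTHS for simplicity (an assumption for illustration).
import math

def pixel_angle(screen_width_in, h_pixels, distance_in):
    """Angular size (degrees) of one pixel at a given viewing distance."""
    pixel_width = screen_width_in / h_pixels
    return math.degrees(2 * math.atan(pixel_width / (2 * distance_in)))

near = pixel_angle(19.0, 1920, 36.0)           # 19in-wide screen at 3ft
far = pixel_angle(32.0, 1920, 36.0 * 32 / 19)  # 32in-wide, scaled back
# same angular pixel size once the distance is scaled by 32/19
```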
pixel size has always been something available to put in the specifications.
it is called the 'dot pitch'
and usually it is something like 0.24 millimeters
you can read about that here:
http://en.wikipedia.org/wiki/Dot_pitch
they say it is measured both ways.. how big the pixel dot is, and how far away the pixels are from each other.
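as a rough sanity check on the 0.24 millimeter figure, dot pitch (in the simple 'spacing' sense) is about the screen width divided by the pixels across. the 475 mm width below is a hypothetical number chosen for illustration:

```python
# rough arithmetic for dot pitch: screen width divided by the number of
# pixels across (the simple "spacing" view; it can also be measured as
# dot size). the 475 mm width is a hypothetical illustration value.

def dot_pitch_mm(screen_width_mm, h_pixels):
    """Approximate horizontal pixel spacing in millimeters."""
    return screen_width_mm / h_pixels

pitch = dot_pitch_mm(475.0, 1920)  # about 0.25 mm, near the 0.24 quoted
```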
and
i suppose i was wrong to say that the dot pitch is how big the pixel is.
the dot pitch is important though.. because the bigger the dot pitch, the further away from the television you have to be before the image blends well together.
and
and if the dot pitch is really small, then the further away you are from the screen.. the more detail and realism you are going to lose to blurriness at a distance.
computer monitors should have a smaller dot pitch than televisions.. because people arent expected to use their television as a computer monitor.
and with the release of LCD televisions.. this has all changed, as people want to use their television for a computer monitor.
anyways.. it is all about how far away you intend to sit from the screen.
you might find that you cant find a dot pitch big enough for 12ft viewing distance.
the only way to ensure the dot pitch increases is to buy a bigger television.
but
this is why people buy bigger televisions for when they sit further and further away.
you dont buy a huge television and expect it to look good from 3ft away.. because you can see all of the gaps between the pixels.
yes, you might need to be that close to read text.. but being that close is still going to be harmful with all of the gaps between the pixels.
pixels per square inch is everything, unless you are too far away to focus on them.
no sense making the pixels close together and struggling to see them because you are too far away.
maybe this picture will shed some light:
http://en.wikipedia.org/wiki/File:Pixel_geometry_01_Pengo.jpg
when you consider how many of the reds and greens and blues are in each pixel.. that helps get more colors, but it can also ruin the amount of black available.
that is why the PC monitor and the XO-1 LCD seem to have the best chance at being black or changing the color.
tv crt has a lot more black than the lcd square.. and this suggests that the tv crt would have a higher contrast ratio than the lcd.
but it looks like the lcd has more red, green, blue dots.. and that means those dots can combine to create more colors for the pixel.
sometimes more colors really means a more realistic visual experience.. especially when the movie is during the day without many dark blacks.