High resolution...through digital interpolation...

Archived from groups: rec.photo.digital

On Wed, 06 Apr 2005 17:47:09 +0200, Mxsmanic <mxsmanic@hotmail.com>
wrote:


>> Your motto: "They said it couldn't be done, so I didn't try".
>
>It cannot be done, no matter how much one tries.

I seem to recall this was said about powered flight, too.

--
Bill Funk
Change "g" to "a"
 

Don Stauffer writes:

> Also, NASA and HST resolution improvement often hinges on having a good
> idea of what an object is, and optimizing algorithms that assume the
> shape of an object.

But that's just another way of saying that they have additional image
data.

--
Transpose hotmail and mxsmanic in my e-mail address to reach me directly.
 

On Wed, 06 Apr 2005 17:48:21 +0200, Mxsmanic <mxsmanic@hotmail.com>
wrote:

>Don Stauffer writes:
>
>> Also, NASA and HST resolution improvement often hinges on having a good
>> idea of what an object is, and optimizing algorithms that assume the
>> shape of an object.
>
>But that's just another way of saying that they have additional image
>data.

It's another way of saying NASA have an open minded approach to
problem solving.

--
Owamanga!
http://www.pbase.com/owamanga
 

Ron Hunter writes:

> I have explained how it IS possible, and how it IS done.

You've overlooked the covert channel, as I've pointed out in a separate
post.

--
Transpose hotmail and mxsmanic in my e-mail address to reach me directly.
 

JPS@no.komm writes:

> Define "it".

Producing an image that contains more detail than was available in the
original capture.

> You're talking about increasing maximum sampling resolution; they're
> talking about restoring contrast lost to MTF and AA filters.

You cannot restore contrast, you can only simulate it.

--
Transpose hotmail and mxsmanic in my e-mail address to reach me directly.
 

Martin Brown writes:

> Yes. They can. Typically on very high quality signal to noise data it is
> possible to obtain about a factor of 3x increase in apparent resolution
> on the brightest points using one of the regularised deconvolution
> methods like Maximum Entropy.

Apparent resolution is not actual detail.

> The critical requirement is that you must
> know or be able to determine the blurring characteristics of the imaging
> system exactly in order to use them.

If you know enough to fully reconstruct missing detail in the image, you
don't need the image in the first place.

> No it isn't. Knowing a priori that image brightness is always positive
> is a tremendously powerful constraint on deconvolution algorithms.

Knowing anything in advance adds image information.

If that advance knowledge doesn't match the reality of the original
scene, though, the results can be hugely misleading.

--
Transpose hotmail and mxsmanic in my e-mail address to reach me directly.
 

JPS@no.komm wrote:
> In message <d2uuq1$894$1@news.ucalgary.ca>,
> Bryan Heit <bjheit@nospamucalgary.ca> wrote:
>
>
>>Interpolation CANNOT improve resolution.
>
>
> Yes, but the Fuji 7000, when it outputs 6MP from its 6MP sensor *is*
> interpolating, with loss. At 12MP, it is also interpolating, but with
> *no* loss.

This seems to be a specific issue with Fuji cameras, who for some
reason rotated their CCD, which defies all logic when it comes to
imaging. The large majority of cameras will not benefit from
interpolation because they are designed properly, and can read the full
CCD resolution without interpolation. I cannot think of a single reason
why Fuji would do this, aside from some sort of rip-off marketing gimmick.

I do a lot of imaging for scientific purposes (predominantly
microscopy), and we avoid any image interpolation due to its tendency
to induce artifacts. I take this knowledge into my photography, as I
know that the uninterpolated image I produce will be truer than an
interpolated image. Afterwards I can alter the image as I see fit -
interpolation, cropping, etc. - but by not interpolating on the camera you
give yourself the maximal ability to alter your image, as what you start
with is the raw, completely unprocessed image. Once interpolated, it is
impossible to get back to the raw image.

Keep in mind that the raw image is the maximal amount of detail your
camera can capture - interpolation can produce a larger image with the
appearance of greater resolution, but in reality the image is no more
detailed (and possibly less detailed) than the original image.

Bryan
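Bryan's point is easy to demonstrate numerically. The sketch below (an illustrative example added here, not from the original thread; the function name is hypothetical) drops every other sample of a signal and linearly interpolates it back: smooth content survives almost untouched, but detail finer than the remaining sample spacing is gone for good.

```python
import numpy as np

def decimate_then_interpolate(signal):
    """Drop every other sample, then linearly interpolate back to full size.

    The result has as many samples as the original, but the interpolator
    can only guess from neighbouring samples -- detail that lived between
    the kept samples cannot be recovered.
    """
    x_full = np.arange(len(signal))
    x_kept = x_full[::2]
    return np.interp(x_full, x_kept, signal[::2])
```

A slowly varying signal comes back almost unchanged, while a pixel-level alternation (the digital equivalent of a picket fence) collapses to a flat line.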
 

In message <q71851p1ohrtb9ehrq7q0c58go763lldrd@4ax.com>,
Mxsmanic <mxsmanic@hotmail.com> wrote:

>JPS@no.komm writes:

>> You're talking about increasing maximum sampling resolution; they're
>> talking about restoring contrast lost to MTF and AA filters.
>
>You cannot restore contrast, you can only simulate it.

If you know how much the pixel-to-pixel contrast is attenuated, you can
bring it back up again, albeit by increasing the capture artifacts as
well.
--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS@no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
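John's point -- that known attenuation can be inverted, noise and all -- can be sketched in a few lines. This is an illustrative example added here (not from the thread), assuming the per-frequency attenuation of the capture chain is known:

```python
import numpy as np

def restore_contrast(signal, attenuation, max_gain=100.0):
    """Inverse-filter a 1-D signal whose per-frequency contrast loss is known.

    `attenuation` holds one gain factor per rfft bin (the system's MTF).
    Dividing the spectrum by it brings the contrast back up, but any noise
    in those bins is boosted by exactly the same factor, so the gain is
    capped to keep deeply attenuated bins from exploding.
    """
    spectrum = np.fft.rfft(signal)
    gain = np.minimum(1.0 / np.maximum(attenuation, 1e-12), max_gain)
    return np.fft.irfft(spectrum * gain, n=len(signal))
```

On noiseless data with a known MTF this recovers the original contrast exactly; on real captures the same gain amplifies the capture artifacts, just as John says.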
 

Mxsmanic wrote:

> Martin Brown writes:
>
>>Yes. They can. Typically on very high quality signal to noise data it is
>>possible to obtain about a factor of 3x increase in apparent resolution
>>on the brightest points using one of the regularised deconvolution
>>methods like Maximum Entropy.
>
> Apparent resolution is not actual detail.

Some of it is. You can know the positions and relative positions of
bright point sources much more accurately than to the nearest pixel even
with relatively crude deconvolution methods.
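This sub-pixel localisation is easy to check numerically. A hypothetical sketch (added here for illustration, not part of the thread): sample a blurred point source on an integer pixel grid and take the intensity-weighted centroid -- the recovered position is far finer than one pixel.

```python
import numpy as np

def subpixel_position(samples):
    """Locate a bright point source to sub-pixel accuracy via its centroid.

    The data exist only at integer pixel positions, yet the intensity-
    weighted mean position of the blur spot pins down the source location
    to a small fraction of a pixel.
    """
    positions = np.arange(len(samples))
    weights = samples - samples.min()  # crude background subtraction
    return float((positions * weights).sum() / weights.sum())
```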

>>The critical requirement is that you must
>>know or be able to determine the blurring characteristics of the imaging
>>system exactly in order to use them.
>
> If you know enough to fully reconstruct missing detail in the image, you
> don't need the image in the first place.

Rubbish. You only need to know the point spread function, plus the
positivity constraint - but there are still a lot of all-positive images.
>
>>No it isn't. Knowing a priori that image brightness is always positive
>>is a tremendously powerful constraint on deconvolution algorithms.
>
> Knowing anything in advance adds image information.

No, it adds information beyond the actual raw image. Namely:

The point spread function that the original target image was convolved
with to make the raw data.

And usually that there can be no regions of negative intensity in the
real world.

Taken together these provide the basis for genuine superresolution.
>
> If that advance knowledge doesn't match the reality of the original
> scene, though, the results can be hugely misleading.

The *fundamental* point that you are missing (perhaps by being
deliberately obtuse) is that there are never any negative brightness
regions in the real world. We sense things by light arriving at the
detector. This a priori knowledge provides the basis for most of the
enhanced resolution achieved by modern deconvolution codes and it is
very real from an information theoretic point of view.

Regions of frequency space where no data was measured can be
reconstructed reliably by imposing the positivity constraint. The answer
may not be perfect but it is a heck of a lot better than the raw image.

Regards,
Martin Brown
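The positivity-preserving behaviour Martin describes can be sketched with the Richardson-Lucy update, one of the standard regularised deconvolution schemes. This is a bare 1-D illustration added here (not an actual astronomy pipeline):

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=500):
    """Minimal 1-D Richardson-Lucy deconvolution.

    The update multiplies the current estimate by a correction ratio, so
    an estimate that starts positive can never go negative: the positivity
    constraint is built into the algorithm rather than bolted on.  As the
    thread says, the PSF must be known.
    """
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    estimate = np.full(len(observed), float(np.mean(observed)) + 1e-6)
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf[::-1], mode="same")
    return estimate
```

Blurring two point sources with a known PSF and running the loop concentrates the flux back onto the original pixels -- apparent resolution well beyond the blur width, exactly because the PSF and positivity were known in advance.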
 

Confused

On Wed, 06 Apr 2005 09:47:42 GMT
In message <2fO4e.1779$5F3.1443@news-server.bigpond.net.au>
Posted from BigPond Internet Services
"Douglas" <decipleofeos@hotmail.com> wrote:

> ...
> http://users.tpg.com.au/hpc/examples2.htm

"...harnessing the power of several powerful microprocessors."

Please, edumacate us. What programs and processors?

Jeff
 

Mxsmanic wrote:
> Ron Hunter writes:
>
>
>>And if you took a picture of something that was all vertical and
>>horizontal lines?
>
>
> I'm not interested in test charts.
>
>
>>You could then interpolate to just about any level,
>>and the picture would be an accurate representation of the original, and
>>the same as from a camera with whatever resolution you could find.
>
>
> Really? Try doing that with a picture of a picket fence.
>
>>This
>>is a rather limited case, of course, but it does illustrate the point.
>>IF the subject matter lends itself to interpolation, then much
>>improvement, indistinguishable from 'real' can be had.
>>So, what does your information theory have to say about that?
>
>
> That there is no net increase in information.
>
You are saying that even though the 'created' pixel is in the same
place, and the same color and intensity as a real pixel WOULD be on a
higher resolution sensor, there is no gain? If the created information
is indistinguishable from the 'real' information, then what is the
difference?


--
Ron Hunter rphunter@charter.net
 

Mxsmanic wrote:
> Owamanga <owamanga(not-this-bit)@hotmail.com> writes:
>
>
>>This is a very narrow-minded view of the problem.
>
>
> It's a harsh reality.
>
>
>>Your motto: "They said it couldn't be done, so I didn't try".
>
>
> It cannot be done, no matter how much one tries.

Grin. Do you know how many educated and experienced modem experts said
exactly that about getting data through a phone line at more than 450
BPS? I recall a programmer who told his bosses something couldn't be
done, and they asked him why someone else seemed to have already DONE it.
They had asked ME why I had done it, and I said no one told me it couldn't
be done, so I did it.

>
>
>>If objects can be correctly identified by software, they can be
>>re-rendered at any resolution.
>
>
> Only if the software contains all the detail concerning the objects. In
> other words, only if all the detail is already present (the fact that
> the software contains it instead of the captured image doesn't alter the
> constraints of information theory).
>
> In practice, no software can do this outside of the most trivial test
> cases.
>
>
>>Image recognition can read the text, identify each
>>letter, identify the font used and re-render it at 100 times the
>>original, maintaining the angle, color balance and texture from the
>>original.
>
>
> Some fonts are so slightly different that they cannot be identified in
> this way.
>
>
>>Software
>>development moves fast, and even though we may not have the magic
>>'enlarge' button in Photoshop yet, it *will* be there one day.
>
>
> No, it won't.
>
> What we will have is capture at higher resolutions instead.
>
> This reminds me of a claim I heard from someone long ago who said that
> the future would be shaped by ever-improving compression algorithms. In
> fact, it has been shaped by ever-increasing bandwidth.
>

Perhaps both will improve over time. That's the way it usually happens.
Think how many noted physicists said the smallest particle of matter
was the atom...
--
Ron Hunter rphunter@charter.net
 

Big Bill wrote:
> On Wed, 06 Apr 2005 17:47:09 +0200, Mxsmanic <mxsmanic@hotmail.com>
> wrote:
>
>
>
>>>Your motto: "They said it couldn't be done, so I didn't try".
>>
>>It cannot be done, no matter how much one tries.
>
>
> I seem to recall this was said about powered flight, too.
>
Or 6 Mbps data transmission over a telephone line (and WITH a
conversation going on at the same time!).


--
Ron Hunter rphunter@charter.net
 

Mxsmanic wrote:
> Don Stauffer writes:
>
>
>>Also, NASA and HST resolution improvement often hinges on having a good
>>idea of what an object is, and optimizing algorithms that assume the
>>shape of an object.
>
>
> But that's just another way of saying that they have additional image
> data.
>
Yes, one way of improving resolution is to add information. Does it
matter if the information comes from another photo, or from a program
using predictive assumptions, IF the assumptions are consistent with
reality?


--
Ron Hunter rphunter@charter.net
 

Mxsmanic wrote:
> Martin Brown writes:
>
>
>>Yes. They can. Typically on very high quality signal to noise data it is
>>possible to obtain about a factor of 3x increase in apparent resolution
>>on the brightest points using one of the regularised deconvolution
>>methods like Maximum Entropy.
>
>
> Apparent resolution is not actual detail.
>
>
>>The critical requirement is that you must
>>know or be able to determine the blurring characteristics of the imaging
>>system exactly in order to use them.
>
>
> If you know enough to fully reconstruct missing detail in the image, you
> don't need the image in the first place.
>
>
>>No it isn't. Knowing a priori that image brightness is always positive
>>is a tremendously powerful constraint on deconvolution algorithms.
>
>
> Knowing anything in advance adds image information.
>
> If that advance knowledge doesn't match the reality of the original
> scene, though, the results can be hugely misleading.
>

I would have to agree with that, but in some cases, we can accurately
infer that the cat is, indeed, alive.


--
Ron Hunter rphunter@charter.net
 

Owamanga wrote:
> On Wed, 06 Apr 2005 10:09:33 -0500, Ron Hunter <rphunter@charter.net>
> wrote:
>
>
>>Confused wrote:
>>
>>
>>>How much improvement is possible? That is the question.
>>>
>>
>>How much improvement depends mainly on the subject matter. If it is
>>mostly random shapes, such as trees, and grass, not a lot.
>
>
> Disagree. These are excellent subjects for fractal representation.
>
>
>>Regular
>>shapes with sharp lines and edges, quite a bit.
>
>
> The vector approach works well here, yes.
>
> --
> Owamanga!
> http://www.pbase.com/owamanga

Perhaps someday someone will implement both in an intelligent way. I
suspect one would have to do it area by area at this point.


--
Ron Hunter rphunter@charter.net
 

"Steve" <SPAMTRAPglawackus@hvc.rr.com> wrote in message
news:mDK4e.41175$qn2.9785972@twister.nyc.rr.com...
> David J. Littleboy wrote:
>
>> The Fuji cameras have sensors that are rotated 45 degrees.
>
> I don't see why this should automatically make a difference in the
> real world. I can see that having the pixel array parallel to lines
> in the subject would make sense, such as a subject with lots of lines
> that are vertical and horizontal. OTOH, a picture of a pyramid
> would seem to be perfectly suited to a sensor that is rotated.

Quite on the contrary. As Dave Martindale formulated much better than
I could, the diagonal (45 degrees on square pixels) resolution of a
rectangular sampling grid is a factor of sqrt(2) better than the
horizontal/vertical (aligned with the grid) resolution.

Bart
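Bart's sqrt(2) factor can be verified directly: project the square sampling lattice onto a chosen direction and measure the spacing between successive sample projections. A small illustrative check, added here (not part of the thread):

```python
import math

def projected_spacing(pitch, direction):
    """Smallest gap between projections of a square lattice onto `direction`.

    `direction` is a unit vector.  Along a grid axis the samples project
    `pitch` apart; along the 45-degree diagonal the projections interleave
    and land only pitch/sqrt(2) apart, i.e. sqrt(2) finer sampling.
    """
    dx, dy = direction
    projections = sorted({round(pitch * (i * dx + j * dy), 9)
                          for i in range(-3, 4) for j in range(-3, 4)})
    gaps = [b - a for a, b in zip(projections, projections[1:])]
    return min(gaps)
```

With pitch 1, the axis spacing is 1.0 and the diagonal spacing is 1/sqrt(2) ≈ 0.707, so the limiting frequency along the diagonal is sqrt(2) higher -- which is exactly why rotating the sensor 45 degrees trades diagonal resolution for horizontal/vertical resolution.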
 

"Owamanga" <owamanga(not-this-bit)@hotmail.com> wrote in message
news:7ns751pfim0oqu686ar2krpq4g3l5che8b@4ax.com...
> On Wed, 06 Apr 2005 06:57:40 +0200, Mxsmanic <mxsmanic@hotmail.com>
> wrote:
SNIP
>>All cameras are constrained by the limits of information theory,
>>and it's mathematically impossible to create additional useful
>>information in an image where none was originally captured.
>
> This is a very narrow-minded view of the problem.

What's worse, it's wrong. At least with regard to "additional useful
information". The data may not be exactly correct, only very
plausible (like interpolation of a straight edge), but that may indeed
still add "useful" information. The issue is more about the
possibility of extracting "useful" information from the captured data,
which will not be successful *all* the time, but (prior knowledge
about image content and) Poisson statistics may increase the rate of
success.

Bart
 

"Mxsmanic" <mxsmanic@hotmail.com> wrote in message
news:lq0851hrfic5jbchrlqag7r46gnv7jn8vf@4ax.com...
> Ron Hunter writes:
>
>> And if you took a picture of something that was all vertical and
>> horizontal lines?
>
> I'm not interested in test charts.

Why not? They can produce repeatable and objective test results,
indicative of what can be expected under different circumstances, like
the difference between (horizontal and) vertical resolution on
<http://www.dpreview.com/reviews/fujifilms3pro/page23.asp> (e.g. the
3rd crop with the golden rays, even though only captured at 6MP).

Bart
 

Bart van der Wolf writes:

> Why not?

Because I'm not an equipment geek.

> They can produce repeatable and objective test results,
> indicative of what can be expected under different circumstances, like
> the difference between (horizontal and) vertical resolution on
> <http://www.dpreview.com/reviews/fujifilms3pro/page23.asp> (e.g. the
> 3rd crop with the golden rays, even though only captured at 6MP).

Wow.

--
Transpose hotmail and mxsmanic in my e-mail address to reach me directly.