Limit to Human Vision & its Effect on Optimum Digital Image Resolution:
I wanted to figure out how well a human could see the dots that are printed
on a page before making any decisions about a digital camera or
scanner. As usual, I couldn't find a lot of information about the specific
topic of vision with regard to digital images. So I decided to post what I
found to let everyone else know how this works. Bear with me, this gets a
little in depth, but the results are worth the effort. After some print testing,
staring at a lot of dots on a page and a lot of reading on the web, this is what I learned:
I knew that in general, the higher the resolution of an image, whether from
a digital camera or a scanner, the cleaner and less
jagged it would appear. From experience, I also knew that the closer the dots in an
image are, the more likely I was to not notice that they were dots at all.
At some point, if the dots are close enough together, somehow my eyes would just
blur everything together and the image I wanted would look great.
Typically, I would end up scanning a picture at a bunch of different
resolutions and printing it a bunch of ways before figuring out what
resolution was best. Because I didn't know whether the results
depended on the picture itself or on my own eyes, I typically ended up
going through this process a lot. The resolution always ended up in the
200-300 dpi range, and that matched most of the suggestions for
printed resolution that I could find. However, there were enough suggestions
to print at 600 dpi or higher that I never knew what was best.
It turns out that the human eye only has a certain number of light detectors
in it. The retina of your eye, where anything you look at is projected and
converted into a signal to send the brain, only has a limited number of sensors,
called cones and rods. It is these sensors that convert the light that
enters your eye into nerve impulses for your brain to figure out. Since
these sensors are in limited number, it makes sense that they can only handle a
certain amount of information. When they have taken in all they can, your
brain goes to work and interprets the signals from the sensors, and also
determines what is likely to be between the sensors as well. It is for
this reason that after a certain number of dots are printed that everything
blurs together and the images appear clean. So the question was: how much
information is necessary before my brain makes everything look good?
The good news is that a lot of hard working scientists in the field of vision
have already figured out how much information a human eye can handle.
However, it appears as if they have left it to me to determine how that relates
to digital images. So I did. Your eye takes in light in the shape of
an ever increasing cone that gets larger as the subject you are viewing gets
farther away. The smallest part of the cone is on the retina in the back
of your eye, where the light is focused.
[Figure: the eye's viewing cone from the object to the retina. Source: NDT Resource Center, www.ndt-ed.org]
The size of the peak of the cone
does not change much, and your eye is constantly working to focus the light on
this area. In this area, your eyes have a densely packed bunch of sensors,
that make up your central field of vision. This area is called the
"fovea". Since the size of the circle of light changes very little, and
there are only a fixed number of sensors in this area, your brain must determine
how to blend what you look at together. The further away something is, the
smaller the area it will cover at the end of the cone where it is located.
This translates back to an even smaller area at the retina. Therefore,
when an object is viewed up-close, your eye's central field of vision is filled
with the object and a lot of information about it is sent to the brain, and the
detail of the object can be recognized. As you move away from the object,
it covers less of your central field of vision, and therefore less information
about it is sent to the brain which results in the detail of the object being
reduced. As you move away, eventually, it will cover such a small portion
of your central field of vision that virtually no detail at all can be seen, but
your brain does a lot of work and still interprets what you see as best it
can. This is all very good for those of us who enjoy digital images, as
they are completely made up of a bunch of dots that our brain needs to blur
together so we can see one smooth image.
The point at which the dots blur together is determined by the number of
sensors stimulated by light, for the best possible case of lighting and
contrast (bright lighting, 100% contrast: black vs. white). This can be measured by the angle of the cone
that is formed from your retina to the image you are viewing. The angle is
determined by the height of the object and its distance from you, and is
typically measured in "degrees" (Θ, theta, in the figure above). Scientists and doctors measure your ability
to recognize a certain amount of detail for each single degree of your viewing
field. I am sure that you have heard the term 20/20 vision which means
that you are able to recognize at 20 feet what the average person with good
eyesight can recognize at 20 feet. This also happens to relate to the
minimum size of an object that you are able to recognize from the rest of an
image before it is blurred together into one image. This is called
"Visual Acuity". To understand how this is measured, you must
know how angles are measured on a circle. A circle is divided into 360
"degrees". Each one of these degrees can be divided into smaller units called
"minutes". There are 60 "minutes" in every
"degree". Visual
Acuity for an individual with 20/20 vision is measured as the minimum angle of
their viewing field that must be filled with an image to recognize one feature
from the rest of the image (measured in "minutes"), 20 / 20 = 1
"minute". A person with 20/10 vision can recognize one feature
from the rest of an image at 20 feet when the average person has to be at 10
feet to recognize the same detail. The 20/10 person also only has to fill
10 / 20 = .5 "minutes" of their field of vision to see that level of detail.
A person with 20/200 vision has to be 20 feet away to recognize one feature from
the rest of an image, when the average person can recognize this from 200 feet
away. This person also has to fill 200 / 20 = 10 "minutes" of
their field of vision to recognize that level of detail, not very good.
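The arithmetic above can be sketched in a few lines. Here is a minimal helper (the function name is mine, not a standard one) that converts a 20/X acuity rating into the minimum feature angle in arc-minutes:

```python
def acuity_minutes(denominator, numerator=20):
    """Minimum recognizable feature angle, in arc-minutes, for a
    numerator/denominator acuity rating (e.g. 20/20, 20/10, 20/200)."""
    return denominator / numerator

# The three worked examples from the text:
print(acuity_minutes(20))   # 20/20 vision -> 1.0 minute
print(acuity_minutes(10))   # 20/10 vision -> 0.5 minutes
print(acuity_minutes(200))  # 20/200 vision -> 10.0 minutes
```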
The greatest possible resolution for a person's vision is determined by the
total number of sensors on the retina, per "degree" of their field of vision.
Every "degree" of a field of vision (60 minutes) covers
approximately 288 micrometers on the retina, as projected by your
eye's lens. The average human eye contains approximately 120 cones
across this same distance, though this can differ from person to person. Since
there are only 120 sensors, no more than 120 pieces of information can be determined
for each "degree" before your brain blurs things together. Any
more information delivered to the retina and there are no sensors available to
send it to the brain. That
means that if you line up a number of dots alternating in color between black
and white, that only 120 of these can be viewed per "degree" of vision
and still be seen as individual dots. Any more dots in this
"degree", and your eye will blur all those black and white dots
together into a shade of gray. A person with this level of perfect
vision would be classified with 20/10 vision: 10 / 20 = .5 "minutes" to
recognize 1 feature from the rest of an image. If there is one
recognizable feature, it must be distinguishable from another contrasting feature
in the image. Therefore, there are 2 features per "minute": .5
"minutes" each for white and black. So, in one
"degree" there are 120 features that can be recognized (2 features x
60 minutes = 120 features). To relate this to a line of black and white
dots, there would be 60 black dots and 60 white dots. That is the maximum
that a person with ultimate vision could ever see. The average person with
20/20 vision can only recognize 60 features (20 / 20 = 1 "minute", 1
feature per minute is equal to 60 features per "degree" 1 x 60 =
60). Now that we know the maximum resolution of the eye per
"degree", all we need to do is figure out how these "degrees"
translate into the actual size of objects in your viewing field. Let's
face it, you don't hear many people saying that an object covers 37
"minutes" of their field of vision when asked how tall something is.
Since they have determined how much of the field of vision must be covered to
recognize one feature from the rest of the image, we can use this to determine how many
dots we must pack into every inch of an image before our eyes can't tell that
they are dots at all. Remember, we are trying to determine the minimum
number of dots that it will take to force the features to blur. Blurring
is what we want! To do this we need to take into account the person's
quality of vision, and their distance from the image. The maximum distance
between the features or dots for a person's given vision and distance from an
image can be determined with a little geometry:
"minutes" = your vision / 20 (e.g. 20/15 gives 15 / 20 = .75 "minutes")
"degrees" = "minutes" / 60
distance = 2 x (viewing distance) x tan("degrees" / 2), corner-to-corner
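As a quick check of this geometry, here is a sketch under my reading of the formulas (the function and variable names are mine): the acuity angle in arc-minutes is vision / 20, dividing by 60 gives degrees, and the corner-to-corner spacing at a given viewing distance follows from the tangent:

```python
import math

def feature_spacing(vision_denominator, viewing_distance_inches):
    """Corner-to-corner distance, in inches, between two features that
    are just barely resolvable for 20/vision_denominator eyesight."""
    minutes = vision_denominator / 20      # e.g. 20/20 -> 1 arc-minute
    degrees = minutes / 60                 # 60 arc-minutes per degree
    return 2 * viewing_distance_inches * math.tan(math.radians(degrees) / 2)

# 20/20 vision viewing an image from 12 inches away:
print(feature_spacing(20, 12))  # ~0.00349 inches, corner-to-corner
```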
All of the measurements mentioned above relate to the absolute
distance between two features. The best way to think about this is the
distance between the center of a circle and its rim: no matter which
point on the circle we measure to from the center, it is always the same.
However, in digital images, the pixels or dots are square. The distance
from the center of a square to every point on its edge is not the same.
The shortest distance is to the flat edges, and the longest distance is to the
corners. When we figure out the maximum distance between dots, we must
measure from corner-to-corner, which is the greatest distance between objects in
a pattern filled with black and white squares. However, picture resolution
is not measured on the diagonal. It is measured in the horizontal and vertical
directions. Therefore, we have to translate the diagonal distance that we
calculate into the horizontal and vertical directions. The result will be that
the distance between the flat edges of the squares (horizontal & vertical) will be closer than the
maximum distance needed to blur the dots.
max distance = distance / SQRT(2), flat-to-flat
The max distance plus the size of the dot itself makes up one "dot pair";
a chain of these dot pairs cannot be recognized as individual dots.
Dot pair distance = max distance x 2
The number of dot pairs that must be placed into each inch of image to create a
continuous blurred image can be calculated as well:
Dot pairs per inch = 1 / Dot pair distance
In an image, we do not actually create the information for the blank space
between dots; the white spaces left by the paper always fall between the
colored dots that we print. The resolution of a picture in dots per inch is
therefore equal to the number that we just calculated. This gives us the
minimum digital picture resolution in dots per inch, dpi:
minimum dpi = Dot pairs per inch
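Putting the whole chain of formulas together as one sketch (function and variable names are mine, not standard terms), which should reproduce the chart values:

```python
import math

def minimum_dpi(vision_denominator, viewing_distance_inches):
    """Minimum print resolution (dpi) at which the dots should blur
    together, for 20/vision_denominator eyesight at a given distance."""
    minutes = vision_denominator / 20                  # acuity angle
    degrees = minutes / 60
    # Corner-to-corner spacing between just-resolvable features:
    diagonal = 2 * viewing_distance_inches * math.tan(math.radians(degrees) / 2)
    flat = diagonal / math.sqrt(2)   # translate diagonal to flat-to-flat
    dot_pair = 2 * flat              # one dot plus one gap
    return 1 / dot_pair              # dot pairs (= printed dots) per inch

print(round(minimum_dpi(20, 12)))  # 203 (20/20 vision at 12")
print(round(minimum_dpi(15, 12)))  # 270 (20/15 vision at 12")
print(round(minimum_dpi(15, 6)))   # 540 (20/15 vision at 6")
```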
Below is a chart of the minimum needed resolution of your image (dpi) for
various levels of vision and viewing distances from the image.
Personally, moving an image closer than about 8" from my eye only makes
the image go out of focus, so in general I don't consider the results for
any distance less than 8" to be important. To make sure that everyone
sees the image as cleanly as I do, I keep my images closer to the 20/15
level of vision. If Superman comes along with 20/10, he will just have
to suffer with seeing the dots in my pictures.
Minimum Resolution for Smooth, Clean Images (dpi)

Distance | 20/20 (6/6) | 20/15 (6/4.5)
36"      | 68          | 90
24"      | 101         | 135
18"      | 135         | 180
12"      | 203         | 270
8"       | 304         | 405
6"       | 405         | 540
Keep in mind that these are the worst case for how much resolution you
need (the best situation for someone to see the individual dots). If any
of the assumptions change from this optimal viewing situation, such as:

- Contrast not 100% (most dots are not alternating black and white like an
eye chart; typically they are different colors or shades of gray)
- Lighting not extremely bright (normal home lighting is usually not the
brightest)
- Viewing at larger distances (are people going to hold your pictures
right up to their face to try to see the dots? Probably not; 12" to 18"+
is more reasonable)
- Vision not perfect

then the resolutions above will be more than enough to blur the dots.

To
check these numbers I created a few print tests
of a field of black dots (1 pixel wide) on a
white background for a number of different
resolutions. I set my laser printer to
print out at 600 dpi, so that it would not blur
the dots together for me at the typical setting
of 300 dpi. Printing out the test image
set to 600 dpi resulted in what looked like a
solid gray bar. According to the charts,
that is what should happen. The highest
resolution it says I would need, even if I have
20/15 vision, is 540. So the calculations
and the scientists appear to be right.
However, I wanted to make sure my printer
actually printed those tiny dots that I couldn't
see, and that this gray bar wasn't created by my
printer. So, I got a magnifying glass, and
a really bright light, and what did I see?
A lot of really tiny dots, just as I had created
in the picture. The dots were really hard
to see even with the magnifying glass and really
bright lighting. So it was my brain, blurring
the dots and not my printer. Good
news. I then printed off a couple of more
resolutions. At 400 dpi, I could just barely start to see a hint of dots
at about 8" (once again the magnifying glass showed them prominently), but
I could definitely only see a gray bar at 12". At 300 dpi, I could
clearly see the dots at about 8", lost them at about 10.5", and at 12"
they were definitely gone and only a gray bar could be seen. I
must have vision somewhere between 20/20 and 20/15 according to the
chart. If you want to try this for yourself, you can download the test
images below.
Print Test: (Print
Test 600, 300, 150 dpi, Print
Test 400, 200, 100 dpi).
Download the pictures by right
clicking on the link and selecting "save
link as" or "save target as".
Then print them out using photo editing software. Remember to make sure
your printer can print at 600 dpi quality.
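If you would rather generate a test image than download one, here is a rough sketch of the idea (my own helper, not the original test files): it writes a plain-text PBM image of isolated black pixels on a white field, which most photo editors can open and print.

```python
def dot_test_pbm(dot_pitch, width=600, height=100):
    """Build a plain PBM (P1) image: one black pixel every `dot_pitch`
    pixels in each direction, white everywhere else. Printed at 600 dpi,
    dot_pitch=2 gives a 300 dpi dot field, dot_pitch=1 gives 600 dpi."""
    rows = []
    for y in range(height):
        row = ["1" if x % dot_pitch == 0 and y % dot_pitch == 0 else "0"
               for x in range(width)]
        rows.append(" ".join(row))
    return "P1\n{} {}\n".format(width, height) + "\n".join(rows) + "\n"

# A 300 dpi-equivalent dot field for a printer set to 600 dpi:
with open("dots_300.pbm", "w") as f:
    f.write(dot_test_pbm(dot_pitch=2))
```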
There you have it, it appears as
if 300 dpi is more than enough resolution for
any reasonable person to print their pictures,
as the dots could only be seen under the best
conditions and by people with the best vision.
If you have any questions please feel free to email me at: johnatblahadotnet.
Now that takes us to another question, when I
buy a digital camera, how many megapixels do I have to have to print off good
pictures?
If
you didn't read the section on the differences in creating and editing pictures
for screen viewing and printing, you can read that now.
Links to more in-depth information on this issue:
- Visual Acuity from the University of Utah Medical School
- Visual Acuity of the Human Eye
- Printers and prints by Norman Koren