
What percentage of an image do our eyes focus on at any instant?


I learned in high school that even though we have a wide view, we only observe a tiny fraction of that view through our eyes. So at any instant we are not really looking at all the objects in front of us, which is why we need to focus on each object individually, one at a time.

Question: What percentage of the view is in focus at any instant, compared with the full view we observe through our eyes?


What you learned in high school isn't exactly true (or is misremembered). Human eyes take in visual input across roughly 120 degrees of visual angle (wider still if you count the far periphery), and some herbivorous mammals see nearly 360 degrees in total between their two eyes.

However, the resolution is not constant. The fovea is an area of high-density receptors in the retina. When you look at something specific, you are directing light from that area of the visual field to the fovea. The fovea is limited to about 2 degrees of visual angle.

Additionally, your attention is often limited to a small section of the visual field. When viewing a scene, your eyes naturally make saccades to bring various points into high-acuity vision and build up a good sense of the entire scene, but you are still seeing all the surrounding area, even if you aren't paying much attention to it. Importantly, you remain sensitive to surprising stimuli (especially high-contrast or moving ones) in those areas, which might cause you to redirect your attention; otherwise you would be very easy to sneak up on.
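For a rough sense of scale, here is a back-of-the-envelope estimate based on the figures above, treating both the ~2-degree fovea and a ~120-degree field as circular cones (an idealization; the real visual field is neither circular nor uniform):

```python
import math

def cone_solid_angle(full_angle_deg: float) -> float:
    """Solid angle (in steradians) subtended by a circular cone
    with the given full apex angle."""
    half = math.radians(full_angle_deg) / 2
    return 2 * math.pi * (1 - math.cos(half))

fovea = cone_solid_angle(2.0)     # ~2 degrees of high-acuity foveal vision
field = cone_solid_angle(120.0)   # ~120 degrees of overall visual field

print(f"in-focus fraction ≈ {fovea / field:.2%}")   # roughly 0.03%
```

By this crude measure, the region in sharp focus is only a few hundredths of a percent of the field, which is why the eye has to keep moving.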


Science Explains Instant Attraction

How do you know when you're attracted to a new face? Thank your medial prefrontal cortex, a brain region now discovered to play a major role in romantic decision-making.

Different parts of this region, which sits near the front of the brain, make a snap judgment about physical attraction and about whether the person is Mr. or Ms. Right — all within milliseconds of seeing a new face, a new study from Ireland finds.

The research is the first to use real-world dating to examine how the brain makes fast romantic judgments.

To conduct the study, researchers recruited 78 women and 73 men, all heterosexual and single, from Trinity College Dublin to participate in a speed-dating event. Like any typical speed-dating night, participants rotated around the room and chatted with one another for five minutes. After this meet-and-greet, they filled out forms indicating whom they'd like to see again.

But before the speed-dating event, 39 of the participants had their brains imaged. Using a functional magnetic resonance imaging (fMRI) machine, researchers recorded the volunteers' brain activity as they saw pictures of the people they'd soon meet at the event. For each picture, the volunteers had a few seconds to rate, on a scale of 1 to 4, how much they would like to date that person. They also reported their physical attraction to each person and how likeable they thought each person was.

Speed-dating for science

In the next few days, the volunteers met face-to-face with the people in the pictures, during the speed-dating event.

People turned out to be pretty good at knowing who interested them based on photographs alone, the researchers found. Some 63 percent of the time, their initial, photograph-based interest in dating a person was backed up by their real decision after their five-minute speed date.

The dating event, incidentally, was all aboveboard, said Jeffrey Cooper, a psychology researcher who conducted the study while he was a postdoctoral student at Trinity College. Participants who "matched" with another study volunteer really did exchange phone numbers, and between 10 percent and 20 percent ended up getting in touch with each other later, Cooper told LiveScience.

"We joked quite a bit that we hoped there might be a wedding someday, but no invitations have come through yet," he said.

The brain on dating

More intriguing was what the brain was doing to make those judgments. The researchers found a link between one specific region of the medial prefrontal cortex, called the paracingulate cortex, and people's ultimate decisions about dating. This region buzzed with increased activity when volunteers saw photographs of the people they'd later say "yes" to.

"We think it is especially involved in comparing options against a whole bunch of other options, or some sort of standard," Cooper said. [10 Surprising Sex Statistics]

Meanwhile, the ventromedial prefrontal cortex, which sits closer to the front of the head, became especially active when participants looked at faces they thought were attractive. But there was a catch: This region was most active when looking at faces that most people agreed were hot. Of course, people don't always agree on who looks good. When people saw a face that tripped their trigger but didn't get great ratings from others, a different region activated: the rostromedial prefrontal cortex, a segment of the medial prefrontal cortex located lower in the brain.

"That region in this moment may be doing something like evaluating not just 'Is this person a good catch?' but 'Is this person a good catch for me?'" Cooper said.

That role makes sense for the rostromedial region, he added, because the region is known to be very important in social decisions. Among the judgments this region makes is how similar someone else is to you. Given that people tend to find similar folks attractive as potential mates, the rostromedial prefrontal cortex could be saying, "Hey, this one matches us!"

There are two ways to look at the results, published in the Nov. 7 issue of the Journal of Neuroscience. One, Cooper said, is that we're pretty shallow. In the first few milliseconds of seeing a new face, we're evaluating physical attractiveness. But the rostromedial prefrontal cortex goes a bit deeper, very quickly asking, "Yeah, but are they compatible with me?"

"These really are separate processes," Cooper said. "But they really are both happening in your head as you make those initial evaluations."


What Is The Resolution Of The Human Eye In Megapixels?

What is the resolution of the human eye in megapixels? originally appeared on Quora: the knowledge sharing network where compelling questions are answered by people with unique insights.

Answer by Dave Haynie, Engineer, Musician, Photo/Videographer, on Quora:

What is the resolution of the human eye in megapixels? Well, it wouldn't directly match a real-world camera, but read on.

On most digital cameras, you have orthogonal pixels: they're in the same distribution across the sensor (in fact, a nearly perfect grid), and there's a filter (usually the "Bayer" filter, named after Bryce Bayer, the scientist who came up with the usual color array) that delivers red, green, and blue pixels.

So, for the eye, imagine a sensor with a huge number of pixels, about 120 million. There's a higher density of pixels in the center of the sensor, and only about 6 million of those sensors are filtered to enable color sensitivity; of those, only about 100,000 sense blue! Oh, and by the way, this sensor isn't flat but semi-spherical, so a very simple lens can be used without the distortions real camera lenses have to deal with: they must project onto a flat surface, which is less natural given the spherical nature of a simple lens (in fact, better lenses usually contain a few aspherical elements).

This is about 22mm diagonal on the average, just a bit larger than a micro four-thirds sensor, but the spherical nature means the surface area is around 1100mm^2, a bit larger than a full-frame 35mm camera sensor. The highest pixel resolution on a 35mm sensor is on the Canon 5Ds, which stuffs 50.6Mpixels into about 860mm^2.
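Quick arithmetic with the figures above gives a rough sense of the density gap (treating retinal receptors and camera pixels as directly comparable, which is only loosely true):

```python
eye_receptors = 120e6      # rods plus cones, from the estimate above
eye_area_mm2 = 1100        # approximate curved retinal surface area

canon_pixels = 50.6e6      # Canon 5Ds
canon_area_mm2 = 860       # approximate full-frame sensor area

print(f"eye:    {eye_receptors / eye_area_mm2:,.0f} receptors per mm^2")
print(f"camera: {canon_pixels / canon_area_mm2:,.0f} pixels per mm^2")
```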

So that's the hardware. But that's not the limiting factor on effective resolution. The eye seems to see "continuously," but its sampling is cyclical; there's effectively a very fast frame rate, but that's not the important part. The eye is in constant motion from ocular microtremors that occur at around 70-110Hz. Your brain constantly integrates the output of the moving eye into the image you actually perceive, and the result is that, unless something is moving too fast, you get an effective resolution boost from 120MP to something more like 480MP as the image is constructed from multiple samples.

Which makes perfect sense—our brains can do this kind of problem as a parallel processor with performance comparable to the fastest supercomputers we have today. When we perceive an image, there's this low-level image processing, plus specialized processes that work on higher level abstractions. For example, we humans are really good at recognizing horizontal and vertical lines, while our friendly frog neighbors have specialized processing in their relatively simple brains looking for a small object flying across the visual field: that fly he just ate. We also do constant pattern matching of what we see back to our memories of things. So we don't just see an object, we instantly recognize an object and call up a whole library of information on that thing we just saw.

Another interesting aspect of our in-brain image processing is that we don't demand any particular resolution. As our eyes age and we can't see as well, our effective resolution drops, and yet, we adapt. In a relatively short term, we adapt to what the eye can actually see, and you can experience this at home. If you're old enough to have spent lots of time in front of Standard Definition television, you have already experienced this. Your brain adapted to the fairly terrible quality of NTSC television (or the slightly less terrible but still bad quality of PAL television), and then perhaps jumped to VHS, which was even worse than what you could get via broadcast. When digital started, between VideoCD and early DVRs like the TiVo, the quality was really terrible, but if you watched lots of it, you stopped noticing the quality over time if you didn't dwell on it. An HDTV viewer of today, going back to those old media, will be really disappointed, and mostly because their brain moved on to the better video experience and dropped those bad-TV adaptations over time.

Back to the multi-sampled image for a second: cameras do this. In low light, many cameras today have the ability to average several different photos on the fly, which boosts the signal and cuts down on noise; your brain does this, too, in the dark. We're even doing the "microtremor" thing in cameras. The recent Olympus OM-D E-M5 Mark II has a "hi-res" mode that takes eight shots with 1/2-pixel adjustments, delivering what's essentially two 16MP images in full RGB (because the full-pixel steps ensure every pixel is sampled at R, G, B, and G), one offset by 1/2 pixel from the other. Interpolating these interstitial images as a normal pixel grid delivers 64MP, but the effective resolution is more like 40MP, still a big jump up from 16MP. Hasselblad showed a similar thing in 2013 that delivered a 200MP capture, and Pentax is also releasing a camera with something like this built in.
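A minimal sketch of the frame-stacking idea using synthetic data (real cameras, and certainly the visual system, do something far more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(480, 640))            # the "true" image

# Eight noisy exposures of the same scene.
frames = [scene + rng.normal(0.0, 0.2, scene.shape) for _ in range(8)]

single_frame_noise = np.std(frames[0] - scene)
stacked_noise = np.std(np.mean(frames, axis=0) - scene)

# Averaging N independent frames cuts the noise by about sqrt(N), here ~2.8x.
print(single_frame_noise, stacked_noise)
```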

We're doing simple versions of the higher-level brain functions, too, in our cameras. All kinds of current-model cameras can do face recognition and tracking, follow-focus, etc. They're nowhere near as good at it as our eye/brain combination, but they do ok for such weak hardware.

They're only a few hundred million years late.



The Relationship Between Consciousness, Attention and Awareness

Like all animals, we humans interpret the world around us through our senses. The neural signals from the senses (via our peripheral nervous system) can trigger attention, awareness and consciousness. Sensory signals inform us of our situational environment, so we can respond appropriately. But consider how much information is bombarding our sensory systems at any one time. Some have estimated a million bits of data are available to our senses at any moment, but no one would suggest that we are aware of all those bits of sensory data, however much that number turns out to be. What’s the relationship between our senses and consciousness?

Scientists understand many of the mechanisms that limit or filter sensory information from reaching the brain in the first place. Our senses filter out a sizeable chunk of the stimuli they detect, and only a portion of that available information reaches our brains. For example, the eye actively modifies the sensation of light that hits the retina by a process called lateral inhibition. Certain layers of cells in the retina respond to strong or intense light by inhibiting neighboring cells from firing. If neighboring retinal cells register less light than adjacent cells, the less-light cells fire even less than they would otherwise due to the inhibiting effect of the more-light cells. The result is to increase the perception of contrast. The dark areas would be sensed as even darker than the actual amount of light would indicate.

Lateral inhibition is also leveraged by our sense of touch and may even be utilized by our auditory system to alter sound perception. It’s just one mechanism our nervous system uses to reconstruct sensory input. Before perceptions ever reach the brain, external information is modified, so we’re getting an approximation of the world around us. We’re receiving a constructed portrayal of our environment.
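A toy illustration of that contrast-boosting effect, with a short 1D array standing in for a row of retinal cells (the light levels and inhibition strength here are made up for illustration):

```python
import numpy as np

light = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)   # a dim-to-bright edge

# Each cell's response is its own input minus a fraction of its neighbours' input.
inhibition = 0.2
response = light.copy()
response[1:-1] -= inhibition * (light[:-2] + light[2:])

print(response)
# The cell just inside the bright side overshoots (3.8 vs. 3.0 for its bright
# neighbours) and the cell just inside the dim side undershoots (-0.2 vs. 0.6),
# so the edge is perceived as higher-contrast than the raw light levels warrant.
```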

I also ask if awareness and consciousness are the same thing. I addressed this in my split brain article, and the answer wanders into the arena of philosophy. A surgery to reduce the worst symptoms of epilepsy severed the band of nerves connecting the two brain hemispheres, which had the side-effect of causing the patients to not be able to “consciously” acknowledge or verbalize an object that was shown only to their right, non-language hemisphere. At the same time they were able to select that same object from among several different objects by feel only. In other words, they could select the correct object by touch but were unable to say the name of that same object. They were verbally unconscious of seeing the object.

Is it a lack of awareness or of consciousness when the subjects say, “I don’t see anything,” or “I don’t know,” about what was shown, yet at the same time are able to correctly select by hand the very object they couldn’t report seeing? Much depends on your biases for how you will decide this. In this follow-up article, I discuss other cases — blindsight and memory deficits — where a person’s self-perception doesn’t match their actions. I suggest that our consciousness and awareness are not the same thing. We can be aware without being conscious. Awareness is required to be able to choose an object by feel even when that information is unavailable to consciousness, the overt sense of knowing. Is this a semantic problem? Perhaps, but it bears resolving.

Before continuing with this article, try this selective attention test available on YouTube to experience techniques used in the studies I will discuss. The task is to count the number of basketball passes in the short video. Follow the instructions carefully.

If you took the selective attention test above, try this follow-up test to see if you perform as well or better. I will discuss the significance of these tests in a moment, but first I want to differentiate between attention and awareness.

Filtering processes in the brain itself similarly contribute to what gets perceived and what reaches awareness. In his article, Why Visual Attention and Awareness Are Different, Victor Lamme, Professor of Cognitive Neuroscience at the University of Amsterdam, delineates the differences between attention and awareness and explains why the two are often conflated. He says that in order for a stimulus to reach awareness, it has to reach a “privileged status” in the brain. Sensory stimuli reach the brain and, via the process of selective attention, some of these reach a conscious state, which enables us to report about them. (Note: Lamme doesn’t differentiate between awareness and consciousness.) He says that selective attention is a “process where some inputs are processed faster, better, or deeper than others, so that they have a better chance of producing or influencing a behavioral response or of being memorized.” Note that behavioral responses don’t always require consciousness. “Fully attended stimuli are occasionally not perceived, suggesting that sensory processing does not necessarily always complete to a perceptual stage.”

In this case, when Lamme refers to stimuli not being perceived, he means consciously perceived. Sensory stimuli may reach the brain, but may not attain the privileged status necessary for consciousness. Nevertheless, we may respond unconsciously to the stimuli. Selective attention increases the chances that information reaches consciousness, but attention itself is an unconscious process and not a guarantee or a requirement for sensory information to reach consciousness.

Lamme lists several instances of invisible stimuli or “unconscious” inputs that have cryptic names such as anti-correlated disparity or the non-dominant patterns during perceptual rivalry. Lamme explains that these subliminal neurological priming mechanisms can impact and modify additional inputs. “The processing of a stimulus will leave a trace of activated and inhibited neurons that can last for a variable amount of time. The processing of subsequent stimuli might benefit from this trace if the two stimuli share properties (such as retinal position), resulting in attentional priming.” This is an example of how a neural signal can persist such that subsequent signals might be enhanced or suppressed. The brain filters, too.

There are many unconscious factors that influence attention, which shapes what reaches consciousness, but we have virtually no control over those processes in a meaningful way. Attention is the process that selects from among all the competing sensory inputs, some of which get passed on for additional handling. This selection process feeds awareness as well as consciousness. Attention and awareness are not the same thing in any case.

Simons and Chabris performed the gorilla-basketball study evaluating the role of attention and awareness. The subjects’ task was to watch a video of three white-shirted people passing a basketball back and forth and count the number of passes, while at the same time three black-shirted people were also passing a basketball. Into the midst of this video, a person dressed in a gorilla costume walked into the scene, faced the camera in the middle of the ball passers, pounded its chest, and walked out of frame. The gorilla was completely visible for nine seconds. In the experiments, about 50% of the subjects were so engrossed in counting the white-shirt passes that they were unaware of the gorilla’s plainly visible presence, a perceptual failure scientists call inattentional blindness.

Far more information is processed by our senses than we can be aware of, and our senses and our brain actively filter input in order to reduce noise and seek relevance. The inability to cognitively attend to the gorilla is not evidence of not seeing the gorilla. The visual system — the eyes and the primary visual cortex in the occipital lobe — receives the image of the gorilla, which is just as apparent as any person in the video, but the visual stimulus does not reach privileged status and get reported by downstream brain processes. Somewhere in the brain’s post-processing, the gorilla image doesn’t make it past the attention gatekeeper.

Access to the workings of the brain is beyond self-reflection. Neuroscientists can look at the brain in action, but no person can turn their sights inward and perceive the dynamic interactions taking place. And, in fact, those tacit interactions are not only invisible to self-reflection, they don’t operate in ways that are intuitive. In Strangers to Ourselves, Discovering the Adaptive Unconscious, Professor of Psychology Timothy Wilson points out humans have inherited the nonconscious brains of our animal ancestors.

It is difficult to know ourselves because there is no direct access to the adaptive unconscious, no matter how hard we try. Because our minds have evolved to operate largely outside of consciousness, and nonconscious processing is part of the architecture of the brain, it may not be possible to gain direct access to nonconscious processes.


The Chemistry of Life: The Human Body

Editor's Note: This occasional series of articles looks at the vital things in our lives and the chemistry they are made of.

You are what you eat. But do you recall munching some molybdenum or snacking on selenium? Some 60 chemical elements are found in the body, but what all of them are doing there is still unknown. Roughly 96 percent of the mass of the human body is made up of just four elements: oxygen, carbon, hydrogen and nitrogen, with a lot of that in the form of water. The remaining 4 percent is a sparse sampling of the periodic table of elements.

Some of the more prominent representatives are called macronutrients, whereas those appearing only at the level of parts per million or less are referred to as micronutrients. These nutrients perform various functions, including building bones and cell structures, regulating the body's pH, carrying charge, and driving chemical reactions. The FDA has set a reference daily intake for 12 minerals (calcium, iron, phosphorus, iodine, magnesium, zinc, selenium, copper, manganese, chromium, molybdenum and chloride). Sodium and potassium also have recommended levels, but they are treated separately. However, this does not exhaust the list of elements that you need. Sulfur is not usually mentioned as a dietary supplement because the body gets plenty of it in proteins. And there are several other elements, such as silicon, boron, nickel, vanadium and lead, that may play a biological role but are not classified as essential. "This may be due to the fact that a biochemical function has not been defined by experimental evidence," said Victoria Drake from the Linus Pauling Institute at Oregon State University. Sometimes all that is known is that lab animals performed poorly when their diets lacked a particular non-essential element. However, identifying the exact benefit an element confers can be difficult, as they rarely enter the body in a pure form. "We don't look at them as single elements but as elements wrapped up in a compound," said Christine Gerbstadt, national spokesperson for the American Dietetic Association. A normal diet consists of thousands of compounds (some containing trace elements) whose effects are the subject of ongoing research. For now, we can only say for certain what 20 or so elements are doing. Here is a quick rundown, with the percentage of body weight in parentheses:

  • Oxygen (65%) and hydrogen (10%) are predominantly found in water, which makes up about 60 percent of the body by weight. It's practically impossible to imagine life without water.
  • Carbon (18%) is synonymous with life. Its central role is due to the fact that it has four bonding sites that allow for the building of long, complex chains of molecules. Moreover, carbon bonds can be formed and broken with a modest amount of energy, allowing for the dynamic organic chemistry that goes on in our cells.
  • Nitrogen (3%) is found in many organic molecules, including the amino acids that make up proteins and the nucleic acids that make up DNA.
  • Calcium (1.5%) is the most common mineral in the human body, and nearly all of it is found in bones and teeth. Ironically, calcium's most important role is in bodily functions such as muscle contraction and protein regulation. In fact, the body will actually pull calcium from bones (causing problems like osteoporosis) if there's not enough of the element in a person's diet.
  • Phosphorus (1%) is found predominantly in bone but also in the molecule ATP, which provides energy in cells for driving chemical reactions.
  • Potassium (0.25%) is an important electrolyte (meaning it carries a charge in solution). It helps regulate the heartbeat and is vital for electrical signaling in nerves.
  • Sulfur (0.25%) is found in two amino acids that are important for giving proteins their shape.
  • Sodium (0.15%) is another electrolyte that is vital for electrical signaling in nerves. It also regulates the amount of water in the body.
  • Chlorine (0.15%) is usually found in the body as a negative ion, called chloride. This electrolyte is important for maintaining a normal balance of fluids.
  • Magnesium (0.05%) plays an important role in the structure of the skeleton and muscles. It is also necessary in more than 300 essential metabolic reactions.
  • Iron (0.006%) is a key element in the metabolism of almost all living organisms. It is also found in hemoglobin, which is the oxygen carrier in red blood cells. Half of women don't get enough iron in their diet.
  • Fluorine (0.0037%) is found in teeth and bones. Outside of preventing tooth decay, it does not appear to have any importance to bodily health.
  • Zinc (0.0032%) is an essential trace element for all forms of life. Several proteins contain structures called "zinc fingers" that help to regulate genes. Zinc deficiency has been known to lead to dwarfism in developing countries.
  • Copper (0.0001%) is important as an electron donor in various biological reactions. Without enough copper, iron won't work properly in the body.
  • Iodine (0.000016%) is required for making thyroid hormones, which regulate metabolic rate and other cellular functions. Iodine deficiency, which can lead to goiter and brain damage, is an important health problem throughout much of the world.
  • Selenium (0.000019%) is essential for certain enzymes, including several antioxidants. Unlike animals, plants do not appear to require selenium for survival, but they do absorb it, so there are several cases of selenium poisoning from eating plants grown in selenium-rich soils.
  • Chromium (0.0000024%) helps regulate sugar levels by interacting with insulin, but the exact mechanism is still not completely understood.
  • Manganese (0.000017%) is essential for certain enzymes, in particular those that protect mitochondria, the place where usable energy is generated inside cells, from dangerous oxidants.
  • Molybdenum (0.000013%) is essential to virtually all life forms. In humans, it is important for transforming sulfur into a usable form. In nitrogen-fixing bacteria, it is important for transforming nitrogen into a usable form.
  • Cobalt (0.0000021%) is contained in vitamin B12, which is important in protein formation and DNA regulation.
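As a worked example using the percentages above, here is what they imply for a 70 kg adult (the body mass is chosen purely for illustration):

```python
body_mass_kg = 70   # illustrative adult body mass

percent_of_body_weight = {
    "oxygen": 65, "carbon": 18, "hydrogen": 10, "nitrogen": 3,
    "calcium": 1.5, "phosphorus": 1, "iron": 0.006, "iodine": 0.000016,
}

for element, pct in percent_of_body_weight.items():
    print(f"{element:10s} {body_mass_kg * pct / 100:>12.6g} kg")
# e.g. oxygen ~45.5 kg, calcium ~1.05 kg, iron ~4.2 g, iodine ~11 mg
```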


Neuropsychology

Neuropsychology studies how the anatomy of the brain affects someone’s behavior, emotion, and cognition. Over the years, brain scientists have shown that different parts of the brain are responsible for specific functions, whether it’s recognizing colors or problem solving. Contrary to the 10 percent myth, scientists have proven that every part of the brain is integral for our daily functioning, thanks to brain imaging techniques like positron emission tomography and functional magnetic resonance imaging.

Research has yet to find a brain area that is completely inactive. Even studies that measure activity at the level of single neurons have not revealed any inactive areas of the brain. Many brain imaging studies that measure brain activity when a person is doing a specific task show how different parts of the brain work together. For example, while you are reading this text on your smartphone, some parts of your brain, including those responsible for vision, reading comprehension, and holding your phone, will be more active.

However, some brain images unintentionally support the 10 percent myth, because they often show small bright splotches on an otherwise gray brain. This may imply that only the bright spots have brain activity, but that isn’t the case. Rather, colored splotches represent brain areas that are more active when someone’s doing a task compared to when they’re not. The gray spots are still active, just to a lesser degree.

A more direct counter to the 10 percent myth lies in individuals who have suffered brain damage–through a stroke, head trauma, or carbon monoxide poisoning–and what they can no longer do as a result of that damage, or can still do just as well. If the 10 percent myth were true, damage to perhaps 90 percent of the brain wouldn’t affect daily functioning.

Yet studies show that damaging even a very small part of the brain may have devastating consequences. For example, damage to Broca’s area hinders proper formation of words and fluent speech, though general language comprehension remains intact. In one highly publicized case, a Florida woman permanently lost her “capacity for thoughts, perceptions, memories, and emotions that are the very essence of being human” when a lack of oxygen destroyed half of her cerebrum, which makes up about 85 percent of the brain.


Radiation Thermometer


Learn more about cancer risk in the U.S. at the National Cancer Institute.

Learn more about how EPA estimates cancer risk in EPA Radiogenic Cancer Risk Models and Projections for the U.S. Population, also known as the Blue Book.

Limiting Cancer Risk from Radiation in the Environment

EPA bases its regulatory limits and nonregulatory guidelines for public exposure to low level ionizing radiation on the linear no-threshold (LNT) model. The LNT model assumes that the risk of cancer due to a low-dose exposure is proportional to dose, with no threshold. In other words, cutting the dose in half cuts the risk in half.

The use of the LNT model for radiation protection purposes has been repeatedly recommended by authoritative scientific advisory bodies, including the National Academy of Sciences and the National Council on Radiation Protection and Measurements. There is evidence to support LNT from laboratory data and from studies of cancer in people exposed to radiation.2,3,4,5
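As a sketch, the LNT model really is just a straight proportionality; the risk coefficient below is a placeholder for illustration, not an EPA figure:

```python
def lnt_excess_risk(dose_msv: float, risk_per_msv: float = 1e-5) -> float:
    """Linear no-threshold model: modeled excess cancer risk is directly
    proportional to dose, with no dose treated as completely safe."""
    return dose_msv * risk_per_msv

print(lnt_excess_risk(10.0))   # some dose
print(lnt_excess_risk(5.0))    # half the dose -> exactly half the modeled risk
```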


Convex lenses are thicker in the middle than at the edges, and concave lenses are thicker at the edges than in the middle. As light travels through the lens, it bends either outward or inward, toward the thickest part of the lens. The University of California, San Diego (UCSD) notes that a convex lens magnifies by bending the light into a focal point and helps you see an object that is very far in the distance or very small.
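For reference, that focusing behaviour follows the standard thin-lens relation 1/f = 1/d_o + 1/d_i. A small sketch (the focal length and object distance are arbitrary examples, not from the article):

```python
def image_distance(focal_length_mm: float, object_distance_mm: float) -> float:
    """Thin-lens equation, solved for the image distance."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

# A converging (convex) lens with a 50 mm focal length and an object 2 m away:
d_i = image_distance(50.0, 2000.0)
magnification = -d_i / 2000.0

print(d_i, magnification)   # image ~51.3 mm behind the lens, inverted and reduced
```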


Eyeglasses are the most popular kind of convex lenses for vision correction, according to OSU. A frame holds two glass or plastic lenses, which are either concave for nearsighted vision or convex for farsighted vision. Contact lenses that correct farsighted vision problems are also convex.


How Do Computers Affect Vision?

Computer vision syndrome (CVS) is similar to carpal tunnel syndrome and other repetitive motion injuries you might get at work. It happens because your eyes follow the same path over and over. And it can get worse the longer you continue the movement.

When you work at a computer, your eyes have to focus and refocus all the time. They move back and forth as you read. You may have to look down at papers and then back up to type. Your eyes react to images that are constantly moving and changing, shifting focus and sending rapidly varying input to the brain. All these jobs require a lot of effort from your eye muscles. And to make things worse, unlike a book or piece of paper, the screen adds contrast, flicker, and glare. What's more, we blink far less frequently when using a computer, which dries out the eyes and periodically blurs your vision while you work.

You’re more likely to have problems if you already have eye trouble, if you need glasses but don't have them, or if you wear the wrong prescription for computer use.

Computer work gets harder as you age and the natural lenses in your eyes become less flexible. Somewhere around age 40, your ability to focus on near and far objects will start to go away. Your eye doctor will call this condition presbyopia.


Intestine Treatments

  • Antidiarrheal medicines: Various medicines can slow down diarrhea, reducing discomfort. Reducing diarrhea does not slow down recovery for most diarrheal illnesses.
  • Stool softeners: Over-the-counter and prescription medicines can soften the stool and reduce constipation.
  • Laxatives: Medicines can relieve constipation by a variety of methods, including stimulating the bowel muscles and bringing in more water.
  • Enema: A term for pushing liquid into the colon through the anus. Enemas can deliver medicines to treat constipation or other colon conditions.
  • Colonoscopy (treatment): Using tools passed through an endoscope, a doctor can treat certain colon conditions. Bleeding, polyps, or cancer might be treated by colonoscopy.
  • Polypectomy: During colonoscopy, removal of a colon polyp is called polypectomy.
  • Colon surgery: Using open or laparoscopic surgery, part or all of the colon may be removed (colectomy). This may be done for severe bleeding, cancer, or ulcerative colitis.
