From the book “THE BRAIN”
by David Eagleman
How does the biological wetware of the brain give rise to our experience: the sight of emerald green, the taste of cinnamon, the smell of wet soil? What if I told you that the world around you, with its rich colors, textures, sounds, and scents, is an illusion, a show put on for you by your brain? If you could perceive reality as it really is, you would be shocked by its colorless, odorless, tasteless silence. Outside your brain, there is just energy and matter. Yet the human brain has become adept at turning this energy and matter into a rich sensory experience of being in the world. How?
The illusion of reality
From the moment you awaken in the morning, you're surrounded by a rush of light and sounds and smells. Your senses are flooded. All you have to do is show up every day, and without thought or effort, you are immersed in the irrefutable reality of the world.
But how much of this reality is a construction of your brain, taking place only inside your head?
Consider the rotating snakes, below. Although nothing is actually moving on the page, the snakes appear to be slithering. How can your brain perceive motion when you know that the figure is fixed in place?
[WELL FOR ME MY BRAIN SEES NO MOVEMENT OF THESE SNAKES! WOW…. AM I DIFFERENT THAN MOST, OR THE VAST MAJORITY? THE AUTHOR TAKES IT FOR GRANTED PEOPLE WILL SEE THE SNAKES MOVING, BUT I DO NOT - Keith Hunt]
Nothing moves on the page, but you perceive motion. Rotating Snakes illusion by Akiyoshi Kitaoka.
Compare the color of the squares marked A and B. Checkerboard illusion by Edward Adelson.
Or consider the checkerboard above.
Although it doesn't look like it, the square marked A is exactly the same color as the square marked B. Prove this to yourself by covering up the rest of the picture. How can the squares look so different, even though they're physically identical?
[AND THE TWO SQUARES FOR ME REMAIN THE TWO DIFFERENT COLORS THAT THEY ARE! AGAIN, I AM SO DIFFERENT THAN MOST OR THE VAST MAJORITY OF PEOPLE? IT WOULD SEEM SO; NEITHER TEST WORKED FOR ME - Keith Hunt]
Illusions like these give us the first hints that our picture of the external world isn't necessarily an accurate representation. Our perception of reality has less to do with what's happening out there, and more to do with what's happening inside our brain.
[WOW…AS THE TWO TEST VISIONS DO NOT WORK FOR ME AS THE AUTHOR TAKES FOR GRANTED THEY WILL…. WOW DOES THAT MEAN I DO SEE THE WORLD WITH AN ACCURATE PICTURE; IS MY BRAIN IN A MODE THAT CANNOT BE DECEIVED OR SOMETHING LIKE THAT? Keith Hunt]
Your experience of reality
It feels as though you have direct access to the world through your senses. You can reach out and touch the material of the physical world - like this book or the chair you're sitting on. But this sense of touch is not a direct experience. Although it feels like the touch is happening in your fingers, in fact it's all happening in the mission control center of the brain. It's the same across all your sensory experiences. Seeing isn't happening in your eyes; hearing isn't taking place in your ears; smell isn't happening in your nose. All of your sensory experiences are taking place in storms of activity within the computational material of your brain.
Here's the key: the brain has no access to the world outside. Sealed within the dark, silent chamber of your skull, your brain has never directly experienced the external world, and it never will.
Instead, there's only one way that information from out there gets into the brain. Your sensory organs - your eyes, ears, nose, mouth, and skin - act as interpreters. They detect a motley crew of information sources (including photons, air compression waves, molecular concentrations, pressure, texture, temperature) and translate them into the common currency of the brain: electrochemical signals.
These electrochemical signals dash through dense networks of neurons, the main signalling cells of the brain. There are a hundred billion neurons in the human brain, and each neuron sends tens or hundreds of electrical pulses to thousands of other neurons every second of your life.
Neurons communicate with one another via chemical signals called neurotransmitters. Their membranes carry electrical signals rapidly along their length. Although artistic renditions like this one show empty space, in fact there is no room between cells in the brain - they are packed tightly against one another.
Everything you experience - every sight, sound, smell - rather than being a direct experience, is an electrochemical rendition in a dark theater.
How does the brain turn its immense electrochemical patterns into a useful understanding of the world?
It does so by comparing the signals it receives from the different sensory inputs, detecting patterns that allow it to make its best guesses about what's "out there". Its operation is so powerful that its work seems effortless.
But let's take a closer look.
Let's begin with our most dominant sense: vision…
The act of seeing feels so natural that it's hard to appreciate the immense machinery that makes it happen. About a third of the human brain is dedicated to the mission of vision, to turning raw photons of light into our mother's face, or our loving pet, or the couch we're about to nap on. To unmask what's happening under the hood, let's turn to the case of a man who lost his vision, and then was given the chance to get it back.
I was blind but now I see
Mike May lost his sight at the age of three and a half. A chemical explosion scarred his corneas, leaving his eyes with no access to photons. As a blind man, he became successful in business, and also became a championship Paralympic skier, navigating the slopes by sound markers.
Then, after over forty years of blindness, Mike learned about a pioneering stem cell treatment that could repair the physical damage to his eyes. He decided to undertake the surgery; after all, the blindness was only the result of his unclear corneas, and the solution was straightforward.
But something unexpected happened. Television cameras were on hand to document the moment the bandages came off. Mike describes the experience when the physician peeled back the gauze: "There's this whoosh of light and bombarding of images on to my eye. All of a sudden you turn on this flood of visual information. It's overwhelming."
Biology has discovered many ways to convert information from the world into electrochemical signals. Just a few of the translation machines that you own: hair cells in the inner ear, several types of touch receptors in the skin, taste buds in the tongue, molecular receptors in the olfactory bulb, and photoreceptors at the back of the eye.
Signals from the environment are translated into electrochemical signals carried by brain cells. It is the first step by which the brain taps into information from the world outside the body. The eyes convert (or transduce) photons into electrical signals. The mechanisms of the inner ear convert vibrations in the density of the air into electrical signals. Receptors on the skin (and also inside the body) convert pressure, stretch, temperature, and noxious chemicals into electrical signals. The nose converts drifting odor molecules, and the tongue converts taste molecules to electrical signals. In a city with visitors from all over the world, foreign money must be translated into a common currency before meaningful transactions can take place. And so it is with the brain. It's fundamentally cosmopolitan, welcoming travellers from many different origins.
One of neuroscience's unsolved puzzles is known as the "binding problem": how is the brain able to produce a single, unified picture of the world, given that vision is processed in one region, hearing in another, touch in another, and so on? While the problem is still unsolved, the common currency among neurons - as well as their massive interconnectivity - promises to be at the heart of the solution.
Mike's new corneas were receiving and focussing light just as they were supposed to. But his brain could not make sense of the information it was receiving. With the news cameras rolling, Mike looked at his children and smiled at them. But inside he was petrified, because he couldn't tell what they looked like, or which was which. "I had no face recognition whatsoever," he recalls.
In surgical terms, the transplant had been a total success. But from Mike's point of view, what he was experiencing couldn't be called vision. As he summarized it: "my brain was going 'oh my gosh'".
With the help of his doctors and family, he walked out of the exam room and down the hallway, casting his gaze toward the carpet, the pictures on the wall, the doorways. None of it made sense to him. When he was placed in the car to go home, Mike set his eyes on the cars, buildings, and people whizzing by, trying unsuccessfully to understand what he was seeing. On the freeway, he recoiled when it looked like they were going to smash into a large rectangle in front of them. It turned out to be a highway sign, which they passed under. He had no sense of what objects were, nor of their depth. In fact, post-surgery, Mike found skiing more difficult than he had as a blind man. Because of his depth perception difficulties, he had a hard time telling the difference between people, trees, shadows, and holes. They all appeared to him simply like dark things against the white snow.
The lesson that surfaces from Mike's experience is that the visual system is not like a camera. It's not as though seeing is simply about removing the lens cap. For vision, you need more than functioning eyes.
In Mike's case, forty years of blindness meant that the territory of his visual system (what we would normally call the visual cortex) had been largely taken over by his remaining senses, such as hearing and touch. That impacted his brain's ability to weave together all the signals it needed to have sight. As we will see, vision emerges from the coordination of billions of neurons working together in a particular, complex symphony.
Today, fifteen years after his surgery, Mike still has a difficult time reading words on paper and the expressions on people's faces. When he needs to make better sense of his imperfect visual perception, he uses his other senses to crosscheck the information: he touches, he lifts, he listens. This comparison across the senses is something we all did at a much younger age, when our brains were first making sense of the world.
[YOU MAY REMEMBER JESUS GAVE SIGHT TO A BLIND MAN, BUT HE SAW THINGS WRONGLY; JESUS DID ANOTHER MIRACLE WITH THE MAN’S BRAIN, AND HIS SIGHT WAS FULLY RESTORED AS NORMAL, AS IF NEVER BEING BLIND - Keith Hunt]
Seeing requires more than the eyes
When babies reach out to touch what's in front of them, it's not only to learn about texture and shape. These actions are also necessary for learning how to see. While it sounds strange to imagine that the movement of our bodies is required for vision, this concept was elegantly demonstrated with two kittens in 1963.
Richard Held and Alan Hein, two researchers at MIT, placed two kittens into a cylinder ringed in vertical stripes. Both kittens got visual input from moving around inside the cylinder. But there was a critical difference in their experiences: the first kitten was walking of its own accord, while the second kitten was riding in a gondola attached to a central axis. Because of this setup, both kittens saw exactly the same thing: the stripes moved at the same time and at the same speed for both. If vision were just about the photons hitting the eyes, their visual systems should develop identically. But here was the surprising result: only the kitten that was using its body to do the moving developed normal vision. The kitten riding in the gondola never learned to see properly; its visual system never reached normal development.
Inside a cylinder with vertical stripes, one kitten walked while the other was carried. Both received exactly the same visual input, but only the one that walked itself - the one able to match its own movements to changes in visual input - learned to see properly.
Vision isn't about photons that can be readily interpreted by the visual cortex. Instead it's a whole body experience. The signals coming into the brain can only be made sense of by training, which requires cross-referencing the signals with information from our actions and sensory consequences. It's the only way our brains can come to interpret what the visual data actually means.
If from birth you were unable to interact with the world in any way, unable to work out through feedback what the sensory information meant, in theory you would never be able to see. When babies hit the bars of their cribs and chew their toes and play with their blocks, they're not simply exploring - they're training up their visual systems. Entombed in darkness, their brains are learning how the actions sent out into the world (turn the head, push this, let go of that) change the sensory input that returns. As a result of extensive experimentation, vision becomes trained up.
Vision feels effortless but it's not
Seeing feels so effortless that it's hard to appreciate the effort the brain exerts to construct it. To lift the lid a little on the process, I flew to Irvine, California, to see what happens when my visual system doesn't receive the signals it expects.
Dr. Alyssa Brewer at the University of California is interested in understanding how adaptable the brain is. To that end, she outfits participants with prism goggles that flip the left and right sides of the world - and she studies how the visual system copes with it.
On a beautiful spring day, I strapped on the prism goggles. The world flipped - objects on the right now appeared on my left, and vice versa. When trying to figure out where Alyssa was standing, my visual system told me one thing, while my hearing told me another. My senses weren't matching up. When I reached out to grab an object, the sight of my own hand didn't match the position claimed by my muscles. Two minutes into wearing the goggles, I was sweating and nauseated.
Prism goggles flip the visual world, making it inordinately difficult to perform simple tasks, such as pouring a drink, grabbing an object, or getting through a doorway without bumping into the frame.
Although my eyes were functioning and taking in the world, the visual data stream wasn't consistent with my other data streams. This spelled hard work for my brain. It was like I was learning to see again for the first time.
I knew that wearing the goggles wouldn't stay that difficult forever. Another participant, Brian Barton, was also wearing prism goggles - and he had been wearing them for a full week. Brian didn't seem to be on the brink of vomiting, as I was. To compare our levels of adaptation, I challenged him to a baking competition. The contest would require us to break eggs into a bowl, stir in cupcake mix, pour the batter into cupcake trays, and put the trays in the oven.
It was no contest: Brian's cupcakes came out of the oven looking normal, while most of my batter ended up dried onto the counter or baked in smears across the baking tray. Brian could navigate his world without much trouble, while I had been rendered inept. I had to struggle consciously through every move.
Wearing the goggles allowed me to experience the normally hidden effort behind visual processing. Earlier that morning, just before putting on the goggles, my brain could exploit its years of experience with the world. But after a simple reversal of one sensory input, it couldn't any longer.
To progress to Brian's level of proficiency, I knew I would need to continue interacting with the world for many days: reaching out to grab objects, following the direction of sounds, attending to the positions of my limbs. With enough practice, my brain would get trained up by a continual cross-referencing between the senses, just the way that Brian's brain had been doing for seven days. With training, my neural networks would figure out how various data streams entering into the brain matched up with other data streams.
Brewer reports that after a few days of wearing the goggles, people develop an internal sense of a new left and an old left, and a new right and an old right. After a week, they can move around normally, the way Brian could, and they lose the concept of which right and left were the old ones and new ones. Their spatial map of the world alters. By two weeks into the task, they can write and read well, and they walk and reach with the proficiency of someone without goggles. In that short time span, they master the flipped input.
The brain doesn't really care about the details of the input; it simply cares about figuring out how to most efficiently move around in the world and get what it needs. All the hard work of dealing with the low-level signals is taken care of for you. If you ever get a chance to wear prism goggles, you should. It exposes how much effort the brain goes through to make vision seem effortless.
Synchronizing the senses
So we've seen that our perception requires the brain to compare different streams of sensory data against one another. But there's something which makes this sort of comparison a real challenge, and that is the issue of timing. All of the streams of sensory data - vision, hearing, touch, and so on - are processed by the brain at different speeds.
Consider sprinters at a racetrack. It appears that they get off the blocks at the instant the gun fires. But it's not actually instantaneous: if you watch them in slow motion, you'll see the sizeable gap between the bang and the start of their movement - almost two tenths of a second. (In fact, if they move off the blocks before that duration, they're disqualified - they've "jumped the gun".) Athletes train to make this gap as small as possible, but their biology imposes fundamental limits: the brain has to register the sound, send signals to the motor cortex, and then down the spinal cord to the muscles of the body. In a sport where thousandths of a second can be the difference between winning and losing, that response seems surprisingly slow.
Could the delay be shortened if we used, say, a flash instead of a pistol to start the racers? After all, light travels faster than sound - so wouldn't that allow them to break off the blocks faster?
I gathered up some fellow sprinters to put this to the test. In the top photograph, we are triggered by a flash of light; in the bottom photo we're triggered by the gun.
Sprinters can break off the blocks more quickly to a bang (bottom panel) than to a flash (top panel).
We responded more slowly to the light. At first this may seem counterintuitive, given the speed of light in the outside world. But to understand what's happening we need to look at the speed of information processing on the inside. Visual data goes through more complex processing than auditory data. It takes longer for signals carrying flash information to work their way through the visual system than for bang signals to work through the auditory system. We were able to respond to the light in 190 milliseconds, but to a bang in only 160 milliseconds. That's why a pistol is used to start sprinters.
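The numbers above can be checked with a little arithmetic. The sketch below is a back-of-the-envelope comparison in Python; the 190 ms and 160 ms reaction times come from the text, but the distance from pistol to sprinter is an assumed figure for illustration.

```python
# Back-of-the-envelope comparison of the two start signals.
SPEED_OF_SOUND = 343.0       # m/s in air at room temperature
DISTANCE_TO_STARTER = 5.0    # meters; assumed distance for illustration

# Neural processing latencies reported in the text
visual_reaction = 0.190      # seconds to respond to a flash
auditory_reaction = 0.160    # seconds to respond to a bang

# Light covers 5 m in ~17 nanoseconds: effectively instantaneous.
sound_travel = DISTANCE_TO_STARTER / SPEED_OF_SOUND   # ~0.015 s

total_flash = visual_reaction
total_bang = auditory_reaction + sound_travel

print(f"flash start: {total_flash * 1000:.0f} ms")
print(f"bang start:  {total_bang * 1000:.0f} ms")
```

Even after adding sound's travel time over a few meters, the bang still wins: the roughly 30 ms difference in neural processing dominates.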
But here's where it gets strange. We've just seen that the brain processes sounds more quickly than sights. And yet take a careful look at what happens when you clap your hands in front of you. Try it. Everything seems synchronized. How can that be, given that sound is processed more quickly? What it means is that your perception of reality is the end result of fancy editing tricks: the brain hides the difference in arrival times. How? What it serves up as reality is actually a delayed version. Your brain collects up all the information from the senses before it decides upon a story of what happened.
These timing difficulties aren't restricted to hearing and seeing: each type of sensory information takes a different amount of time to process. To complicate things even more, even within a sense there are time differences. For example, it takes longer for signals to reach your brain from your big toe than it does from your nose. But none of this is obvious to your perception: you collect up all the signals first, so that everything seems synchronized. The strange consequence of all this is that you live in the past. By the time you think the moment occurs, it's already long gone. To synchronize the incoming information from the senses, the cost is that our conscious awareness lags behind the physical world. That's the unbridgeable gap between an event occurring and your conscious experience of it.
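The "wait for the slowest stream" idea can be sketched in a few lines of Python. The latencies below are invented round numbers, not measured values; the point is only that the conscious "now" is pinned to the slowest arrival.

```python
# Invented, illustrative latencies (in milliseconds) for three streams.
latencies_ms = {"hearing": 30, "vision": 60, "touch_from_toe": 110}

def conscious_time(event_time_ms, latencies):
    # The brain can't commit to a unified story until every stream has
    # arrived, so perception lags behind by the slowest latency.
    return event_time_ms + max(latencies.values())

print(conscious_time(0, latencies_ms))  # an event at t=0 is experienced at t=110
```

With these toy numbers, an event at time zero reaches awareness 110 ms later: you live slightly in the past.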
When the senses are cut off, does the show stop?
Our experience of reality is the brain's ultimate construction. Although it's based on all the streams of data from our senses, it's not dependent on them. How do we know? Because when you take it all away, your reality doesn't stop. It just gets stranger.
On a sunny San Francisco day, I took a boat across the chilly waters to Alcatraz, the famous island prison. I was going to see a particular cell called the Hole. If you broke the rules in the outside world, you were sent to Alcatraz. If you broke the rules in Alcatraz, you were sent to the Hole.
I entered the Hole and closed the door behind me. It's about ten by ten feet. It was pitch black: not a photon of light leaks in from anywhere. Sounds are cut off completely. In here, you are left utterly alone with yourself.
THE BRAIN IS LIKE A CITY
Just like a city, the brain's overall operation emerges from the networked interaction of its innumerable parts. There is often a temptation to assign a function to each region of the brain, in the form of "this part does that". But despite a long history of attempts, brain function cannot be understood as the sum of activity in a collection of well-defined modules.
Instead, think of the brain as a city. If you were to look out over a city and ask "where is the economy located?" you'd see there's no good answer to the question. Instead, the economy emerges from the interaction of all the elements - from the stores and the banks to the merchants and the customers.
And so it is with the brain's operation: it doesn't happen in one spot. Just as in a city, no neighborhood of the brain operates in isolation. In brains and in cities, everything emerges from the interaction between residents, at all scales, locally and distantly. Just as trains bring materials and textiles into a city, which become processed into the economy, so the raw electrochemical signals from sensory organs are transported along super-highways of neurons. There the signals undergo processing and transformation into our conscious reality.
What would it be like to be locked in here for hours, or for days? To find out, I spoke to a surviving inmate who had been here. Armed robber Robert Luke - known as Cold Blue Luke - was sent to the Hole for twenty-nine days for smashing up his cell. Luke described his experience: "The dark Hole was a bad place. Some guys couldn’t take that. I mean, they were in there and in a couple of days they were banging their head on the wall. You didn't know how you would act when you got in there. You didn't want to find out."
Completely isolated from the outside world, with no sound and no light, Luke's eyes and ears were completely starved of input. But his mind didn't abandon the notion of an outside world. It just continued to make one up. Luke describes the experience: "I remember going on these trips. One I used to remember was flying a kite. It got pretty real. But they were all in my head." Luke's brain continued to see.
Such experiences are common among prisoners in solitary confinement. Another resident of the Hole described seeing a spot of light in his mind's eye; he would expand that spot into a television screen and watch TV. Deprived of new sensory information, prisoners said they went beyond daydreaming: instead, they spoke of experiences that seemed completely real. They didn't just imagine pictures, they saw.
This testimony illuminates the relationship between the outside world and what we take to be reality. How can we understand what was going on with Luke? In the traditional model of vision, perception results from a procession of data that begins from the eyes and ends with some mysterious end point in the brain. But despite the simplicity of that assembly-line model of vision, it's incorrect.
In fact, the brain generates its own reality, even before it receives information coming in from the eyes and the other senses. This is known as the internal model.
The basis of the internal model can be seen in the brain's anatomy. The thalamus sits between the eyes at the front of the head and the visual cortex at the back of the head. Most sensory information connects through here on its way to the appropriate region of the cortex. Visual information goes to the visual cortex, so there are a huge number of connections going from the thalamus into the visual cortex. But here's the surprise: there are ten times as many going in the opposite direction.
Visual information travels from the eyes to the lateral geniculate nucleus to the primary visual cortex (gold). Strangely, ten times as many connections feed information back in the other direction (purple).
Detailed expectations about the world - in other words, what the brain "guesses" will be out there - are being transmitted by the visual cortex to the thalamus. The thalamus then compares those expectations with what's coming in from the eyes. If that matches the expectations ("when I turn my head I should see a chair there"), then very little activity goes back to the visual system. The thalamus simply reports on differences between what the eyes are reporting and what the brain's internal model has predicted. In other words, what gets sent back to the visual cortex is what fell short of the expectation (also known as the "error"): the part that wasn't predicted away.
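The comparison described above - pass on only the unpredicted part - can be sketched as a toy Python function. Everything here (the values, the threshold, the function name) is invented for illustration; it is not a model of real thalamic circuitry.

```python
def relay_prediction_error(sensory_input, prediction, threshold=0.1):
    """Toy version of the loop described above: input that matches the
    internal model's guess is suppressed; only the mismatch (the
    'error') is passed back."""
    return [actual - guessed if abs(actual - guessed) >= threshold else 0.0
            for actual, guessed in zip(sensory_input, prediction)]

# Invented example: a four-feature 'scene' and a near-perfect guess
# that is wrong only about the third feature.
scene = [0.8, 0.5, 0.2, 0.9]
guess = [0.8, 0.5, 0.6, 0.9]

print(relay_prediction_error(scene, guess))  # only the surprise gets through
```

Where the guess is right, nothing needs to be sent; a well-trained internal model keeps the traffic low.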
So at any moment, what we experience as seeing relies less on the light streaming into our eyes, and more on what's already inside our heads.
And that's why Cold Blue Luke sat in a pitch-black cell having rich visual experiences. Locked in the Hole, his senses were providing his brain with no new input, so his internal model was able to run free, and he experienced vivid sights and sounds. Even when brains are unanchored from external data, they continue to generate their own imagery. Remove the world and the show still goes on.
You don't have to be locked up in the Hole to experience the internal model. Many people find great pleasure in sensory deprivation chambers - dark pods in which they float in salty water. By removing the anchor of the external world, they let the internal world fly free.
And of course you don't have to go far to find your own sensory deprivation chamber. Every night when you go to sleep you have full, rich, visual experiences. Your eyes are closed, but you enjoy the lavish and colorful world of your dreams, believing the reality of every bit of it.
Seeing our expectations
When you walk down a city street, you seem to automatically know what things are without having to work out the details. Your brain makes assumptions about what you're seeing based on your internal model, built up from years of experience of walking other city streets. Every experience you've had contributes to the internal model in your brain.
Instead of using your senses to constantly rebuild your reality from scratch every moment, you're comparing sensory information with a model that the brain has already constructed: updating it, refining it, correcting it. Your brain is so expert at this task that you're normally unaware of it. But sometimes, under certain conditions, you can see the process at work.
Try taking a plastic mask of a face, the type you wear on Halloween. Now rotate it around so you're looking at the hollow back side. You know it's hollow. But despite this knowledge, you often can't help but see the face as though it's coming out at you. What you experience is not the raw data hitting your eyes, but instead your internal model - a model which has been trained on a lifetime of faces that stick out. The hollow mask illusion reveals the strength of your expectations in what you see. (Here's an easy way to demonstrate the hollow mask illusion to yourself: stick your face into fresh snow and take a photo of the impression. The resulting picture looks to your brain like a 3D snow sculpture that's sticking out.)
When you're confronted with the hollow side of a mask (right), it still looks like it's coming towards you. What we see is strongly influenced by our expectations.
[HAVE NOT TRIED THESE TESTS, BUT FROM THE OTHER TESTS HE GAVE EARLIER, PROBABLY WILL NOT WORK WITH ME, SO HAVE NO IDEA WHAT THAT MEANS - Keith Hunt]
It's also your internal model that allows the world out there to remain stable - even when you're moving. Imagine you were to see a cityscape that you really wanted to remember. So you take out your cell phone to capture a video. But instead of smoothly panning your camera across the scene, you decide to move it around exactly as your eyes move around. Although you're not generally aware of it, your eyes jump around about four times a second, in jerky movements called saccades. If you were to film this way, it wouldn't take you long to discover that this is no way to take a video: when you play it back, you'd find that your rapidly lurching video is nauseating to watch.
So why does the world appear stable to you when you're looking at it? Why doesn't it appear as jerky and nauseating as the poorly filmed video? Here's why: your internal model operates under the assumption that the world outside is stable. Your eyes are not like video cameras - they simply venture out to find more details to feed into the internal model. They're not like camera lenses that you're seeing through; they're gathering bits of data to feed the world inside your skull.
Our internal model is low resolution but upgradeable
Our internal model of the outside world allows us to get a quick sense of our environment. And that is its primary function - to navigate the world. What's not always obvious is how much of the finer detail the brain leaves out. We have the illusion that we're taking in the world around us in great detail. But as an experiment from the 1960s shows, we aren't.
Russian psychologist Alfred Yarbus devised a way to track people's eyes as they took in a scene for the first time. Using the painting The Unexpected Visitor by Ilya Repin, he asked subjects to take in its details over three minutes, and then to describe what they had seen after the painting was hidden away.
In a re-run of his experiment, I gave participants time to take in the painting, time for their brains to build an internal model of the scene. But how detailed was that model? When I asked the participants questions, everyone who had seen the painting thought they knew what was in it. But when I asked about specifics, it became clear that their brains hadn't filled in most of the details. How many paintings were on the walls? What was the furniture in the room? How many children? Carpet or wood on the floor? What was the expression on the face of the unexpected visitor? The lack of answers revealed that people had taken in only a very cursory sense of the scene. They were surprised to discover that even with a low-resolution internal model, they still had the impression that everything had been seen. Later, after the questions, I gave them a chance to look again at the painting to seek out some of the answers. Their eyes sought out the information and incorporated it for a new, updated internal model.
We tracked eye movements as volunteers looked at “The Unexpected Visitor,” a painting by Ilya Repin. The white streaks show where their eyes went. Despite the coverage with eye movements, they retained almost none of the detail.
This isn't a failure of the brain. It doesn't try to produce a perfect simulation of the world. Instead, the internal model is a hastily drawn approximation - as long as the brain knows where to go to look for the finer points, more details are added on a need-to-know basis.
So why doesn't the brain give us the full picture? Because brains are expensive, energy-wise. Twenty percent of the calories we consume are used to power the brain. So brains try to operate in the most energy-efficient way possible, and that means processing only the minimum amount of information from our senses that we need to navigate the world.
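The energy claim above can be checked with simple arithmetic. Here is a back-of-envelope sketch in Python; the 2,000 kcal/day intake figure is an assumption of mine (a typical adult diet), not from the text.

```python
# Back-of-envelope check: if 20% of our calories power the brain,
# what is its continuous power draw in watts?
KCAL_TO_JOULES = 4184        # joules per kilocalorie
SECONDS_PER_DAY = 24 * 60 * 60

daily_intake_kcal = 2000     # assumed typical intake (not from the text)
brain_share = 0.20           # the twenty percent the author cites

brain_joules_per_day = daily_intake_kcal * brain_share * KCAL_TO_JOULES
brain_watts = brain_joules_per_day / SECONDS_PER_DAY
print(round(brain_watts, 1))  # ~19.4 watts -- roughly a dim lightbulb
```

Twenty watts, running continuously, is a strikingly small budget for modeling an entire world, which is why economizing on sensory detail pays off.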
Neuroscientists weren't the first to discover that fixing your gaze on something is no guarantee of seeing it. Magicians figured this out long ago. By directing your attention, magicians perform sleight of hand in full view. Their actions should give away the game, but they can rest assured that your brain processes only small bits of the visual scene.
This all helps to explain the prevalence of traffic accidents in which drivers hit pedestrians in plain view, or collide with cars directly in front of them. In many of these cases, the eyes are pointed in the right direction, but the brain isn't seeing what's really out there.
Trapped on a thin slice of reality
We think of color as a fundamental quality of the world around us. But in the outside world, color doesn't actually exist.
When electromagnetic radiation hits an object, some of it bounces off and is captured by our eyes. We can distinguish between millions of combinations of wavelengths - but it is only inside our heads that any of this becomes color. Color is an interpretation of wavelengths, one that only exists internally.
And it gets stranger, because the wavelengths we're talking about involve only what we call "visible light", a spectrum of wavelengths that runs from red to violet. But visible light constitutes only a tiny fraction of the electromagnetic spectrum - less than one ten-trillionth of it. All the rest of the spectrum - including radio waves, microwaves, X-rays, gamma rays, cell phone conversations, wi-fi, and so on - all of this is flowing through us right now, and we're completely unaware of it. This is because we don't have any specialized biological receptors to pick up on these signals from other parts of the spectrum. The slice of reality that we can see is limited by our biology.
Humans detect a tiny fraction of the information carried on the electromagnetic spectrum. The rainbow-colored slice marked "Visible light" is made of the same stuff as the rest of the spectrum, but it's the only part for which we come equipped with biological receptors.
Each creature picks up on its own slice of reality. In the blind and deaf world of the tick, the signals it detects from its environment are temperature and body odor. For bats, it's the echolocation of air compression waves. For the black ghost knifefish, its experience of the world is defined by perturbations in electrical fields. These are the slices of their ecosystem that they can detect. No one is having an experience of the objective reality that really exists; each creature perceives only what it has evolved to perceive. But presumably, every creature assumes its slice of reality to be the entire objective world.
Why would we ever stop to imagine there's something beyond what we can perceive?
So what does the world outside your head really "look" like? Not only is there no color, there's also no sound: the compression and expansion of air is picked up by the ears, and turned into electrical signals. The brain then presents these signals to us as mellifluous tones and swishes and clatters and jangles. Reality is also odorless: there's no such thing as smell outside our brains. Molecules floating through the air bind to receptors in our nose and are interpreted as different smells by our brain. The real world is not full of rich sensory events; instead, our brains light up the world with their own sensuality.
Your reality, my reality
How do I know if my reality is the same as yours? For most of us it's impossible to tell, but there's a small fraction of the population whose perception of reality is measurably different from ours.
Consider Hannah Bosley. When she looks at letters of the alphabet, she has an internal experience of color. For her, it's self-evidently true that J is purple, or that T is red. Letters automatically and involuntarily trigger color experiences, and her associations never change. Her first name looks to her like a sunset, starting with yellow, fading into red, then to a color like clouds, and then back into red and to yellow. The name "Iain", in contrast, looks like vomit to her, although she's perfectly nice to people with that name.
Hannah is not being poetic or metaphorical - she has a perceptual experience known as synesthesia. Synesthesia is a condition in which senses (or in some cases concepts) are blended. There are many different kinds of synesthesia. Some taste words. Some see sounds as colors. Some hear visual motion. About 3% of the population has some form of synesthesia.
Hannah is just one of over 6,000 synesthetes I have studied in my lab; in fact, Hannah worked in my lab for two years. I study synesthesia because it's one of the few conditions in which it's clear that someone else's experience of reality is measurably different from mine. And it makes it obvious that how we perceive the world is not one-size-fits-all.
Synesthesia is the result of cross-talk between sensory areas of the brain, like neighboring districts with porous borders. Synesthesia shows us that even microscopic changes in brain wiring can lead to different realities.
Every time I meet someone who has this kind of experience, it's a reminder that from person to person - and from brain to brain - our internal experience of reality can be somewhat different.
Believing what our brains tell us
We all know what it is to have dreams at night, to have bizarre, unbidden thoughts that take us on journeys. Sometimes these are disturbing journeys we have to suffer through. The good news is that when we wake up, we are able to compartmentalize: that was a dream, and this is my waking life.
Imagine what it would be like if these states of your reality were more intertwined, and it were more difficult - or impossible - to distinguish one from the other. For about 1% of the population, that distinction can be difficult, and their realities can be overwhelming and terrifying.
Elyn Saks is a professor of law at the University of Southern California. She's smart and kind, and she's been sporadically experiencing schizophrenic episodes since she was sixteen years old. Schizophrenia is a disorder of her brain function, causing her to hear voices, or see things others don't see, or believe that other people are reading her thoughts. Fortunately, thanks to medication and weekly therapy sessions, Elyn has been able to lecture and teach at the law school for over twenty-five years.
I spoke with her at USC, and she gave me examples of schizophrenic episodes she's had in the past. "I felt like the houses were communicating with me: You are special. You are especially bad. Repent. Stop. Go. I didn't hear these as words, but I heard them as thoughts put into my head. But I knew they were the houses' thoughts, and not my thoughts." In one incident, she believed that explosions were being set off in her brain, and she was afraid that this was going to hurt other people, not just her. At a different time in her life she held a belief that her brain was going to leak out of her ears and drown people.
Now, having escaped those delusions, she laughs and shrugs, wondering what it was all about.
It was about chemical imbalances in her brain that subtly changed the pattern of signals. A slightly different pattern, and one can suddenly be trapped inside a reality in which strange and impossible things unfold. When Elyn was inside a schizophrenic episode, it never struck her that something was strange. Why? Because she believed the narrative told by the sum of her brain chemistry.
I once read an old medical text in which schizophrenia was described as an intrusion of the dream state into the waking state. Although I don't often see it described that way anymore, it's an insightful way to understand what the experience would be like from the inside. The next time you see someone on a street corner talking to himself and acting out a narrative, remind yourself what it would be like if you couldn't distinguish your waking and sleeping states.
Elyn's experience is an inroad to understanding our own realities. When we're in the middle of a dream, it seems real. When we've misinterpreted a quick glance of something we've seen, it's hard to shake the feeling that we know the reality of what we saw. When we're recalling a memory that is, in fact, false, it's difficult to accept claims that it didn't really happen. Although it's impossible to quantify, accumulations of such false realities color our beliefs and actions in ways of which we can never be cognizant.
Whether she was in the thick of a delusion, or else aligned with the reality of the broader population, Elyn believed that what she was experiencing was really happening. For her, as with all of us, reality is a narrative played out inside the sealed auditorium of the cranium.
There's another facet of reality that we rarely stop to consider: our brain's experience of time can often be quite strange. In certain situations, our reality can seem to run more slowly or more quickly. When I was eight years old I fell off the roof of a house, and the fall seemed to take a very long time. When I got to high school I learned physics and I calculated how long the fall actually took. It turns out it took eight tenths of a second. So that set me off on a quest to understand something: why did it seem to take so long and what did this tell me about our perception of reality?
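The author's high-school calculation can be reproduced with the standard free-fall equations. The sketch below is my own illustration, assuming the usual physics-class simplification of ignoring air resistance; the 0.8-second figure is from the text.

```python
import math

G = 9.81  # gravitational acceleration in m/s^2

def fall_height(time_s: float) -> float:
    """Height fallen in a given time, from h = (1/2) * g * t^2."""
    return 0.5 * G * time_s ** 2

def fall_time(height_m: float) -> float:
    """Time to free-fall a given height, from t = sqrt(2h / g)."""
    return math.sqrt(2 * height_m / G)

# A 0.8-second fall works out to a drop of roughly 3.1 meters --
# plausible for the edge of a single-story roof.
print(round(fall_height(0.8), 2))  # 3.14
print(round(fall_time(3.14), 2))   # 0.8
```

The striking point is the mismatch: eight tenths of a second on the clock against a subjective experience of a long, drawn-out fall.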
Up above the mountains, professional wingsuit flyer Jeb Corliss has experienced time distortion. It all began with a particular jump he'd done before. But on this day, he decided to aim for a target: a set of balloons to smash past with his body. Jeb recalls: "As I was coming in to hit one of those balloons, tied to a ledge of granite, I misjudged." He bounced off flat granite at what he estimates to be 120 miles per hour.
A small miscalculation while wingsuiting put Jeb in fear of his life. His internal experience of the event was different from what the cameras saw.
Because Jeb wingsuits professionally, the events this day were captured by a collection of cameras on the cliffs and on his body. In the video, one can hear the thump as Jeb hits the granite. He streaks past the cameras and keeps going, over the edge of the cliffside he's just scraped against.
And here's where Jeb's sense of time warped. As he describes it: "My brain split into two separate thought processes. One of the thought processes was just technical data. You've got two options: you cannot pull, so you go ahead and impact and basically die. Or, you can pull, get a parachute over your head and then bleed to death while you're waiting for rescue."
To Jeb these two separate thought processes felt like minutes of time: "It feels like you're operating so fast that your perception of everything else seems to slow down, and everything gets stretched. Time slows down and you get that feeling of slow motion."
He pulled his ripcord and careened to the ground having broken a leg, both ankles, and three toes. Six seconds elapsed between the instant Jeb hit the rock, and the moment he yanked the cord. But, just like my fall from the roof, that stretch seemed to him to have taken a longer time.
The subjective experience of time slowing has been reported in a variety of life-threatening experiences - for example, car accidents or muggings - as well as in events that involve seeing a loved one in danger, such as a child falling into a lake. All these reports are characterized by a sense that the events unfolded more slowly than normal, with rich details available.
When I fell off the roof, or when Jeb bounced off the cliff's lip, what happened inside our brains? Does time really slow down in frightening situations?
A few years ago, my students and I designed an experiment to address this open question. We induced extreme fear in people by dropping them from 150 feet in the air. In free fall. Backward.
In this experiment, participants fell with a digital display strapped to their wrists - a device we invented called the perceptual chronometer. As they fell, they tried to report the numbers flashing on the display. If they really could see time in slow motion, they would be able to read the numbers. But no one could.
When the perceptual chronometer alternates numbers slowly, they can be read out. At a slightly higher alternation rate, they become impossible to read.
MEASURING THE SPEED OF SIGHT: THE PERCEPTUAL CHRONOMETER
To test time perception in frightening situations, we dropped volunteers from 150 feet. I dropped myself three times; each time was equally terrifying. On the display, numbers are generated with LED lights. Every moment, the lights that are on go off, and those that are off turn on. At slow rates of alternation, participants have no trouble reporting the numbers. But at a slightly faster rate, the positive and negative images fuse together, making the numbers impossible to see. To determine whether participants could actually see in slower motion, we dropped people with the alternation rate just slightly higher than people could normally see. If they were actually seeing in slow motion - like Neo in The Matrix - they would have no trouble discriminating the numbers. If not, the rate at which they can perceive the numbers should be no different than when they were on the ground. The result? We dropped twenty-three volunteers, including myself. No one's in-flight performance was better than their ground-based performance. Despite initial hopes, we were not like Neo.
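The fusion effect the chronometer relies on can be illustrated with a toy model. This is an illustrative sketch of mine, not the lab's actual stimulus code: each "frame" is a small grid of LEDs, every refresh inverts the frame, and above the eye's fusion rate the two frames average together, erasing the digit.

```python
# A hypothetical 3x3 LED frame crudely sketching a digit (1 = lit, 0 = dark).
frame = [
    [1, 1, 1],
    [0, 0, 1],
    [0, 1, 0],
]

def invert(grid):
    """Turn every lit LED off and every dark LED on (the alternate frame)."""
    return [[1 - cell for cell in row] for row in grid]

def integrate(a, b):
    """Average two frames, as the eye does above its flicker-fusion rate."""
    return [[(x + y) / 2 for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

fused = integrate(frame, invert(frame))
# Every position averages to 0.5: a featureless gray field, no digit to read.
print(fused)  # [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]
```

Below the fusion rate, the eye resolves each frame separately and the digit is legible; above it, only the uniform average survives, which is exactly what made the display a usable test of slowed perception.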
[I AIN’T GOING TO TRY THIS TEST, NO WAY - SCARES ME TO JUST THINK ABOUT IT - Keith Hunt]
So why do Jeb and I both recall our accidents as happening in slow motion? The answer appears to lie in the way our memories are stored.
In threatening situations, an area of the brain called the amygdala kicks into high gear, commandeering the resources of the rest of the brain and forcing everything to attend to the situation at hand. When the amygdala is in play, memories are laid down with far more detail and richness than under normal circumstances; a secondary memory system has been activated. After all, that's what memory is for: keeping track of important events, so that if you're ever in a similar situation, your brain has more information to try to survive. In other words, when things are life-threateningly scary, it's a good time to take notes.
The interesting side effect is this: your brain is not accustomed to that kind of density of memory (the hood was crumpling, the rear-view mirror was falling off, the other driver looked like my neighbor Bob) - so when the events are replayed in your memory, your interpretation is that the event must have taken a longer time. In other words, it appears we don't actually experience terrifying accidents in slow motion; instead, the impression results from the way memories are read out. When we ask ourselves "What just happened?" the detail of memory tells us that it must have been in slow motion, even though it wasn't. Our time distortion is something that happens in retrospect, a trick of the memory that writes the story of our reality.
Now, if you've been in a life-threatening accident, you might insist that you were conscious of the slow-motion unfolding as it happened. But note: that's another trick about our conscious reality. As we saw above with the synchronizing of the senses, we're never actually present in the moment. Some philosophers suggest that conscious awareness is nothing but lots of fast memory querying: our brains are always asking "What just happened? What just happened?". Thus, conscious experience is really just immediate memory.
As a side note, even after we published our research on this, some people still tell me that they know the event actually unfolded like a slow-motion movie. So I typically ask them whether the person next to them in the car was screaming like people do in slow-motion movies, with a low-pitched "noooooooo!" They have to allow that didn't happen. And that's part of why we think that perceptual time doesn't actually stretch out, a person's internal reality notwithstanding.
Your brain serves up a narrative - and each of us believes whatever narrative it tells. Whether you're falling for a visual illusion, or believing the dream you happen to be trapped in, or experiencing letters in color, or accepting a delusion as true during an episode of schizophrenia, we each accept our realities however our brains script them.
Despite the feeling that we're directly experiencing the world out there, our reality is ultimately built in the dark, in a foreign language of electrochemical signals. The activity churning across vast neural networks gets turned into your story of this, your private experience of the world: the feeling of this book in your hands, the light in the room, the smell of roses, the sound of others speaking.
Even more strangely, it's likely that every brain tells a slightly different narrative. For every situation with multiple witnesses, different brains are having different private subjective experiences. With seven billion human brains wandering the planet (and trillions of animal brains), there's no single version of reality. Each brain carries its own truth.
So what is reality?
It's like a television show that only you can see, and you can't turn it off. The good news is that it happens to be broadcasting the most interesting show you could ask for: edited, personalized, and presented just for you.
AND SO THIS IS WHY YOU HAVE THREE GOSPELS ABOUT THE LIFE AND MINISTRY OF JESUS CHRIST……. EACH FROM A DIFFERENT REALITY; SO NO ONE GIVES EXACTLY THE SAME REALITY OF THE SAME EVENTS THAT ARE IMPORTANT EVENTS; SMALL STUFF LIKE “HE WENT TO BETHANY….” MAY BE GIVEN BY ALL THREE. AND A FOURTH GOSPEL, OF JOHN, THAT IS A DIFFERENT REALITY FROM THE OTHER THREE. THEY ALL ADD THEIR REALITY INGREDIENTS TO GIVE US THE BEAUTIFUL FRUIT CAKE AS A WHOLE FRUIT CAKE, SET BEFORE US TO EAT AND ENJOY.
OH INDEED THE WONDERS OF THE BRAIN
THEN WE HAVE THE BIBLE TELLING US THAT THERE IS A SPIRIT IN MAN, WHICH UPON DEATH GOES BACK TO GOD - ECC. 12:7.
YOU NEED TO STUDY MY STUDY ABOUT THE SPIRIT IN MAN; YOU’LL FIND IT UNDER “LIFE, DEATH AND RESURRECTION” SECTION OF THIS WEBSITE.