Monday, 23 April 2018

Genius - Baby steps (Do our first three years of life determine how we’ll turn out?)

Baby Steps
by Malcolm Gladwell

Do our first three years of life determine how we’ll turn out?


In April of 1997, Hillary Clinton was the host of a daylong conference at the White House entitled “What New Research on the Brain tells Us About Our Youngest Children.” In her opening remarks, which were beamed live by satellite to nearly a hundred hospitals, universities, and schools, in thirty-seven states, Mrs. Clinton said, “Fifteen years ago, we thought that a baby’s brain structure was virtually complete at birth.” She went on:

Now we understand that it is a work in progress, and that everything we do with a child has some kind of potential physical influence on that rapidly forming brain. A child’s earliest experiences–their relationships with parents and caregivers, the sights and sounds and smells and feelings they encounter, the challenges they meet–determine how their brains are wired. . . . These experiences can determine whether children will grow up to be peaceful or violent citizens, focussed or undisciplined workers, attentive or detached parents themselves.

At the afternoon session of the conference, the keynote speech was given by the director turned children’s advocate Rob Reiner. His goal, Reiner told the assembled, was to get the public to “look through the prism” of the first three years of life “in terms of problem solving at every level of society”:

If we want to have a real significant impact, not only on children’s success in school and later on in life, healthy relationships, but also an impact on reduction in crime, teen pregnancy, drug abuse, child abuse, welfare, homelessness, and a variety of other social ills, we are going to have to address the first three years of life. There is no getting around it. All roads lead to Rome.

The message of the conference was at once hopeful and a little alarming. On the one hand, it suggested that the right kind of parenting during those first three years could have a lasting effect on a child’s life; on the other hand, it implied that if we missed this opportunity the resulting damage might well be permanent. Today, there is a zero-to-three movement, made up of advocacy groups and policymakers like Hillary Clinton, which uses the promise and the threat of this new brain science to push for better pediatric care, early childhood education, and day care. Reiner has started something called the I Am Your Child Foundation, devoted to this cause, and has enlisted the support of, among others, Tom Hanks, Robin Williams, Billy Crystal, Charlton Heston, and Rosie O’Donnell. Some lawmakers now wonder whether programs like Head Start ought to be drastically retooled, to focus on babies and toddlers rather than on preschoolers. The state of California recently approved a fifty-cent-per-pack tax on cigarettes to fund programs aimed at improving care for babies and toddlers up to the age of five. The state governments of Georgia and Tennessee send classical-music CDs home from the hospital with every baby, and Florida requires that day-care centers play classical music every day–all in the belief that Mozart will help babies build their minds in this critical window of development. “During the first part of the twentieth century, science built a strong foundation for the physical health of our children,” Mrs. Clinton said in her speech that morning. “The last years of this century are yielding similar breakthroughs for the brain. We are . . . coming closer to the day when we should be able to insure the well-being of children in every domain–physical, social, intellectual, and emotional.”

The First Lady took pains not to make the day’s message sound too extreme. “I hope that this does not create the impression that, once a child’s third birthday rolls around, the important work is over,” she said, adding that much of the brain’s emotional wiring isn’t completed until adolescence, and that children never stop needing the love and care of their parents. Still, there was something odd about the proceedings. This was supposed to be a meeting devoted to new findings in brain science, but hardly any of the brain science that was discussed was new. In fact, only a modest amount of brain science was discussed at all. Many of the speakers were from the worlds of education and policy. Then, there was Mrs. Clinton’s claim that the experiences of our first few years could “determine” whether we grow up to be peaceful or violent, focussed or undisciplined. We tend to think that the environmental influences upon the way we turn out are the sum of a lifetime of experiences–that someone is disciplined because he spent four years in the Marines, or because he got up every morning as a teen-ager to train with the swim team. But Hillary Clinton was proposing that we direct our attention instead to what happens to children in a very brief window early in life. The First Lady, now a candidate for the United States Senate, is associating herself with a curious theory of human development. Where did this idea come from? And is it true?


John Bruer tackles both these questions in his new book, “The Myth of the First Three Years” (Free Press; $25). From its title, Bruer’s work sounds like a rant. It isn’t. Noting the cultural clout of the zero-to-three idea, Bruer, who heads a medical-research foundation in St. Louis, sets out to compare what people like Rob Reiner and Hillary Clinton are saying to what neuroscientists have actually concluded. The result is a superb book, clear and engaging, that serves as both popular science and intellectual history.

Mrs. Clinton and her allies, Bruer writes, are correct in their premise: the brain at birth is a work in progress. Relatively few connections among its billions of cells have yet been established. In the first few years of life, the brain begins to wire itself up at a furious pace, forming hundreds of thousands, even millions, of new synapses every second. Infants produce so many new neural connections, so quickly, that the brain of a two-year-old is actually far more dense with neural connections than the brain of an adult. After three, that burst of activity seems to slow down, and our brain begins the long task of rationalizing its communications network, finding those connections which seem to be the most important and getting rid of the rest.

During this brief initial period of synaptical “exuberance,” the brain is especially sensitive to its environment. David Hubel and Torsten Wiesel, in a famous experiment, sewed one of the eyes of a kitten shut for the first three months of its life, and when they opened it back up they found that the animal was permanently blind in that eye. There are critical periods early in life, then, when the brain will not develop properly unless it receives a certain amount of outside stimulation. In another series of experiments, begun in the early seventies, William Greenough, a psychologist at the University of Illinois, showed that a rat reared in a large, toy-filled cage with other rats ended up with a substantially more developed visual cortex than a rat that spent its first month alone in a small, barren cage: the brain, to use the word favored by neuroscientists, is plastic–that is, modifiable by experience. In other words, Hillary Clinton’s violent citizens and unfocussed workers might seem to be the human equivalents of kittens who’ve had an eye sewed shut, or rats who’ve been reared in a barren cage. If in the critical first three years of synapse formation we could give people the equivalent of a big cage full of toys, she was saying, we could make them healthier and smarter.

Put this way, these ideas sound quite reasonable, and it’s easy to see why they have attracted such excitement. But Bruer’s contribution is to show how, on several critical points, this account of child development exaggerates or misinterprets the available evidence.

Consider, he says, the matter of synapse formation. The zero-to-three activists are convinced that the number of synapses we form in our earliest years plays a big role in determining our mental capacity. But do we know that to be true? People with a form of mental retardation known as fragile-X syndrome, Bruer notes, have higher numbers of synapses in their brain than the rest of us. More important, the period in which humans gain real intellectual maturity is late adolescence, by which time the brain is aggressively pruning the number of connections. Is intelligence associated with how many synapses you have or with how efficiently you manage to sort out and make sense of those connections later in life? Nor do we know how dependent the initial burst of synapse formation is on environmental stimulation. Bruer writes of an experiment where the right hand of a monkey was restrained in a leather mitten from birth to four months, effectively limiting all sensory stimulation. That’s the same period when young monkeys form enormous numbers of connections in the somatosensory cortex, the area of the monkey brain responsible for size and texture discriminations, so you’d think that the restrained hand would be impaired. But it wasn’t: within a short time, it was functioning normally, which suggests that there is a lot more flexibility and resilience in some aspects of brain development than we might have imagined.

Bruer also takes up the question of early childhood as a developmental window. It makes sense that if children don’t hear language by the age of eleven or twelve they aren’t going to speak, and that children who are seriously neglected throughout their upbringing will suffer permanent emotional injury. But why, Bruer asks, did advocates arrive at three years of age as a cutoff point? Different parts of the brain develop at different speeds. The rate of synapse formation in our visual cortex peaks at around three or four months. The synapses in our prefrontal cortex–the parts of our brain involved in the most sophisticated cognitive tasks–peak perhaps as late as three years, and aren’t pruned back until middle-to-late adolescence. How can the same cutoff apply to both regions?

Greenough’s rat experiments are used to support the critical-window idea, because he showed that he could affect brain development in those early years by altering the environment of his animals. The implications of the experiment aren’t so straightforward, though. The experiments began when the rats were about three weeks old, which is already past rat “infancy,” and continued until they were fifty-five days old, which put them past puberty. So the experiment showed the neurological consequences of deprivation not during some critical window of infancy but during the creature’s entire period of maturation. In fact, when Greenough repeated his experiment with rats that were four hundred and fifty days old–well past middle age–he found that those kept in complex environments once again had significantly denser neural connections than those kept in isolation.

Even the meaning of the kitten with its eye sewn shut turns out to be far from obvious. When that work was repeated on monkeys, researchers found that if they deprived both eyes of early stimulation–rearing a monkey in darkness for its first six months–the animal could see (although not perfectly), and the binocularity of its vision, the ability of its left and right eyes to coördinate images, was normal. The experiment doesn’t show that more stimulation is better than less for binocular vision. It just suggests that whatever stimulation there is should be balanced, which is why closing one eye tilts the developmental process in favor of the open eye.

To say that the brain is plastic, then, is not to say that the brain is dependent on certain narrow windows of stimulation. Neuroscientists say instead that infant brains have “experience-expectant plasticity”–which means that they need only something that approximates a normal environment. Bruer writes:

The odds that our children will end up with appropriately fine-tuned brains are incredibly favorable, because the stimuli the brain expects during critical periods are the kinds of stimuli that occur everywhere and all the time within the normal developmental environment for our species. It is only when there are severe genetic or environmental aberrations from the normal that nature’s expectations are frustrated and neural development goes awry.

In the case of monkeys, the only way to destroy their binocular vision is to sew one eye shut for six months–an entirely contrived act that would almost never happen in the wild. Greenough points out that the “complex” environment he created for his rats–a large cage full of toys and other animals–is actually the closest equivalent of the environment that a rat would encounter naturally. When he created a super-enriched environment for his rats, one with even more stimulation than they would normally encounter, the rats weren’t any better off. The only way he could affect the neurological development of the animals was to put them in a barren cage by themselves–again, a situation that an animal would never encounter in the wild. Bruer quotes Steve Petersen, a neuroscientist at Washington University, in St. Louis, as saying that neurological development so badly wants to happen that his only advice to parents would be “Don’t raise your children in a closet, starve them, or hit them in the head with a frying pan.” Petersen was, of course, being flip. But the general conclusion of researchers seems to be that we human beings enjoy a fairly significant margin of error in our first few years of life. Studies done of Romanian orphans who spent their first year under conditions of severe deprivation suggest that most (but not all) can recover if adopted into a nurturing home. In another study, psychologists examined children from an overcrowded orphanage who had been badly neglected as infants and subsequently adopted into loving homes. Within two years of their adoption, one psychologist involved in their rehabilitation had concluded:

We had not anticipated the older children who had suffered deprivations for periods of 2½ to 4 years to show swift response to treatment. That they did so amazed us. These inarticulate, underdeveloped youngsters who had formed no relationships in their lives, who were aimless and without a capacity to concentrate on anything, had resembled a pack of animals more than a group of human beings. . . . As we worked with the children, it became apparent that their inadequacy was not the result of damage but, rather, was due to a dearth of normal experiences without which development of human qualities is impossible. After a year of treatment, many of these older children were showing a trusting dependency toward the staff of volunteers and . . . self-reliance in play and routines.


Some years ago, the Berkeley psychology professor Alison Gopnik and one of her students, Betty Repacholi, conducted an experiment with a series of fourteen-month-old toddlers. Repacholi showed the babies two bowls of food, one filled with Goldfish crackers and one filled with raw broccoli. All the babies, naturally, preferred the crackers. Repacholi then tasted the two foods, saying “Yuck” and making a disgusted face at one and saying “Yum” and making a delighted face at the other. Then she pushed both bowls toward the babies, stretched out her hand, and said, “Could you give me some?”

When she liked the crackers, the babies gave her crackers. No surprise there. But when Repacholi liked the broccoli and hated the crackers, the babies were presented with a difficult philosophical issue–that different people may have different, even conflicting, desires. The fourteen-month-olds couldn’t grasp that. They thought that if they liked crackers everyone liked crackers, and so they gave Repacholi the crackers, despite her expressed preferences. Four months later, the babies had, by and large, figured this principle out, and when Repacholi made a face at the crackers they knew enough to give her the broccoli. “The Scientist in the Crib” (Morrow; $24), a fascinating new book that Gopnik has written with Patricia Kuhl and Andrew Meltzoff, both at the University of Washington, argues that the discovery of this principle–that different people have different desires–is the source of the so-called terrible twos. “What makes the terrible twos so terrible is not that the babies do things you don’t want them to do–one-year-olds are plenty good at that–but that they do things because you don’t want them to,” the authors write. And why is that? Not, as is commonly thought, because toddlers want to test parental authority, or because they’re just contrary. Instead, the book argues, the terrible twos represent a rational and engaged exploration of what is to two-year-olds a brand-new idea–a generalization of the insight that the fact that they hate broccoli and like crackers doesn’t mean that everyone hates broccoli and likes crackers. “Toddlers are systematically testing the dimensions on which their desires and the desires of others may be in conflict,” the authors write. Infancy is an experimental research program, in which “the child is the budding psychologist; we parents are the laboratory rats.”

These ideas about child development are, when you think about it, oddly complementary to the neurological arguments of John Bruer. The paradox of the zero-to-three movement is that, for all its emphasis on how alive children’s brains are during their early years, it views babies as profoundly passive–as hostage to the quality of the experiences provided for them by their parents and caregivers. “The Scientist in the Crib” shows us something quite different. Children are scientists, who develop theories and interpret evidence from the world around them in accordance with those theories. And when evidence starts to mount suggesting that the existing theory isn’t correct–wait a minute, just because I like crackers doesn’t mean Mommy likes crackers–they create a new theory to explain the world, just as a physicist would if confronted with new evidence on the relation of energy and matter. Gopnik, Meltzoff, and Kuhl play with this idea at some length. Science, they suggest, is actually a kind of institutionalized childhood, an attempt to harness abilities that evolved to be used by babies or young children. Ultimately, the argument suggests that child development is a rational process directed and propelled by the child himself. How does the child learn about different desires? By systematically and repeatedly provoking a response from adults. In the broccoli experiment, the adult provided the fourteen-month-old with the information (“I hate Goldfish crackers”) necessary to make the right decision. But the child ignored that information until he himself had developed a theory to interpret it. When “The Scientist in the Crib” describes children as budding psychologists and adults as laboratory rats, it’s more than a clever turn of phrase.
Gopnik, Meltzoff, and Kuhl observe that our influence on infants “seems to work in concert with children’s own learning abilities.” Newborns will “imitate facial expressions” but not “complex actions they don’t understand themselves.” And the authors conclude, “Children won’t take in what you tell them until it makes sense to them. Other people don’t simply shape what children do; parents aren’t the programmers. Instead, they seem designed to provide just the right sort of information.”

It isn’t until you read “The Scientist in the Crib” alongside more conventional child-development books that you begin to appreciate the full implications of its argument. Here, for example, is a passage from “What’s Going On in There? How the Brain and Mind Develop in the First Five Years of Life,” by Lise Eliot, who teaches at the University of Chicago: “It’s important to avoid the kind of muddled baby-talk that turns a sentence like ‘Is she the cutest little baby in the world?’ into ‘Uz see da cooest wiwo baby inna wowud?’ Caregivers should try to enunciate clearly when speaking to babies and young children, giving them the cleanest, simplest model of speech possible.” Gopnik, Meltzoff, and Kuhl see things a little differently. First, they point out, by six or seven months babies are already highly adept at decoding the sounds they hear around them, using the same skills we do when we talk to someone with a thick foreign accent or a bad cold. If you say “Uz see da cooest wiwo baby inna wowud?” they hear something like “Is she the cutest little baby in the world?” Perhaps more important, this sort of Motherese–with its elongated vowels and repetitions and overpronounced syllables–is just the thing for babies to develop their language skills. And Motherese, the authors point out, seems to be innate. It’s found in every culture in the world, and anyone who speaks to a baby uses it, automatically, even without realizing it. Babies want Motherese, so they manage to elicit it from the rest of us. That’s a long way from the passive baby who thrives only because of the specialized, high-end parenting skills of the caregiver. “One thing that science tells us is that nature has designed us to teach babies, as much as it has designed babies to learn,” Gopnik, Meltzoff, and Kuhl write. “Almost all of the adult actions we’ve described”–actions that are critical for the cognitive development of babies–“are swift, spontaneous, automatic and unpremeditated.”


Does it matter that Mrs. Clinton and her allies have misread the evidence on child development? In one sense, it doesn’t. The First Lady does not claim to be a neuroscientist. She is a politician, and she is interested in the brains of children only to further an entirely worthy agenda: improved day care, pediatric care, and early-childhood education. Sooner or later, however, bad justifications for social policy can start to make for bad social policy, and that is the real danger of the zero-to-three movement.

In Lise Eliot’s book, for instance, there’s a short passage in which she writes of the extraordinary powers of imitation that infants possess. A fifteen-month-old who watches an experimenter lean over and touch his forehead to the top of a box will, when presented with that same box four months later, do exactly the same thing. “The fact that these memories last so long is truly remarkable–and a little bit frightening,” Eliot writes, and she continues:

It goes a long way toward explaining why children, even decades later, are so prone to replicating their parents’ behavior. If toddlers can repeat, even several months later, actions they’ve seen only once or twice, just imagine how watching their parents’ daily activities must affect them. Everything they see and hear over time–work, play, fighting, smoking, drinking, reading, hitting, laughing, words, phrases, and gestures–is stored in ways that shape their later actions, and the more they see of a particular behavior, the likelier it is to reappear in their own conduct.

There is something to this. Why we act the way we do is obviously the result of all kinds of influences and experiences, including those cues we pick up unconsciously as babies. But this doesn’t mean, as Eliot seems to think it does, that you can draw a straight line between a concrete adult behavior and what little Suzie, at six months, saw her mother do. As far as we can tell, for instance, infant imitation has nothing to do with smoking. As the behavioral geneticist David Rowe has demonstrated, the children of smokers are more likely than others to take up the habit because of genetics: they have inherited the same genes that made their parents like, and be easily addicted to, nicotine. Once you account for heredity, there is little evidence that parental smoking habits influence children; the adopted children of smokers, for instance, are no more likely to smoke than the children of non-smokers. To the extent that social imitation is a factor in smoking, the psychologist Judith Rich Harris has observed, it is imitation that occurs in adolescence between a teen-ager and his or her peers. So if you were to use Eliot’s ideas to design an anti-smoking campaign you’d direct your efforts to stop parents from smoking around their children, and miss the social roots of smoking entirely.

This point–the distance between infant experience and grownup behavior–is made even more powerfully in Jerome Kagan’s marvellous new book, “Three Seductive Ideas” (Oxford; $27.50). Kagan, a professor of psychology at Harvard, offers a devastating critique of what he calls “infant determinism,” arguing that many of the truly critical moments of socialization–the moments that social policy properly concerns itself with–occur well after the age of three. As Kagan puts it, a person’s level of “anxiety, depression, apathy and anger” is linked to his or her “symbolic constructions of experience”–how the bare facts of any experience are combined with the context of that event, attitudes toward those involved, expectations and memories of past experience. “The Palestinian youths who throw stones at Israeli soldiers believe that the Israeli government has oppressed them unjustly,” Kagan writes. He goes on:

The causes of their violent actions are not traceable to the parental treatment they received in their first few years. Similarly, no happy African-American two-year-old knows about the pockets of racism in American society or the history of oppression blacks have suffered. The realization that there is prejudice will not take form until that child is five or six years old.

Infant determinism doesn’t just encourage the wrong kind of policy. Ultimately, it undermines the basis of social policy. Why bother spending money trying to help older children or adults if the patterns of a lifetime are already, irremediably, in place? Inevitably, some people will interpret the zero-to-three dogma to mean that our obligations to the disadvantaged expire by the time they reach the age of three. Kagan writes of a famous Hawaiian study of child development, in which almost seven hundred children, from a variety of ethnic and economic backgrounds, were followed from birth to adulthood. The best predictor of who would develop serious academic or behavioral problems in adolescence, he writes, was social class: more than eighty per cent of the children who got in trouble came from the poorest segment of the sample. This is the harsh reality of child development, from which the zero-to-three movement offers a convenient escape. Kagan writes, “It is considerably more expensive to improve the quality of housing, education and health of the approximately one million children living in poverty in America today than to urge their mothers to kiss, talk to, and play with them more consistently.” In his view, “to suggest to poor parents that playing with and talking to their infant will protect the child from future academic failure and guarantee life success” is an act of dishonesty. But that does not go far enough. It is also an unwitting act of reproach: it implies to disadvantaged parents that if their children do not turn out the way children of privilege do it is their fault–that they are likely to blame for the flawed wiring of their children’s brains.


In 1973, when Hillary Clinton–then, of course, known as Hillary Rodham–was a young woman just out of law school, she wrote an essay for the Harvard Educational Review entitled “Children Under the Law.” The courts, she wrote, ought to reverse their long-standing presumption that children are legally incompetent. She urged, instead, that children’s interests be considered independently from those of their parents. Children ought to be deemed capable of making their own decisions and voicing their own interests, unless evidence could be found to the contrary. To her, the presumption of incompetence gave the courts too much discretion in deciding what was in the child’s best interests, and that discretion was most often abused in cases of children from poor minority families. “Children of these families,” she wrote, “are perceived as bearers of the sins and disabilities of their fathers.”

This is a liberal argument, because a central tenet of liberalism is that social mobility requires a release not merely from burdens imposed by poverty but also from those imposed by family–that absent or indifferent or incompetent parents should not be permitted to destroy a child’s prospects. What else was the classic Horatio Alger story about? In “Ragged Dick,” the most famous of Alger’s novels, Dick’s father runs off before his son’s birth, and his mother dies destitute while Dick is still a baby. He becomes a street urchin, before rising to the middle class through a combination of hard work, honesty, and luck. What made such tales so powerful was, in part, the hopeful notion that the circumstances of your birth need not be your destiny; and the modern liberal state has been an attempt to make good on that promise.

But Mrs. Clinton is now promoting a movement with a different message–that who you are and what you are capable of could be the result of how successful your mother and father were in rearing you. In her book “It Takes a Village,” she criticizes the harsh genetic determinism of “The Bell Curve.” But an ideology that holds that your future is largely decided at birth by your parents’ genes is no more dispiriting than one that holds that your future might be decided at three by your parents’ behavior. The unintended consequence of the zero-to-three movement is that, once again, it makes disadvantaged children the bearers of the sins and disabilities of their parents.

The truth is that the traditional aims of the liberal agenda find ample support in the arguments of John Bruer, of Jerome Kagan, of Judith Rich Harris, and of Gopnik, Meltzoff, and Kuhl. All of them offer considerable evidence that what the middle class perceives as inadequate parenting need not condemn a baby for life, and that institutions and interventions to help children as they approach maturity can make a big difference in how they turn out. It is, surely, a sad irony that, at the very moment when science has provided the intellectual reinforcement for modern liberalism, liberals themselves are giving up the fight.

Full article here

Wednesday, 18 April 2018

Improvement on brain efficiency - for everyone

Some common sense advice, related to the biochemistry of the brain.
Keep yourself hydrated. Dehydration of as little as 5% already reduces your brain’s efficiency, and severe dehydration impairs cognition comparably to being drunk. Drink clean, good water often.
Sleep. It is often overlooked, but we need 8–9 hours daily, ideally starting between 21:00 and 22:00. Tiredness is a creeping sensation that we tend to recognise too late (burnout), and sleep deprivation is a major cause of reduced performance.
Eating. There are good foods and bad foods. Limit cholesterol and increase the good fats (omega-3 and essential fatty acids such as linoleic and linolenic acids). Make fruits and vegetables a major part of your diet, water-rich and alkaline if possible.
Exercise. This has two parts. High-intensity training has the advantage of taking less time while being more efficient. Static or slow exercise such as yoga or tai chi is also recommended, given the theory that it is not that we lose mobility because we age, but that we age because we lose mobility. If this is true, our performance will increase with our mobility, especially mobility of the spine, which is an important part of the nervous system, with the added benefits of a longer health span and better hormonal balance.
Supplements: ginseng, Ginkgo biloba, magnesium malate, omega-3, zinc, vitamin D3, B complex, uridine monophosphate, to name just a few.
That was about the efficiency of the brain.
Now for the wise part, which is a bit trickier: the idea that we can become wise through our own experience or through the experience of others.
Reading and travelling occupy the first two positions on my list.
Meditation and any kind of awareness exercise.
Studying the art of happiness and the art of gratitude.
Being in Awe.

Saturday, 20 January 2018

From normal to genius - Can we change destiny? - Yes

We are always asking ourselves: Is there a way for me to become a genius? According to this article from the BBC, yes, it is possible. But it is probably a risk you are not willing to take.

Full link here.

The Mystery of Why Some People Become Sudden Geniuses

By Zaria Gorvett
6 January 2018

   "It was the summer of 1860 and Eadweard Muybridge was running low on books. This was somewhat problematic, since he was a bookseller. He handed his San Francisco shop over to his brother and set off on a stagecoach to buy supplies. Little did he know, he was about to change the world forever.

He was some way into his journey, in north-eastern Texas, when the coach ran into trouble. The driver cracked his whip and the horses broke into a run, sending the coach surging down a steep mountain road. Eventually it veered off and into a tree. Muybridge was catapulted into the air and cracked his head on a boulder.

He woke up nine days later at a hospital 150 miles (241 km) away. The accident left him with a panoply of medical problems, including double vision, bouts of seizures and no sense of smell, hearing or taste. But the most radical change was his personality.

Previously Muybridge had been a genial and open man, with good business sense. Afterwards he was risk-taking, eccentric and moody; he later murdered his wife’s lover. He was also, quite possibly, a genius.

The question of where creative insights come from – and how to get more of them – has remained a subject of great speculation for thousands of years. According to scientists, they can be driven by anything from fatigue to boredom. The prodigies themselves have other, even less convincing ideas. Plato said that they were the result of divine madness. Or do they, as Freud believed, arise from the sublimation of sexual desires? Tchaikovsky maintained that eureka moments are born out of cool headwork and technical knowledge.

But until recently, most sensible people agreed on one thing: creativity begins in the pink, wobbly mass inside our skulls. It surely goes without saying that striking the brain, impaling it, electrocuting it, shooting it, slicing bits out of it or depriving it of oxygen would lead to the swift death of any great visions possessed by its owner.

As it happens, sometimes the opposite is true.

After the accident, Muybridge eventually recovered enough to sail to England. There his creativity really took hold. He abandoned bookselling and became a photographer, one of the most famous in the world. He was also a prolific inventor. Before the accident, he hadn’t filed a single patent. In the following two decades, he applied for at least 10.

In 1877 he took a bet that allowed him to combine invention and photography. Legend has it that his friend, a wealthy railroad tycoon called Leland Stanford, was convinced that horses could fly. Or, more accurately, he was convinced that when they run, all their legs leave the ground at the same time. Muybridge said they didn’t.

To prove it he placed 12 cameras along a horse track and installed a tripwire that would set them off automatically as Stanford’s favourite racing horse, Occident, ran. Next he invented the inelegantly named “zoopraxiscope”, a device which allowed him to project several images in quick succession and give the impression of motion. To his amazement, the horse was briefly suspended, mid-gallop. Muybridge had filmed the first movie – and with it proven that yes, horses can fly.

The abrupt turnaround of Muybridge’s life, from ordinary bookseller to creative genius, has prompted speculation that it was a direct result of his accident. It’s possible that he had “sudden savant syndrome”, in which exceptional abilities emerge after a brain injury or disease. It’s extremely rare, with just 25 verified cases on the planet.

There’s Tony Cicoria, an orthopaedic surgeon who was struck by lightning at a New York park in 1994. It went straight through his head and left him with an irresistible desire to play the piano. To begin with he was playing other people’s music, but soon he started writing down the melodies that were constantly running through his head. Today he’s a pianist and composer, as well as a practicing surgeon.

Another case is Jon Sarkin, who was transformed from a chiropractor into an artist after a stroke. The urge to draw landed almost immediately. He was having “all kinds” of therapy at the hospital – speech therapy, art therapy, physical therapy, occupational therapy, mental therapy – “And they stuck a crayon in my hand and said ‘want to draw?’ And I said ‘fine’,” he says.

His first muse was a cactus at his home in Gloucester, Massachusetts. It was the fingered kind, like you might find in Western movies from the 50s. Even his earliest paintings are extremely abstract. In some versions the branches resemble swirling green snakes, while in others they are red, zig-zagging staircases.

His works have since been published in The New York Times, featured on album covers and been covered in a book by a Pulitzer Prize-winning author. They regularly sell for $10,000 (£7,400).

Most strikingly there’s Jason Padgett, who was attacked at a bar in Tacoma, Washington in 2002. Before the attack, Padgett was a college dropout who worked at a futon store. His primary passions in life were partying and chasing girls. He had no interest in maths – at school, he didn’t even get into algebra class.

But that night, everything changed. Initially he was taken to the hospital with a severe concussion. “I remember thinking that everything looked funky, but I thought it was just the narcotic pain shot they gave me,” he says. “Then the next morning I woke up and turned on the water. It looked like little tangent lines [a straight line that touches a single point on a curve], spiralling down.”

From then onwards Padgett’s world was overlaid with geometric shapes and gridlines. He became obsessed with maths and is now renowned for his drawings of formulas such as Pi. Today he’s incredulous that he once didn’t know what a tangent was. “I do feel like two people, and I’ve had my mum and my dad say that. It’s like having two separate kids,” he says.

Why does this happen? How does it work? And what does it teach us about what makes geniuses special?

There are two leading ideas. The first is that when you’re bashed on the head, the effects are similar to a dose of LSD. Psychedelic drugs are thought to enhance creativity by increasing the levels of serotonin, the so-called “happiness hormone”, in the brain. This leads to “synaesthesia”, in which more than one region is simultaneously activated and senses which are usually separate become linked.

Many people don’t need drugs to experience this: nearly 5% of the population has some form of synaesthesia, with the most common type being “grapheme-colour”, in which words are associated with colours. For example, the actor Geoffrey Rush believes that Mondays are pale blue.

When the brain is injured, dead and dying cells leak serotonin into the surrounding tissue. Physically, this seems to encourage new connections between brain regions, just as with LSD. Mentally, it allows the person to link the seemingly unconnected. “We’ve found permanent changes before – you can actually see connections in the brain that weren’t there before,” says Berit Brogaard, a neuroscientist who directs the Brogaard Lab for Multisensory Research, Florida.

But there is an alternative. The first clue emerged in 1998, when a group of neurologists noticed that five of their patients with dementia were also artists – remarkably good ones. Specifically, they had frontotemporal dementia, which is unusual in that it only affects some parts of the brain. For example, visual creativity may be spared, while language and social skills are progressively destroyed.

One of these was “Patient 5”. At the age of 53 he had enrolled in a short course in drawing at a local park, though he previously had no interest in such things. It just so happened to coincide with the onset of his dementia; a few months later, he was having trouble speaking.

Soon he became irritable and eccentric, developing a compulsion to search for money on the street. As his illness progressed, so did his drawing, advancing from simple still-life paintings to haunting, impressionist depictions of buildings from his childhood.

To find out what was going on, the scientists performed 3D scans of their patients’ brains. In four out of five cases, they found lesions on the left hemisphere. Nobel Prize-winning research from the 1960s shows that the two halves of the brain specialise in different tasks; in general, the right side is home to creativity and the left is the centre of logic and language.

But the left side is also something of a bully. “It tends to be the dominant brain region,” says Brogaard. “It tends to suppress very marginal types of thinking - highly original, highly creative thinking, because it’s beneficial for our decision-making abilities and our ability to function in normal life.” The theory goes that as the patients’ left hemispheres became progressively more damaged, their right hemispheres were free to flourish.

This is backed up by several other studies, including one in which creative insight was roused in healthy volunteers by temporarily dialling down activity in the left hemisphere and increasing it in the right. “[the lead researcher] Allen Snyder’s work was replicated by another person, so that’s the theory that I think is responsible,” says Darold Treffert, a psychiatrist from the University of Wisconsin Medical School, who has been studying savant syndrome for decades.

But what about more mainstream geniuses? Could the theory explain their talents, too?

Consider autism. From Daniel Tammet, who can perform mind-boggling mathematical calculations at stupendous speed, to Gottfried Mind, the “Cat Raphael”, who drew the animal with an astonishing level of realism, so-called “autistic savants” can have superhuman skills to rival those of the Renaissance polymaths.

It’s been estimated that as many as one in 10 people with autism have savant syndrome and there’s mounting evidence the disorder is associated with enhanced creativity. And though it’s difficult to prove, it’s been speculated that numerous intellectual giants, including Einstein, Newton, Mozart, Darwin and Michelangelo, were on the spectrum.

One theory suggests that autism arises from abnormally low levels of serotonin in the left hemisphere in childhood, which prevents the region from developing normally. Just like with sudden savant syndrome, this allows the right hemisphere to become more active.

Interestingly, many people with sudden savant syndrome also develop symptoms of autism, including social problems, obsessive compulsive disorder (OCD) and all-consuming interests. “It got so bad that if I had money I would spray the money with Lysol and put it in the microwave for a few seconds to get rid of the germs,” says Padgett.

“They are usually able to have a normal life, but they also have this obsession,” says Brogaard. This is something universal across all sudden savants. Jon Sarkin compares his art to an instinct. “It doesn’t feel like I like drawing, it feels like I must draw.” His studio contains thousands of finished and unfinished works, which are often scribbled with curves, words, cross-hatchings, and overlapping images.

In fact, though they often don’t need to, sudden savants work hard at improving their craft. “I mean, I practiced a lot. Talent and hard work, I think they are indistinguishable – you do something a lot and you get better at it,” says Sarkin. Padgett agrees. “When you’re fixated on something like that, of course you do discover things.”

Muybridge was no exception. After the bet, he moved to Philadelphia and continued with his passion for capturing motion on film, photographing all kinds of activities such as walking up and down the stairs and, oddly, himself swinging a pickaxe in the nude. Between 1883 and 1886, he took more than 100,000 pictures.

“In my opinion at least, the fact that they can improve their abilities doesn’t negate the suddenness or insistence with which they are there,” says Treffert. As our understanding of sudden savant syndrome improves, eventually it’s hoped that we might all be able to unlock our hidden mental powers – perhaps with the help of smart drugs or hardware.

But until then, perhaps us mortals could try putting in some extra hours instead. "
