zaterdag 10 april 2010

Moving...

Hello all,

I'll try to send a personal message to everyone I've told about this blog, but in case you're wondering why I haven't posted an update in three days: I'm slowly transferring these posts to the definitive location of this blog, http://blogs.nature.com/ericwubbo/. Of course, I'm adapting/editing the texts a bit while transferring them, which doesn't leave much time to write entirely new material, but sometime next week there will be truly fresh and sparkly new posts to be found on http://blogs.nature.com/ericwubbo/. You've been warned...

Hoping to see you all there!

Eric-Wubbo

dinsdag 6 april 2010

Are modern scientists duller?

Summary: Last year, the journal Medical Hypotheses contained an editorial titled “Why are modern scientists so dull?” It claimed that 'revolutionary' scientists, who are both smart and creative, are going extinct because universities select more and more for hard-working, agreeable people instead of intelligent or creative ones. But is this really true?

Medical Hypotheses is a journal that allows thinkers to express their unconventional ideas without peer review. It therefore contains a number of articles that should not be taken too seriously (such as an article disputing the link between HIV and AIDS), but sometimes the recalcitrant thoughts expressed by its writers and editors can be interesting to ponder. The particular paper I'd like to ponder in this blog post appeared in 2009, volume 72, pages 237-243: 'Why are modern scientists so dull? How science selects for perseverance and sociability at the expense of intelligence and creativity'.

At first sight, this may seem just a science-flavoured equivalent of the old complaint that scientists in the past were not as soft and pampered as the spoiled 50- to 60-year-old brats who currently populate our universities. No, in the olden days, to be a scientist you had to walk barefoot to university through the snow and the blizzard, uphill both ways! However, I think it is my duty as a scientist (and my pleasure as an investigator of scientific talent) to give any opinion, no matter how unpromising at first sight, a decent hearing and analysis.

The paper itself is about six pages long, but the gist of it is that scientists are increasingly selected for agreeableness ('nice people') and conscientiousness ('hard workers'), at the cost of intelligence and creativity. After all, if you pick the ten most agreeable people in your class, it is unlikely that they are also the ten most intelligent or the ten most creative people.

The first issue one can have with the paper is the implication that modern scientists are dull, or at least duller than the scientists of an undefined past. The problem here is a lack of quantitative data: even if we could somehow define 'cleverness' by IQ (which is doubtful in itself, as certainly not all high-IQ people are star scientists; think for example of Marilyn vos Savant, who has a claimed IQ of over 200 but no real achievements to her name), we don't have reliable IQ estimates of the average scientist in the 17th, 18th or 19th century to compare with the current university population. So it is always dangerous to get stuck comparing the average university professor of today (say a professor Jones, who is a decent authority in his area and quite a competent teacher and administrator, but almost certainly not Nobel-worthy) with Isaac Newton or Dmitri Mendeleev. Such a comparison would make even an intelligent man like prof. Jones seem rather dull. To perform a really fair comparison, one should either compare the average scientist of the past with the average scientist of today, or the elite scientists of the past with the elite scientists of today. As long as such a comparison is not made, the assertion that modern scientists are duller than the scientists of the past stands on rather shaky ground.

However, the author sidesteps the issue of whether modern scientists actually are duller by arguing that they at least should be duller than in the past. And it may be interesting to take a closer look at his line of reasoning.

The author's main argument for claiming scientist dullness is that the time needed to become an independent practitioner of science keeps increasing. Whereas in the past one could often start independent research in one's early or mid-twenties (some scientists got full professorships around age 21), nowadays most people below 30 are strung along by elaborate PhD studies and a series of postdocs, in which they simply do what other people tell them instead of being able to follow their own passion. And even when they attain assistant professorships and beyond, the necessity of getting funding limits their research to those areas that are palatable to the government agencies and charitable organisations they need to finance their investigations. In short: nowadays scientists are much less free than in the past, which may deter the independent, creative spirits who would be the most likely generators of new ideas and innovations.

First of all, I would agree that scientists nowadays may in general not be in the most luxurious of positions, especially as funding seems to be shifting more and more from universities giving resources to their scientists in a 'no questions asked' manner towards external agencies that can require large amounts of paperwork, reducing the time a scientist can spend on actual research. However, relative to companies, a university scientist still has quite a lot of freedom; and this relatively greater freedom may still lead creative people to prefer working at universities rather than companies.

It is unlikely, however, that the creative freedom of scientists is really suffering a major impact from longer training times and a greater dependence on external funding. First of all, supervisors differ greatly in the amount of freedom they allow their junior researchers – while there undoubtedly exists a fair number of professorial control freaks, a truly smart and creative person will try to find a supervisor who allows a fair amount of freedom as long as at least some of the work is publishable; after all, such supervisors exist as well.

Regarding external funding: so far, external funding does not seem to make scientists do truly different work from what they'd prefer; it merely seems to induce university-based researchers to 'spin' their own favourite subject in such a way that it seems useful and fundable. So while the increasing necessity of obtaining external funding may at first sight seem to limit the researcher's freedom, in practice most of the research is merely presented as societally useful, and is in reality almost exactly the same work the investigator would do spontaneously, or at least chosen in such a way that it is still rather interesting to the investigator him- or herself.

In summary, then, I don't think that current scientists are much duller than the average scientist of the past. There are some things that may hamper their doing science to a greater degree than used to be the case (such as the increased amount of time needed to get funding); still, for creative people science offers much more freedom than most companies do.

That does not mean that science cannot be made more engaging, and politicians and scientists alike should probably seriously reconsider some aspects of current science, especially the funding aspect. Still, I think scientists themselves will have some good suggestions for that - whatever the Medical Hypotheses editor claims, they are probably still smart enough.

zaterdag 3 april 2010

Why ten years of experience may not be much better than two

Summary: most people have over ten years' experience in their field of work, the amount the 10-year rule says is needed for world-class excellence. Most employees, however, are far from 'world-class'. It is, of course, possible that employees, like other people, just get stuck in routines that are good enough for practical purposes but certainly not optimal, as I discussed in my last post. However, it seems odd for companies to condone this, as they put a premium on top performance. Still, if we look closely at companies, it becomes less surprising why learning to excel in one's work can be so hard.

Most people, especially in more traditional callings, have worked many thousands of hours at their jobs. Yet only a very small minority is considered to really excel. Is this because of innate talent, or something else? In my previous post I discussed how mere repetition of action sequences may not in itself lead to mastery, as one needs a constant drive for improvement to really eliminate poor habits and replace them with better ways of doing things. Unfortunately, in companies there seem to be several factors making the life of the self-improving employee harder than it could be.

1) Despecialization. In contrast to sportspeople and artists, most employees in a regular company need to be more or less multifunctional. Consider a manager at a medium-sized company. The manager needs a decent understanding of the overall goal of his (or her) division, as well as a basic knowledge of what the people and machinery can and cannot do. He must be proficient in dealing with his subordinates as well as with his superiors (which require different skill sets – billionaire steel baron Andrew Carnegie was great at cozying up to his superiors, but too impatient and demanding to be an effective supervisor for his workmen). A good manager must also be a great networker, keep an eye on the future of the industry, and be good at written communication and giving presentations. And doubtless there are many more subjobs that I am forgetting to mention here. A typical individual might reach high levels in one (or perhaps two) of these skills, but excelling in all of them is probably impossible.

2) Lack of time. In many cases, learning and practising are seen as unproductive ways to spend time (which, I admit, may have to do with too many poorly designed training courses). However, the results of not investing in employee skills are especially apparent in underpaid jobs like school teaching (at least in the Netherlands): there simply is no time available for training the teachers, and no buffer time during regular work hours in which they can try out new behaviours, which are always slower in the beginning. Time pressure too easily leads a budding improvement to be abandoned for the old, less effective, but automated and hence faster method. Compare it to learning to type with ten fingers - how many people (ehm... like me) abandon their training because it takes too long to become proficient, while work pressures and deadlines are mounting?

3) Risk avoidance. Companies, understandably, don't like failure, since it tends to cost them money as well as reputation (which may cost them even more money through lost future assignments). Therefore, many companies opt for 'overkill': they hire someone so competent, or make the task so easy, that the employee will not easily fail. Unfortunately, the old saying 'if you never fail, you aren't aiming high enough' is true, for lack of failure produces lack of feedback. If Tiger Woods had practised his putting near a golf hole 1 m in diameter, he might have scored many more holes-in-one, but he would not have become such a great golfer. There are, admittedly, very good reasons why companies never want their employees to fail – unfortunately, this wish can interfere with the other goal of having people excel.

4) 'Impossible' jobs. Some jobs, such as stock market prediction, may be inherently unlearnable. For other jobs, the information needed to reach a good decision may be too expensive (or even impossible) to obtain. There are also limits to what even the best practitioner can do - even the best doctors cannot cure metastasised lung cancer (in most if not all cases), since medical science has not found the cure yet. There will always be a gap between the performance of a beginner and the maximal theoretical performance possible; if this gap is small, however, expertise won't help much at all.

5) Lack of feedback. If a worker in a DVD-player factory systematically places a certain screw wrong, causing the player to break down after ten years, probably no one would notice the error or correct him for it. And without any feedback, no learning takes place. It may very well be that the banking crisis was brought upon us because the people who approved the loans were different from the people who had to mop up the damage after the bad loans defaulted. Such a lack of feedback could cause the lenders to smugly give out more loans, assured that their judgement was infallible.

6) Variation over time. Most people slowly change jobs within a company (usually by getting promoted), or their work itself changes. The more their work changes, the less their previous knowledge helps. Essentially, they must start over again and again to learn new jobs, which may prevent them from ever accumulating enough skill to be an expert in any one of them.

All this, however, does not mean that companies are bad places to be; companies have probably contributed more to human welfare than all the geniuses in the Encyclopaedia Britannica combined (or at the very least made it possible for the geniuses' discoveries to touch the lives of us all). The contributions of many great workers are unfortunately seldom recognized, probably due to the division of labour, which often makes it impossible to attribute an invention or a performance to one person alone. Nevertheless, the strengths and weaknesses of companies can teach us a lot about human learning. To that learning I will return soon, but first for something a bit different, coming Tuesday 6-4-'10.


*this post is greatly indebted to Geoff Colvin and his book "Talent is Overrated". Colvin treats the reasons why companies are often such poor training grounds in more elaborate detail than I can here. His suggestions for improving the situation are a bit more tentative, but they certainly warrant a later discussion.

Why practice doesn't make perfect

Summary: In any field, ten years seems to be the minimum time needed to attain world-class performance. However, most employees and even quite a lot of hobbyists have more than ten years' or 10,000 hours' experience in a certain field - why, then, are so many of them just mediocre at it? Is it lack of innate talent, or perhaps something else?

A while ago I was discussing with some acquaintances the rule that ten years of practice are necessary to achieve world-class performance. One of them said: 'Then we must be world-class, for we have worked in our field for over ten years!'

My acquaintances may have been world-class in their field (I wouldn't know the ranking order in their profession). Observing people around me in general, however, I must unfortunately conclude that ten years of practising anything does not guarantee top performance. Not all people who have played tennis for ten years can reach the semi-finals of Wimbledon, and not all professors win Nobel prizes (most are not even nominated!), even if they have been active in their fields for over twenty years. Apparently, while ten years of experience seems a necessary condition for excellent performance, it is not a sufficient one.

To understand why one's performance does not necessarily improve with practice, one should first realize that great performance is the exception rather than the rule in human learning. After all, most of us do most things decently, but no more than that. Einstein may have been a great theoretical physicist, but he was not a very good teacher, husband, cook, actor, public speaker, juggler, driver, and so on. He wasn't even very good at experimental physics. The general pattern in human learning seems to be that we improve our performance in something until we reach a sufficient level, and then we stop improving.

At first sight, that may seem silly. Why stop improving when we could continue getting better?

Actually, the reason to stop improving is probably entirely practical. Our brain simply 'automates' a process that we do well enough. This means that the different steps of the process are 'fired off' in the right order without our needing to think about it, which has some very nice advantages. It doesn't cost much brain power: we can perform the process efficiently under stress and time constraints, even when fatigued, and we free the rest of our brain for planning and learning other things. It is incredibly handy that we can spend the time we are tying our shoes or brushing our teeth pondering how to ask our boss for a raise, instead of trying to create improved ways of shoe-tying (or just having to continually concentrate on how to move the toothbrush). Having habits and automated reflexes for things that are relatively unimportant or easy saves valuable brain power for things that are important and complex; this is similar to a good process engineer, who would not pay much attention to the colour of the paint in the factory hall, the better to notice when the meters indicate that an explosion is imminent.

The process of learning is, in most cases, like a river starting to form. During the first few attempts the water tries all directions to find a good way; after a while the stream finds a decent path, and as that path transports more and more water, it hollows out the ground, creating an even more attractive trajectory for water, which draws even more water, and so on. The final path may not be the best path available – most likely, it is just the first accidentally encountered path that was good enough. However, it is good enough. And for most purposes, that is sufficient.

As an analogy, consider a young German from Berlin who wants to visit Amsterdam. Since there are no road signs in Berlin indicating the direction of Amsterdam, he decides to follow the old heuristic and drive to Rome first, since, as the saying goes, all roads lead to Rome. In Rome there may be a sign or a friendly Italian pointing the way to Amsterdam, so the German learns to reach Amsterdam via Rome.



Since Amsterdam is such a great place, the German travels there many times over the following years. He learns the route (via Rome!) by heart and buys a faster car, so he gets to Amsterdam quite a bit faster than on his first tentative journey. But of course, he is still taking a huge detour, and will continue to do so until he decides to find a faster way, or until an acquaintance accidentally points him towards a road map.

While this story may seem somewhat absurd, similar trial-and-error and consolidation of the first more-or-less effective pathway seems to be the way in which the brain learns. A good-enough path is stored, and as it is traversed many times, it becomes faster and automatic; but it may not be the best path (or method) possible. The only way to continue improving is to regularly ask oneself (or others/books/teachers) whether it would be possible to increase performance by doing things differently.

So... are my two acquaintances mentioned before really at the top of their field? As I have tried to make clear: not necessarily. Ten years of experience may just be one week of experience repeated 520 times, merely deepening the inefficient initial riverbed; or it may be a systematic effort that gradually elevates the practitioner to world-class levels. As a rule of thumb, however, work in general does not tend to build much skill: partly because most jobs consist of doing the same things over and over, but also for other reasons, which can teach us much about talent and excellence (or the lack thereof). But that will be a good topic for next time.

zaterdag 27 maart 2010

The ten year rule

Summary: Some people seem so incredibly good at their field that one can understand why the Romans attributed outstanding performance to a 'genius', a spirit sent by the gods to inspire the fortunate individual. However, when studying the lives of prodigies and eminent persons, it becomes clear that eminence in any field takes lots of practice, even for the most 'talented' - at least ten years. Unless you're a painter...


'What nonsense is that?' Tom Poes exclaimed. 'You gave Sir Oliver the recipe for making gold, which involved molten lead that needed to be stirred...'
'With a stone', the other added. 'Stirring and stirring, around and around and around. 123,456,789 times; for it is not the formula that is hard, it is the work! After stirring 123,456,789 times the stone has become the philosopher's stone, and only then does the lead it touches turn into gold.'
Tom Poes and Roerik Omenom, 'The lead reformer', Marten Toonder


When watching or listening to prodigies or world-class experts, it can sometimes be hard to believe that those people are mere humans. How can they create the most wonderful music, hit the impossible ball, develop such simple yet marvellous theories? It is no wonder the Romans explained extremely high ability with the concept of 'genius': a special spirit instilled by the gods at birth into the fortunate individual.

The existence of prodigies like Mozart, Tiger Woods, or Bobby Fischer in chess seems to lend extra credence to the 'genius' idea. However, one should remember that prodigies are called prodigies because they perform much better in a certain field than other children. For example, Tiger Woods won his first golf competition at age 2 (the Under Age 10 section of the Drive, Pitch, and Putt competition at the Navy Golf Course in Cypress, California). However, he was not able to beat his father, who was merely a good amateur golfer, until age 11. So while Woods's golfing skill was extraordinary for a child his age, in absolute terms he was still far from good enough to take a serious shot at the world championships: he won his first major trophy in the adult competition at age 21, by which time he had been golfing for more than 19 years. Similarly, Mozart was indeed a child prodigy, but his first composition that is still regarded as a masterwork (instead of a pretty good work – for a 10-year-old) was the Piano Concerto no. 9, written when Mozart was 21 years old. By that time, he had had composition lessons for at least ten years. Bobby Fischer, who was not yet quite an adult when he became a chess grandmaster at age 15, had nevertheless been practising chess for nine years already.

In conclusion, even prodigies apparently need to put in time to reach an (adult) world-class level in their chosen field.

Researchers who looked at prodigies, as well as at people who were not considered prodigies but nevertheless reached great eminence in their field (for example Einstein, Darwin, Marie Curie, Picasso), discovered that all of them had spent at least ten years (estimated at about 10,000 hours) mastering their field before they produced their first masterpiece. This finding has been so consistent that it has been dubbed the '10-year rule': over a decade of practice is needed to excel in any field.

Now, the exact number of ten years is not entirely accurate or absolute: it depends on the field. If a field is relatively young, competition relatively low, or the knowledge required relatively limited, it seems to be easier to reach world-class levels. Examples are the World Memory Championships (instituted only recently), painting (where the rule is rather a six-year rule than a ten-year rule, probably because there are relatively fewer active painters, and hence less competition, than in other fields; or would less knowledge be required to produce novel and high-quality works?), and physics/maths, especially in the past (winners of Nobel prizes in physics tended to be younger than chemists and biologists/doctors – perhaps because one needs relatively less knowledge to succeed in maths).

On the other hand, if competition is fierce and performance does not greatly depend on being in one's physical prime (as it does in tennis), the ten-year rule can become a 15-year rule or even a 20-year rule, which is the direction in which performers of classical music seem to be moving.

So, the first rule of talent is that whatever the field, and no matter how smart or talented you are, you still need quite some years of hard work to get to the top. However, that raises the question: if only ten years of effort are required to excel in a certain field, why aren't there more people excelling? Why isn't the world rife with Einstein-class physicists and Rembrandt-eclipsing painters? Must we not conclude that the real talents have a certain gift that allows them to soar to great heights while the rest of us are still plodding along, even after ten, fifteen or twenty years? Actually, the answer seems to be 'no', and that is what I intend to talk about next time.

woensdag 24 maart 2010

'Integrated science': a better way of teaching science?

Summary: Princeton University currently offers a special course programme for undergraduates: integrated science. Would this be a good idea to stimulate scientific talent, or is it just a marketing ploy?

Quite some time ago, one of my former students pointed me to a website of Princeton University, http://www.princeton.edu/integratedscience/ . His question to me, as a beginning expert on science talent development, was what I thought of such a programme.

While this question may sound simple, it actually goes quite deep, and it is only now (with a lot more reading and thinking under my belt) that I'd venture to give an answer. But before that, I should probably explain what this 'integrated science' is all about.

'Integrated science' is an undergraduate programme at Princeton University, consisting of a number of related courses covering some mathematics, physics, chemistry and biology, as well as teaching students how to apply, for example, computer programs to scientific problems. Its aim seems to be to give students a broad yet solid science background, as well as to prepare them for the sciences of the future.

For the students, integrated science would seem to offer multiple advantages.

First of all, integrated science may be less 'scary' to students than traditional monodisciplines such as those offered at Dutch universities. Many students leaving high school are rather uncertain which discipline to pursue; it is a weighty decision, and despite open days at the universities, students often have no idea what they are getting themselves into. This leads to large numbers of students abandoning their studies within the first year, or switching studies (only 20% of Dutch students get their BSc or BA in three years). Other students try to avoid burning bridges by following studies that are 'generally useful', like economics, management sciences, psychology or law. Natural science is generally considered too specialist, and therefore too scary to choose. By creating a course such as integrated science, which students can choose while feeling that they can still go in any direction they want (not necessarily being condemned to a life of 'stuffy' lab work), the Princeton faculty makes it more attractive to choose science. And since it is, in my opinion, very good if more people have at least a decent understanding of science, and if some people who may not consider themselves much of a scientist discover the joy of science, I can only applaud integrated science for that.

The second reason I think integrated science may be useful has to do with the American system. While many European systems (such as the Dutch one) only allow students to choose one of many monodisciplines, the US system generally allows students to pick and choose which courses to follow. However, the freedom this offers is not without its price, for at least some students apparently wind up with a hodgepodge of courses that doesn't make them very marketable at all (as the Avenue Q song goes, "What do you do with a B.A. in English?"). Having a solid course programme with a certificate may ensure that Princeton students don't shoot themselves in the foot (too much) and are better prepared for a career.

However, there may have been another undercurrent in my former student's question, and it may be worthwhile to discuss it here, since this, after all, aims to be a science talent blog. That question is: would integrated science make people better scientists than traditional (European) monodisciplinary studies do?

The cop-out answer would be that we can only know for sure in twenty years (which is scientifically speaking true). However, I think we can make some decent predictions based on history.

Briefly, if one looks at the careers of famous scientists (such as Darwin, Einstein, Newton), most of them seem to have been very monodisciplinary. That does not mean that they did not have hobbies (Einstein played the violin and loved sailing), nor does it mean that they never read books outside their field (Darwin read about geology, as well as Malthus' book on economics and population growth). However, their main field was the subject of almost all their exertions. Even people who exhausted their university's courses in multiple fields (as Linus Pauling did) quickly specialized in one area. Broadly, if one trusts Simonton's conclusions in his book "Origins of Genius", there are basically two kinds of creative breakthroughs. The first and most common kind comes from people who have thoroughly mastered a specific field over many years (such as Charles Darwin). The second kind is usually a once-in-a-lifetime spark of brilliance from someone who, having worked in one field (and reached a decent level there), switches to another field; but this multidisciplinarity could be considered more 'serial', stacking one field on top of another instead of truly combining them. Creativity researchers have also found that most breakthroughs are a combination of deep domain-specific knowledge, obtained by many years of intensive study in a particular field, and general heuristics that everyone possesses (see for example Weisberg's paper in the Cambridge Handbook of Expertise and Expert Performance). In short: if multidisciplinarity does make a great scientist, it is not readily apparent from history.

Of course, this is just history, and people could justly accuse me of not taking into account that in the future areas will pop up that combine multiple traditional fields. However, that has been happening for over a hundred years already – take biochemistry, for example. Biochemistry slowly came into being during the 19th century – not through people who had elaborate backgrounds in both chemistry and biology, but through chemists who were curious enough to investigate biological processes. In general, even authorities in 'new fields' commonly start as specialists in one adjacent field, 'discover' the neighbouring field during their PhD or postdoc, and make it their speciality. For example, one of the best-known Dutch bioinformaticians started out as an 'ordinary' biochemist who discovered the possibilities of the computer, and one of my own former supervisors, a computational chemist, started out as a regular organic chemist who did an internship with a professor who was just discovering the possibilities of computer calculations for chemistry. In brief: it would be reasonable to expect that even the leaders of the new scientific areas of the future start out as good monodisciplinarists today, though probably monodisciplinarists with some curiosity and an open mind about developments outside the traditional confines of their field.

It IS of course possible that a specially combined programme such as integrated science would outperform traditional multidisciplinary science, or even monodisciplinary science. At the moment, though, based on what I know, this seems rather unlikely: our knowledge of education is still so shaky that attempts to let students generalize beyond the specific knowledge taught in courses are generally doomed to failure, and integrated science may not be much more effective than any other 'multidisciplinary' university programme. It will probably teach students some tricks that they can reproduce, but deep understanding and subsequent innovative thinking ('far transfer', as it is called in the training lingo) may be beyond the capabilities of even Nobel-prize-winning faculty to simply 'teach in the course'; students can probably only develop that through long subsequent self-study and specialisation.

This somewhat bleak conclusion, however, does not mean that interdisciplinary science is useless. If it dispels the fears that many young people have about being or becoming a scientist, and if it gives young people a solid education with which to get a job (multidisciplinarity, though a bit awkward in academia, is often appreciated in industry, where one needs to collaborate to a much greater extent with people from different backgrounds), it is probably a valuable addition to the Princeton curriculum. And, as the talented young mathematicians and research neurologists in Bloom's study ('Developing Talent in Young People') have shown, it does not really matter if students try out diverse areas during their bachelor studies; sooner or later they found 'their field', and their passion for it made them excel in it. 'Integrated science' may be a counterproductive (un)specialisation for a young scientist at PhD level. For freshmen, however, it may serve as an attractive springboard that eventually helps them find the field they'll love.

Sunday 21 March 2010

Book review: Mindset – the new psychology of success (Carol Dweck)

Summary: Stanford professor Carol Dweck writes about her research on how people's beliefs about ability and the effects of training influence their success. People who think that their intelligence and ability are fixed consider each failure a sign that they are inherently incompetent and unlovable, making them stressed, defensive, and challenge-avoiding. People with a growth mindset, who see ability as the result of training, tend not to get discouraged as easily and often get better results in the long run.

There's an old adage that one should never judge a book by its cover. Of the many books I have read in my life, this is probably the one to which that applies most (barring my misprint of The Baron of Munchhausen with the Don Quixote cover). Words like "the new psychology of success" reek of a loudmouthed, semi-literate author who has read one or two books on popular psychology and is now marketing 'scientifically sound' workshops. In reality, though, the book is written by a prominent researcher (who is indeed an actual psychologist) who summarizes and popularizes her own and her group's scientific research.

Though the book has over 240 pages, that is probably a minimum required by the publisher, since the core idea is actually quite compact: people who believe that their results are solely the consequence of an intrinsic, unalterable, genetic endowment are generally less motivated to learn, and succumb more easily to stress or depression. People who believe that their abilities are the result of learning and experience, by contrast, are much more likely to learn and to endure adversity without too many ill emotional effects.

While this sounds rather common-sensical, the beauty of professor Dweck's book is in how carefully it uses both biographical data and scientific research to strengthen the reader's understanding of the true implications of this finding. After you've read 'Mindset', you will understand much better why John McEnroe was famous for his tantrums (he had a very fixed mindset: a tennis loss meant that he was inherently worthless), as well as why a four-star chef like Bernard Loiseau committed suicide. You'll learn that Chinese students who think that intelligence is unalterable don't follow remedial English courses, but also that American medical students who believe in innate ability flunk chemistry much more often than students who consider early failure a sign that they haven't worked hard enough or should try other learning strategies. You'll also learn some counterintuitive things, such as that you should never praise children for being smart or talented.

The subtleties of praise, for example, are neatly illustrated by one of Dweck's core experiments, in which she divided preschoolers into two groups, both of which had to solve puzzles. After a certain time had expired, the children had to report how many puzzles they had solved, and were then praised by the experimenter. The children in the first group were praised with sentences like "You got seven out of ten! You must be very smart!"; the children in the second group, however, heard "Seven out of ten! You must have worked very hard. You can be proud of yourself."
Of course, the real experiment began only then.

After being praised, the children were asked whether they'd like to solve another set of puzzles, and were allowed to choose either puzzles as difficult as the first set or puzzles of a greater difficulty level. What do you think the kids chose? Most of the children praised for their intelligence chose the 'standard' puzzles; the children praised for working hard chose the more difficult ones.

In reality, all children got the more difficult puzzles (after all, one should never trust an experimental psychologist), which made the 'clever' children break down and burst into tears – suddenly they were not smart anymore. The 'hard workers', in contrast, thoroughly enjoyed themselves. When a third set of puzzles of the same difficulty as the first was finally given, the 'intelligent' children performed worse, while the hard workers had actually improved.

Explaining one's successes and failures in terms of work and experience may therefore be a much more sensible strategy than clinging to notions of innate ability. It is sad that people like Bernard Loiseau and many top Chinese students at elite universities have ended their own lives in response to their perceived lack of ability, and almost as sad that many other people give up activities or stop challenging themselves (which would make them grow) because they believe they cannot change their skills. So, while the book's title may indeed contain far too much hot air, and the book is definitely at least 100 pages thicker than it should have been, I think that every parent and teacher should know or learn its core principles by heart, and teach them to those they want to prepare for life. One's mindset is unlikely to be the only component of success, but managing it well may nevertheless allow one to achieve higher performance, and to do so with a lot less stress and much more joy.