Saturday 10 April 2010

Moving...

Hello all,

I'll try to send a personal message to everyone whom I've told about this blog, but if you're curious why I haven't posted an update in three days: I'm slowly transferring these posts to the definitive location of this blog, http://blogs.nature.com/ericwubbo/. Of course, I'm adapting and editing the texts a bit while I'm transferring them, which doesn't leave much time to write entirely new material, but sometime next week there will be truly fresh and sparkly new posts to be found on http://blogs.nature.com/ericwubbo/. You've been warned...

Hoping to see you all there!

Eric-Wubbo

Tuesday 6 April 2010

Are modern scientists duller?

Summary: Last year, the journal Medical Hypotheses carried an editorial titled “Why are modern scientists so dull?” It claimed that 'revolutionary' scientists, who are both smart and creative, are going extinct because universities increasingly select for hard-working, agreeable people instead of intelligent or creative ones. But is this really true?

Medical Hypotheses is a journal that allows thinkers to express their unconventional ideas without peer review. It therefore contains a number of articles that should not be taken too seriously (such as an article disputing the link between HIV and AIDS), but sometimes the recalcitrant thoughts expressed by its writers and editors can be interesting to ponder. The particular paper I'd like to ponder in this blog post appeared in 2009, volume 72, pages 237-243: 'Why are modern scientists so dull? How science selects for perseverance and sociability at the expense of intelligence and creativity'.

At first sight, this may seem just a science-flavoured equivalent of the old complaint that scientists in the past were not as soft and pampered as the spoiled 50- to 60-year-old brats who currently populate our universities. No, in the olden days, to be a scientist you had to walk barefoot to university through the snow and the blizzard, uphill both ways! However, I think it is my duty as a scientist (and my pleasure as an investigator of the science of talent) to give any opinion, no matter how unpromising at first sight, a decent hearing and analysis.

The paper itself is about six pages long, but the gist of it is that scientists are increasingly selected for agreeableness ('nice people') and conscientiousness ('hard workers'), at the cost of intelligence and creativity. After all, if you pick the ten most agreeable people in your class, it is improbable that they are also the ten most intelligent or the ten most creative people.
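
To see why selecting on one trait dilutes another, consider a small simulation (a hypothetical sketch of my own, not from the paper, assuming for simplicity that agreeableness and intelligence are independent and normally distributed):

```python
import random

# Toy simulation (my own illustration): in a class of 100, how often
# do the 10 most agreeable students also rank among the 10 most
# intelligent, if the two traits are independent?
random.seed(42)
overlap_total = 0
TRIALS = 10_000
for _ in range(TRIALS):
    students = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]
    top_agreeable = set(sorted(range(100), key=lambda i: students[i][0])[-10:])
    top_intelligent = set(sorted(range(100), key=lambda i: students[i][1])[-10:])
    overlap_total += len(top_agreeable & top_intelligent)

print(overlap_total / TRIALS)  # about 1.0: only ~1 of the 10 overlaps on average
```

Under these (admittedly crude) assumptions, only about one of the ten most agreeable students is also among the ten most intelligent, so a selection committee that weighs agreeableness heavily would indeed pay for it in the other traits.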

The first issue one can have with the paper is the implication that modern scientists are dull, or at least duller than the scientists of an undefined past. The problem here is a lack of quantitative data: even if we could somehow equate 'cleverness' with IQ (which is doubtful in itself, since certainly not all high-IQ people are star scientists; think for example of Marilyn vos Savant, who has a claimed IQ of over 200 but no major scientific achievements to her name), we don't have reliable IQ estimates of the average scientist in the 17th, 18th or 19th century to compare with the current university population. So it is always dangerous to get stuck comparing the average university professor of today (say a professor Jones, who is a decent authority in his area and quite a competent teacher and administrator, but almost certainly not Nobel-worthy) with Isaac Newton or Dmitri Mendeleev. Such a comparison would make even an intelligent man like professor Jones seem rather dull. To perform a really fair comparison, one should either compare the average scientist of the past with the average scientist of today, or the elite scientists of the past with the elite scientists of today. As long as such a comparison is not made, the assertion that modern scientists are duller than their predecessors stands on rather shaky ground.

However, the author sidesteps the question of whether modern scientists actually are duller by arguing that they should at least be duller than in the past. It may be interesting to take a closer look at his line of reasoning.

The author's main argument for the claimed dullness of scientists is that the time needed to become an independent practitioner of science keeps increasing. Whereas in the past one could often start independent research in one's early or mid-twenties (some scientists obtained full professorships around age 21), nowadays most people below 30 are strung along through elaborate PhD programmes and a series of postdocs, in which they simply do what other people tell them instead of being able to follow their own passion. And even when they attain assistant professorships and beyond, the necessity of getting funding limits their research to those areas that are palatable to the government agencies and charitable organisations they need to finance their investigations. In short: nowadays scientists are much less free than in the past, which may deter the independent, creative spirits who would be the most likely generators of new ideas and innovations.

First of all, I would agree that scientists nowadays may in general not be in the most luxurious of positions, especially as funding seems to be shifting more and more from universities giving resources to their scientists in a 'no questions asked' manner towards external agencies that can require large amounts of paperwork, reducing the time a scientist can spend on actual research. However, relative to companies a university scientist still has quite a lot of freedom, and this relatively greater freedom may still lead creative people to prefer working at universities rather than at companies.

It is unlikely, however, that the creative freedom of scientists really suffers a major impact from longer training times and a greater dependence on external funding. First of all, supervisors differ greatly in the amount of freedom they allow their junior researchers: while there undoubtedly exists a fair number of professorial control freaks, a truly smart and creative person will try to find a supervisor who allows a fair amount of freedom as long as at least some of the work is publishable; after all, such supervisors exist as well.

Regarding external funding: so far, external funding does not seem to make scientists do truly different work from what they'd prefer; it merely seems to induce university-based researchers to 'spin' their favourite subject in such a way that it seems useful and fundable. So while the increasing necessity of obtaining external funding may at first sight seem to limit the researcher's freedom, in reality most of the research is only presented as societally useful, while being almost exactly the work the investigator would have done spontaneously, or at the very least work chosen in such a way that it is still rather interesting to the investigator him- or herself.

In summary, then, I don't think that current scientists are much duller than the average scientist of the past. There are some things that may hamper their doing science more than used to be the case (such as the increased amount of time needed to get funding); still, for creative people science offers much more freedom than most companies do.

That does not mean that science cannot be made more engaging; politicians and scientists alike should probably seriously reconsider some aspects of current science, especially its funding. Still, I think the scientists themselves will have some good suggestions for that - whatever the Medical Hypotheses editor claims, they are probably still smart enough.

Saturday 3 April 2010

Why ten years of experience may not be much better than two

Summary: most people have over ten years of experience in their field of work, the amount the 10-year rule says is required for world-class excellence. Most employees, however, are far from world-class. It is, of course, possible that employees, like people in general, simply get stuck in routines that are good enough for practical purposes but certainly not optimal, as I discussed in my last post. However, it seems odd for companies to condone this, as they put a premium on top performance. Still, if we look closely at companies, it becomes less surprising why learning to excel in one's work can be so hard.

Most people, especially in more traditional callings, have worked many thousands of hours at their jobs. Yet only a very small minority is considered to really excel. Is this because of innate talent, or something else? In my previous post I discussed how mere repetition of action sequences may not in itself lead to mastery, as one needs a constant drive for improvement to really eliminate poor habits and replace them with better ways of doing things. Unfortunately, in companies there seem to be several factors making the life of the self-improving employee harder than it could be.

1) Despecialization. In contrast to sportspeople and artists, most employees in a regular company need to be more or less multifunctional. Consider a manager at a medium-sized company. The manager needs to have a decent understanding of the global goal of his (or her) division, as well as know the basics of what the people and machinery can and cannot do. He must be proficient in dealing with his subordinates as well as with his superiors (which are different skill sets: billionaire steel baron Andrew Carnegie was great at cozying up to his superiors, but too impatient and demanding to be an effective supervisor for his workmen). A good manager must also be a great networker, keep an eye on the future of the industry, and be good at written communication and giving presentations. And doubtless there are many more subjobs that I am forgetting to mention here. A typical individual might reach high levels in one (or perhaps two) of these skills, but excelling in all of them is probably impossible.

2) Lack of time. In many cases, learning and practising are seen as unproductive ways to spend time (which, I admit, may have to do with too many poorly designed training courses). However, the results of not investing in employee skills are especially apparent in underpaid jobs like schoolteaching (at least in the Netherlands): there simply is no time available for training the teachers, and no buffer time during regular work hours in which they can try out new behaviours, which are always slower in the beginning. Time pressure too easily leads a budding improvement to be abandoned for the old, less effective, but automated and hence faster method. Compare it to learning to type with ten fingers - how many people (ehm... like me) abandon their training because it takes too long to become proficient, while work pressures and deadlines are mounting?

3) Risk avoidance. Companies, understandably, don't like failure, since it tends to cost them money as well as reputation (which may cost them even more money through lost future assignments). Therefore, many companies opt for 'overkill': they hire someone so competent, or make the task so easy, that the employee will not easily fail. Unfortunately, the old saying 'if you never fail, you aren't aiming high enough' is true, for lack of failure produces lack of feedback. If Tiger Woods had practised his putting near a golf hole 1 m in diameter, he might have shot many more holes-in-one, but he would not have become such a great golfer. There are, admittedly, very good reasons why companies never want their employees to fail - unfortunately, this wish can interfere with the other goal of having people excel.

4) 'Impossible' jobs. Some jobs, such as stock market prediction, may be inherently unlearnable. For other jobs, the information needed to reach a good decision may be too expensive (or even impossible) to obtain. There are also limits on what even the best practitioner can do: even the best doctors cannot cure metastasised lung cancer (in most if not all cases), since medical science has not found the cure yet. There will always be a gap between the performance of a beginner and the maximal theoretical performance possible; if this gap is small, however, expertise won't help much at all.

5) Lack of feedback. If a worker in a DVD-player factory systematically places a certain screw wrong in a way that makes the player break down after ten years, probably no one would ever notice the error or correct him for it. And without any feedback, no learning takes place. It may very well be that the banking crisis was brought upon us because the people who approved the loans were different from those who had to mop up the damage after the bad loans defaulted. Such a lack of feedback could cause the lenders to smugly give out more loans in the comfortable assurance that their judgement was infallible.

6) Variation over time. Most people slowly change jobs within a company (usually by getting promoted), or their work itself changes. The more their work changes, the less their previous knowledge helps. Essentially, they have to start learning a new job over and over again, which may prevent them from ever accumulating enough skill to become an expert in any one of them.

All this, however, does not mean that companies are bad places to be; companies have probably contributed more to human welfare than all the geniuses in the Encyclopaedia Britannica combined (or at the very least they have made it possible for the geniuses' discoveries to touch the lives of us all). The contributions of many great workers are unfortunately seldom recognized, probably because the division of labour often makes it impossible to assign an invention or a performance to one person alone. Nevertheless, the strengths and weaknesses of companies can teach us a lot about human learning. To that learning I will return soon, but first for something a bit different, coming Tuesday 6-4-'10.


*this post is greatly indebted to Geoff Colvin and his book "Talent Is Overrated". Colvin treats the reasons why companies are often such poor training grounds in more detail than I can here. His suggestions for improving the situation are a bit more tentative, but they certainly warrant a later discussion.

Why practice doesn't make perfect

Summary: In any field, ten years seems to be the minimum time needed to attain world-class performance. However, most employees and even quite a lot of hobbyists have more than ten years or 10,000 hours of experience in a certain field - why then are so many of them just mediocre at it? Is it lack of innate talent, or perhaps something else?

A while ago I was discussing with some acquaintances the rule that ten years of practice are necessary to achieve world-class performance. One of them said: "Then we must be world-class, for we have worked in our field for over ten years!"

My acquaintances may have been world-class in their field (I wouldn't know the ranking order in their profession). Generally observing people around me, however, I must unfortunately conclude that ten years of practising anything does not guarantee top performance. Not all people who have played tennis for ten years can get into the semi-finals of Wimbledon, and not all professors win Nobel prizes (most are not even nominated!), even if they have been active in their fields for over twenty years. Apparently, while ten years of experience in something seems a necessary condition for excellence, it is not a sufficient one.

To understand why one's performance does not necessarily improve with practice, one should first realize that great performance is the exception rather than the rule in human learning. After all, most of us do most things decently, but rather mediocrely. Einstein may have been a great theoretical physicist, but he was not a very good teacher, husband, cook, actor, public speaker, juggler or driver. He wasn't even very good at experimental physics. The general pattern in human learning seems to be that we improve our performance in something until we reach a sufficient level, and then we stop improving.

At first sight, that may seem silly. Why stop improving when we could continue getting better?

Actually, the reason to stop improving is probably entirely practical. Our brain simply 'automates' a process that we do well enough, meaning that the different steps of the process are 'fired off' in the right order without us needing to think about it. This has some very nice advantages: an automated process doesn't cost much brain power, we can perform it efficiently under stress and time constraints, even when fatigued, and we can free the rest of our brain for planning and learning other things. It is incredibly handy that we can spend the time we are tying our shoes or brushing our teeth pondering how to ask our boss for a raise, instead of trying to create improved ways of shoe-tying (or having to continually concentrate on how to move the toothbrush) all the time. Having habits and automated reflexes for things that are relatively unimportant or easy saves valuable brain power for things that are important and complex; this is similar to a good process engineer not paying much attention to the colour of the paint in the factory hall, so as to be more aware of whether the meters indicate that an explosion is imminent.

The process of learning, in most cases, is like a river starting to form. During the first few attempts the water tries all directions to find a good way; but after a while the stream finds a decent path, and as that path transports more and more water, it hollows out the ground, which creates an even more attractive trajectory for the water, which draws even more water, and so on. The final path may not be the best path available - most likely, it is just the first accidentally encountered path that was good enough. However, it is good enough. And for most purposes, that is sufficient.

As an analogy, consider a young German from Berlin who wants to visit Amsterdam. Since there are no road signs in Berlin indicating in which direction Amsterdam lies, he decides to apply an old heuristic and drive to Rome first, since, as the saying goes, all roads lead to Rome. In Rome there may be a sign or a friendly Italian pointing the way to Amsterdam, so the German learns to reach Amsterdam via Rome.

Since Amsterdam is such a great place, the German travels there many times during the following years. He learns the route (via Rome!) by heart and buys a faster car, so he gets to Amsterdam quite a bit faster than on his first tentative journey. But of course, he is still taking a huge detour, and he will continue to do so until he decides to look for a faster way, or an acquaintance accidentally points him towards a road map.
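
Just to make the inefficiency concrete, here is a small sketch (my own illustration, with rough, assumed road distances) comparing the German's memorized route with the shortest one a road map would reveal:

```python
import heapq

# Toy road network; distances in km are rough assumptions for illustration.
roads = {
    "Berlin":    {"Rome": 1500, "Hannover": 290},
    "Hannover":  {"Berlin": 290, "Amsterdam": 370},
    "Rome":      {"Berlin": 1500, "Amsterdam": 1650},
    "Amsterdam": {"Hannover": 370, "Rome": 1650},
}

def shortest_distance(start, goal):
    """Dijkstra's algorithm: the route a road map would reveal."""
    queue, visited = [(0, start)], set()
    while queue:
        dist, city = heapq.heappop(queue)
        if city == goal:
            return dist
        if city in visited:
            continue
        visited.add(city)
        for neighbour, d in roads[city].items():
            if neighbour not in visited:
                heapq.heappush(queue, (dist + d, neighbour))

habit = roads["Berlin"]["Rome"] + roads["Rome"]["Amsterdam"]
print(habit)                                     # 3150 km: the learned route via Rome
print(shortest_distance("Berlin", "Amsterdam"))  # 660 km via Hannover
```

The habitual route works every single time, and practice (a faster car) makes it faster in absolute terms, but no amount of repetition will ever shrink the detour itself.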

While this story may seem somewhat absurd, this kind of trial and error, followed by consolidation of the first more-or-less effective pathway, seems to be the way in which the brain learns. A good-enough path is stored, and as it is traversed many times it becomes faster and automatic; but it may not be the best path (or method) possible. The only way to continue improving is to regularly ask oneself (or others, books or teachers) whether it would be possible to increase performance by doing things differently.
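
This difference between settling on the first workable method and continuing to probe for better ones can be caricatured in a few lines of code (a toy model of my own; the thresholds and probabilities are arbitrary assumptions, not measured data):

```python
import random

# Toy model: each 'method' for a task has a quality between 0 and 1.
# The satisficer searches only until a method is good enough, then
# merely repeats it; the deliberate learner keeps occasionally trying
# new methods and keeps whichever is best so far.
random.seed(1)
ATTEMPTS = 520        # roughly 'one week of experience, repeated 520 times'
GOOD_ENOUGH = 0.6     # arbitrary 'sufficient level' threshold

def satisficer():
    method = 0.0
    for _ in range(ATTEMPTS):
        if method < GOOD_ENOUGH:           # still searching for a workable way
            method = max(method, random.random())
        # once good enough, the habit is only executed, never revised
    return method

def deliberate_learner():
    method = 0.0
    for _ in range(ATTEMPTS):
        if random.random() < 0.1:          # now and then, deliberately experiment
            method = max(method, random.random())
    return method

trials = 1_000
print(sum(satisficer() for _ in range(trials)) / trials)          # about 0.8
print(sum(deliberate_learner() for _ in range(trials)) / trials)  # about 0.98
```

Both learners end up with something that works, but only the one who keeps questioning the stored routine approaches the best method available.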

So... are the acquaintances mentioned before really at the top of their field? As I have tried to make clear, not necessarily. Ten years of experience may be one week of experience repeated 520 times, merely deepening the inefficient initial riverbed; or it may be a systematic effort that gradually elevates the practitioner to world-class levels. As a rule of thumb, however, work in general does not tend to build much skill: partly because most jobs consist of doing the same things over and over, but also for other reasons, which can teach us much about talent and excellence (or the lack thereof). But that will be good to discuss next time.