I would like to thank Joseph for inviting me to this interesting meeting. I am
a physicist and, um, well, I've heard about interesting things. I don't think
I am sold on open science, maybe I am only one year (here?), of course I am
interested in getting science to do as much as it can, and if that's open
science then that's great. So, um, what I want to talk about is, how is
science doing these days? In some sense it's doing very well, measured by
metrics like the number of papers being written, and in some sense it's doing
badly, and that's one of the reasons I was interested in coming to this
meeting. This session is on reputation; I have done some work on that, even
though my main interest is in physics. Reputation is very important. Rewards
are very important in doing scientific work. Um. So, basically, there are
three reasons why we want to assess the reputation of scientists: we want to
decide how to distribute resources like grant money and equipment money, we
want to decide who gets hired to do science and who advances in their
positions, and we want to decide who gets rewarded and who gets prizes. These
are the reasons for looking into these questions, and I might say it's rare
to find a scientist who hasn't thought about getting a Nobel prize for his
work. So if we're advocating open science, maybe the Nobel prize should not
be restricted to so few recipients, because once you have large
collaborations it becomes a problem.


The work I have done in that direction is the h-index. And, uh, it's really a
very simple and trivial idea. The reason it got attention is that it is
better than what was used before, which really wasn't that good: the impact
factor. Let's see, which one is the one that works here. So, as you probably
know, the impact factor measures how many times a journal's articles are
cited; the impact factor of a journal is a measure of the impact of that
journal over the last three years or something. The impact factor was
introduced in the 1960s and became very important, and it was used to judge
not only journals but also scientists, so the criterion for advancement and
for success was, and still is, to publish in journals with the highest impact
factor, like Science and Nature and so on. And of course that has its flaws,
and a lot of people recognize the flaws. The main flaw, I suppose, is that it
does not really fairly reward a good paper, because on average you also have
very bad papers that don't really contribute much, and there's a big barrier
to publishing in a high-impact journal having to do with the way a paper gets
reviewed and accepted. So, anyway, I think many people recognize the impact
factor is a problem. So I came up with the idea of the h-index, which
basically simply counts the citations of individual papers, no matter where
they are cited or where they are published. It's a really very simple thing,
and what I think got attention is that it's just a single number that
quantifies in a way both the productivity and the impact of a scientist.
That's explained in this view graph: you plot the number of citations each of
your papers has received, ordering the papers by decreasing citations, so
paper 1 has the most citations, the next paper somewhat fewer, and so on; the
h-index is the highest rank h at which a paper still has at least that same
number h of citations. It's a really very simple idea, um, and I thought
about it several years ago, and we talked about it and used it, for example,
when we were looking at who to hire in our department. Anyway, I thought
maybe I'd write up a little paper, and not knowing whether or not the paper
would be publishable, I posted it on the preprint archive (arXiv), and it got
immediate attention and became quite popular. Here's an example with the most
highly cited physicists, and then the paper, 133, 150 citations, so .. there's
nothing very profound about it. Albert Einstein has an h-index of 150, which
is actually very high for people working in that time period, so it's kind of
interesting that Albert Einstein had a couple of very important things that
contributed a lot. This h-index has created a lot of activity around it. What
I find most remarkable is that its use really extended to all kinds of
disciplines, both natural sciences and social sciences, physics and- the
papers that, um, talk about it, and so on. But, anyway.
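
As a side note on the two definitions above, here is a minimal sketch in
Python of how both numbers are computed, under the standard two-year form of
the impact factor; the function names and all citation counts are purely
illustrative, not from the talk:

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # paper 1 = most cited, and so on
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank     # this paper still has at least `rank` citations
        else:
            break        # every later paper has even fewer citations
    return h

def impact_factor(cites_to_prev_two_years, items_in_prev_two_years):
    """Two-year journal impact factor: citations received this year to items
    the journal published in the previous two years, divided by the number
    of citable items it published in those two years."""
    return cites_to_prev_two_years / items_in_prev_two_years

# Hypothetical record: the three most-cited papers have at least 3 citations
# each, but the fourth has only 2, so h = 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
print(impact_factor(450, 150))    # -> 3.0
```

Sorting once and scanning down the ranked list is all the h-index needs,
which may be part of why it caught on: anyone can compute it from a citation
database in a few lines.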


That's the past. But, I mean, so, what does this tell us? One thing it tells
us is that it's incredibly important for scientists to measure themselves. I
personally never expected that this would have the impact and attention that
it got, but it says that it's very important to figure out how to make it
worthwhile for scientists to be more open and to share information, because
really it's all about the credit, and that's very relevant to open science. I
don't at this point have a lot of good ideas, but I would like to make a
point about what's wrong with all this. The h-index and the impact factor
measure something very limited: what other people are saying, or basically
what they are paying attention to. It does not necessarily mean that the work
is groundbreaking, or that it is going to be lasting. It only means that it
is getting a lot of attention *now*. There are lots of bandwagons in
different areas of science, people racking up the citations, but is that
really the best way to move science forward? The h-index has much good in
that it makes it a little more democratic to publish in a journal that is not
very well known but still get people to pay attention to it, but it still has
the problem that we don't have good ways of advancing or recognizing science
that is perhaps not so mainstream. I personally, in my own career, have seen
that- I have gone through periods where I work in mainstream areas, and then
there are some where I think my work is more groundbreaking, and it does not
get attention because it is not mainstream. And that's one of the main
problems in science today, because it's focused on the next grant application
and the promotion and so on. People build on what other people have done
instead of questioning the fundamental problems in the things that are
generally thought to be right. So perhaps there is a way of opening up
science, getting more people involved, to make it easier to go against some
of these traditions that might not be correct. But of course one has to be
careful: when one opens up something, it might introduce more noise and
worsen the signal-to-noise ratio, and how to keep the noise down is something
we need to figure out, devising tools so that we can really take advantage of
the new opportunities of these new ways of sharing. Thank you very much.