This week I have been reading an impassioned (and somewhat irate) discussion on INCYT about ResearchGate, Elsevier and the like; meanwhile, a manuscript arrives complaining about the criteria imposed by ANECA, and I cannot help a grimace halfway between a smile and weariness. Complaints about sexenios, impact factors and anecados have been a constant throughout my research career. Which is only logical, considering that I work in bibliometrics and that for years I evaluated academic CVs and prepared anecas with my colleagues at Sexenios.com. And yet, now that I am further removed from that world, I find it increasingly irritating that any seminar, talk, course or discussion with lecturers and researchers about Open Access, altmetrics and related topics always ends up in the same place (whether implicitly or explicitly): evaluations, anecas and sexenios. It irritates me for two reasons, which are clearly reflected in the INCYT discussion and in the manuscript on accreditations, respectively. Read more “Rabietas y pataleos académicos”
Authors: Nicolás Robinson-Garcia and Richard Woolley.
Whenever the funding and evaluation of scientific activity is discussed, the United Kingdom comes up as the place to look at to analyze what works and what does not. What happens in the British Isles is usually watched with expectation by the research evaluation community, given the vast resources devoted to evaluation there, whose results determine a large share of the total funding universities receive. This complex evaluation system, which takes place every seven years and distributes more than one and a half billion euros among British universities, is known by its English acronym, REF (Research Excellence Framework). Seen from the outside, what attracts most interest is the intense debate among researchers and research managers every time the design of the next evaluation exercise begins. Anyone interested in the topic need only follow the many posts that the Impact Blog of the London School of Economics publishes on it.
ThinkEPI note published on 5 March 2018 on the IWETEL distribution list.
Rafael Repiso, Universidad Internacional de La Rioja, member of the ThinkEPI Group
Nicolás Robinson-Garcia, School of Public Policy, Georgia Institute of Technology, Atlanta GA
I wrote to Luria to save me
J.D. Watson (1968)
In his account of the discovery of the structure of DNA, James D. Watson repeatedly mentions his mentor Salvador Luria and the key role Luria played in securing the funding Watson needed to support himself during his postdoctoral years in Europe. First Copenhagen and finally, after a fruitless attempt to go to London and a short stay in Naples, Cambridge, where he met Francis Crick, giving rise to one of the greatest, and perhaps the best known, discoveries of the twentieth century. Salvador Luria, Nobel laureate in Medicine in 1969 for his discoveries on the replication mechanism of viruses, always came to his protégé's rescue, whether by securing funding or by introducing him to other colleagues so that Watson could keep traveling and expanding his knowledge. The freedom to travel, experiment, fail and reinvent oneself was key to the discovery of the structure of DNA. Read more “Movilidad científica, un fenómeno con múltiples caras”
News is starting to appear about REF2021, the most important evaluation exercise for the UK science system. It will be the second time British universities are assessed under the current system. With that exercise approaching, we take the opportunity to comment on the results of its first implementation back in 2014 and to offer some reflections on the important changes it introduces into the British evaluation system. Read more “Los resultados del REF2014 del Reino Unido marca el camino a seguir en la evaluación científica”
Cañibano, C. & Bozeman, B. (2009). Curriculum vitae method in science policy and research evaluation: the state-of-the-art. Research Evaluation, 18(2), 86-94.
This paper reviews the use of CV analysis in science policy. The value of CVs lies in the fact that they serve as a personal advertisement of a researcher's services, and in the fact that researchers are strongly encouraged to keep them timely and accurate. Until the early 1990s, CV analysis was used only anecdotally and as a complementary method. However, the Research Value Mapping programme, developed by Bozeman and Rogers among others, fostered its expansion as a solid methodology. Contrary to other methodological approaches, CV analysis is characterized by being theory-driven. The method has been applied to three main research topics: career trajectories, mobility, and the mapping of collective capacity. However, CV analysis is not free of methodological limitations, namely availability, heterogeneity, truncation, missing information, and coding inconsistency. The authors suggest solving part of these problems by complementing the data with other sources, such as bibliometric or survey data.
Dietz, J.S. & Bozeman, B. (2005). Academic careers, patents, and productivity: industry experience as scientific and technical human capital. Research Policy, 34(2), 349-367.
This paper analyzes productivity differences based on the career paths of scientists in industry, government and academia who have ended up in academia. The paper is framed within Bozeman's scientific and technical human capital (STHC) framework. The authors argue that, until then, most studies had focused either on industry or on academia, and few on the collaboration patterns between the two, but always treating researchers as either academic or industrial instead of acknowledging the diversity of career patterns observed in their trajectories. One of the arguments made is that by favoring capacity (here seen as richness in career trajectories) one favors knowledge production. Hence their first hypothesis is that those with more diversified career patterns will be more productive, which they contrast with the opposite hypothesis that scientists who always worked in academia will be more productive. While the former rests on social capital grounds (more ties, more connections, more productivity), the latter rests on job prioritization and incentives, as publication is one of the main tasks of academic scientists. Two alternative hypotheses are also formulated: 1) early career experience in academia will lead to more productivity, and 2) publishing before the PhD will also warrant higher productivity in the future. They observe that precocity and homogeneity in career patterns have a weak positive relation with productivity, along with years in industry and time since the PhD. They also compared mean productivity, before and after the move, between the groups who moved from industry to academia and vice versa, and observed increased productivity after moving. While the framing of the paper is strong and inspiring, its results are not sufficiently convincing. Two of their variables are worth noting:
- Homogeneity variable. They ‘quantify’ careers by how distant they are from the norm, based on the probability of a given trajectory being similar to the always-academic one.
- Education and training precocity. Based on the PhD year, whether they gained academic experience early in their career, and whether they published before the PhD.
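As an illustration of how a homogeneity measure of this kind might be operationalized, the sketch below codes a career as a yearly sequence of sector labels and scores it against the all-academic reference trajectory. This is my own toy version for illustration, not the authors' actual computation, which is probability-based.

```python
# Toy homogeneity score: fraction of career years spent in academia,
# used as a crude proxy for distance from the "always academic" norm.
# Sector codes: 'A' academia, 'I' industry, 'G' government (illustrative).

def homogeneity(trajectory):
    """Return the share of years coded 'A'; 1.0 means an always-academic career."""
    if not trajectory:
        return 0.0
    return sum(1 for sector in trajectory if sector == "A") / len(trajectory)

always_academic = ["A"] * 10
mixed = ["I", "I", "G", "A", "A", "A", "A", "A", "A", "A"]

print(homogeneity(always_academic))  # 1.0
print(homogeneity(mixed))            # 0.7
```

A more faithful version would estimate the probability of each transition between sectors, but the simple share already separates homogeneous from diversified careers.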
Cover photograph: Workers of the world, unite! at https://flic.kr/p/5znVpk
Last week our paper ‘The unbearable emptiness of tweeting – About journal articles’ was published in PLOS ONE. Such a provocative title was unlikely to go unnoticed, and almost immediately, and somewhat ironically, people started to engage with it on Twitter, commenting on, criticizing and praising the paper. Among the critiques were some regarding the field analyzed (Dentistry), the fact that we focused our tweet analysis on the 10 most tweeted papers, and the feeling that our main message was that tweeting about research is a complete waste of time.
The reasons for such a reaction may well be many, but my take is that it is not due to the novelty of our findings (which I believe were not that surprising) but to the boldness of our conclusions. Altmetrics, as also happens with Open Access, seem to be surrounded by a certain aura: researchers seem scared to criticize their limitations, on the assumption that doing so may lead the scientific community to disregard them completely. It is as if we sense there is something to them and, therefore, do not want to make them look too bad. Hence limitations are always presented with extreme care. In my opinion, this is dangerous. Bibliometricians and researchers working on altmetrics or social media metrics may be aware of their limitations, but many others are not, and they see these new metrics (which have great potential, that is beyond question) as true ‘saviors’ from the ill-fated Impact Factor.
and this is a shame because i thought altmetrics would become a new IF or individual’s H index. Obviously needs optimised for this to happen
— Eilidh (@EilidhPinkChic) August 25, 2017
Of course, the paper does not position itself against tweeting scientific literature, nor does it state that all tweets relating to scientific literature are meaningless, but rather that current metrics based on Twitter are too flawed. That is not to say that meaningful metrics and approaches cannot be developed from Twitter; in fact, many people are doing a fantastic job of better understanding Twitter and its value for research evaluation in order to develop more meaningful methodologies and metrics.
Recently the LSE Impact Blog posted an entry by Richard Woolley and me in which we comment on the dangers of trying to link scientific excellence and societal impact. Assessing the societal impact of research is now the big challenge in research evaluation. Until recently, evaluative and policy efforts were devoted to promoting so-called ‘excellent research’, following the logic that the best research is the one that can lead to social change and respond to current societal challenges. But the UK's 2014 REF has been a game-changer, introducing a complex peer review system in which committees assess the impact of submitted case studies where researchers explain how their research has contributed to society (in whichever terms they find suitable).
The result is a complex system in which quantitative indicators are relegated to a secondary role, leaving mixed feelings about the process followed and its success. Still, it is a worthy initiative and the first attempt to assess the societal impact of research at a national scale. In our post, we start from the premise that the assessment made by the committees is acceptable and explore the relationship between the scores each submission unit received for its scientific output and for its societal impact. We find that scientific excellence (or quality) and societal impact are not always correlated, suggesting that there are many ways of having an impact on society without doing excellent research and, conversely, that doing good research does not necessarily lead to societal impact.
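The kind of comparison described above can be sketched with a simple rank correlation between the two sets of scores. The sketch below uses invented numbers, not REF data, and a bare-bones Spearman implementation without tie handling:

```python
# Rank-correlate (hypothetical) output-quality scores against societal-impact
# scores for a handful of submission units. A coefficient near zero means the
# two assessments do not move together.
from statistics import mean

def spearman(x, y):
    """Spearman rank correlation (no tie handling; for illustration only)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Invented scores for five submission units
output_scores = [3.1, 2.8, 3.5, 2.2, 3.0]
impact_scores = [2.5, 3.2, 2.9, 2.7, 3.4]
print(round(spearman(output_scores, impact_scores), 2))  # -0.1: weak relation
```

With real REF data one would also want significance tests and a tie-aware implementation, but the toy run shows how a weak or absent correlation would surface.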
Collaboration through co-authorship is a long-studied topic in scientometrics. International collaboration, as captured through bibliometric data, has been widely acknowledged as a positive factor improving the citation impact and visibility of publications. What is more, the share of international collaboration is an indicator of success used in many evaluation exercises at the individual level (e.g., Ramón y Cajal in Spain), and it is included in the set of indicators pre-calculated in many bibliometric suites, such as Clarivate's InCites.
At the country level, international collaboration can be seen as a way of building bridges with other countries or of belonging to the global community. However, little is known about the mechanisms that lead to this, or about the type of collaboration needed to achieve it. For developing research systems, international collaboration can serve as a means to increase investment and capacity building1,2. Still, is international collaboration beneficial regardless of with whom, or with how many partners? Can we observe differences across fields or types of development depending on the partners countries collaborate with?
My colleagues and I have recently had a paper3 accepted at the Globelics Conference 2017 in which we explore the types of international collaboration partners of selected countries in South-East Asia. We build on the concepts of multilateral and bilateral collaboration developed by Glänzel and de Lange4,5 to analyze the international collaboration trends of the six leading science systems among the ASEAN countries (Thailand, Singapore, Malaysia, the Philippines, Vietnam and Indonesia). While the analysis is still very preliminary, we do observe differences in the major partners and fields of these countries depending on the type of multilateral or bilateral collaboration they engage in. We also find different temporal trends in the evolution of international collaboration across these six countries, suggesting that their national systems are at different stages of development. We hope to expand these analyses in the future.
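A minimal sketch of the bilateral/multilateral distinction, in my own toy reading of the idea rather than Glänzel and de Lange's exact operationalization: from a paper's author-affiliation country list, collaboration is labeled from the focal country's point of view by counting distinct foreign partner countries.

```python
# Classify a paper's collaboration type for a focal country:
# no foreign partner -> domestic; one foreign partner country -> bilateral;
# two or more -> multilateral. Country codes and papers are illustrative.

def collaboration_type(focal, countries):
    """Label a paper's country list from the focal country's perspective."""
    partners = set(countries) - {focal}
    if not partners:
        return "domestic"
    return "bilateral" if len(partners) == 1 else "multilateral"

papers = [
    ("MY", ["MY", "SG"]),        # Malaysia with Singapore
    ("TH", ["TH", "JP", "US"]),  # Thailand with Japan and the US
    ("VN", ["VN"]),              # Vietnam only
]
for focal, countries in papers:
    print(focal, collaboration_type(focal, countries))
# MY bilateral / TH multilateral / VN domestic
```

Aggregating these labels per partner country and field is then enough to compare, say, Singapore's largely multilateral profile with a neighbor's bilateral one.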