We have earlier discussed some aspects of scientific publishing, namely authorship and choice of journal. Now I want to discuss some other aspects, namely publication pressure and quality assessment. In the previous episode of the film, Nicholas stressed the need to publish as much as you can. Everyone in academia is familiar with the slogan "publish or perish". If you want to succeed in a world as highly competitive as that of science, you simply need lots of publications. However, as suggested, publication pressure has some serious drawbacks. To give but one example, there is an increasing tendency to squeeze as many papers as possible out of a single study. A simple way to do this is to slice up your results into several segments, each just large enough to yield a publishable result, which you then publish separately. This habit is known as salami publication, and it is an easy way to boost your productivity.

Now why is this tendency harmful? Well, for one, the fragmentary nature of these publications makes the results less reliable and also more difficult to analyze. Using only part of the evidence actually amounts to distorting the evidence and therefore to misleading the scientific community. Moreover, it puts undue strain on the review process, as you simply need more reviewers to assess the work, and it also wastes the editors' and the readers' time. In general, it reduces the quality of the output.

Let us take a closer look at quality assessment in science. This assessment may take two forms. There is the formal assessment, the so-called peer review process, but there is also an unintended assessment based upon the number of citations a paper receives.

Firstly, papers are judged by scientific peers in their role as reviewers. Their judgment on the quality of the paper plays an important role in the editor's decision to either reject or accept it. Usually a paper is assessed by two or three anonymous reviewers selected by the editor of the journal. They provide a written critique of the submitted manuscript, they offer suggestions for improvement, and they make a recommendation to the editor ranging from rejection of the paper to full acceptance. Often, however, they recommend a minor or major revision.

Now recently, peer review has come under scrutiny. In the past, reviewers have often failed to identify sloppy work or cases of fraud. They simply lack the time to scrutinize each and every aspect of a paper, and they assume that all the data are authentic. Science is largely based on trust, and such trust also extends to the reviewers themselves. However, reviewers often face potential conflicts of interest. Such conflicts arise when the authors under review are former trainees, collaborators, or simply good friends of the reviewer. The only proper conduct in such cases is to decline the assignment to review the paper. Or consider a case where a scientist is asked to review a paper reporting results in an area that overlaps with the reviewer's own research. There may well be information in the paper that would benefit the reviewer, but clearly it should not be put to such use; this, of course, is a situation rife with temptation. Reviewers can also try to delay the publication of competitors' work by imposing unreasonable demands or even pressing for rejection of the paper, simply in order to get their own work published ahead of the competing paper. And finally, reviewers may abuse their power by demanding citations of their own work.
And this brings us to the second, less direct way of quality assessment by scientific peers, namely citation scores. By citing a paper, scientists are assumed to underline its importance. Counting citations is an easy and straightforward, if somewhat dubious, way of measuring the quality of a scientist's work. A popular measure nowadays is the so-called H-index, named after Jorge Hirsch, who came up with the original idea. If a scientist has an H-index of 12, this means that she has published 12 papers, each of which has been cited 12 times or more. Nowadays, many productive scientists flaunt their H-index on their website.

But of course, one has to be careful in using such measures. In general, measures like the H-index differ strongly between disciplines, so one cannot use them for comparisons across different fields. Moreover, the H-index does not differentiate between first authors and other authors whose contributions may have been rather small. Also, the H-index can easily be manipulated through self-citations. And finally, citations may be predominantly critical; they do not imply approval.

Citations are also used to judge the quality of journals: the higher the average number of citations per article, the higher the impact factor of the journal. The impact factor of a journal has itself become a quality measure of the papers published in that journal. In the world of science, there is growing unease with the use of bibliometric quality assessments by institutions and funding agencies in hiring, promotion, or funding decisions. In practice, however, it is extremely difficult to curb the use of such quantitative measures. After all, counting is much easier and less time-consuming than reading and reflecting upon other people's work.
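To make the H-index definition concrete, here is a minimal sketch of how it could be computed from a list of per-paper citation counts. The function name and the citation counts in the example are invented purely for illustration.

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each.

    `citations` is a hypothetical list of per-paper citation counts.
    """
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank   # the paper at this rank still has enough citations
        else:
            break      # once a paper falls below its rank, h cannot grow further
    return h


# Made-up example: 13 papers, of which 12 have been cited 12 times or more,
# so the H-index is 12, as in the lecture's example.
example = [45, 33, 30, 28, 25, 20, 19, 18, 16, 14, 13, 12, 3]
print(h_index(example))  # -> 12
```

Sorting the counts in descending order means the H-index is simply the last rank at which a paper's citation count still matches or exceeds its rank, which is why a single highly cited paper or many barely cited ones do little to raise it.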