Peer review is the foundation of the assessment of the value of scientific work. It is the basis for accepting or rejecting scientific articles and grant proposals. Directly or indirectly, peer review not only influences decisions on the distribution of research funds but is also fundamental to the careers of researchers in the scientific community; it is a tool used by managers of research funds. The work is empirical in nature. The main goal of the thesis is to analyse the limitations of the peer review process and to empirically test nine hypotheses derived from the literature review. The work is structured into three parts. The first, theoretical, part consists of a cognitive model of reviewing and an analysis of the factors that could hinder this process. These include individual response style (leniency/severity level, differentiating dimensions), the halo effect (lack of differentiation of grades on the partial dimensions of the assessment), the serial position effect that arises when evaluating a series of objects, and the influence of information overload. The second, empirical, part consists of a description of qualitative (in-depth interviews with 35 experienced reviewers) and quantitative studies. The quantitative studies consist of analyses of existing data (assessments of 673 abstracts competing for conference grants) and four experimental studies carried out by the author on a total of N = 912 participants. Part of the research was carried out within the NCN Preludium project no. UMO-2016/21/N/HS4/00528, "Review in times of flood (overflow): consequences for the management of funds for research". The logic of the argument presented in the dissertation was supported by the implementation of four research tasks. In task 1, differences between reviewers in terms of the mean and variance of the grades they issued were confirmed in five different data sets.
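The response-style analysis of task 1 can be illustrated with a minimal sketch: summarise each reviewer by the mean of their grades (leniency/severity) and the standard deviation (how strongly they differentiate between objects). The reviewer labels and grades below are invented for illustration; they are not data from the dissertation.

```python
from statistics import mean, stdev

# Hypothetical grades on a 1-10 scale, keyed by reviewer id.
# Illustrative numbers only, not the dissertation's data.
grades = {
    "R1": [8, 9, 7, 8, 9, 8],   # lenient: high mean, low spread
    "R2": [4, 5, 3, 4, 5, 4],   # severe: low mean, low spread
    "R3": [2, 9, 1, 10, 3, 8],  # strongly differentiating: high spread
}

def response_style(grades):
    """Summarise each reviewer by (mean, standard deviation) of their grades:
    the mean captures leniency/severity, the SD captures differentiation."""
    return {r: (round(mean(g), 2), round(stdev(g), 2)) for r, g in grades.items()}

styles = response_style(grades)
print(styles)
```

Comparing such per-reviewer summaries across a shared pool of objects is one simple way to make the "random factor" of reviewer assignment visible.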
The results were unambiguous: reviewers differ significantly in their response style - some are more lenient and others more severe - which means that the random factor associated with the choice of reviewer significantly affects the assessment of a grant proposal or the publication of a text. The fact that usually at least two people review a work does not protect against "drawing" two harsh or two lenient reviewers. It is therefore recommended that competing projects and scientific texts be evaluated by the same set of reviewers: in such a scheme, a "strict" reviewer will be equally harsh to all competing participants. Using the same reviewers, however, does not solve the problem of the serial position effect - especially when the reviewer cannot return to previously assessed objects. The first step was to check for the serial position effect in the two sets of analysed data. The first analysis (testing hypothesis 1a) was carried out on the assessments of 673 abstracts competing for grants covering the costs of participation in an international conference. Abstracts rated among the first fifteen (each reviewer rated an average of 61 abstracts) had a significantly lower average than those evaluated later. Hypothesis 1a, "Conference abstracts rated at the beginning receive lower grades than other abstracts", was thus confirmed. In the next step, hypotheses were tested in an experimental study showing that the influence of the serial position effect on the assessment depends on the quality of the object: weak objects gain when they are evaluated at the beginning of the series, but good ones lose. This is related to the avoidance of extreme assessments at the beginning of a series (hypotheses 1b, 1c and 1d were confirmed). The aim of the next study was to examine whether introducing a break can eliminate the serial position effect.
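The serial position check in hypothesis 1a amounts to comparing the average grade of the first objects in a reviewer's sequence against the rest. A minimal sketch, with an invented grade sequence constructed to mimic the reported pattern (it is not the dissertation's data):

```python
from statistics import mean

# Grades (1-10 scale) in the order one hypothetical reviewer saw the
# abstracts; fabricated so that early grades run lower than later ones.
grades_in_order = [5, 4, 6, 5, 5, 4, 6, 5, 5, 6, 5, 4, 5, 6, 5,
                   7, 6, 8, 7, 7, 6, 8, 7, 7, 8, 6, 7, 8, 7, 7]

# Split into the first fifteen objects vs the remainder, as in the analysis.
early, late = grades_in_order[:15], grades_in_order[15:]
gap = mean(late) - mean(early)  # a positive gap means early objects were graded lower
print(round(mean(early), 2), round(mean(late), 2), round(gap, 2))
```

In the dissertation's data this comparison was of course accompanied by a significance test; the sketch only shows the descriptive contrast.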
An attempt to eliminate this effect by introducing a break in one of the groups (during which the participants performed an additional task of assessing the aesthetics of logotypes) failed, so hypothesis 1e did not gain empirical support. Research task 2 concerned the attempt to eliminate the halo effect in reviewers' evaluations. The halo effect manifests itself in a very high correlation of partial grades: a positive or negative overall assessment is reflected in positive or negative assessments on all dimensions. It should be noted that reviewers are required not only to accept or reject a given project/publication but also to assess objects on multiple dimensions, i.e. partial criteria of the assessment. At the basis of this practice lies a tacit assumption that partial evaluations will objectify the assessments and that objects will be described in the form of multidimensional profiles. In practice, even if the dimensions are assigned different weights (see, for example, the assessment of the 673 abstracts), decisions are made on the basis of the average. The necessity of making many partial assessments is, on the other hand, a high cognitive burden for the evaluator; it is no wonder that the halo effect is invariably shown in the literature and was also replicated in the doctoral dissertation. In the next step, a hypothesis was formulated that the strength of the halo effect may depend on the way a series of objects is evaluated: OBJECT evaluation (rating object 1 on all dimensions, then object 2, and so on) or DIMENSION evaluation (rating all objects on dimension 1, then all objects on dimension 2, and so on). For this purpose, a specially designed experimental study was carried out, which confirmed the hypothesis (H2) that dimension-wise assessment, in contrast to object-wise assessment, reduces the strength of the halo effect.
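The usual operationalisation of the halo effect, a very high correlation between partial grades, and the two evaluation schedules contrasted in H2 can both be sketched briefly. The dimension names and grades below are fabricated for illustration only:

```python
from statistics import mean

# Fabricated partial grades for six objects on two dimensions; their strong
# co-movement mimics a halo pattern. These are not the dissertation's data.
originality = [9, 3, 8, 2, 7, 4]
methodology = [8, 2, 9, 3, 7, 3]

def pearson(x, y):
    """Plain Pearson correlation coefficient between two grade vectors."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

halo = pearson(originality, methodology)  # near 1.0 under a strong halo

# The two schedules contrasted in hypothesis H2:
objects = ["A", "B", "C"]
dimensions = ["originality", "methodology"]
# OBJECT schedule: all dimensions of object 1, then object 2, ...
object_schedule = [(o, d) for o in objects for d in dimensions]
# DIMENSION schedule: all objects on dimension 1, then dimension 2, ...
dimension_schedule = [(o, d) for d in dimensions for o in objects]
```

The two schedules present exactly the same (object, dimension) pairs, only in a different order; H2 concerns how that order alone changes the inter-dimension correlation.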
The third research task concerned the impact of the cognitive overload to which many academics are prone due to the uncontrolled growth of information. Cognitive load is described in the literature by various terms, including information overload, flood of information, information (data) smog, and overflow. One of the main sources of overload is scientific publications, the number of which is growing at an alarming rate - around 15,000 international scientific journals are published annually. Not only have the number of publications and of existing journals increased, but so have the volume and complexity of publications, including the number of citations. The number of cited publications has risen in all scientific disciplines, up to eightfold in some journals. An analysis carried out on almost a thousand articles from seven journals showed that more and more authors use two or more references to justify the same argument, thus increasing the total volume of publications. This increase is not surprising, because increasing the number of references in an article is cited as one of the strategies that improve the chances of publication. The subject of the experimental study in task 3 was the consequences of using two different citation standards: in-text citations (e.g. APA, Harvard, MLA) vs footnotes (e.g. Chicago/Turabian, Oxford). It was assumed that citations placed in brackets in the middle of the text, often dividing a sentence into two or more parts, unnecessarily overload the reader's mind, making it difficult to synthesize the meaning of the sentence.
Two hypotheses were tested experimentally in this task: (H3a) in-text citations cause greater distraction of the "reviewers'" attention than footnotes; (H3b) the evaluation of a project is influenced by the psychoenergetic state of the "reviewer": it is increased by stronger motivation and decreases with a higher degree of fatigue and a higher degree of distraction. Both hypotheses gained empirical support, although it is worth adding that this research should be replicated on real reviewers, not only on respondents who took on their role. The dissertation also includes the results of the fourth research task, which involved conducting in-depth interviews with 35 experienced reviewers (including 15 from leading Western universities) from various scientific disciplines. All interviews were transcribed (170 pages of text in total) and are presented synthetically in Part XX. The interviews indicated, among other things, the citation standard to which the reviewers are accustomed. Most reviewers in disciplines using the in-text standard would not believe the results of experimental studies suggesting that it leads to unnecessary overload; it cannot be ruled out that their minds automatically ignore the contents of the in-text brackets. The dissertation ends with three recommendations for those who manage the regulations governing the review process.
Given the commonly occurring differences in reviewer severity, either (1) start with a calibration process by asking reviewers to evaluate projects/publications with a pre-determined value and exclude from the review process those whose assessments diverge significantly (upward or downward) from that value, or (2) assign the assessment of all competing projects/publications in a given issue of a journal to the same team of reviewers (for example, a three-person team), requiring them to assess dimension-wise rather than object-wise and paying them for the work, which would significantly shorten the reviewing time. When evaluating individual partial dimensions, the order of the assessed objects should be rotated to avoid the order effect. It should also be remembered that multiplying partial criteria unnecessarily overloads reviewers, who start to follow a principle of evaluative conformity (uniformly good or bad grades on all dimensions). Although it was not examined in this research program, it can be anticipated that it is worth minimizing the number of partial criteria.
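Recommendation (1) can be sketched as a simple exclusion rule: grade a calibration object with a pre-agreed value and drop reviewers whose assessments diverge too far in either direction. The anchor value, tolerance, and reviewer grades below are illustrative assumptions, not parameters from the dissertation:

```python
# Minimal sketch of the calibration recommendation. All names and
# thresholds are illustrative assumptions, not the dissertation's values.
ANCHOR_VALUE = 6.0   # pre-determined "true" grade of the calibration object
TOLERANCE = 1.5      # maximum allowed absolute deviation, in either direction

# Hypothetical grades given by four reviewers to the calibration object.
calibration_grades = {"R1": 6.5, "R2": 3.0, "R3": 5.5, "R4": 8.5}

# Keep only reviewers within the tolerance band around the anchor value.
accepted = {r: g for r, g in calibration_grades.items()
            if abs(g - ANCHOR_VALUE) <= TOLERANCE}
print(sorted(accepted))  # R2 (too severe) and R4 (too lenient) are excluded
```

A real calibration would use several anchor objects and a statistically justified tolerance; the sketch only shows the shape of the rule.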