- Issue
- Journal of Siberian Federal University. Humanities & Social Sciences. 2024 17 (6)
- Authors
- Astanin, Victor V.
- Contact information
- Astanin, Victor V.: Bank of Russia, Moscow, Russian Federation
- Keywords
- Dataset; GPT; Big Data; anti-corruption; dissertations; publications; scientific research; RSCI; methodology of scientific knowledge; ethics and motives of a scientist; corruption risks; corruptibility; conflict of interest; procurement; predicate offences; prevention; legislation; artificial intelligence; blockchain technology
- Abstract
The article analyses and assesses the applied value of modern published scientific research on anti-corruption issues, presented in the form of dissertations and articles. Statistics on indexed publications are given, with a calculated coefficient of time expenditure, and problems with patenting the results of thematic developments are noted. The ethical and motivational principles that guided scientists in the past are examined and extrapolated to the problems of contemporary practice. Against this background, the typical shortcomings of current anti-corruption research are illustrated: a simplified choice of methods of knowledge, limited empirical data, incorrect borrowing from primary sources, errors in handling the conceptual apparatus, scholastic content, and a lack of interdisciplinarity and of applied significance formalised in normative proposals, which together produce a crisis in the usefulness of such research for legal science and practice. The author draws attention to the potential of artificial intelligence technologies for stimulating anti-corruption research. The procurement sphere is considered as an experimental testing ground for their application: the corruption-risk identifiers recorded by scholars can be put to applied use without human participation, enabling their prompt and widespread detection through interconnected cloud systems that compute their content and signs of illegality from the Big Data array of public services (information, reference, legislative, methodological, and law-enforcement). The author proposes a final artificial intelligence algorithm in which neural networks generate procedurally significant documents as a preventive response to detected corruption risks, forestalling adverse legal consequences; where such risks are ignored, the objective shifts to providing evidence of the committed offence and determining measures of legal liability for application by competent authorities and officials.
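The abstract describes the pipeline only in general terms: screen procurement data against known corruption-risk identifiers and then generate a draft document for preventive response. The following is a minimal illustrative sketch of such a pipeline, not the author's system; all field names, risk rules, and sample values are hypothetical assumptions introduced solely for illustration.

```python
# Illustrative sketch only: hypothetical record fields and risk rules, not the
# author's algorithm. It mimics the pipeline described in the abstract: screen
# a procurement record against corruption-risk identifiers and, if any are
# triggered, draft a document for preventive response.
from dataclasses import dataclass


@dataclass
class ProcurementRecord:
    contract_id: str
    customer: str
    supplier: str
    initial_price: float
    final_price: float
    bidders: int
    supplier_linked_to_customer: bool  # e.g. taken from open affiliation registries


def detect_risks(rec: ProcurementRecord) -> list[str]:
    """Return the corruption-risk identifiers triggered by the record."""
    risks = []
    if rec.bidders <= 1:
        risks.append("single-bidder procedure (restricted competition)")
    if rec.final_price > rec.initial_price * 1.10:
        risks.append("final price exceeds initial price by more than 10%")
    if rec.supplier_linked_to_customer:
        risks.append("possible conflict of interest: supplier affiliated with customer")
    return risks


def draft_preventive_notice(rec: ProcurementRecord, risks: list[str]) -> str:
    """Generate a draft document for preventive response to the detected risks."""
    lines = [
        f"Preventive notice for contract {rec.contract_id}",
        f"Customer: {rec.customer}; supplier: {rec.supplier}",
        "Detected corruption-risk identifiers:",
        *[f"  - {r}" for r in risks],
        "Recommended action: suspend the procedure pending review by the competent authority.",
    ]
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical example record
    record = ProcurementRecord(
        contract_id="2024-0017",
        customer="Municipal Hospital No. 3",
        supplier="MedSupply LLC",
        initial_price=1_000_000.0,
        final_price=1_250_000.0,
        bidders=1,
        supplier_linked_to_customer=True,
    )
    found = detect_risks(record)
    if found:
        print(draft_preventive_notice(record, found))
```

In the scheme the abstract outlines, the rule set would be derived from corruption-risk identifiers recorded in the scientific literature, and the input data would come from interconnected public Big Data services rather than a hand-built record as here.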
- Pages
- 1163–1173
- EDN
- VEVYDY
- Paper at repository of SibFU
- https://elib.sfu-kras.ru/handle/2311/152981
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).