Ethics of using tools based on Artificial Intelligence (AI) technologies
The journal's editorial policy does not prohibit the use of AI technologies, which are developing rapidly and are increasingly applied in both research and practice, offering new opportunities for obtaining and processing information. At the same time, authors must adhere to the principles of responsible and ethical use of AI technologies in academic work.
When using AI tools, it is important to consider that:
- generative AI cannot be the only source of information;
- the content of the manuscript should be the result of the author's own original scientific work;
- if the author uses any AI technologies in the course of the study or in the preparation of the article, this must be disclosed in the "Methodology" section, specifying which AI tool was used and describing exactly how it was used;
- the results obtained from AI must be critically evaluated, and the information must always be verified against alternative reliable sources.
Passing off AI-generated content as one's own is strictly forbidden. At the same time, the responsible use of AI tools to refine and develop the author's own ideas (rather than to replace the author's intellectual activity) can improve the quality of the work.
The editorial staff checks all manuscripts for the use of generative AI with appropriate technical services. Authors, for their part, should remember that AI generation is accompanied by a number of problems, including the following:
- AI-generated text, answers to questions, formulas, or calculations may look plausible yet contain critical errors, may be factually inaccurate, and may include fabricated quotations and references;
- the data on which AI models are trained may be outdated or irrelevant, so subsequent generation will multiply errors;
- AI-generated text is highly likely to infringe copyright, because AI draws on the published thoughts and ideas of human authors without referring to them, which may be regarded as plagiarism; this creates risks of infringing copyright in ideas, texts, images, and other protected materials;
- issues of ethics and information security also remain relevant: at the user's request, AI models may process large amounts of sensitive data, including personal information and commercially important corporate data, and may use this information for subsequent model training, thereby disclosing it to outside parties.
The following ways of using AI models, among others, are recognized as unacceptable because they violate the principles of academic integrity and research ethics:
- presenting AI-generated text or AI-paraphrased content from other sources as one's own work – using AI to automatically generate text or paraphrase existing content without proper indication of sources violates the principles of authorship and is considered plagiarism;
- reworking the author's own article with the help of AI tools in order to create the appearance of a new publication violates publication ethics and is recognized as self-plagiarism;
- creating false data and presenting it as confirmation of one's own research (data fabrication) – generating false data with AI and using it as the basis for scientific conclusions is a serious violation of academic integrity and can harm both the quality of the research and the reputation of the researcher.