Artificial Intelligence (AI) Policy

(For Authors, Reviewers, and Editors)

 

Purpose

This policy defines the acceptable and unacceptable use of Artificial Intelligence (AI) tools, including generative AI, in the preparation, submission, peer review, and editorial handling of manuscripts.

The journal is committed to research integrity, transparency, and ethical publishing.

Policy for Authors

Acceptable Use

Authors may use AI tools for:

Language editing and grammar improvement

Improving readability and structure

Formatting assistance

Translation support

AI may also be used for coding assistance or statistical support, provided that all outputs are fully verified by the authors.

Mandatory Disclosure

If AI tools were used in any stage of manuscript preparation, authors must disclose:

Name of the tool

Version (if applicable)

Purpose of use

Required Disclosure Statement (example)

“The authors used [application name] for language editing and clarity improvement. All content was reviewed and verified by the authors.”

Failure to disclose AI usage may lead to rejection or retraction.

Prohibited Use

The following uses are strictly prohibited:

AI-generated data, images, tables, or results without disclosure

Fabrication of references using AI

AI-generated scientific conclusions without human validation

Listing AI tools as authors

AI tools cannot meet authorship criteria.

Responsibility

Authors remain fully responsible for:

Accuracy of data

Originality

Ethical compliance

Proper citation

Avoiding hallucinated references

AI-generated errors are not a valid defence.

Policy for Reviewers

Reviewers:

Must not upload confidential manuscripts to public AI tools

Must not use AI to generate full peer-review reports

May use AI only for language polishing of their own reviews

Confidentiality must be strictly maintained.

Policy for Editors

Editors:

May use AI tools to support plagiarism screening

May use AI for language evaluation

Must not rely solely on AI for decision-making

Editorial decisions must remain human-led.

Policy for Publisher

AI Detection and Screening

The journal may use:

Plagiarism detection software (e.g., Crossref Similarity Check powered by iThenticate)

AI-content detection tools

Manual editorial review

 

High AI probability scores may trigger:

Major revision

Author clarification

Rejection

AI detection results are advisory, not sole grounds for rejection.

Data & Image Integrity

AI-generated images, figures, or graphical abstracts must:

Be clearly labelled as AI-generated

Include disclosure of tool used

Not manipulate scientific results

Undisclosed synthetic images will result in immediate rejection.

Ethical Compliance

This policy aligns with recommendations from:

Committee on Publication Ethics (COPE)

International Committee of Medical Journal Editors (ICMJE)

Violations and Actions

If AI misuse is detected:

Minor violation → Revision request

Major violation → Rejection

Post-publication discovery → Retraction + institutional notification

Repeated violations may lead to author blacklisting.

Policy Effective Date

Effective from: [20 February, 2026]

Applies to all new submissions.