Generative AI and AI-Assisted Tools Policy

Cybernerves recognizes the increasing role of generative artificial intelligence (AI) and AI-assisted tools in academic research and writing. These technologies can support authors in drafting, analyzing, and presenting their work. However, their use raises important considerations related to transparency, accountability, and academic integrity. This policy outlines the appropriate use of such tools by authors, reviewers, and editors contributing to GJCIA.


1. Use of Generative AI by Authors

Authors may use generative AI tools in manuscript preparation under the following conditions:

  • The use of AI tools must be transparent and comply with ethical standards.
  • Authors must clearly disclose the use of AI tools in their manuscript, specifying how the tools were used (e.g., text generation, data analysis, visualization).
  • Authors remain fully responsible for the accuracy, originality, and integrity of the manuscript, regardless of the tools used.

2. Disclosure Requirements

All AI-assisted contributions must be clearly disclosed in the manuscript. Required information includes:

  • The name of the AI tool used (e.g., ChatGPT, DALL·E).
  • A description of how the tool was used, e.g., “Generated text for the introduction,” “Created illustrative figures,” or “Supported data analysis.”
  • Confirmation that all AI-generated content has been thoroughly reviewed and verified by the authors.

This information should be included in the acknowledgments section or in a footnote.


3. Prohibited Uses of Generative AI

  • AI tools must not be listed as authors or co-authors. Authorship is reserved for individuals who made significant intellectual contributions and can take full responsibility for the work.
  • AI tools must not be used to fabricate, manipulate, or falsify data or results.
  • Failure to disclose the use of AI tools will be considered a violation of Cybernerves’ ethical guidelines and may lead to manuscript rejection or further investigation.

4. Reproducibility and Verification

  • Authors must ensure that all outputs generated with the help of AI are reproducible and verifiable.
  • This is especially important for AI-generated visualizations, technical content, or data analyses.
  • Authors should retain documentation of their AI use and be prepared to provide it upon request.
  • Supplementary materials demonstrating the reliability and validity of AI-generated content are strongly encouraged.

5. Licensing and Copyright Compliance

  • AI-generated materials (text, images, code, etc.) must comply with all applicable copyright and licensing laws.
  • Authors must ensure that no AI-generated content infringes upon third-party intellectual property rights.
  • Proper attribution must be given for AI-generated elements, and their use must adhere to the terms of service of the respective AI tool.

6. Reviewers’ Use of AI Tools

  • Peer reviewers are expected to evaluate manuscripts based on their own expertise, independent of AI tools.
  • The use of generative AI during the review process is discouraged unless explicitly authorized by the editorial office.
  • If a reviewer uses an AI tool, they must disclose its use and provide a rationale. The editorial office will assess whether the use was appropriate.

7. Ethical Accountability

  • The use of AI tools does not absolve authors, reviewers, or editors of their ethical responsibilities.
  • All parties must uphold academic integrity, transparency, and trustworthiness in scholarly publishing.

Contact Us

7004 Security Blvd., Suite 300, Windsor Mill, MD 21244

Email: support@cybernerves.com