Generative artificial intelligence and editorial ethics: a roadmap for Health & New Media Research
Abstract
Generative artificial intelligence (AI) is rapidly transforming scholarly publishing, raising new ethical challenges for journals that address health and digital media. This editorial argues that Health & New Media Research requires a clear and explicit AI editorial policy to protect research integrity and public trust. Drawing on guidance from COPE, ICMJE, WAME, and major publishers, it summarizes the emerging consensus that AI tools cannot be authors, that their use must be transparently disclosed, and that humans remain fully accountable for all content. The editorial discusses both the potential benefits of generative AI, such as reducing language barriers and supporting research workflows, and the associated risks, including hallucination, overgeneralization, fabricated citations, and new forms of misconduct. Special attention is given to the heightened risks of AI use in health-related communication and data-sensitive research. Core principles for authors are proposed, alongside practical guidance on responsible AI use. The editorial also outlines corresponding policy directions for editors and peer reviewers, emphasizing confidentiality and non-delegable human judgment.
Introduction
Generative artificial intelligence (AI) has moved from the margins of academic work to its center in only a few years. Large language models (LLMs) such as ChatGPT, Gemini, and other conversational systems are now embedded across the research lifecycle, including idea generation, literature triage, coding support, statistical explanation, manuscript drafting, and even responses to peer review (Ganjavi et al., 2024; Mondal et al., 2024; Sodangi & Isma’il, 2025). In this environment, a manuscript that has not been touched by AI tools is increasingly the exception rather than the norm.
Major organizations in the scholarly ecosystem have begun to respond with explicit guidance. The Committee on Publication Ethics (COPE) issued a position statement on “Authorship and AI tools,” affirming that AI systems cannot be listed as authors and that any use of AI must be transparent and accountable (Committee on Publication Ethics [COPE] Council, 2024). The International Committee of Medical Journal Editors (ICMJE) updated its Recommendations in January 2024 to include specific instructions on how AI use should be acknowledged and how AI may be used in manuscript review (International Committee of Medical Journal Editors [ICMJE], 2024). Major publishers have likewise issued AI-specific policies for authors, editors, and reviewers that stress disclosure, human responsibility, and limits on AI-generated content (ACS Publications, 2024; Flanagin et al., 2023; Springer Nature, n.d.; Taylor & Francis, n.d.).
For a journal such as Health & New Media Research (HNMR), which sits at the intersection of health information and digital media, these developments are especially salient. Generative AI does not only assist researchers behind the scenes; it increasingly participates in producing, circulating, and interpreting the very health-related content that HNMR studies. This editorial argues that explicit AI-related editorial policies are now an ethical necessity. Drawing on international guidance and emerging empirical evidence, it outlines why AI policies are needed, what consensus has already formed, where risks are especially acute for health and new media scholarship, and which core principles should guide HNMR’s policies for authors, editors, and peer reviewers.
The Emerging Consensus, Opportunities, and Risks of Generative AI in Scholarship
Across major publishers and editorial organizations, a broad consensus has formed around several key points. Generative AI tools cannot be credited as authors. Their use must be transparently disclosed. Human authors, editors, and reviewers remain fully responsible for every aspect of scholarly content. COPE’s position statement explicitly concludes that AI tools do not meet authorship criteria because they cannot provide consent, assume accountability, or manage conflicts of interest (COPE Council, 2024). The ICMJE’s updated Recommendations similarly require that any use of AI in drafting or analyzing manuscripts be acknowledged, and they emphasize that authors retain full responsibility for the integrity and originality of their work (ICMJE, 2024). Major publishers have adopted policies that prohibit listing AI as an author, require disclosure of tool use, and in many cases restrict unlabeled AI-generated figures or data, particularly in biomedical research (ACS Publications, 2024; Flanagin et al., 2023).
At the same time, generative AI is widely recognized as both an opportunity and a source of risk. On the positive side, AI-based writing tools can help authors (especially those for whom English is not a first language) improve clarity and readability, lower barriers to participation in international scholarship, and streamline routine tasks such as copy-editing or drafting non-substantive sections (Ganjavi et al., 2024; Mondal et al., 2024; Tang et al., 2024). AI coding assistants and LLM-based summarization tools can speed early exploration of data and literature, provided that humans ultimately verify and revise the outputs (Sodangi & Isma’il, 2025; Tang et al., 2024).
However, recent studies and editorial experience underscore serious limitations. Large language models are prone to hallucination and overgeneralization: they produce fluent, confident statements that are not fully supported by the underlying evidence. Peters and Chin-Yee (2025) demonstrate that, when summarizing scientific articles, leading LLMs frequently omit limiting details and generalize results beyond what the original studies justify; in their analysis, LLM-generated summaries were nearly five times more likely than human-written summaries to overgeneralize scientific conclusions. In health-related domains where nuance and caution are critical, this generalization bias is particularly concerning.
Generative AI also introduces new forms of integrity risk. Models can fabricate citations and even entire articles, and they can generate passages that closely paraphrase existing work without clear attribution, blurring traditional boundaries between legitimate paraphrasing and plagiarism (Lubowitz, 2023; Mondal et al., 2024; Sodangi & Isma’il, 2025). In this environment, the emerging consensus does not treat AI as a neutral convenience that can simply be ignored. Instead, it is framed as a powerful technology whose benefits depend on clear disclosure, robust human oversight, and firm ethical boundaries.
Why Health & New Media Research Needs Special Care
HNMR operates at a distinctive nexus because it examines how health information, behaviors, and systems interact with digital media, platforms, and technologies. This positioning amplifies the ethical stakes of AI use in at least four ways.
First, health information is safety-critical. Misrepresentations, exaggerated claims of effectiveness, or inaccurate risk communication can have direct consequences for patient behavior, health-seeking decisions, and public trust. Generalization bias, hallucinated details, or oversimplified summaries produced by LLMs can therefore cause more than abstract epistemic problems; they may contribute to harmful decisions in clinical practice, public health, or self-care (Peters & Chin-Yee, 2025).
Second, new media environments accelerate and amplify content. Health-related findings published in HNMR are likely to be translated into social media posts, infographics, videos, or chatbot scripts. Once released into algorithmic ecosystems, even subtle distortions introduced by AI in the original manuscript can be magnified as messages are compressed, repackaged, and recontextualized. Editorial policies for a journal positioned at this intersection therefore need to anticipate the downstream effects of AI-shaped communication, not only the immediate effects on scholarly prose.
Third, AI appears in HNMR’s orbit both as a research tool and as a research object. Authors may use LLMs to code qualitative data, classify social media content, or generate synthetic comparison texts. At the same time, they may study AI-driven systems such as health chatbots, recommender algorithms, or automated moderation of health misinformation. The recommendations of the World Association of Medical Editors (WAME) explicitly address this dual role by emphasizing that chatbots cannot be authors, that their use must be acknowledged, and that editors and reviewers should also disclose any use of AI in evaluation (Zielinski et al., 2024).
Finally, health and media research frequently relies on sensitive personal data, including clinical narratives, social media posts, or interviews about health experiences. Uploading such data to publicly hosted LLMs may violate institutional review board approvals, consent agreements, or data protection laws, because many systems store prompts and may reuse them for training (Mondal et al., 2024; Tang et al., 2024).
Core Principles for an AI Policy for Authors
HNMR’s author-facing AI policy can be guided by a small set of core principles that are already visible in international guidance but need to be tailored to a health and new media context.
The first principle is transparency. Authors should clearly state whether generative AI tools were used, which tools (including model and version where known), and for what purposes. This information should appear in the manuscript itself, typically in the Acknowledgments section when AI is used for language support and in the Methods section or a dedicated subsection when AI directly affects study procedures, analysis, or reporting (ACS Publications, 2024; Flanagin et al., 2023; Tang et al., 2024). A short, standardized AI use statement can make this straightforward for authors and transparent for readers, as the example below illustrates.
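For instance, an acknowledgment-level statement might read: “During the preparation of this manuscript, the authors used ChatGPT (GPT-4, OpenAI) to improve the grammar and readability of the Introduction and Discussion. The authors reviewed and edited all output and take full responsibility for the content of the publication.” This wording is offered only as one possible template; what matters is that the tool, its role, and the authors’ oversight are stated plainly.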
The second principle is human responsibility. AI tools cannot be authors, and responsibility for the manuscript resides entirely with human authors. Human authors must be able to explain and justify all methods, analyses, and interpretations, regardless of whether AI contributed draft text, code, or suggestions. They remain responsible for detecting and correcting hallucinations, overgeneralizations, and fabricated references.
The third principle is reproducibility and verifiability. When AI tools materially affect research methods—for example, by coding qualitative data, classifying social media posts, or generating synthetic stimuli—authors should provide enough information for others to understand and, where feasible, approximate the procedure. This includes describing model names, prompting strategies, parameter settings, and the extent and nature of human checking or correction (Ganjavi et al., 2024; Peters & Chin-Yee, 2025; Sodangi & Isma’il, 2025). Even though models evolve over time, basic documentation supports critical appraisal and responsible reuse.
The fourth principle is integrity of data, images, and results. AI-generated or AI-manipulated data, tables, or figures should be treated with special caution. A prudent stance for HNMR is that AI-generated images or datasets should not be presented as if they were direct empirical observations, and AI-generated material should be used as primary content only when the AI itself is the object of study, in which case outputs must be clearly labeled and fully documented (Flanagin et al., 2023; Lubowitz, 2023). In health-related manuscripts, unmarked AI manipulation of diagnostic images, graphs, or patient-facing materials would be incompatible with research integrity.
The fifth principle is protection of health-related content. Because the consequences of error are serious in this domain, HNMR should adopt a conservative approach to AI-generated language in sections that discuss risks, benefits, or recommendations for practice or policy. Authors should be encouraged to draft these passages themselves, using AI at most as a language-polishing aid, and they should cross-check all statements against the original evidence base (Mondal et al., 2024; Peters & Chin-Yee, 2025; Tang et al., 2024; Zielinski et al., 2024).
Practical Guidance for Authors
These principles can be translated into concrete expectations in HNMR’s instructions for authors. Generative AI tools may be used in limited, clearly defined ways that are compatible with research integrity when those uses are disclosed and carefully supervised.
Authors may use AI tools to improve grammar, spelling, and clarity of text they have drafted, to assist with translation between languages as a starting point for human editing, and to suggest alternative formulations for non-substantive phrasing. When AI is used in these ways, authors should read and edit the resulting text carefully, ensure that meaning and nuance are preserved, and include a brief AI use statement naming the tool, indicating the approximate date or version where possible, and specifying the purpose of the assistance (ACS Publications, 2024; Lubowitz, 2023; Mondal et al., 2024; Tang et al., 2024).
When AI tools assist with data analysis or coding, authors should be able to explain exactly how AI was used, how its outputs were validated, and how human judgment interacts with automated suggestions. If AI proposes code, analytical models, or classification schemes, authors should understand and document the final procedures, provide or describe the human-checked code or criteria, and discuss any limitations introduced by AI assistance (Ganjavi et al., 2024; Peters & Chin-Yee, 2025; Sodangi & Isma’il, 2025).
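As an illustration of the level of documentation this implies, consider the following minimal sketch of an AI-assisted classification step. It assumes a hypothetical study using the OpenAI Python client to pre-label social media posts; the model identifier, prompt wording, and label set are illustrative assumptions rather than endorsements of any particular tool, and the human-verification steps noted in the comments would be described in the Methods section.

    # Illustrative sketch of an AI-assisted coding step with human oversight.
    # The model, prompt, and labels below are hypothetical examples.
    from openai import OpenAI

    client = OpenAI()  # expects an API key in the environment

    LABELS = {"misinformation", "accurate", "unclear"}
    PROMPT = ("Classify this social media post about vaccines as one of: "
              "misinformation, accurate, or unclear. Reply with the label only.\n\n"
              "Post: {post}")

    def classify(post: str) -> dict:
        response = client.chat.completions.create(
            model="gpt-4o-2024-08-06",  # exact model version, reported in Methods
            temperature=0,              # deterministic setting, also reported
            messages=[{"role": "user", "content": PROMPT.format(post=post)}],
        )
        label = response.choices[0].message.content.strip().lower()
        # Labels outside the coding scheme are routed to human coders; a random
        # sample of all machine labels is independently double-coded by humans.
        return {"label": label, "needs_human_review": label not in LABELS}

Whatever the specific tooling, recording the model version, prompt, parameter settings, and verification procedure at this level of detail gives readers and reviewers what they need to appraise the pipeline, even as the underlying models continue to change.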
Certain uses of AI should be clearly prohibited. AI tools should not be listed as authors or co-authors under any circumstances (COPE Council, 2024; ICMJE, 2024; Springer Nature, n.d.; Taylor & Francis, n.d.; Zielinski et al., 2024). AI should not be used to fabricate data, results, or citations; any such fabrication would fall under existing policies on falsification and fabrication (Ganjavi et al., 2024; Lubowitz, 2023). AI-generated images or tables should not be presented as if they depict real clinical, observational, or experimental data, unless AI outputs are explicitly the object of study and are clearly labeled as such (Flanagin et al., 2023; Zielinski et al., 2024). Authors should also avoid uploading identifiable or sensitive health data into publicly accessible AI tools in ways that conflict with ethics approvals, consent forms, or privacy regulations (Mondal et al., 2024; Tang et al., 2024).
When manuscripts investigate AI systems themselves, such as health chatbots or algorithmic curation of health information, additional transparency is required. Authors should clearly separate AI systems that are being studied from AI tools used to write or analyze the paper. They should provide sufficient methodological detail about experimental prompts, configurations, datasets, and evaluation criteria to allow readers and reviewers to understand how the AI systems functioned within the study and to interpret results responsibly (Peters & Chin-Yee, 2025; Sodangi & Isma’il, 2025; Zielinski et al., 2024).
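To take a hypothetical example, a study evaluating a diabetes-information chatbot might report the exact system prompt and model version under test, the dates on which responses were collected, the full set of questions posed, and the rubric used by independent human coders to rate response accuracy, while acknowledging separately any AI assistance in drafting the manuscript itself.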
Violations of these expectations should be addressed through HNMR’s existing procedures for dealing with research misconduct and publication ethics concerns. COPE’s guidance on corrections and retractions provides a useful framework for deciding when a correction, expression of concern, or retraction is appropriate (COPE Council, 2024; Ganjavi et al., 2024).
Policy Directions for Editors and Peer Reviewers
AI-related editorial ethics do not concern authors alone. Editors and peer reviewers also need guidance, because they handle confidential materials and exercise significant influence over the scholarly record. HNMR’s policies should make clear that AI tools cannot replace human editorial judgment and must not compromise confidentiality.
Peer reviewers should treat manuscripts and supplementary materials as confidential documents and therefore should not upload them to publicly accessible LLMs or other third-party AI tools that may store, train on, or redistribute the content. Reviewers may, if they wish, use AI tools to polish the language of their own reports, but they should not rely on AI to generate substantive evaluations of a manuscript’s methods, results, or significance. If a reviewer uses AI in a way that materially shapes the content of a review, the reviewer should inform the handling editor, and the reviewer remains fully responsible for the fairness, accuracy, and tone of the report (Flanagin et al., 2023; ICMJE, 2024; Zielinski et al., 2024).
Editors likewise handle confidential manuscripts and should refrain from submitting full texts to public AI systems that may store or train on the content. If an editorial office uses institutionally hosted or privacy-preserving AI tools, editors should ensure that appropriate data protection safeguards are in place and that their use is consistent with the policies of HNMR’s publisher and the expectations of authors. AI tools can assist with limited administrative tasks, such as screening for missing sections, checking formatting, or flagging obvious inconsistencies, but they should not be used as the sole basis for editorial decisions. Editors remain accountable for acceptance and rejection decisions and should not delegate that responsibility to algorithmic systems, even when those systems provide summaries, similarity scores, or recommendations (ACS Publications, 2024; COPE Council, 2024; ICMJE, 2024; Zielinski et al., 2024).
These directions for editors and reviewers align with emerging international guidance and reinforce a common theme: AI can be a useful adjunct in the editorial process, but it cannot replace human responsibility or compromise the confidentiality and integrity of peer review.
Conclusion
Generative AI is not a passing trend in scholarly communication. It is reshaping how research is conducted, written, reviewed, and read. Authors increasingly rely on AI tools, sometimes explicitly and sometimes through features embedded in writing and analytics platforms, while editors and reviewers face a growing stream of AI-assisted submissions whose provenance and integrity may be difficult to assess.
International bodies, together with major publishers, have begun to define the contours of responsible practice. AI is not an author, and its use must be openly disclosed. Human judgment and accountability remain central. Certain uses—especially the fabrication of data and unmarked generation or manipulation of scientific images—are incompatible with research integrity (ACS Publications, 2024; COPE Council, 2024; Flanagin et al., 2023; Hoch & Clarke, 2025; ICMJE, 2024; Zielinski et al., 2024).
For Health & New Media Research, these general principles must be adapted to a context where health information, digital platforms, and AI technologies are deeply entangled. That adaptation requires clear, field-sensitive policies that enable legitimate, equity-enhancing uses of AI for language editing and workflow support, that set strict expectations for transparency, documentation, and human oversight, that protect the integrity of data, images, and health-related conclusions, and that provide coherent guidelines not only for authors but also for editors and peer reviewers.
Such policies should be treated as living documents subject to regular review as technologies and practices evolve. By adopting principled, transparent, and context-aware guidelines now, Health & New Media Research can help ensure that AI strengthens—rather than undermines—the reliability and social value of health and new media scholarship.