01/11/2024

Hanging in the Balance

Generative AI versus scholarly publishing

Editor’s Note: Today’s post is by Gwen Weerts. Weerts is the journals manager at SPIE and the editor-in-chief of the SPIE Society magazine, Photonics Focus. She joined SPIE in 2008 and has 16 years of experience in scholarly publishing. This post was originally published in Photonics Focus on Jan. 1, 2024.

In 1454, Gutenberg’s prototype printing press began commercial operation, and the publishing industry was born. With it came a host of new concerns that remain relevant six centuries later: literacy, plagiarism, censorship, the proliferation of false or unvetted information and — most worrisome to the Catholic Church at the time — who should have access to information, and what kind of information they should be allowed to have.

Many of these worries have been stirred up again at specific flashpoints in history, most recently by the November 2022 release of ChatGPT. Though chatbots existed before ChatGPT, it introduced realistic conversation and a surprising capability for idea generation that previous iterations lacked. In the past year, development of large language models (LLMs) has been rapid (we’re already on GPT-4), and their role in society — and in scholarly publishing, in particular — has been debated with equal parts anxiety and excitement. In this article, we’ll weigh these issues in the balance.

Read the complete article at The Scholarly Kitchen.
