Trust is crucial in the writing industry. Readers invest time and emotion in books, forming a bond with their authors. As AI technology advances, however, doubts are emerging about what its role in writing means for authenticity.
Kester Brewin wrote a book about AI, "God-Like," and having written on such a topic, he knew that questions about his own use of AI were inevitable. Brewin grapples with AI's advances while trying to preserve the integrity of his craft: AI tools can help with writing, but they blur the line between human and machine authorship, and for all their benefits there is a growing need for transparency about how they are used.
Brewin therefore decided to be completely open with his readers. Upon completing "God-Like," which explores the history of AI, he crafted an AI transparency statement to be included at the beginning of the book, outlining the extent to which AI was used in writing it.
The statement addresses four questions about the role AI played in his writing process: whether any text was generated by AI, whether any text was improved with AI assistance, whether AI was used to suggest text, and whether text underwent AI-powered correction, along with how much human discretion governed the acceptance or rejection of proposed changes.
Brewin disclosed that AI was used only for text correction in the creation of his book, with human judgment deciding whether each suggestion was accepted or rejected. He acknowledged that the statement may not be flawless, but emphasized its value as a starting point for open dialogue and accountability in how AI tools are integrated into writing.
Research indicates that much AI use in writing remains concealed, driven by fear of judgment or of what disclosure might imply about the authenticity of the creative process. Embracing transparency, by contrast, builds trust and allows a more honest exploration of the intersection between human creativity and technological advancement.
The conversation about transparency extends to the classroom. At a virtual panel discussion hosted by PowerNotes in September 2023, educators shared approaches to integrating AI tools into assignments. Jason Gulya of Berkeley College uses 'mega prompts' to structure ChatGPT work, emphasizing student engagement and dialogue with the tool, while Catrina Mitchum of the University of Maryland noted the difficulty of incorporating AI into pre-designed online courses because of limitations in learning management systems.
Lance Cummings from the University of North Carolina Wilmington advocated allowing AI to be used in writing assignments, creating opportunities for learning through trial and error. Laura Dumin from the University of Central Oklahoma voiced support for AI as a partner in lifelong learning, emphasizing flexibility and process-oriented approaches.
Overall, the educators stressed the importance of preparing students to work in partnership with AI, a skill they expect to be valuable across academic disciplines and future careers.
In the same month, the US Senate's subcommittee on consumer protection discussed proposals to make AI use more transparent. Suggestions included disclosing when AI is employed and developing tools to assess the risks associated with different AI models. Andrew Bell and Julia Stoyanovich of New York University's Center for Responsible AI advocated for transparency labels, akin to nutrition labels, that offer insight into algorithmic decision-making.
They also proposed data visualization to elucidate how AI operates and makes decisions. These measures aim to foster trust between AI users and providers. Bell and Stoyanovich authored an Algorithm Transparency Playbook, offering guidelines for companies to adopt transparent practices. However, tech lobbying groups, such as the Computer and Communications Industry Association, cautioned against Senate regulation of AI, citing concerns about stifling innovation.
In a landscape where AI looms as a powerful yet potentially divisive force, writers must earn trust through transparency. Ignoring that need risks eroding the foundational bond between writer and reader and, ultimately, the artistry of storytelling.
In his book, Brewin explores AI as a transformative force akin to the atomic bomb, one that demands adaptation and understanding if it is to coexist with humanity. By openly acknowledging the tools at our disposal, writers can navigate this new reality with integrity and ensure that the art of writing remains a beacon of trust and authenticity in an ever-changing world.
© 2024 Books & Review. All rights reserved. Reproduction in whole or in part without permission is prohibited.