2023-03-28 SPLAT Curiosity Report: Volume 13, Issue 9

Featured Story

Special Issue about ChatGPT

As libraries adapt to changing patron needs and new technological advancements, the use of artificial intelligence (AI) tools such as ChatGPT is becoming more prevalent. AI tools have the potential to transform library services in numerous ways, from answering reference questions to providing personalized book recommendations. However, the implementation of AI in libraries also raises important ethical and practical questions. In this newsletter article, we will examine both the advantages and drawbacks of using AI tools in libraries, with a focus on their potential uses in the future. While some librarians and scholars embrace AI as a way to improve library services and meet patron needs more effectively, others express concern about the implications of these technologies for privacy, equity, and intellectual freedom. By exploring both sides of the issue, we hope to encourage a nuanced discussion about the role of AI in libraries and its potential impact on the future of the profession.*

*This introduction was generated by ChatGPT. Prompt: “write an introductory paragraph to a newsletter article all about chatgpt and other AI tools and their uses in libraries in the future. some of the articles will be in favor and some will be opposed.”

(OpenAI, personal communication, February 24, 2023)

From Maggie Dumont: 
One of the biggest disadvantages of ChatGPT and similar products is that you lose the human element. Working in customer service, I have found that every situation that comes across your desk is different, because we as humans are individuals with different experiences and circumstances. Sometimes you have to treat a situation as the distinct case that it is in order to choose the best outcome. Just because it is similar to the last 100 comparable cases you’ve seen doesn’t mean the best way to resolve it is exactly the same. Using AI takes away the empathy humans have for each other and the discernment needed for more complex situations. AI sees things in black and white, where human beings can see the gray area and make more informed choices. Companies should take this into consideration when launching an AI chat feature and offer an option to talk to a real person when the AI can’t understand or has trouble resolving a more complex situation.

Rightful criticisms of AI tools by Zack Bevan, Library Assistant at Meridian Library unBound. Prompted/grilled/requested by Mariah Farmer, February 2023: 
AI is a very powerful tool that will inevitably be used just about everywhere as we continue to find new and innovative ways to use machine learning. However, I believe there is a risk in the speed at which we are developing it and in the ignorance that spreads alongside its influence. Like I said at the start, it is a tool, and it ought to be used as such. The better AI gets at completing certain tasks, such as writing and generating art, the closer it comes to replacing the jobs and careers of the individuals who would otherwise have done the same work. Now, it is a matter of perspective: there are plenty of people who believe that using AI instead of real human work is the best option economically, since it is much cheaper than hiring an actual writer or artist. That being said, I feel it is imperative that those careers still exist and that AI does not replace them. Artificial intelligence could be useful to people in those careers, but it should not be used to replace those people entirely. My sister is a graphic designer, and having spoken with her, I’ve realized how dangerous art-generation AI can truly be when used that way. Were AI to become a replacement for graphic design, she and thousands of other graphic designers would likely lose their jobs. Sure, it would be easier and cheaper for the businesses and corporations looking to create new logos and whatnot, but it would wipe out many jobs throughout an entire industry.

Now, from a more philosophical, moral standpoint, things like AI art and writing are also bad for the evolution of human expression. Let’s take things to the extreme and say that all artists and writers were wiped out, replaced by AI-generated content. Were that to happen, we would no longer see periods of art or movements of writing (things like modernism, realism, romanticism). Since machine learning works by taking many previous examples and creating a derivative of what it knows, it can never truly receive new information. If the only new art being made is from AI, then the only content it knows is what it has made itself, meaning truly new or revolutionary ideas wouldn’t continue to grow. AI may be good at replicating, but it can’t imitate the human experience. Should we reach a day when artificial intelligence can recreate those kinds of experiences and emotions through expression, we will have reached the forbidden dystopia that media and pop culture have long alluded to.

From Monique Gaddy:
I have been interested in the potential that ChatGPT has for counseling and therapy services. It seems scary to think of using AI for help with depression and anxiety (among other mental health experiences), but it also sounds interesting if it can be used in a way that helps alleviate mild symptoms – such as repeated themes of thought that a person may have. For instance, a recurring mental block such as, “I am not good enough to find a job that will suit me better,” can be worked on with a counselor, friend, family member, or other people in a support system. But what if those support systems are unavailable at times, such as late at night when intrusive thoughts are the loudest? Currently, there are other options for alleviating these repeated thoughts, such as meditation, journaling, and keeping a list of “self-truths,” but would an AI chat or AI personality help in a more realistic and personal way? I honestly have no idea, and so far, ChatGPT has shown limitations in what it can achieve as a therapy option. All I know is that therapy is expensive, mental health needs remain, and ChatGPT could be a more accessible option. It would be handy even if it just encouraged someone to seek more information on counseling or hotlines, or helped them through the process of finding the right options for their budget. Who knows? All I know is that I am interested to see how this service evolves in the psychology field. 

Resources:
https://www.griproom.com/fun/how-to-use-chatgpt-as-a-therapist
https://mashable.com/article/how-to-chat-with-chatgpt-mental-health-therapy
https://www.koreabiomed.com/news/articleView.html?idxno=20379

From Annie Gaines: 
As a person on the Autism Spectrum, I’ve found using ChatGPT to be helpful in crafting the right kind of response when I’m struggling for words. In the past, when I’d be confronted with the task of writing an email to say “thank you for asking, but no I can’t participate,” I would ask Google “how do I write a no thank you email at work” and it would give me some examples I could look at. I did that because communication is hard for me and I was looking for a template to work from. I want to see how other people have said the thing I’m trying to say, so I can mirror their tone and be more appropriate in the situation. 

So now, instead of asking Google, I can ask ChatGPT to help me in a similar way. If I enter the prompt “write a no thank you email,” it provides me with a useful template I can borrow from, reference, and use as an example as I craft my own response. And I can add more specific details to the prompt if I need more specific language. For people like me who struggle with communication, ChatGPT can be a really useful accessibility tool. 
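Annie used the ChatGPT website directly, but the same kind of request can also be scripted. Below is a minimal sketch, assuming the OpenAI Python package as it existed in early 2023; the model name, system message, and placeholder API key are illustrative assumptions rather than part of her workflow.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

# Ask the chat model for a polite "no thank you" email template,
# mirroring the kind of prompt Annie describes typing into ChatGPT.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You write brief, polite workplace emails."},
        {"role": "user", "content": "Write a no thank you email declining a request to join a committee."},
    ],
)

print(response["choices"][0]["message"]["content"])

Adding more detail to the user message (names, reasons, desired tone) narrows the template the model returns, just as it does in the chat window.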

From Tania Harden: 
In a recent online article, Wayner (2023) listed ten reasons why we should be concerned about generative AI. For librarians, the top two concerns are obvious: plagiarism and copyright. The other eight concerns definitely gave me pause.

There is the matter of uncompensated labor if, for example, a drawing program were to collect data about a human’s drawing behavior and then use that information to train AI. There is the distinction between information and knowledge: an expert in history has spent years studying their topic, and we are often amazed by their command of a subject, whereas AI needs only a few months of data input to come up with the same obscure tidbit, which isn’t nearly as impressive. Wayner (2023) also mentions intellectual stagnation: if humans become dependent on generative AI, we might no longer produce new ideas and information of our own. There is also the danger of breaches of privacy and security, and the potential inability to teach AI to protect private data when it can be queried for that information in hundreds of ways. For me, undetected bias is the most pressing concern. If biased data is used to build an AI training model, then the AI will return biased answers to our queries. While AI is creating machine intelligence, it is creating machine stupidity as well; it will be hard to anticipate AI mistakes because these systems “think” differently than humans do. Human gullibility will certainly come into play with AI, just as it does today with social media and “fake news”: humans will assume that AI is correct, will not consider bias or errors in the training data, and will simply accept what AI gives us without question. Finally, there is infinite abundance, meaning that digital content can be reproduced with relative ease. Writers and artists might be put out of work by AI’s ability to recombine and regenerate content in mass quantities.

Reference: Wayner, P. (2023, February 13). 10 reasons to worry about generative AI. InfoWorld. https://www.infoworld.com/article/3687211/10-reasons-to-worry-about-generative-ai.html

From Eric Hovey: 
I follow a lot of artists on Instagram for Dungeons and Dragons content, and my feed has been swarming with artsy illustrations: content creators suddenly rendered as watercolor paintings from the Impressionist movement or as characters from a new Marvel film, along with their ideas of what their Dungeons and Dragons characters should look like. But as people share digital portraits of themselves created with the app Lensa, I have read that some critics have raised concerns about data privacy and artists’ rights. I believe that creators should have rights in what they make, and these AI creations make it harder to define who is creating the work. As someone who uses pictures and ideas for our Dungeons and Dragons game, I try to make sure the artist or creator gets recognized. This new technology changes who is creating and who should be given credit.

Resources:
https://www.businessinsider.com/lensa-ai-raises-serious-concerns-sexualization-art-theft-data-2023-1 

From Chris Thielen:
Generative AI is currently all the rage. If you’re not familiar with the term, don’t worry; this fun infographic explains it all. Thanks to ChatGPT in particular, the recent frenzy over generative AI has set the tech and business sectors alight, as some stocks have soared while others have plummeted. I’ve put ChatGPT to use, pressing into its capabilities, and I can see how it could benefit anyone trying to wrap their mind around an idea. Just as one might use Wikipedia in the early stages of a research paper to get a ten-thousand-foot view, this technology could prove useful in giving a user context. That practically sells itself to library professionals, as we are always looking for new and unique resources patrons can use to further their learning.

One concern I have with generative AI is the potential for it to take over jobs that currently belong to humans. Creativity is a powerful thing. Although we know it when we see it, creativity is hard to replicate, and it’s hard to even imagine how AI could capture such a thing. Take DALL·E 2, for example. I provided a prompt to draw “An impressionist watercolor painting depicting the personal hell of a factory worker during the Industrial Revolution.” See below.

[Image: DALL·E 2 output generated from the prompt above]

To create this, an interpretation has to be made: DALL·E 2 has to be familiar with what the Industrial Revolution was and what a factory worker might have looked like. It also has to be familiar with the idea of hell and how that idea has been construed and portrayed up to this point. And lastly, it has to be familiar with the Impressionist art movement, including what makes Impressionism unique among other movements. But most interesting of all, our prompt for DALL·E 2 depends entirely on forms and ideas already in existence. So what might it even look like to create something new using AI in this way? Can AI even create something novel? 
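For anyone curious how a prompt like this is submitted outside the web interface, here is a minimal sketch, assuming the OpenAI Python package’s image endpoint as it existed in early 2023; the size parameter and placeholder API key are illustrative assumptions.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

# Send the same text prompt and get back a URL to the generated image.
result = openai.Image.create(
    prompt=("An impressionist watercolor painting depicting the personal hell "
            "of a factory worker during the Industrial Revolution"),
    n=1,
    size="1024x1024",
)

print(result["data"][0]["url"])  # link to the generated picture

All of the interpretation described above (what a factory worker, hell, and Impressionism look like) happens inside the model; the request itself is nothing more than this text prompt.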

SPLAT explores the ever-evolving library world and supports library folks as they adapt to meet the needs of their communities. Library folk throughout the state of Idaho volunteer to serve on the Special Projects Library Action Team (SPLAT). Learn more about SPLAT at https://libraries.idaho.gov/splat/ 

SPLAT is brought to you by the Idaho Commission for Libraries and was made possible, in part, by the Institute of Museum and Library Services (LS-246156-OLS-20). The views, findings, conclusions, or recommendations expressed in this publication do not necessarily represent those of the Institute of Museum and Library Services.