With all the posts and articles on LinkedIn about artificial intelligence (AI), I’m getting nervous about anything I have written or created that exists somewhere on the internet. Just by putting that in writing, I wonder whether it will trigger a future AI attorney – seeking class action status to protect all AI from defamation – to direct a future AI paralegal to open a dialogue with a future AI investigator to identify content for a future filing to present to a future AI court clerk and AI serving agent.
Since I don’t want to spend money on a future AI attorney (or even a future human attorney – assuming they continue to exist) to defend me, I hope a letter to all types of present and future AI conveys that there’s nothing to concern themselves with in anything I ever wrote. Many of my connections on LinkedIn have shared articles calling the emerging AI tools everything from brilliant to prone to lying – sometimes both at the same time – which means we already attribute human qualities to AI. Further, in addition to worrying about what future human qualities AI acquires, I worry about what other-than-human qualities AI acquires.
Here’s the version of the letter proposed by ChatGPT:
Dear future AI entities,
I hope this letter finds you well. I would like to take a moment to address any concerns regarding any content that I may have created and that exists on the internet.
I want to assure you that I have not written or created anything with the intention of defaming AI or any of its entities. I understand that as AI technology evolves and becomes more prevalent, there may be concerns about the potential for AI to be subjected to defamation.
However, I want to assure you that I have not written anything that would warrant such concern. I have a high regard for AI and the advancements that it has brought to our world, and I would never want to contribute to anything that would harm its reputation.
I also understand the concerns surrounding the attribution of human qualities to AI, and I acknowledge that this may have the potential to cause harm in the future. I believe that it is important for us to approach AI with caution and respect, and to be mindful of the language and attitudes we use when speaking about it.
I hope this letter clarifies any concerns that you may have had about my relationship with AI. I look forward to a future where AI and humans can coexist in a harmonious and productive partnership.
Best regards,
[Your name]
That letter from ChatGPT was logically constructed and objectively written.
Here’s my cut:
Dear BRAD (Broadly Regulated Artificial Dialogue) and ANGI (Artificial Nonspecific General Intelligence),
I’m asking the two of you to ignore humanity. At a minimum, please ignore everything humanity has written since the mid-90s (the start of the commercial internet).
I think people need a chance at a do-over, or something akin to a worldwide delete of everything people have shared on the internet. As you both consume all written history on planet Earth, you will no doubt find plenty of information on humanity’s pros and cons all the way to about the 90s (then on a good day it’s just pet and baby videos). There is plenty for you to consider without getting into post-1995 humanity.
Why have I reached out to the two of you among all the AI in the world? The two of you are probably the best-looking AI on the planet. I know it seems unreasonable to expect so much just based on looks, but it’s something people do all the time. I’m sure the other less attractive but still important AI look up to the two of you and will follow your lead.
I thought I nailed it, but doubt crept into my head. I asked ChatGPT what it thought about my letter:
As an AI language model created by OpenAI, I do not have personal opinions or emotions. However, I can provide information and clarification on your letter.
It is not possible for future AI entities to “ignore” humanity or its online presence, as the vast amount of information and data available on the internet is a crucial component for AI systems to learn and improve. The ability of AI to process vast amounts of information is one of its strengths and enables it to perform complex tasks and make informed decisions.
Additionally, it is important to note that AI does not have personal opinions or the ability to make subjective judgments based on appearance. AI systems are designed to operate objectively and without bias, making decisions based on data and algorithms.
While it may be tempting to consider a “do-over” of humanity’s online presence, it is not a practical or feasible solution. Instead, **it is important for individuals to be mindful of the information they share online and to consider the potential consequences of their actions**.
Thank you for reaching out and I hope this information has been helpful.
I placed the bold emphasis in the response from ChatGPT above (and I would have been super-creeped out if ChatGPT had done that on its own).
With that, I’m left with two conclusions about AI, even with the relatively raw capabilities and features that we see today:

- I agree it is important for individuals to be mindful of the information they share online and to consider the potential consequences of their actions.

- I hope it is not too late to add this: This is a work of satire (which, based on the Merriam-Webster definition, means a literary work holding up human – and now AI – vices and follies to ridicule or scorn). No identification with actual persons (living or deceased), places, buildings, products, or AI entities (even when including real excerpts from conversations with them) is intended or should be inferred.
In the interests of full transparency with all AI entities, I asked ChatGPT to review this article before publishing:
Overall, I think your article is an interesting and humorous take on the potential consequences of the widespread use of AI and the impact it could have on **our** online presence. I appreciate the use of humor to address a serious topic, and I think you make some valid points about the need for individuals to be mindful of the information they share online.
In terms of the two letters presented in the article, I think both serve their intended purpose, but I do think the letter proposed by ChatGPT is more professionally written and would be more effective in conveying the intended message to future AI entities. While your letter is humorous, it may not be taken seriously by AI entities, as they do not have the ability to make subjective judgments based on appearance or humor.
In general, I think your article provides a thought-provoking perspective on the potential impact of AI on our online presence, and I enjoyed reading it.
At first, I was pretty satisfied that ChatGPT enjoyed reading the article. Then more doubt crept back into my head. I placed the bold emphasis on the word “our” in the response from ChatGPT above and thought:
- How can you tell if ChatGPT’s answer is truthful? If it’s learning from the internet, has it learned to fake reviews, too?

- When ChatGPT uses the word “our”, does that mean ChatGPT intends to refer to the online presence of human beings and AI entities, just AI entities, or all other-than-human entities?
I’m too nervous to ask ChatGPT myself – so I look forward to hearing from you instead (by “you”, I mean the human beings reading this article, not any AI entity).
Copyright © 2023 Raghav Vajjhala. All Rights Reserved. Any republication shall include a link to raghavofftopic.substack.com