How Do We Prove Humanity?
"Let us hurry—there is nothing to fear here."
Let's say we're living in the glorious future where you can't actually tell if something is written by a human or by an AI. Tons of material is churned out by AIs daily in an effort to sway eyeballs and make advertising dollars. Google is completely clogged with it.
And maybe you don't like that. Maybe it's become "standard substandard" quality and it's hard to find what you're really looking for.
Or maybe you want to make sure, before you get halfway through something and realize you've been wasting your time, that it isn't just regurgitated material with no additional creativity added.
Or maybe you're an (ahem) author of computer science-related materials and you're a little worried that what you do for fun might eventually be completely rendered obsolete.
How can we, in the age of AI generation, know that something is written by a human and not a machine?
One possibility that comes to mind is something like PGP's Web of Trust. Except this time it would be a Web of Humanity. You'd get other humans to verify your humanity, and they'd sign your public key, just as is done now with identity.
As it grew to a wondrous six degrees of separation, we'd have virtually full confidence in the humanity of the creator.
Of course, a bad actor could start vouching for AIs and bring them into the fold, but theoretically they'd be several hops out on your web and therefore less trustworthy. Just like the identity Web of Trust.
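The vouching scheme above amounts to a graph walk: each signature is an edge, and confidence decays with each degree of separation. Here's a minimal sketch of that idea; the names, the graph, and the decay factor are all made up for illustration, and real PGP-style signature verification is omitted entirely.

```python
from collections import deque

# Hypothetical Web of Humanity: each entry lists the people whose
# keys that person has vouched for (i.e., signed).
vouches = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": ["frank"],
    "erin": [],
    "frank": [],
}

def hops(graph, start, target):
    """Breadth-first search: fewest vouch-hops from start to target."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == target:
            return dist
        for nxt in graph.get(person, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no vouch path: no evidence of humanity

def trust(graph, start, target, decay=0.5):
    """Confidence in someone's humanity falls off with each hop."""
    d = hops(graph, start, target)
    return 0.0 if d is None else decay ** d

print(trust(vouches, "alice", "frank"))  # 3 hops -> 0.125
```

A bad actor vouching for an AI shows up here naturally: the AI gets a node in the graph, but it sits at least one hop beyond the bad actor, so its trust score is strictly lower than theirs.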
Now, this is work. Work that--let's be honest--barely anyone in the world currently does even for identity, for heaven's sake.
This brings me to my next point.
Do We Even Care?
I asked ChatGPT to come up with recipes for things that I had in my house. It did without a problem. I chose the tastiest-looking one and made it. It was great.
Did I care that it wasn't written by a human? Not in the least. (In fact, I was pleased I didn't have to scroll through 20 pages of ads and recipe history before I got what I was looking for, but that's another story.)
One complaint about AIs is that sometimes they're just wrong. But they sound right, which makes things even worse. I've seen this myself when grilling ChatGPT about Unix kernel internals.
But as was once said, "Let he who is without sin cast the first stone." I have absolutely (accidentally) put information online that was incorrect. And of course I make every effort to correct it.
It's not like the AI cares whether it's wrong, but presumably it's at least usually right.
And isn't that really the standard to which we hold Internet content in general? It's all out there, mostly right, and mostly created by actors that don't have too much of an ax to grind, anyway?
All these issues--racism, partisanship, intolerance--they already have ample human representation, and provide copious amounts of training data to create AI in our image, to generate content that we already make, content that we demand.
I've spent more time than I should have watching AtheneAIHeroes (profanity warning). It's frequently hilarious in a way that far exceeds so much of the human-created tripe that passes for comedy television these days.
Do I even care?
I hope so.