Artificial Intelligence and Public Comment

I can see AI writing 90% of television and movies, but they'll all be Mall Cop or Miss Congeniality, or another 400 seasons of the Kardashians, MILF Island, etc.

Citizen Kane won't come from it. Neither will a giant of cinematic ingenuity like Nate & Hays. The worlds created by people like Tolkien and Lucas won't come from AI. The Boulder Monitor will never replace their most excellent columnist with AI, because AI can't taste the $*)Q!#@$ gravy on the chicken fried steak to adequately convey its texture, aroma, and flavor.
You're, once again, overly optimistic.

It can and it will. And even if it can't, we won't be able to tell.
 
AI's flaw, whether it is writing or coding or whatever, is that it tries to follow the known rules and takes no risk. In the end, everything it produces leans toward the food equivalent of plain white toast. Getting to the point where it takes risks and tries something new might be hard, because history shows that a lot of the things that led to the greatest payoffs were ideas that didn't look great at the beginning.
...for now. That won't be the case forever.
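For what it's worth, there's one concrete knob behind that "plain white toast" tendency: sampling temperature. Here's a toy sketch (the logits are made up, not taken from any real model) of how low temperature squeezes the risky choices out of the distribution:

```python
import numpy as np

# Toy next-token logits: three "safe" continuations and one risky one.
logits = np.array([3.0, 2.5, 2.2, 0.5])  # last entry = the unusual choice

def token_probs(logits, temperature):
    # Softmax with temperature: low T sharpens the distribution toward
    # the likeliest tokens; high T flattens it and lets rare tokens through.
    z = logits / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

for T in (0.3, 1.0, 2.0):
    print(f"T={T}: {np.round(token_probs(logits, T), 3)}")

# At T=0.3 the risky token's probability is nearly zero: plain white toast.
# Raising T restores some risk, at the cost of more nonsense too.
```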
 
I'm just old enough to have lived through a few of these existential crises.
I recently listened to a podcast with the author of If Anyone Builds It, Everyone Dies, and he painted a pretty compelling picture of an existential crisis if artificial superintelligence is ever achieved. Now let's be clear, LLMs/chatbots are a long way from that, but basically the three possible scenarios he lays out are:
1) It fails; artificial superintelligence is never achieved. This is the best-case scenario, but there are a lot of people betting a lot of money against it.
2) It succeeds, and a couple of people hold the levers to control it; they are basically god-emperors who control everything and everyone.
3) It succeeds, and nobody can control it. In its quest for progress, however it defines that, there is a certain point where humans are an impediment. It may not be malicious, like nuking everyone; species extinctions have often not been intentional malevolence either, just a quest for progress that consumed the places or things those species needed to survive.

I had a hard time poking holes in the argument, but would love to hear why he's wrong.
 
...for now. That won't be the case forever.
That's the question. Does the feedback loop prevent it from breaking out of the feedback loop?

I'm just old enough to have lived through a few of these existential crises.
This one seems faster. My belief is that change has to happen at a speed society can adapt to. This one looks like a high-speed train running us all toward mediocrity. If we replace human labor with computers fast enough, people will lose their heads.
 
Yes, Skynet is a distinct possibility. But so is Star Trek.

Humans rise to the level of technology. I think our shared narrative shows that.
The fact that Elon shared many of the concerns of the author and ultimately jumped on board, because if god-emperors are inevitable he's damn well gonna be one of 'em, doesn't give me a ton of hope. If AI is controlling the municipal water system and has to decide in a drought between sending water to the data center that powers it or sending water to the people of the community, which do you think it's going to do? That's not a Terminator situation; it's more like an AI making a rational self-preservation decision, and then people killing each other, because that's what we do.
 
One of the reasons I see it as existential - aside from the functional possibilities of misalignment, taking everyone's jobs, etc. - is a spiritual one.

It seems entirely possible that there's nothing special about our brains with respect to intelligence. LLMs are "grown" by being trained on data, but are not designed per se. Even the experts on them don't fully know what they are doing on the inside. The most recent studies show we are on, or past, the verge of AIs performing novel research: literally generating novel hypotheses and achieving discovery. I do think a day may come when the most moving music you've ever heard, the most enthralling movie you've ever seen, the most compelling argument in any instance, will not be written by a human soul.

You'll hear assertions that LLMs are just algorithmic regurgitators, but you never hear an answer to the question that raises: what does that make us? It may just be that the things we have thought were very special and irreplaceable human attributes are simply emergent at some level of complexity.

I do hope that hunch is wrong.
 
So if that is the case, is AI its own, independent life form, deserving of the same liberties we receive as humans? At this point, reproduction would be internal for AI.

Do we owe our creations freedom if they exhibit the same mental heights as humans and create Happy Gilmore 3?
 
Happy Gilmore 2 was such a disappointment that if AI goes for 3, I say we pull the plug and piss on the motherboard.
 
This is the crux of the modern human.

Our mass-produced art is schlock that perhaps a computer could write better. But the stuff that actually takes soul, intent, and experience to write - AI won't be able to compete with that. The thing that makes art so special is the human condition. The thing that makes mass-produced art popular is that it makes you feel safe and warm - it's familiar, and it pulls at the stereotypical heartstrings without exploring real emotion or digging deeper into the human psyche - like Tropic Thunder did.
 
AI will not only compete, but win. And you won't ever know it.

You sound very evangelical in your opinion, so I won't bother to link anything. But there are a lot of great minds who disagree with your stance. After all, it already exceeds human performance in almost every task it's capable of engaging in.
 
I'll be dead by then and enjoying a technology respite in the great hereafter.

So long, and thanks for all the fish!
 
You'll be dead?

You do know that ChatGPT has been around for less than 3 years. In that time, it's gone from almost completely useless to an incredibly powerful tool that is directly leading to changes in the labor force across a wide range of disciplines. Its growth in capacity will almost assuredly follow a sigmoid curve. The question then becomes: where on the curve are we now? A or B?
[Attached image: a sigmoid curve with points A and B marked.]
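To make that shape concrete, here's a toy logistic (sigmoid) curve; every parameter value below is invented purely for illustration, since nobody knows the real ceiling, growth rate, or inflection point:

```python
import numpy as np

# Logistic (sigmoid) growth: capability(t) = L / (1 + exp(-k * (t - t0)))
# L = the ceiling, k = the growth rate, t0 = the inflection point.
# All values are made up for illustration only.
def capability(t, L=1.0, k=1.5, t0=5.0):
    return L / (1 + np.exp(-k * (t - t0)))

for t in [1, 3, 5, 7, 9]:
    print(f"t={t}: capability={capability(t):.3f}")

# A point like "A" sits early on the curve (t < t0), where growth is still
# accelerating; a point like "B" sits late (t > t0), where growth is
# flattening toward the ceiling. Same data so far, very different futures.
```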
 
