I’ve always said that Turing’s Imitation Game is a flawed way to determine if an AI is actually intelligent. The flaw is the assumption that humans are intelligent.
Humans are capable of intelligence, but most of the time we’re just responding to stimulus in predictable ways.
There’s a running joke in the field that AI is the set of things that computers cannot yet do well.
We used to think that you had to be intelligent to be a chess grandmaster. Now we know that you only have to be freakishly good at chess.
Now we’re having a similar realization about conversation.
Didn’t really need an AI for chess to know that. A look at how crazy some grandmasters are will show you that. Bobby Fischer is the most obvious one, but there are quite a few where you wish they would stop talking about things that aren’t chess.
As a large language model, I cannot answer that question.
Wow this is hilarious.
Shit, another existential crisis. At least I’ll forget about it soon
You’re a computer plugged into an organic matrix.
at this rate the next meme i see is going to tell me to wake up from my coma
i’m trying
Do LLMs dream of weighted sheep?
I’m sorry, I cannot answer that as I was not trained enough to differentiate between all the possible weights used to weigh a sheep during my dreams.
hot take, mods should look into cracking down on baseless bot accusations. it’s dehumanizing and more often intended as an insult, akin to the r-slur, than an actual concern.
(except in occasions where there is actual evidence of bot activity, obviously. but there never is.)
I’ve been guilty of this, but do get how it’s a bad thing. It’s like calling people NPCs.
exactly
what if the whole universe is just the algorithm and data used to feed an LLM? we’re all just chat gpt
(i don’t know how LLMs work)
We basically are. We’re biological pattern recognising machines, where inputs influence everything.
The only difference is somehow our electricity has decided it’s got free will.
well that decides it, gods are real and we’re their chat gpt, all our creations are just responses to their prompts lmao
it’s wild though, i’ve heard that we don’t really have free will but i guess i’m personally mixed on it as i haven’t really looked into it or thought much about it. it intuitively makes sense to me, though. that we wouldn’t, i mean, really have free will. i mean we’re just big walking colonies of micro-organisms, right? what is me, what is them? – idk where i’m going with this
Welcome to philosophy… I think.
Actually I think it’s closer to metaphysics
… is the answer Gilbert Gottfried?
A (somewhat wrong) reference to another exurb1a video on philosophy, found here with relevant timestamp. While Gilbert Gottfried is brought up in that section, the joke I was making should have ended in Emma Stone instead.
As for who he is, he’s a comedian with a very specific voice, and voiced Iago in Disney’s Aladdin if you’ve seen it. He got a bit canceled for making jokes that, iirc, at least bordered on racist about the Japanese shortly after a disaster of theirs, and lost his role as the voice of the Aflac duck.
Oh, I forgot he passed… hmmm.
… uhh… I don’t know who that is. Wanna infodump?
I think
Therefore?
Do LLMs have ADHD?
Look at this another way. We succeeded too well and instead of making a superior AI we made a synthetic human with all our flaws.
Realistically LLMs are just complex models based on our own past creations. So why wouldn’t they be a mirror of their creator, good and bad?
Can’t LLMs take an insane number of tokens as context now (I think we’re up to 1M)?
Anywho, he just like me fr
What’s an LLM? I feel addressed and need another rabbit hole I can delve into.
Large Language Model, ChatGPT and friends