Everyone likes to believe they’re thinking independently. That they’ve arrived at their beliefs through logic, self-honesty, and some kind of epistemic discipline. But here’s the problem - that belief itself is suspiciously comforting. So how can you tell it’s true?
What if your worldview just happens to align neatly with your temperament, your social environment, or whatever gives you emotional relief? What if your reasoning is just post-hoc justification for instincts you already wanted to follow? That’s what scares me - not being wrong, but being convinced I’m right for reasons that are more about mood than method.
It reminds me of how people think they’d intervene in a violent situation - noble in theory, but until it happens, it’s all just talk. So I’m asking: what’s your actual evidence that you think the way you think you do? Not in terms of the content of your beliefs, but the process behind them. What makes you confident you’re reasoning - not just rationalizing?
It’s funny. I’ve seen research on LLMs “reasoning” and “introspecting” showing that, when you ask them why they answered a question a certain way, they make up stories that don’t match how their neurons actually fired - and a common response in the comments is to triumphantly crow about how this shows they’re not “self aware” or “actually thinking” or whatever.
But it may be the same with humans. There have been fun experiments where people had neurons in their brains artificially stimulated to make them take some action, such as reaching out with a hand, and when asked why they did it they’ll say - and believe - that they did it for some made-up reason, like they were just stretching or wanted to pick something up. That’s even knowing full well that they’re in an experiment that’s going to use artificial stimulation to make them do exactly that.
I suspect that much of what we call “consciousness” is just made up after the fact to explain to ourselves why we do the things we do. Maybe even all of it, for all we currently know. It’s a fun shower thought to ponder, if nothing else. And perhaps now that we’ve got AI to experiment with in addition to just our messy organic brains we’ll be able to figure it all out with more rigor. Interesting times ahead.
I’m not terribly concerned about it, though. If it turns out that this is how we’ve been operating all along, well, it’s how we’ve been operating all along. I’ve liked being me so far, so why should that change when the curtain’s pulled back and I can see the hamster in the wheel that’s been making me run this way? It doesn’t really change anything, and I’d like to know.
You might be referring to the split-brain experiments, where researchers studied patients who had their brain hemispheres separated by cutting the corpus callosum – the “bridge” between the two sides.
In these experiments, text can be shown in only one visual field, which allows researchers to communicate with just one hemisphere without the other knowing. The results are fascinating for several reasons, especially because each hemisphere demonstrates different preferences and gives different answers to the same questions. This naturally raises the question: “Which one is you?”
Another striking finding, similar to what you were referring to, is that researchers can give instructions to the non-verbal hemisphere and then ask the verbal one to explain why it just performed a certain action. Since it doesn’t know the real reason, it immediately starts inventing excuses – ones the researchers know to be false. Yet the participant isn’t lying. They genuinely believe the made-up explanation.
As for consciousness, I think you might be using the term a bit differently from how it’s typically used in philosophical discussions. The gold-standard definition comes from Thomas Nagel’s essay What Is It Like to Be a Bat?, where he defines consciousness as the fact of subjective experience – that there is something it is like to be you, that your existence has qualia. This, I (and many others) would argue, is the only thing in the entire universe that cannot be an illusion.
Nope, I would have described the split-brain experiments if that’s what I was referring to. I dug around a bit to find a direct reference and I think it was Movement Intention After Parietal Cortex Stimulation in Humans by Desmurget et al. In particular:
I did misremember one detail: the participants only felt the intention to move; they didn’t actually move their limbs when those brain regions were stimulated.
A related bit of research I dug up on this reference hunt, which I’d forgotten about but is also neat: Libet’s experiments in the 1980s, which used the timing of brain activity to measure when a person formed an intention to do something versus when they became consciously aware of having formed that intention. There was a significant delay between those two events, with the intention coming first and the conscious mind only “catching up” later and deciding that it was going to do the thing the brain was already in the process of doing.
Probably. I’m less interested in philosophy than I am in actual, measurable neurology. The whole point of all this is that human introspection appears to be flawed, and a lot of philosophy relies heavily on introspection. So I’d rather read about people measuring brain activity than about people merely thinking about brain activity.
You can argue it all you like, but in the end science requires evidence to back it up.
Then what do you mean when you’re using the word “consciousness”? Whose definition are you going by?
Loosely, the awareness of our own actions and the reasons why we do them. The introspective stuff that the research I linked to is about.
The specific word doesn’t really matter to me much. Substitute a different one if you prefer. Semantic quibbling is the sort of thing I leave to the philosophers.
You’re calling it “semantic quibbling,” but defining terms isn’t a sideshow - it’s the foundation of a meaningful conversation. If two people are using the same word to mean different things, then there’s no actual disagreement to resolve, just a tangle of miscommunication. It’s not about clinging to labels – it’s about making sure we’re not just talking past each other.
And on the claim that consciousness – in the Nagel sense – is the one thing that can’t be an illusion: I don’t think you’ve fully appreciated the argument if your first response is to ask for scientific evidence. The entire point is that consciousness is the thing that makes evidence possible in the first place. It’s the medium in which anything at all can be observed or known. You can doubt every perception, every belief, every model of the universe - but not the fact that you are experiencing something right now. Even if that experience is a hallucination or a dream, it’s still being had by someone. That’s the baseline from which everything else follows. Without that, even neuroscience is just lines on a chart with nobody home to read them.
You asked: “what’s your actual evidence that you think the way you think you do?”
And I’m answering that. You literally asked for “actual evidence,” and I gave links to the specific research I’m referencing.
I’m not here to argue with you over the meaning of the word “consciousness” when you didn’t even ask about that in your question in the first place. If you think I’m talking about something other than consciousness, go ahead and tell me what other word for it suits you.
“Introspective narration” or “metacognitive awareness” describes what you’re talking about better than “consciousness” does.