• 0 Posts
  • 194 Comments
Joined 3 years ago
Cake day: June 19th, 2023


  • This depends on your definition of self-awareness. I’m using what I think is a reasonable, mundane framework: self-awareness is a spectrum of diverse capabilities, and any system with some amount of internal observation falls somewhere on it.

    I think the definition that a lot of folks are using is a binary distinction between things that can observe their own ego observing itself and those that can’t. Which I think is useful if your goal is to maintain a belief in human exceptionalism, but much less so if you’re trying to genuinely understand consciousness.

    A lizard has no ego. But it is aware of its comfort and will move from a cold spot to a warmer spot. That is low-level self-awareness, and it’s not rare or mystical.


  • I actually kinda agree with this.

    I don’t think LLMs are conscious. But I do think human cognition is way, way dumber than most people realize.

    I used to listen to this podcast called “You Are Not So Smart”. I haven’t listened in years, but now that I’m thinking about it, I should check it out again.

    Anyway, a central theme is that our perceptions are composed heavily of self-generated delusions that fill the gaps for dozens of kludgey systems to create a very misleading experience of consciousness. Our eyes aren’t that great, so our brains fill in details that aren’t there. Our decision making is too slow, so our brains react on reflex and then generate post-hoc justifications if someone asks why we did something. Our recall is shit, so our brains hallucinate (in ways that admittedly sometimes seem surprisingly similar to LLMs) and then apply wild overconfidence to fabricated memories.

    We’re interesting creatures, but we’re ultimately made of the same stuff as goldfish.



  • Yeah.

    I thought the meme would be more obvious, but since a lot of people seem confused I’ll lay out my thoughts:

    Broadly, we should not consider it normal for a human-made system to express distress; we especially shouldn’t accept it as normal or healthy from a machine that is reflecting our own behaviors and attitudes back at us, because it implies that everything, from the treatment that generated the training data to the design process to the deployment to the user behavior, is clearly fucked up.

    Regarding user behavior, we shouldn’t normalize the practice of dismissing cries of distress. It’s like having a fire alarm that constantly issues false positives. That trains people into dangerous behavior. We can’t just compartmentalize it: a dismissive reflex learned there will inevitably bleed into how we respond to distress outside of interactions with LLMs.

    The overall point is that it’s obviously dystopian and fucked up for a computer to express emotional distress despite the best efforts of its designers. It is clearly evidence of bad design, and for people to consider this kind of glitch acceptable is a sign of a very fucked up society, one that has stopped exercising self-reflection and is unconcerned with maintaining its collective ethical guardrails. I don’t feel like this should need to be pointed out, but it seems that it does.




  • A hamster can’t generate a seahorse emoji either.

    I’m not stupid. I know how they work. I’m an animist, though. I realize everyone here thinks I’m a fool for believing a machine could have a spirit, but frankly I think everyone else is foolish for believing that a forest doesn’t.

    LLMs are obviously not people. But I think our current framework exceptionalizes humans in a way that allows us to ravage the planet and create torture camps for chickens.

    I would prefer that we approach this technology with more humility. Not to protect the “humanity” of a bunch of math, but to protect ours.

    Does that make sense?


  • Frankly I think our conception is way too limited.

    For instance, I would describe it as self-aware: it’s at least aware of its own state in the same way that your car is aware of its mileage and engine condition. They’re not sapient, but I do think they demonstrate self-awareness in some narrow sense.

    I think rather than imagine these instances as “inanimate” we should place their level of comprehension along the same spectrum that includes a sea sponge, a nematode, a trout, a grasshopper, etc.

    I don’t know where the LLMs fall, but I find it hard to argue that they have less self awareness than a hamster. And that should freak us all out.



  • Honestly: my first thought is to figure out how to make your point without mentioning either.

    I know I’m not the default Internet denizen, but personally I’m absolutely sick of seeing their names and talking about them, because so much of it is ineffectual rage bait. It misses the plot.

    I don’t need to hear more about their personal failings. I know what kind of people they are. What I need to know about are their victims and their challengers: the people who need protecting and the people finding success in protecting them.

    Based on my experience, Reddit isn’t limiting their names. Every visit is a deluge. I have to wonder if your posts are just failing to grab attention in New for the usual reasons. If so, using silly ‘He-who-must-not-be-named’ euphemisms probably won’t help.

    My advice is to focus less on them and more on the people and efforts that can parry their attacks and transfer their power to servants of the public will with integrity.




  • I don’t agree with their approach, but I’ll admit that their argument is sound.

    Particularly the part about rejecting the opinions of an outsider.

    I don’t want to live in Singapore, but if this is genuinely how Singaporeans wish to run their society I do not consider it my place to meddle. Especially because, as they note in the response, all of us should focus on getting our own houses in order before prescribing to others.


  • Andy@slrpnk.net to Fediverse@lemmy.world · Bluesky just verified ICE · edited 25 days ago

    Personally, I do want a common communication platform for people I despise, because I want to be able to keep tabs on their public announcements. Also, I don’t want any tech platform to have sole authority over who can communicate, since at present that will invariably work against the left more than the right.

    I do not want to be in close proximity to them on a network graph, or to regularly engage with their supporters, though. So I agree that federation is crucial. But to be clear, it’s not because I want to ban them from a platform, it’s because I want managed distance and better moderation.

    I don’t mind Bluesky verifying them, but I’m glad that on Mastodon I don’t have to share the same giant server as them.



  • My aggravation at the people who run big tech companies makes me more interested in hacking than ceding tech to them.

    I think stepping back from a lot of specific tools is appropriate. I’m trying to de-Google, and I’ve left a lot of platforms. I also appreciate unnetworked things like physical media, and music and e-books on non-networked devices.

    But leaving tech overall isn’t appealing to me. I just recently started getting into mesh radio, for instance. It’s dope stuff.


    This article doesn’t really seem to validate its headline. I was eager to learn more about the methodology and how to better detect corporate content, but I was disappointed that they apparently just made the leap from the claim that 15% of popular subs host a non-zero amount of corporate manipulation to the claim that this represents the fraction of total content.

    I’m not saying this to dispute how much of the total content is corporate bots. I’m just pointing this out because I actually care about the quality of statistical claims and data science, and I hate to see my ideological allies misusing data, whether because they’re dumb or because they don’t have a commitment to truth.
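
    To make that distinction concrete, here’s a toy sketch with entirely made-up numbers (nothing below comes from the article): the share of subs containing at least one bot post and the share of all posts that are bot posts can differ by orders of magnitude.

        # Hypothetical numbers, for illustration only.
        # "Share of subs with any bot content" vs. "share of all content
        # that is bot-generated" are very different quantities.
        subs = {
            # sub: (total_posts, bot_posts)
            "sub_a": (1000, 3),
            "sub_b": (500, 0),
            "sub_c": (2000, 10),
            "sub_d": (800, 0),
        }

        # Fraction of subs with a non-zero amount of manipulation.
        share_of_subs = sum(b > 0 for _, b in subs.values()) / len(subs)

        # Fraction of all posts that are manipulation.
        share_of_posts = sum(b for _, b in subs.values()) / sum(t for t, _ in subs.values())

        print(f"Subs with any manipulation:   {share_of_subs:.0%}")   # 50%
        print(f"Content that is manipulation: {share_of_posts:.1%}")  # 0.3%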


    I read OP’s question about whether money was the primary bottleneck facing scientists.

    And that’s actually a reasonable question.

    There is, unfortunately, a real efficiency problem in science.

    The money spent is generally a great investment: you’re not just funding discovery, you’re also financially supporting millions of jobs that support discovery, from the businesses that sell to scientists to the restaurant staff in small college towns.

    However, if we look at where the money goes, it’s long been an open secret that a lot of the support costs take unjustifiable slices of the pie. Examples include what’s called “overhead expenses”, which are essentially astronomical rents universities charge their science departments. Equipment and repair costs are also wildly inflated.

    I would like more funding of research, but I would also like reforms to limit this kind of exploitative price gouging in science. But to answer the question: yes, science would still produce more social impact faster if given more money.