• 0 Posts
  • 145 Comments
Joined 2 years ago
Cake day: June 21st, 2023









  • ggppjj@lemmy.world to Science Memes@mander.xyz · woolly mice is bioweapon
    8 months ago

    I don’t enjoy the idea of babies being given circumcisions; even less can I abide genetic tailoring done in secret by a person internationally decried as an unethical practitioner, just to stroke his ego by making him the first, bypassing the incredibly necessary ethical safeguards that the industry enforces on itself.

    I don’t want him to be able to do what he did in the way that he did it, because that way allows monstrosities to be committed in the name of advancing science at any cost, with no thought to the potential lifelong, unknowable direct consequences for the people being treated.

    I don’t have an anti-gene-editing slant; I don’t think the goal is bad.









  • A false statement would be me saying, without knowing, that a light I cannot see and have never seen, one that is currently red, is actually green. I am just as likely to be right as to be wrong; statistics are involved.

    A lie would be me knowing that the light I am currently looking at is red and saying that it is actually green. No statistics involved: I’ve done this intentionally, and the only outcome of my decision to act was that I spoke a falsehood.

    AIs can generate false statements, yes, but they are not capable of lying. Lying requires cognition, which LLMs, by their own admission and by the admission of the companies developing them, are at the very least not currently capable of, and personally I believe it’s likely they never will be.



  • What do you believe it is actively doing?

    Again, it is very cool and incredibly good math that provides the next word in the chain most likely to match what came before it. They do not think. Even models that deliberate are essentially just self-reinforcing the internal math, with what is basically a second LLM keeping the first on task, because that appears to help distribute the probabilities better.
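
    For anyone curious, here is a minimal sketch in Python of what “providing the next word in the chain” means mechanically. The candidate words and scores are invented for illustration and come from no real model or API:

    ```python
    import math

    # Hypothetical raw scores (logits) a model might assign to candidate
    # next words given the context "the cat sat on the". Made-up numbers.
    logits = {"mat": 4.1, "roof": 2.3, "moon": 0.2}

    # Softmax turns raw scores into a probability distribution.
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    # Greedy decoding: emit the single most probable continuation.
    next_token = max(probs, key=probs.get)
    print(next_token, round(probs[next_token], 2))  # mat 0.84
    ```

    Real models repeat this step over a vocabulary of tens of thousands of tokens, feeding each chosen token back in as new context; nothing in the loop resembles belief or intent.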

    I will not answer the brain question until LLMs have brains also.