• 0 Posts
  • 140 Comments
Joined 2 years ago
Cake day: June 21st, 2023


  • ggppjj@lemmy.world to Science Memes@mander.xyz · “woolly mice is bioweapon” · 2 months ago

    I don’t enjoy the idea of babies being given circumcisions; I can’t abide genetic tailoring performed in secret by a person internationally decried as an unethical practitioner, done just to stoke his ego by making him the first, bypassing the incredibly necessary ethical safeguards that the industry enforces on itself.

    I don’t want him to be able to do what he did in the way that he did it, because that way allows monstrosities to be committed in the name of advancing science at any cost, with no thought for the potentially lifelong, unknowable consequences to the people being treated.

    I don’t have an anti-genetic-editing slant; I don’t think the goal is bad.

  • A false statement would be me saying, without knowing, that a light I cannot see and have never seen, one that is currently red, is actually green. I am just as likely to be right as to be wrong; statistics are involved.

    A lie would be me knowing that the light I am currently looking at is red and saying that it is actually green. No statistics involved; I have done this intentionally, and the only outcome of my decision to act was that I spoke a falsehood.

    AIs can generate false statements, yes, but they are not capable of lying. Lying requires cognition, which LLMs, by their own admission and by the admission of the companies developing them, are at the very least not currently capable of, and I personally believe it is likely they never will be.

  • I think the important point is that LLMs as we understand them do not have intent. They are fantastic at producing output that appears to meet the requirements set in the input text, and when they actually do meet those requirements rather than merely appearing to, they can provide genuinely helpful information. But it is very easy not to immediately see the difference between output that looks correct, which satisfies the purpose of the LLM, and output that actually is correct, which satisfies the purpose of the user.