

Exile is my comfort game, just so wonderful.
I would say under-extrusion would probably make ironing less attractive in this case, depending on why there was under-extrusion. It would help, but I think the result would still be subpar.
I don’t enjoy the idea of babies being given circumcisions, and I can’t abide genetic tailoring done in secret by a person internationally decried as an unethical practitioner, just to stroke his ego by making him the first, bypassing the incredibly necessary ethical safeguards that the industry enforces on itself.
I don’t want him to be able to do what he did in the way that he did it, because doing it that way allows monstrosities to be committed in the name of advancing science at any cost, with no thought to the potentially lifelong, unknowable consequences for the people being treated.
I don’t have an anti-genetic editing slant, I don’t think the goal is bad.
Why is this disgraced, denounced, ethics-lacking genome-editing guy being spammed so fucking much?
The first three times I got the wheel, it triggered.
I didn’t understand the memes.
Then it never happened again.
“Shit or get out of the kitchen” is my current favorite malaphor.
That’s called a sneakernet. Although, if you’re talking actual pigeons, you may be interested in RFC1149.
I’m gonna teach you a lesson on improv: “yes, and”.
A crucial part of your statement is that it knows it’s untrue, which it is incapable of. I would agree with you if it were actually capable of understanding.
A false statement would be me saying that a light I cannot see and have never seen, which is currently red, is actually green, without knowing either way. I’m just as likely to be right as to be wrong; statistics are involved.
A lie would be me knowing that the light I am currently looking at is red and saying that it is green. No statistics involved: I did it intentionally, and the only outcome of my decision to act was that I spoke a falsehood.
AIs can generate false statements, yes, but they are not capable of lying. Lying requires cognition, which LLMs are, by their own admission and by the admission of the companies developing them, at the very least not currently capable of, and personally I believe that it’s likely that LLMs never will be.
Ask ChatGPT yourself; I’m done arguing effective consciousness vs. actual consciousness.
https://chatgpt.com/share/67c64160-308c-8011-9bdf-c53379620e40
What do you believe that it is actively doing?
Again, it is very cool and incredibly good math that provides the next word in the chain that most likely matches what came before it. They do not think. Even models that deliberate are essentially just self-reinforcing the internal math with what is basically a second LLM to keep the first on-task, because that appears to help distribute the probabilities better.
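The “pick the next word that most likely matches what came before” idea can be sketched with a toy sampler. Everything here (the hard-coded probability table, the function names) is made up for illustration; a real LLM computes these probabilities from billions of learned weights, not a lookup table.

```python
import random

def toy_next_word_probs(context):
    # Stand-in for the model: a tiny hand-written table keyed on the last
    # word only. A real LLM would score every token in its vocabulary.
    table = {
        "the": {"cat": 0.6, "dog": 0.3, "math": 0.1},
        "cat": {"sat": 0.7, "ran": 0.3},
    }
    return table.get(context[-1], {"<end>": 1.0})

def pick_next_word(context, temperature=1.0):
    probs = toy_next_word_probs(context)
    words = list(probs)
    # Temperature reshapes the distribution: as it approaches 0, the most
    # likely word dominates (effectively greedy decoding); higher values
    # flatten the odds and make output more random.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights)[0]

print(pick_next_word(["the"], temperature=0.01))  # almost always "cat"
```

The point being: there’s no belief or intent anywhere in this loop, just a probability distribution being sampled one token at a time.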
I will not answer the brain question until LLMs have brains also.
We did, a long time ago. It’s called an encyclopedia.
If humans can’t be trusted to only provide facts, how can we be trusted to make a machine that only provides facts? How do we deal with disputed truths? Grey areas?
It is incapable of knowledge; it is math, and what it says is determined by what is fed into it. If it admits to lying, that’s because it was trained on texts that admit to lying, and the math says the most likely continuation is an apology built from the following tokens, with the following probability weights, and so on.
It apologizes because math says that the most likely response is to apologize.
Edit: you can just ask it y’all
https://chatgpt.com/share/67c64160-308c-8011-9bdf-c53379620e40
I strongly worry that humans really weren’t ready for this “good enough” product to be their first “real” interaction with what can easily pass as an AGI without near-philosophical knowledge of the difference between an AGI and an LLM.
It’s obscenely hard to keep the fact that it is a very good pattern-matching auto-correct in mind when you’re several comments deep into a genuinely actually no lie completely pointless debate against spooky math.
I think the important point is that LLMs as we understand them do not have intent. They are fantastic at producing output that appears to meet the requirements set in the input text, and when they actually do meet those requirements, instead of just seeming to, they can provide genuinely helpful info. It’s also very easy to miss the difference between output that merely looks correct, which satisfies the purpose of the LLM, and output that is actually correct, which satisfies the purpose of the user.
I think it’s more convenient for their overall design of modern Windows; IIRC, by default it’ll also install the running copy of Windows into a hypervisor. For their purposes, and for the majority of users, there would be little to no performance loss.
That was where it was uploaded first; the takedowns were for it later being hosted on Azure cloud services.
There is! Both games are good; this feels more like a maintenance release to ensure the original remains playable. The remake has, IIRC, an “original game” mode that stays true to the source, but I was pleasantly surprised by the new and altered puzzles in the remake.