  • Even after Automattic acquired it, the site continued to lose money at a rate of $30 million each year, the company’s CEO Matt Mullenweg had said.

    I still wanna know what they’re spending all that money on, because I’m sure it’s not developers or even servers. The idea that they can only be profitable if they’re constantly growing their user numbers is an investor idea that’s doomed to fail eventually, and it’s why so many social media sites are crashing right now.

  • Not so much that I stopped feeling nostalgic, but I realized there weren’t as many great games as I thought that haven’t had better successors or remakes. And for Nintendo consoles, non-Nintendo games that stand the test of time are hard to find outside of a few franchises that usually have more modern versions on Switch.

    We are just spoiled for choice these days when it comes to games, especially with indie games. And indies these days often have better UX than most mainstream games back then.

  • Oh I see, so more horizontal movement style games? Maybe Moon Hunters?

    I feel like a lot of stuff like that is probably going to fall into the roguelike category. But as a fan of RPGs, roguelikes always bug me because I need progression. Soulslikes such as Hunt the Night or Duel Corp might be a better fit, but based on your description those are probably too much on the action side and not enough on the RPG side.

  • But simply knowing the right words to say in response to a moral conundrum isn’t the same as having an innate understanding of what makes something moral. The researchers also reference a previous study showing that criminal psychopaths can distinguish between different types of social and moral transgressions, even as they don’t respect those differences in their lives. The researchers extend the psychopath analogy by noting that the AI was judged as more rational and intelligent than humans but not more emotional or compassionate.

    This brings about worries that an AI might just be “convincingly bullshitting” about morality in the same way it can about many other topics without any signs of real understanding or moral judgment. That could lead to situations where humans trust an LLM’s moral evaluations even if and when that AI hallucinates “inaccurate or unhelpful moral explanations and advice.”

    Despite the results, or maybe because of them, the researchers urge more study and caution in how LLMs might be used for judging moral situations. “If people regard these AIs as more virtuous and more trustworthy, as they did in our study, they might uncritically accept and act upon questionable advice,” they write.

    Great, so the headline of the article directly feeds into the very issue the scientists are warning about when it comes to public perception of AI morality.