• 0 Posts
  • 7 Comments
Joined 9 days ago
Cake day: January 2nd, 2026

  • I assume that trolls try to provoke erratic and disproportionate reactions from others, becoming a part of their own miniature sitcom for their own entertainment. It could be because of a sense of victory upon watching others break down (assuming a zero-sum point of view). It could be the viewpoint that trolls operate at a higher level than others and understand each other while making fun of the lower levels (a false sense of superiority). Maybe it’s a case of holding onto their own beliefs and assuming that they needn’t change themselves if they disrupt all conversations that might threaten those beliefs. It might be attention seeking or an escape mechanism. It could also be a desire to avoid fitting in with everyone else and to remain separate.

    (edit: grammar)


  • There are some generic observations you can use to tell whether a story was AI-generated or written by a human. However, there are no definitive criteria for identifying AI-generated text, except for text directed at the LLM user, such as “certainly, here is a story that fits your criteria” or “as a large language model, I cannot…”

    There are some signs that can be used to identify AI-generated text, though they are not always accurate. For instance, AI-generated text tends to be superficial. It often puts undue emphasis on emotions that most humans would not focus on, and it tends to be more ambiguous and abstract than human writing.

    A large language model often uses poetic language instead of factual language (e.g., saying that something insignificant has “profound beauty”). It also tends to dwell on overarching background themes even when they are not called for (e.g., “this highlights the significance of xyz in revolutionizing the field of …”).

    There are some grammatical traits that can be used to identify AI, but they are even more ambiguous than judging the quality of the content, especially because the writer might not be a native English speaker, or might be a native speaker whose natural style happens to sound like AI.

    The only reliable methods of judging whether text was AI-generated are evaluating the quality of the content (which one should do regardless of whether the goal is to identify AI-generated text) and looking for text directed at the AI user.
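
    As a rough illustration of the second method, here is a minimal Python sketch that scans text for boilerplate phrases directed at the LLM user. The phrase list is an assumption, illustrative rather than exhaustive, and a match is a hint, not proof.

    ```python
    # Minimal sketch: flag phrases that were directed at an LLM user.
    # The phrase list is an assumption: illustrative, not exhaustive.
    TELLTALE_PHRASES = [
        "certainly, here is",
        "as a large language model",
        "as an ai language model",
    ]

    def llm_directed_phrases(text: str) -> list[str]:
        """Return any telltale phrases found in the text (case-insensitive)."""
        lowered = text.lower()
        return [p for p in TELLTALE_PHRASES if p in lowered]

    print(llm_directed_phrases("Certainly, here is a story that fits your criteria."))
    # -> ['certainly, here is']
    ```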



  • It’s most likely an error with the nozzle height. The PEI plate not heating up enough shouldn’t cause the adhesion problem in the photo above (and this is not a first-layer problem, as the error is not at a uniform height). Additionally, a few lines are very faintly visible on the plate where they shouldn’t be, which points to nozzle height. When adjusting the height, make sure that a piece of paper slides easily between the nozzle and the PEI plate, with only a very slight drag as you move it.
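
    If you want the nozzle parked consistently while you check, below is a hedged Python sketch for a Marlin-style printer over USB serial. It requires pyserial, and the port, baud rate, and test points are assumptions; adjust them for your machine.

    ```python
    # Sketch: park the nozzle at Z=0 over a few points for the paper test.
    # Assumes a Marlin-style printer; port, baud, and points are assumptions.
    import time
    import serial  # pip install pyserial

    PORT, BAUD = "/dev/ttyUSB0", 115200
    POINTS = [(30, 30), (170, 30), (170, 170), (30, 170), (100, 100)]  # assumed bed size

    def send(ser, cmd):
        ser.write((cmd + "\n").encode())
        ser.readline()  # consume the firmware's "ok" (sketch-level handshake)

    with serial.Serial(PORT, BAUD, timeout=10) as ser:
        time.sleep(2)     # many boards reset when the port opens
        send(ser, "G28")  # home all axes
        for x, y in POINTS:
            send(ser, f"G0 X{x} Y{y} Z0 F3000")  # move to paper-test height
            input(f"Paper test at ({x}, {y}); press Enter for the next point")
        send(ser, "G0 Z10")  # lift the nozzle when done
    ```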



  • TL;DR: not possible with random cookies; too much work for too little gain with already-verified cookies

    There is no such add-on because random cookies will not work. Once someone has been authenticated, Google decides which cookie the browser should send with any subsequent requests. Google can either assign a session id, storing it in the browser and keeping the associated data on its servers, or store the client browser’s fingerprint and other data in a single cookie and sign that data.
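
    For the stateless variant, a signed cookie could look something like the following minimal Python sketch. The field layout and key handling here are assumptions for illustration, not Google’s actual scheme:

    ```python
    # Minimal sketch of a stateless signed cookie (HMAC over fingerprint data).
    # The field layout and key handling are assumptions, not Google's scheme.
    import base64
    import hashlib
    import hmac

    SECRET_KEY = b"server-side secret"  # assumed; never sent to the client

    def sign_cookie(user_id: str, fingerprint: str) -> str:
        payload = f"{user_id}|{fingerprint}".encode()
        tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        # base64 payload + "." + signature, so the server stores nothing
        return base64.urlsafe_b64encode(payload).decode() + "." + tag
    ```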

    Additionally, even with a verified session, changing your browser fingerprint may trigger a CAPTCHA despite the verified cookie. With a session token, this happens because the server has stored the fingerprint associated with the previous requests. With the stateless method, the fingerprint no longer matches the signed data stored inside the cookie.
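
    Continuing the sketch above, the stateless check recomputes the signature and then compares fingerprints, so a changed fingerprint fails even when the cookie itself is genuine:

    ```python
    # Continuation of the sketch: verify the signature, then the fingerprint.
    def verify_cookie(cookie: str, current_fingerprint: str) -> bool:
        encoded_payload, tag = cookie.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(encoded_payload)
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expected):
            return False  # forged or tampered cookie
        _user_id, stored_fingerprint = payload.decode().split("|", 1)
        return stored_fingerprint == current_fingerprint  # mismatch -> CAPTCHA

    cookie = sign_cookie("user42", "fp-abc123")
    print(verify_cookie(cookie, "fp-abc123"))   # True
    print(verify_cookie(cookie, "fp-changed"))  # False: fingerprint changed
    ```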

    However, this could work with authenticated cookies, wherein users contribute their cookies to a database and the database distributes them to clients that complete a proof of work. This approach, too, has numerous flaws: it requires trusting the database; it is heavily over-engineered; Google doesn’t mind asking verified users to verify again, which makes it pointless; it would be more efficient to simply hire a team of people or use automated systems to solve CAPTCHAs; and it leaks a lot of data depending on your threat model.
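
    The proof-of-work gate itself could be something hashcash-like, as in this toy Python sketch (the difficulty and challenge format are assumptions):

    ```python
    # Toy hashcash-style proof of work: find a nonce whose SHA-256 hash has
    # enough leading zero bits. Difficulty and challenge format are assumptions.
    import hashlib

    def solve(challenge: str, difficulty_bits: int = 20) -> int:
        target = 1 << (256 - difficulty_bits)  # hashes below this are valid
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    def verify(challenge: str, nonce: int, difficulty_bits: int = 20) -> bool:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    nonce = solve("cookie-request-7f3a")         # ~2**20 hashes on average
    print(verify("cookie-request-7f3a", nonce))  # True
    ```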