• 0 Posts
  • 65 Comments
Joined 2 years ago
Cake day: June 25th, 2023

  • In a perfect world, yes.

    In reality, I knew what I did and why I did it two years ago, after which I never had to touch it again until now, and it takes me 2 hours of searching/fiddling to remember that weird thing I did 2 years ago…

    and it’s still totally worth it

    Oh, or e.g. random env vars in .profile that I’m sure were needed for NVIDIA on Wayland at some point. No clue if they’re still necessary, but I won’t touch them unless something breaks. And half of them were probably not necessary to begin with, but trying all the different combinations is tedious…


  • Or even worse, reading online that there’s some super special item you could have gotten 20 hours into the game if only you hadn’t opened that one regular chest in the starting area in the first 5 minutes. I forget which Final Fantasy did this? 9, maybe? Pissed me off to no end, I’m not playing through everything again for this… It just seemed mean-spirited.

    More generally: when decisions early on influence later stuff that you have no way of knowing about yet. I’m not going to play your game 50 times to see all the options. So either I play with the wiki open so I don’t miss anything, ruining the fun, or I realize later on that I could have gotten something but it’s now forever locked because of earlier decisions, pissing me off.

    Baldur’s Gate 3 had a lot of that…


  • Oh, for going out, ours will sit in front of the entry door and look in our direction, even if we’re two rooms away. We really need to pay attention to notice if he suddenly disappears and then check the entry.

    It’s really interesting how you start to be able to distinguish the different kinds of looks they give you. I couldn’t say how, but I know whether he needs help, needs to go out, or wants to play depending on how he sits and looks.


  • My dog is pretty smart, but sometimes he’s smart in pretty stupid ways.

    One thing he does: if he needs help, he will sit in front of the thing he needs help with. That’s it, just sit there. Now, he’s a black dog, and he will sometimes do this in completely dark corners of the apartment. Maybe he played with his food ball and a treat has fallen under some furniture; he will just sit in front of it in the dark and expect us to help him, sometimes for 20 minutes. Usually we only notice once he lets out a sad grumble after having sat there for a long time, but I’m sure there are other times where he just gave up and we didn’t notice at all. And this is not something we taught him, he just figured sitting quietly in a corner is the best way to get attention.

    That and he likes to check if there’s anything going on behind him while on walks, which often causes him to walk head-first into obstacles…



  • I don’t think it’s circular reasoning, more like kicking the can down the road: instead of deciding needs, you need to decide goals. But once you have a goal, it helps determine the needs. So it’s a different framing that can help a bit to untangle the mess. Maslow is also just 4 goals in a hierarchy and then the needs for each of them.

    As for how to decide on goals, idk, that changes all the time and I don’t think there’s any hard-set rule to figure it out. In the end it’s all just made up 🤷 But I think asking yourself “what are my goals in life?” is more productive than asking yourself “what do I need?”, at least it comes more naturally to me.


  • I think a need is necessarily tied to some goal and can’t really be discussed without mentioning the goal.

    If the goal is survival, the needs are water, food, and shelter. If your goal is not to continue living, then e.g. poison would be more of a need than food, water, and shelter.

    If the goal is having a fulfilled life, the needs also include social contact, intimacy, something meaningful you can spend your time on, etc.

    So I don’t think you can just say something is a need; you need to decide what your goals are, probably with some hierarchy of goals, and work backwards from that to the needs. Or conversely, to know if something is a need, think about whether not having it would keep you from your goal.





  • I don’t disagree. I meant that for users it is incidental. Most users probably wouldn’t buy them with spying as the main purpose (they just also don’t really care that they can spy), making them much more widespread than something where spying was the main use case, which makes the problem worse.

    And as someone else mentioned, once you do get one, the temptation to use it for spying is there for the user. That makes it worse than e.g. a spy pen imo: with that you’d need the intent to spy first, and then buy it, but with this, you buy it for whatever reason and then think “oh, I could just spy now” since you already own the device, which I’d argue leads to more overall spying, so to speak. Maybe you see a video online and go “oh, I can just do that, right now, no effort on my part, since I already own this device”.

    And for Meta it’s like tracking cookies on crack





  • I remember reading that hotel TVs are an option. They also have an ad platform, but one intended for the hotel owner to push ads from, not some 3rd party. Not exactly dumb, but also not as bad as regular TVs.

    And of course a projector or PC monitor connected to some cheap small-form-factor PC is always an option, with Kodi or similar on it. I haven’t owned a TV in like 10 years, just a small Linux PC with a projector, and a TV tuner card in the past (nowadays my ISP offers all public channels over IPTV).



  • Well, each token has a vector. So ‘co’ might be [0.8, 0.3, 0.7], just instead of 3 numbers it’s more like 100–1000 long. And each token has a different such vector. Initially, those are just randomly generated. But the training algorithm is allowed to slowly modify them during training, pulling them this way and that, whichever way yields better results. So while for us, ‘th’ and ‘the’ are obviously related, for a model no such relation is given. It just sees random vectors, and the training slowly reorganizes them to have some structure. So who’s to say whether, for the model, ‘d’, ‘da’ and ‘co’ are in the same general area (similar vectors), whereas ‘de’ could be in the opposite direction. Here’s an example of what this actually looks like. Tokens can be quite long, depending on how common they are; here it’s disease-related tokens ending up close together, as similar things tend to cluster at this step. You might have a place where common town-name suffixes cluster close to each other.
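    A minimal sketch of the idea, using a toy 8-dimensional embedding table (the vocabulary, the dimension, and the use of cosine similarity are all illustrative assumptions, not any particular model’s setup). Before training, visually related tokens like ‘th’ and ‘the’ are no closer than unrelated ones:

```python
import random
import math

# Hypothetical toy vocabulary; real models use tens of thousands of tokens.
vocab = ["th", "the", "co", "d", "da", "de"]

# Each token starts with a randomly initialized embedding vector.
# Here 8 dimensions; real models use hundreds to thousands.
random.seed(0)
embeddings = {tok: [random.uniform(-1, 1) for _ in range(8)] for tok in vocab}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0 = unrelated, -1 = opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# At initialization, 'th' vs 'the' is just as random as 'th' vs 'co' --
# any structure only appears after training nudges these vectors around.
print("th vs the:", cosine(embeddings["th"], embeddings["the"]))
print("th vs co: ", cosine(embeddings["th"], embeddings["co"]))
```

    Training then adjusts these vectors by gradient descent, which is why related tokens end up clustered only after the fact.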

    And all of this is just what gets input into the LLM, essentially a preprocessing step. So imagine someone gave you a picture like the above, but instead of each dot having some label, it just had a unique color. And then they give you lists of different-colored dots and ask you what color the next dot should be. You need to figure out the rules yourself, coming up with more and more intricate rules that are correct most of the time. That’s kinda what an LLM does. To it, ‘da’ and ‘de’ could be identical dots in the same location or completely different.

    Plus, of course, that’s on top of the LLM not actually knowing what a letter or a word or counting is. But it does know that 5.6.1.5.4.3 is most likely followed by 7.7.2.9.7 (simplified representation), which, when translated back, maps to ‘there are 3 r’s in strawberry’. It’s actually quite amazing that they can get it halfway right given how they work, just based on ‘learning’ how text structure works.

    but so in this example, us state-y tokens are probably close together, ‘d’ is somewhere else, the relation between ‘d’ and different state-y tokens is not at all clear, plus other tokens making up the full state names could be who knows where. And tien there’s whatever the model does on top of that with the data.

    For a human it’s easy: just split by letters and count. For an LLM it’s trying to correlate lots of different and somewhat unrelated things to their ‘d-ness’, so to speak.



  • They don’t look at it letter by letter but in tokens, which are generated automatically based on frequency of occurrence. So while ‘z’ could be its own token, ‘ne’ or even ‘the’ could be treated as a single token vector. Of course, ‘e’ would still be a separate token when it occurs in isolation. You could even have ‘le’ and ‘let’ as separate tokens, afaik. And each token is just a vector of numbers, like 300 or 1000 numbers, that represents that token in a vector space. So ‘de’ and ‘e’ could be completely different and dissimilar vectors.

    So ‘delaware’ could look to an LLM more like de-la-w-are or similar.
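    A toy greedy longest-match tokenizer sketches what that splitting looks like. The vocabulary here is made up for illustration; real tokenizers (BPE, WordPiece) learn their merges from data, but the effect is similar: words break into multi-character chunks rather than letters.

```python
# Hypothetical vocabulary -- in a real model this is learned from a corpus.
vocab = {"de", "la", "w", "are", "e", "l", "a", "r", "d"}

def tokenize(word, vocab, max_len=4):
    """Greedily match the longest known piece at each position."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest candidate substring first, then shorter ones.
        for length in range(min(max_len, len(word) - i), 0, -1):
            piece = word[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            raise ValueError(f"cannot tokenize {word[i]!r}")
    return tokens

print(tokenize("delaware", vocab))  # -> ['de', 'la', 'w', 'are']
```

    So the model never sees the three separate d/e/l letters at the start of ‘delaware’, only whatever chunk IDs the tokenizer hands it.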

    Of course you could train it to figure out letter counts based on those tokens with a lot of training data, though that could lower performance on other tasks, and counting letters just isn’t that important, I guess, compared to other stuff.


  • Of course there are. But I mean, women’s hormones do affect mood during the menstrual cycle (my wife certainly says she’s more irritable before her period), and afaik hormone therapy uses some of the same hormones, so it didn’t seem far-fetched at all to me that it could play a role. Hence me asking.

    But it could just as well have been some deep-seated anger at the world or similar, or something in between. Mostly I was just trying to think of reasons why she might not be as bad as she seemed, benefit-of-the-doubt kind of thing.