

Not if the pool is all of the colors. I haven’t even had all of them before.
https://gatorade.fandom.com/wiki/List_of_Gatorade_Thirst_Quencher_Flavors
There are some that I could distinguish from the others.
Off-and-on trying out an account over at @tal@olio.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.


Thanks for the added insights! I haven’t used it myself, so they’re appreciated.
Linux has a second, similar “compressed memory” feature called zswap. This guy has used both, and thinks that zswap is preferable on systems with NVMe storage.
https://linuxblog.io/zswap-better-than-zram/
Based on his take, zram is probably a better choice for that rotational-disk Celeron, but if you’re running Cities: Skylines on newer hardware, I’m wondering if zswap might be more advantageous.
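If you want to check which one your own system is using before comparing, zswap ships with mainline kernels and exposes its state via sysfs. A quick sketch (the sysfs paths are standard on modern kernels, but the defaults and whether it’s enabled at all vary by distro):

```shell
# See whether zswap is currently enabled (prints Y or N)
cat /sys/module/zswap/parameters/enabled

# Dump all zswap parameters (compressor, max_pool_percent, etc.)
grep -H . /sys/module/zswap/parameters/*

# Enable it for this boot (needs root); to make it permanent,
# add something like this to the kernel command line:
#   zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=20
echo 1 | sudo tee /sys/module/zswap/parameters/enabled
```

Note that zswap compresses pages on their way to an existing swap device, so unlike zram it wants a real swap partition or swapfile behind it.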


https://en.wikipedia.org/wiki/Apple_II
The original retail price of the computer was US$1,298 (equivalent to $6,700 in 2024)[18][19] with 4 KB of RAM and US$2,638 (equivalent to $13,700 in 2024) with the maximum 48 KB of RAM.
Few people actually need a full 48KB of RAM, but if you have an extra $6k lying around, it can be awfully nice.


TECO is kinda-sorta emacs’s parent, in sorta the same way that ed is kinda-sorta vi’s parent.
I compiled and tried out a Linux port the other day due to a discussion on editors we were having on the Threadiverse, so it was fresh in my mind. It has a similar interface to ed, and was also designed to run on teletypes.


It’s a compressed RAM drive being used as swap backing. The kernel’s already got the functionality to have multiple tiers of swap priority; this just leverages that. Like, you have uncompressed memory, it gets exhausted and you push some pages out to compressed memory, that gets exhausted and you push them out to swap on NVMe or something, etc.
Kinda like RAM Doubler of yesteryear, same sort of thing.
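To make the tiering concrete, here’s a minimal sketch of setting it up by hand, assuming util-linux’s zramctl is available and that you have a disk swapfile at /swapfile (that path is just an example for illustration; needs root):

```shell
# Load the zram module and carve out a compressed block device in RAM
modprobe zram
zramctl --algorithm zstd --size 4G /dev/zram0

# Format it as swap and enable it at a higher priority than disk swap
mkswap /dev/zram0
swapon --priority 100 /dev/zram0   # the kernel fills this tier first
swapon --priority 10 /swapfile     # disk swap only sees overflow

# Verify the tiers
swapon --show
```

In practice most distros that ship zram swap (Fedora, for instance) do this for you with a systemd unit rather than by hand.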


https://en.wikipedia.org/wiki/Zram
zram, formerly called compcache, is a Linux kernel module for creating a compressed block device in RAM, i.e. a RAM disk with on-the-fly disk compression. The block device created with zram can then be used for swap or as general-purpose RAM disk. The two most common uses for zram are for the storage of temporary files (/tmp) and as a swap device. Initially, zram had only the latter function, hence the original name “compcache” (“compressed cache”). Unlike swap, zram only uses 0.1% of the maximum size of the disk when not in use.[1]
Open-source RAM is better.


Many text editors today just load the whole file into RAM.
been the case for decades
One data point: emacs normally loads the whole file, unless you’re using the vlf package or similar.
TECO and ed might not. Dunno.
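As a toy illustration of the difference, here’s a Python sketch contrasting the whole-file approach with materializing only a window of a file. The mmap approach here is just illustrative; large-file modes like Emacs’s vlf package get a similar effect by reading chunks on demand rather than by memory-mapping:

```python
import mmap
import os
import tempfile

# Create a sample 1 MB file to stand in for a "large" document
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 1_000_000)
os.close(fd)

# Whole-file approach (what most editors do): everything resident at once
with open(path, "rb") as f:
    whole = f.read()

# Windowed approach: map the file and materialize only the slice you view;
# untouched pages are faulted in lazily by the OS, not read up front
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    window = mm[0:4096]
    mm.close()

os.remove(path)
print(len(whole), len(window))
```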


Another user in the BlueSky thread posted a photo of what appears to be a Best Buy display case of RAM, showing a 32GB set of two DDR5 DIMMs going for over $400 USD, and a 64GB kit for over $900.
If I hit Google Shopping, which indexes a ton of retailer sites, I can find 2x16GB DDR5 DIMMs for far less than that at various retailers that haven’t jacked up prices yet.
https://www.google.com/shopping?udm=28
My first hit for “2x16gb 32gb ddr5” sorted by price is this:
https://pcpartshawaii.com/products/kingston-fury-ddr5-32gb-2x16gb-5200mhz-cl40-ram
Kingston Fury DDR5 32GB (2x16GB) 5600MHz CL40 RAM KF556C40BBK2-32
$100.00
They say that they have two in stock.
These guys are next lowest:
https://www.barcodediscount.com/catalog/kingston/part-kcp548us8k2-32.htm
Price: $103.06


IIRC from an earlier article, they’re still looking at factors and don’t yet know for sure (I suspect that Trump’s tariffs, and whether they will stand, are one input).


I mean, it’s fine to do so, as long as you have PC hardware that meets your needs. Valve would be fine with it too; as long as it can run Steam, all good. For Valve, I expect that the Steam Machine exists to provide an easy-to-set-up, console-like option that lets them move into the living room for people who don’t want to configure a PC themselves. If you can already use and configure a PC and have one, then that option is gonna work too.


then why not write modern software like how that was written?
Well, three reasons that come to mind:
First, because it takes more developer time to write efficient software, so some of what developers have done is use new hardware not to get better performance, but to make software cheaper to develop. If you want really extreme examples, read about the kind of insanity that went into trying to make video games for the first three generations of video game consoles or so, on extremely limited hardware. I’d say that in most cases, this is the dominant factor.
Second, because to a limited degree, the hardware has changed. For example, I was just talking with someone complaining that Counter-Strike 2 didn’t perform well on his system. Most systems today have many CPU cores, and heavyweight video games and some other CPU-intensive software will typically seek to take advantage of those. CS2 apparently only makes much use of one or two cores. Go back to 2005, and the ability to saturate more cores was much less useful.
Third, in some cases, functionality is present that you might not immediately appreciate. For example, when I get a higher-resolution display in 2025, text typically doesn’t become tiny; instead, it becomes sharper. In 2005, most text was rendered at fixed pixel dimensions. Go back earlier, and most text wasn’t antialiased, and go back further still, and fonts seen on screen were mostly just bitmap fonts, not vector. Those jumps generally made text rendering more compute-expensive, but also made it look nicer. And that’s for something as simple as drawing “hello world” on the screen.


I was gonna say that he might simply not have been around when Red Alert 2 came out, but
https://www.whitepages.com/name/Samuel-Sott-Axon/Los-Angeles-CA/Pl8a1drMk8b
40s Age Range
So he’s gotta be born no later than 1985.
https://en.wikipedia.org/wiki/Command_%26_Conquer:_Red_Alert_2
Release: NA: October 25, 2000
So he couldn’t have been younger than 15 at the game’s release (and could have been as old as 25).
That being said, that game came out a quarter-century ago, and there are people in the workforce who won’t have been born when it was released. Can’t just assume any more.


I mean, they did make a lot of money, but they also had an extremely high valuation.
https://www.macrotrends.net/stocks/charts/NVDA/nvidia/pe-ratio
NVIDIA PE ratio as of November 20, 2025 is 48.45.
Something like 20 is typical for a mature company. Tech companies have, in the past, often had higher ratios, but that’s based on an expectation of rapid growth, and expecting NVidia to grow dramatically from its current (already very high) valuation is asking a lot.
If NVidia were a small tech company that was doing well and clearly had a lot of market to expand into rapidly, that would be one thing.
I think that in general, the market has been pretty good to NVidia. Their share price is up 31.22% since the start of the year. 1,247% over the past five years.


There’s Mono. I don’t know what portion of .NET compatibility issues it addresses in 2025.


Thanks for adding it!


Meta’s chief AI scientist and Turing Award winner Yann LeCun plans to leave the company to launch his own startup focused on a different type of AI called “world models,” the Financial Times reported.
World models are hypothetical AI systems that some AI engineers expect to develop an internal “understanding” of the physical world by learning from video and spatial data rather than text alone.
Sounds reasonable.
That being said, I am willing to believe that an LLM could be part of an AGI. It might well be an efficient way to incorporate a lot of knowledge about the world. Wikipedia helps provide me with a lot of knowledge, for example, though I don’t have a direct brain link to it. It’s just that I don’t expect an AGI to be an LLM.
EDIT: Also, IIRC from past reading, Meta has separate groups aimed at near-term commercial products (and I can very much believe that there might be plenty of room for LLMs here) and at advanced AI. It’s not clear to me from the article whether he just wants more focus on advanced AI or whether he disagrees with an LLM focus in their advanced AI group.
I do think that if you’re a company building a lot of parallel compute capacity now, that to make a return on that, you need to take advantage of existing or quite near-future stuff, even if it’s not AGI. Doesn’t make sense to build a lot of compute capacity, then spend fifteen years banging on research before you have something to utilize that capacity.
https://datacentremagazine.com/news/why-is-meta-investing-600bn-in-ai-data-centres
Meta reveals US$600bn plan to build AI data centres, expand energy projects and fund local programmes through 2028
So Meta probably cannot only be doing AGI work.


uncensored
https://lemmyverse.net/communities?nsfw=true
reposts
If you’re browsing “All” instead of “Subscribed” (I recommend building a subscription list of communities that interest you), there’s a bot, @bot@lemmit.online, that mirrors posts from Reddit to communities on lemmit.online. You can either block the bot or block the instance if you don’t want that.


You can monitor instance downtime at:


I mean, it’s easy to check whether a given instance is using CloudFlare.
$ host lemmy.world|head -n1
lemmy.world has address 104.26.9.209
$ whois 104.26.9.209|grep ^NetName
NetName: CLOUDFLARENET
$
You can browse anonymously on any instance that permits doing so, so if you just want to browse during an outage, you can do that anywhere.
IMHO, having an account on a second Threadiverse instance isn’t necessarily a terrible idea, not just because of CloudFlare outages, but because instances do have outages for various reasons. I have an account on olio.cafe (PieFed, not on CloudFlare) and on lemmy.today (Lemmy, not on CloudFlare) because I wanted to try out PieFed, and I have fallen back to it to post when lemmy.today has issues.
That being said, I didn’t intentionally try to avoid CloudFlare. They’re used by a lot of major sites, and I don’t expect them to have a lot of downtime. Every Threadiverse instance has had downtime for some reason or another. I’ve had Internet outages, as well as electricity outages. Not all that common or usually an extended thing, but they happen.


Historically, it was conventional for a typical GUI application to show a “you have unsaved work” confirmation when you chose to quit, since otherwise quitting was a destructive action without confirmation.
Unless video games save on exit, you typically always have “unsaved work” in a video game, so I sort of understand where many video game devs are coming from if they’re trying to implement analogous behavior.