Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 17 Posts
  • 3.52K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • It doesn’t matter if you’re, say, Debian, because they’ll just put up some symbolic “not intended for use in state X” and then continue doing whatever they were doing, but if you’re Red Hat and actually selling something like Red Hat Enterprise Linux to companies in the state, stuff like this is actually a pain in the ass.

    And to reiterate a previous comment, the Democrats have a trifecta in both California and Colorado, and the legislation here is something that they are squarely to blame for. I’d really rather that they knock this kind of horseshit off so that I can go back to being upset with the Republican Party.



  • If it happens again and you have Magic Sysrq enabled, you can do Magic Sysrq-t, which may give you some idea of what the system is doing, since you’ll get stack traces. As long as the kernel can talk to the keyboard, it should be able to get that.

    https://en.wikipedia.org/wiki/Magic_sysrq

    You maybe can’t see anything on your monitor, but if the system is working enough to generate the stack traces and log them to the syslog on disk (like, your kernel filesystem and disk systems are still functional), you’ll be able to view them on reboot.

    If it can’t even do that, you might be able to set up a serial console and then, using another system running screen or minicom or something like that linked up to the serial port, issue Magic Sysrq to that and view it on that machine.

    Some systems have hardware watchdogs, where if a process can’t constantly ping the thing, the system will reboot. That doesn’t solve your problem, but it may mitigate it if you just want it to reboot if things wedge up. The watchdog package in Debian has some software to make use of this.
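
    For reference, the same task-state dump can be triggered without the keyboard by writing to /proc/sysrq-trigger. A minimal sketch, assuming a Linux system and root privileges for the trigger itself:

```python
# Sketch: check whether Magic SysRq is enabled and trigger the "t" dump
# programmatically (equivalent to Magic SysRq-t). Writing the trigger
# file requires root; the paths are the standard Linux procfs locations.
from pathlib import Path

SYSRQ_CTL = Path("/proc/sys/kernel/sysrq")   # bitmask; 1 = everything allowed
SYSRQ_TRIGGER = Path("/proc/sysrq-trigger")  # write a command letter here

def sysrq_mask() -> int:
    """Return the current kernel.sysrq setting."""
    return int(SYSRQ_CTL.read_text())

def dump_task_states() -> None:
    """Dump all task stack traces to the kernel log (dmesg/syslog)."""
    SYSRQ_TRIGGER.write_text("t")            # needs root

if __name__ == "__main__" and SYSRQ_CTL.exists():
    print("kernel.sysrq =", sysrq_mask())
```

    After a reboot you can then pull the traces back out of the persisted log with journalctl -k -b -1 or from /var/log/kern.log.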


  • I’m not sure that memory — and I’m speaking more-broadly than HBM, even — optimized for running neural nets on parallel compute hardware and memory optimized for conventional CPUs overlap all that much. I think that, setting aside the HBM question, if we long-term wind up with dedicated parallel compute hardware running neural nets, we may very well wind up with different sorts of memory optimized for different things.

    So, if you’re running neural nets, you have extremely predictable access patterns. Software could tell you what its next 10 GB of accesses to the neural net are going to be. That means that latency is basically a total non-factor for neural net memory, because the software can request it in huge batches and do other things in the meantime.

    That’s not the case for a lot of the memory used for, say, playing video games. Part of the reason that PCs (as opposed to servers) don’t use registered memory (which makes it easier to handle more memory), aside from hardware vendors using price discrimination, is that it increases latency a little bit, which is bad when you’re running software where you don’t know what memory you’re going to need next and have a critical path that relies on that memory.

    On the other hand, parallel compute hardware doing neural nets is extremely sensitive to bandwidth. It wants as much as it can possibly get, and that’s where the bottleneck is today. Back on your home computer, a lot of software is oriented around doing operations in serial, and that’s more prone to not saturate the memory bus.
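
    A back-of-the-envelope sketch of why bandwidth is the bottleneck: during inference, every weight has to be streamed from memory for each generated token, so the memory bus puts a hard ceiling on token rate. The model size and bandwidth figures below are illustrative assumptions, not measurements:

```python
# Rough memory-bound ceiling on token generation rate: each token
# requires reading every weight once, so tokens/s <= bandwidth / model bytes.
params = 70e9            # hypothetical 70B-parameter model
bytes_per_param = 1      # assume 8-bit quantized weights
model_bytes = params * bytes_per_param

bandwidths = {
    "HBM3 (datacenter GPU, ~3.35 TB/s)": 3.35e12,
    "dual-channel DDR5 (desktop, ~64 GB/s)": 64e9,
}

for name, bw in bandwidths.items():
    print(f"{name}: ceiling ~{bw / model_bytes:.1f} tokens/s")
```

    The latency of any individual read barely matters here; what matters is how fast the whole model can be streamed through, which is exactly the regime HBM targets.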

    I’d bet that neural net parallel compute hardware does way more reading than writing of memory, because edge weights don’t change at runtime (on current models! That could change!).

    searches

    Yeah.

    https://arxiv.org/html/2501.09605v1

    AI clusters today are one of the major uses of High Bandwidth Memory (HBM). However, HBM is suboptimal for AI workloads for several reasons. Analysis shows HBM is overprovisioned on write performance, but underprovisioned on density and read bandwidth, and also has significant energy per bit overheads. It is also expensive, with lower yield than DRAM due to manufacturing complexity.

    But there are probably a lot of workloads where your CPU wants to do a ton of writes.

    I’d bet that cache coherency isn’t a huge issue for neural net parallel compute hardware, because it’s going to be a while until any value computed by one part of the hardware is needed again, at least until we reach the point where we can parallel-compute an entire layer in one go (which…I suppose we could theoretically do. Someone just posted something that I commented on about an ASIC with Llama edge weights hard-coded into the silicon, which is probably a step in that direction). But with CPUs, a big problem is making sure that a value written by one CPU core reaches another CPU core, so that the second doesn’t use a stale value. That’s gonna impact the kind of memory controller design that’s optimal.


  • I think that it’s fair to say that AI is not the only application for that hardware, but I also think that carpelbridgesyndrome’s point was that they aren’t really well-suited to replace conventional servers, where all local computing just moves to a server, which is the sort of thing that ouRKaoS was worried about. Maybe for some very specialized use cases, like cloud gaming in some genres. I’d also add that the physical buildings have way more cooling capacity than is necessary for conventional servers, so they probably wouldn’t be the most-cost-effective approach even if you replaced the computing hardware in the buildings.



  • So, Rhynoplaz was after “fancy” stuff, but okay, I suppose that there’s a use case for a MacPaint/MS Paint analog, with a low barrier to just get going.

    Maybe Drawing for GNOME or KolourPaint for KDE? It doesn’t look like either of those support layers. Drawing looks like it lets one drag a rectangular selection to rotate to non-90-degree increments. KolourPaint doesn’t, but it lets one choose to rotate and input an arbitrary number of degrees.

    I haven’t used either, myself, other than just to check now. They’re both packaged in Debian, so they’re probably also gonna be packaged in any Debian-family distro.

    EDIT:

    https://maoschanz.github.io/drawing/

    https://apps.kde.org/kolourpaint/

    Both describe themselves as being “simple” paint programs.




  • I sat down to log into a couple MUDs yesterday. Didn’t stick with it — the combat still hasn’t evolved enough for me — but you could play that on anything capable of displaying text on a screen and running telnet.

    It looks like some crazy person has been doing a TCP/IP stack for the 128K Mac, the first Macintosh ever released, from 1984, as well as a telnet client. So you can technically lug a 42-year-old computer out of an antique store and play currently-being-developed Internet games…though you won’t be getting color, since that came a while down the road.

    If you get some device that can expose a serial console on some system to TCP/IP — not sure how far back you need to go for that — you could technically play it on a teletype from the 1930s.

    The “some device” will have to be later, though, so that’s maybe kinda cheaty.

    Technically, Debian Linux has been run on an Intel CPU from 1971, but it isn’t fast enough to be a practical host for such a teletype in that environment. Even stripped down forms of Linux are going to be “too big” to be such a host.

    It does sound like the Commodore 64 has a package, Novaterm 10, that runs TCP/IP and telnet, but I don’t know whether it can output to the C64’s serial port rather than the video display; if not, you could play locally on one of those, but probably not drive a teletype. That being said, it probably shows that it’s technically possible, since I’m sure that if it can run a virtual terminal program, it has the resources to just dump the text to an actual terminal via a serial port. I’d guess that there’s probably some system out there circa 1980 that someone has built that can both run a TCP/IP stack and expose a serial console to a 1930s teletype to play current Internet games.
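
    The heavy lifting in a client like that is mostly the telnet option negotiation; the terminal I/O itself is just bytes. A minimal sketch of the negotiation handling such a client needs, refusing every option the way a dumb client would (the function and constant names are my own, not from any of the packages mentioned):

```python
# Strip IAC (0xFF) command sequences from a telnet server byte stream,
# refusing every offered option (WILL -> DONT, DO -> WONT), per RFC 854/855.
IAC, DONT, DO, WONT, WILL = 255, 254, 253, 252, 251

def strip_telnet(data: bytes):
    """Return (printable_text, replies_to_send) for a chunk of server bytes."""
    text, replies = bytearray(), bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if b == IAC and i + 1 < len(data):
            cmd = data[i + 1]
            if cmd in (WILL, WONT, DO, DONT) and i + 2 < len(data):
                opt = data[i + 2]
                if cmd == WILL:
                    replies += bytes([IAC, DONT, opt])  # refuse server option
                elif cmd == DO:
                    replies += bytes([IAC, WONT, opt])  # refuse client option
                i += 3
                continue
            i += 2  # some other two-byte IAC command; skip it
            continue
        text.append(b)
        i += 1
    return bytes(text), bytes(replies)
```

    Feeding it the server’s bytes yields printable text plus the refusal bytes to send back; everything else is just shuttling bytes between the socket and the screen (or serial port).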


  • Multiple tabs

    Emacs has various ways to display tabs, but I don’t use tabs in emacs, because they don’t scale well to, say, dozens of tabs; normally, each additional buffer I have has no visual indication onscreen that it exists. I use a couple of other buffer-switching software packages.

    keeps track of lines

    Defaults to being shown in the minibuffer.

    One of the most invaluable parts of it is that you can close it, or an update happens, or maybe your PC gets knocked offline. You can come back to Notepad++, open it, and everything will be retained.

    This is called desktop-save-mode in emacs. C-h f desktop-save-mode will show documentation. You can have a single global saved instance, or multiple concurrent instances of emacs saving desktop state for separate projects.



  • I agree with you that it’s a good game, and it’s very playable on an older computer, but it’s actually not the lightest-weight game from a CPU standpoint. I mean, realistically, that thing should be able to get by with very little CPU usage and essentially none if you’re not pushing buttons, but it actually uses a fair bit of CPU time when you’re just sitting there staring at the screen. It’s actually kind of bugged me, because while it’s irrelevant on a desktop, it really consumes more battery on a not-plugged-in-to-wall-power laptop than is necessary, and it’d otherwise be such a phenomenal game for disconnected laptop use.

    Run top and just leave the game sitting there: on my laptop it keeps an average of multiple cores hot, drawing at a 240 Hz vsync rate. And the world state isn’t changing — the game is turn-based.

    You can constrain the framerate down to 10 FPS — and that significantly reduces CPU usage, down to an average of 37% of a core, on my system, at the cost of limiting the speed at which the game runs autoexplore, since it will always draw at least one frame in a given state, and at the cost of making the game feel sluggish and unresponsive.

    And you’ll get that CPU usage even if you turn off all the graphical “glitz” and have it just showing ASCII.

    My guess is that it could probably benefit from (a) having a lightweight visual “display” thread that doesn’t do anything expensive, just updates any animations and draws them to the screen, and if there are no animations, doesn’t run a refresh at all, and (b) having a separate “heavyweight” thread for game logic that only runs when the world state has changed (autoexplore, automove, resting, or the player pressing a key).
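
    A toy sketch of that split, with the display thread sleeping until the logic thread marks the world dirty (my own illustration, not DCSS’s actual architecture):

```python
import queue
import threading

class Game:
    """Toy split: heavyweight logic thread, display that only redraws on change."""
    def __init__(self):
        self.events: "queue.Queue[str]" = queue.Queue()
        self.dirty = threading.Event()   # set when the world state changes
        self.running = True
        self.frames = 0                  # redraws performed
        self.turns = 0                   # world-state updates

    def logic(self):
        # Heavyweight: blocks until the player (or autoexplore) does something.
        while self.running:
            key = self.events.get()
            if key == "quit":
                self.running = False
            else:
                self.turns += 1          # advance the turn-based world
            self.dirty.set()             # wake the display thread

    def display(self):
        # Lightweight: with no animations, zero CPU between state changes.
        while self.running:
            if self.dirty.wait(timeout=0.1):
                self.dirty.clear()
                self.frames += 1         # stand-in for a real redraw

if __name__ == "__main__":
    g = Game()
    threads = [threading.Thread(target=g.logic), threading.Thread(target=g.display)]
    for t in threads:
        t.start()
    for key in ["h", "j", "k", "quit"]:  # three moves, then exit
        g.events.put(key)
    for t in threads:
        t.join()
    print(f"{g.turns} turns, {g.frames} redraws")
```

    The point of the design is that redraws coalesce: many rapid world updates (autoexplore) cost at most a few redraws, and an idle game costs none.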

    Cataclysm: Dark Days Ahead, which is a similar game (internally a turn-based game that’s basically generating an ASCII grid that can provide some light graphical glitz and tiles) also consumes a lot of CPU time when idle.

    If you want another game of a similar sort that uses a surprising amount of CPU time, there’s Dwarf Fortress. That being said, Dwarf Fortress is real time, so one can’t beat it up as much for consuming CPU time while the player is idle.


  • I don’t know about “all that you’ll need to use”, and this might arguably be considered cheating, but I’d take emacs. I think that it’s safe to say that there isn’t another software package with the same degree of coverage of functionality. I use it for doing statistics notepad work, as a word processor, as a spreadsheet, as an email client, could use it as a web browser if necessary, as a version control client, for interactive diff merging, and can use it as an LLM chat client, IRC client, text editor, IDE, orthodox-file-manager-style file manager, media player frontend, agenda manager, outliner, etc. If I run M-x list-packages on my copy to run the package manager, it looks like I have 6,794 emacs software packages available in it.

    Unless you’re going to take a broader sense of “piece of software” that would let, say, a Linux distro be taken, I think that it’s pretty hard to compete with.

    EDIT: Maybe in the present-day world, you could manage with a Web browser, if you treat it as a frontend to essentially all SaaS software and count that software as being bundled with the browser. I guess you could argue that that might be broader, and you could probably function with basically nothing other than a Web browser on a thin client and get by.

    EDIT2: I guess you could also make an argument that the kernel is more-essential, because without that, nothing else can run, but I assume that you’re basically treating the kernel as a given and just asking about userspace software.



  • My guess — without trying to dig up statistics — is that the single component most-likely to fail in an old PC is gonna be rotational hard drives. Virtually all of my rotational drives have eventually died, aside from a few that were just so small and taking up space where I could mount other things that I no longer bothered using them.

    I’ve seen fans die (not necessarily completely wedge up, but have the bearings go and become increasingly-obnoxious in sound).

    And those are basically the only mechanical components in a computer.

    Behind that, there’s input devices with keyswitches wearing out, but unless you’re using a laptop, replacing the input device is just unplugging the old one and plugging in a new one.

    I’m not gonna say that motherboards don’t fail, but I can’t immediately think of something that would die. Decades back, I remember that there was a spate of bad capacitors that made their way to a bunch of motherboards and would eventually fail, but I haven’t seen anything like that recently.

    searches

    Looks like it was 1999–2007:

    https://en.wikipedia.org/wiki/Capacitor_plague

    The capacitor plague was a problem related to a higher-than-expected failure rate of non-solid aluminium electrolytic capacitors between 1999 and 2007, especially those from some Taiwanese manufacturers,[1][2] due to faulty electrolyte composition that caused corrosion accompanied by gas generation; this often resulted in rupturing of the case of the capacitor from the build-up of pressure.

    High failure rates occurred in many well-known brands of electronics, and were particularly evident in motherboards, video cards, and power supplies of personal computers.

    A 2003 article in The Independent claimed that the cause of the faulty capacitors was due to a mis-copied formula. In 2001, a scientist working in the Rubycon Corporation in Japan stole a mis-copied formula for capacitors’ electrolytes. He then took the faulty formula to the Luminous Town Electric company in China, where he had previously been employed. In the same year, the scientist’s staff left China, stealing again the mis-copied formula and moving to Taiwan, where they created their own company, producing capacitors and propagating even more of this faulty formula of capacitor electrolytes.[3]

    Those would probably be from the DDR/DDR2 era, though.

    I do think that it’s probably possible that some motherboard components might age out. Like, people may want to use newer versions of radio stuff, like WiFi or Bluetooth. You can maybe do that via USB, but the on-motherboard stuff might become more of a liability than the CPU or something.

    I don’t think that I’ve ever personally had other computer components just up and fail other than the 13th and 14th gen Intel CPUs that internally destroyed themselves. It’s always been non-solid-state stuff, things with moving parts, that fail for me. I mean, I’ve damaged solid-state components myself via things that I’ve done, but it’s always damage that I incurred.

    thinks

    Oh, CMOS batteries eventually fail, but they’re usually — not always — mounted on motherboards with holders that permit replacement. I’ve had to replace those.

    I did have a headphones amplifier that was attached to my computer where some solder joints got a bad connection and I had to open it and resolder it, but I don’t know if I’d call that a “computer component” just because it was plugged into a computer.

    thinks more

    I did have the power supply used for a fluorescent backlight in a laptop display start to fail once. But, honestly, my experience has been that unless you actively go in and damage something, most solid state parts will just keep on trucking.



  • I also kind of think that the strongest argument for console gaming is competitive multiplayer, not single player.

    The fact that the consoles are closed and locked down inherently provides resistance to cheating and such, where the open PC world tries to (poorly) replicate a closed environment via kernel anti-cheat stuff. The console world having (well, more-or-less) one option when it comes to hardware means that everyone playing against each other has a fairly-level playing field — same input hardware, and people don’t get an edge from having fancier rendering hardware.

    For single-player gaming, those console strengths become weaknesses — for single-player games, it’s preferable for the player to be able to do things like freely mod games, upgrade hardware to get fancier graphics, provide a lot of options as to what input stuff to use, etc. It doesn’t hurt anyone else for me to have the game running however I want, so I should be able to do so. On the PC, a player gets to enjoy all that.

    If I were a console vendor and I were worried about the PC as a competing platform, I’d think that I’d try to emphasize my competitive multiplayer games, not single-player games.


  • tal@lemmy.today to Ask Lemmy@lemmy.world · Lemmy Client Question (edited 1 day ago)

    Is Summit open source?

    searches

    Huh.

    It has a GitHub project, but that project has no source in it. It’s hosting the compiled releases, but no source.

    https://github.com/idunnololz/summit-for-lemmy

    I don’t see anything actually saying that it is open source.

    EDIT: Each release does appear to be bundled with a source tarball. I guess that the author just isn’t…actually using the git functionality of GitHub to version-control it. shrugs

    EDIT2: Nope. That source tarball just contains an essentially-empty copy of a snapshot of the GitHub git repo. It’s not a “real” source tarball.


  • The real problem with this sort of thing is that there’s no legal way to avoid it. If you’re operating a motor vehicle on public roads, you need to have a plate visible. You can’t obscure it.

    The laws requiring that visibility were made in an era when it wasn’t possible for someone like Flock to enable anyone who can aim a camera at a road to mass-log, aggregate, and data-mine the movement data it provides.

    The only real technical solution would be to back out the laws requiring license plates to be visible (and it wouldn’t be perfect, since Flock will still look for identifying oddities on a vehicle and try to log that too, like collision damage). But if you do that, then you lose an important tool for dealing with motor vehicle theft and finding vehicles involved in crimes.

    And there aren’t restrictions on selling or doing whatever companies want with the data. Or with data that they get from facial recognition/gait data in the future, or that sort of thing.

    My own personal preference would be for ALPRs to be generally illegal, outside of maybe some areas where logging is normally done by the government, like at border crossings. That’d be hard to enforce — someone could always run a rogue ALPR and it’d be hard to find — but it’d probably keep the scale down and avoid the mass deployment that makes the surveillance omnipresent.

    And I think that it’s worth remembering that even if you are comfortable with, say, Flock’s policy on dealing with data, there’s no guarantee that they aren’t compromised — a lot of very sensitive databases have been compromised in the past.

    In the past, technical limitations permitted a certain level of privacy in society. It just wasn’t technically possible to build mass surveillance at scale, so it didn’t happen. But as those technical barriers that some of us just took for granted go away, I think it’s worth asking whether we want to engineer in legislative barriers, to ensure that a certain amount of privacy is provided to members of society.