Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 20 Posts
  • 3.7K Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • I use a 128GB Framework Desktop. Back when I got it, it was $2,500 with 8TB of SSD storage, but the RAM shortage has driven prices up to substantially more. That system’s interesting in that you can tell Linux to use essentially all of the memory as video memory; it has an APU with unified memory, so the GPU can access all that memory.

    That’ll get you 70B models (Llama 3-based stuff, say) at Q6_K with a 128K context window, which is the model max. That’s okay for chatbot-like operation, but you won’t want to run code generation with that.
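To put rough numbers on why that fits in 128GB, here’s a back-of-envelope sketch. The ~6.56 bits/weight figure for Q6_K and the Llama 3 70B shape (80 layers, 8 KV heads, head dim 128) are approximations, and a full-context fp16 KV cache is the worst case:

```python
# Rough memory math for a 70B model at Q6_K with a full 128K-token KV cache.
params = 70e9
bits_per_weight = 6.56                       # approximate Q6_K average
weights_gb = params * bits_per_weight / 8 / 1e9

# KV cache: 2 tensors (K and V) per layer, fp16 (2 bytes per element).
layers, kv_heads, head_dim, context = 80, 8, 128, 131072
kv_gb = 2 * layers * kv_heads * head_dim * context * 2 / 1e9

print(f"weights ~{weights_gb:.0f} GB + KV cache ~{kv_gb:.0f} GB "
      f"= ~{weights_gb + kv_gb:.0f} GB")
```

Call it roughly 100GB total, which is why a 128GB unified-memory box can hold it and a 24GB card can’t; llama.cpp can also quantize the KV cache to shrink that second term.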

    For some tasks, you may be better off using a higher-bandwidth-but-lower-memory video card and an MoE (mixture-of-experts) model; those only run a few relevant experts per token, so not all of the weights have to be active in video memory at once. I can’t suggest much there, as I’ve spent less time with that.

    If you don’t care about speed — you probably do — you can run just about anything with llama.cpp using the CPU and main memory, as long as you have enough memory. That might be useful if you just want to evaluate the quality of a given model’s output and get a feel for what you can get out of it before buying hardware.

    You might want to ask on !localllama@sh.itjust.works, as there’ll be more people familiar there (though I’m not on there myself).

    EDIT: I also have a 24GB Radeon 7900 XTX, but for LLM stuff like llama.cpp, I found the lack of memory too constraining. It does have higher memory bandwidth, so for models that fit, it’s faster than the Framework Desktop. In my experience, GPUs were more interesting for image diffusion models like Stable Diffusion — most open-weight image diffusion models are less memory-hungry than LLM stuff. Though if you want to do Flux v2, I wasn’t able to fit it on that card. I could run it on the Framework Desktop, but at the resolutions I wanted, the poor ol’ Framework took about 6 or 7 minutes to generate an image.

    EDIT2: I use all AMD hardware, though I agree with @anamethatisnt@sopuli.xyz that Nvidia hardware is going to be easier to get working; a lot of the AMD software is much more bleeding-edge, as Nvidia got on all this earlier. That being said, Nvidia also charges a premium because of that. I understand that the DGX Spark is something of an Nvidia analog to the Framework Desktop and similar AI Max-based systems — it also has unified memory — but you’ll pay for it; something like $4k.


  • Okay. It’s going to be a little harder to diagnose it since the problem isn’t immediately visible, but you’ve got all the Linux toolset there, so that’s helpful.

    Is the DNS server you’re trying to use from the LAN machines running on the OpenWrt machine, or off somewhere on the Internet?

    EDIT: Or on the LAN, I guess.

    EDIT2: Oh, you answered that elsewhere.

    > I am using my router’s DNS, and it’s reachable from my laptop.

    Have you tried doing a DNS lookup from the router (pinging a host by name, say) when you were having the problems?

    If so and it didn’t work, that’d suggest that the problem is the upstream DNS server. If that’s the problem, as IsoKiero suggests, you might set the OpenWrt box to use a different DNS server.

    If so, and it worked, that’d suggest that the issue is in the OpenWrt host’s own DNS server serving names to the LAN. It sounds like OpenWrt uses dnsmasq by default.

    If not…that’d probably be what I’d try next time the issue comes up.
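One way to split those cases is to query the router’s resolver and a known-good upstream directly, skipping the system resolver and its cache entirely. Here’s a minimal hand-rolled DNS query in Python (192.168.1.1 below is just a placeholder for the OpenWrt box’s LAN address):

```python
import socket
import struct

def build_query(name, qid=0x1234):
    """Build a minimal DNS query packet for an A record (RFC 1035 format)."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # RD=1, 1 question
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)          # QTYPE=A, QCLASS=IN

def query(server, name, timeout=2.0):
    """Send the query over UDP to `server`; return the raw reply or raise on timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_query(name), (server, 53))
        reply, _ = s.recvfrom(512)
        return reply
```

If `query("192.168.1.1", "example.com")` times out while `query("9.9.9.9", "example.com")` answers, dnsmasq on the router is the suspect; if both fail, it’s connectivity or the upstream.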




  • Okay. So, I don’t know exactly what you’re doing to test that, but I’m going to assume, say, trying to go somewhere in a Web browser.

    First off, I have occasionally seen problems myself where consumer broadband routers that have been on for a long time wind up briefly becoming unresponsive. Probably some sort of memory leak or something. So if you haven’t rebooted the thing and seen whether all your problems magically stop showing up, I’d probably try that. Quick and easy.

    Okay. Say that doesn’t do it.

    When you confirm that the router can reach the Internet during this period of outage, how are you doing that? Going to a management Web UI from a wired-LAN device and trying to ping some host on the Internet?
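For the “can it reach the Internet” half, it helps to test raw-IP reachability and name resolution separately, since a DNS failure can masquerade as a dead connection. A small sketch; which IP and port you probe is up to you:

```python
import socket

def reachable_by_ip(ip, port, timeout=2.0):
    """True if a TCP connection to ip:port succeeds -- no DNS lookup involved."""
    try:
        socket.create_connection((ip, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def resolves(name):
    """True if the system resolver can look up `name`."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False
```

During an outage, `reachable_by_ip("1.1.1.1", 443)` returning True while `resolves("example.com")` returns False points at DNS rather than the uplink.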






  • You can probably do it, but I’m not sure how many users you’d get, as I think that it’d be a less-usable interface.

    • You’d need some way to handle voting; that doesn’t intrinsically show up. Maybe you could do it via expecting users to send specially-structured emails.

    • If by “fediverse” you’re specifically talking about the Threadiverse — Lemmy, PieFed, and Mbin — then you’re also going to have to deal with the lack of a way to respond to a given comment (unless you intend to forward every comment on every post a user has subscribed to into their mailbox, and then only let them reply to those).

    • Email isn’t natively encrypted, so if that’s a concern and you want to deal with that, you’d need something like a PGP key that users could register, I guess.

    • Email clients don’t, as far as I know — I haven’t gone looking — natively have Markdown support, so either you need to throw out formatting or have some sort of mapping between Markdown and HTML. I don’t know if something like pandoc would be sufficient for that.

    • No native “report” functionality. Maybe you could do it via expecting users to send specially-structured emails.
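To illustrate the “specially-structured emails” idea for votes, the receiving side could be as simple as this sketch (the VOTE subject-line convention here is entirely made up):

```python
import email
import re

# Hypothetical convention: the subject line carries "VOTE +1 <post-id>".
VOTE_RE = re.compile(r"^VOTE\s+([+-]1)\s+(\S+)$")

def parse_vote(raw_message):
    """Extract a vote from a raw RFC 822 message, or return None if it isn't one."""
    msg = email.message_from_string(raw_message)
    m = VOTE_RE.match(msg.get("Subject", "").strip())
    if m is None:
        return None
    return {"from": msg.get("From"), "vote": int(m.group(1)), "post": m.group(2)}
```

A real gateway would still have to authenticate the sender (DKIM/SPF at minimum), since From: headers are trivially forged.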

    If what you want is to take advantage of existing native clients, my suggestion is that you’d probably get more mileage out of a bidirectional Usenet-to-Threadiverse gateway than an email-to-Threadiverse gateway. Usenet has a much closer functional mapping than email, and you could do it a lot more efficiently in terms of bandwidth. Your “Usenet group list” would be a set of community@instance entries; you’d map posts to top-level messages and comments to responses to those.

    The major downside there is that I don’t think that any Usenet clients have native Markdown support and you still don’t have voting or native reporting functionality.

    The only obvious benefit I can think of from either Usenet or email is that there are clients for both that support offline functionality, and I don’t know of any Threadiverse-native clients that do. I think the major point I’d raise would be “you could probably do it, but…what do you gain that outweighs the drawbacks?” Like, I think that you’d probably get more good out of just picking your favorite native Threadiverse client and adding code to that (or starting a new one, if you absolutely can’t stand any of the existing ones).



    A quarter-century back…

    https://www.linuxjournal.com/article/3676

    > World domination. It’s a powerful and faintly sinister phrase, one you might imagine some B-movie mad scientist muttering as he puts the finishing touches on some infernal machine amidst a clutter of bubbling retorts and crackling Tesla coils. It is not a phrase you would, offhand, associate with cuddly penguins.
    >
    > When Linus says he aims at world domination, it seems as if he wants you to laugh at the absurd image of a gaggle of scruffy hackers overrunning the computing world like some invading barbarian horde—but he’s also inviting you to join the horde. Beneath the deadpan self-mockery is dead seriousness—and beneath the dead seriousness, more deadpan self-mockery. And on and on, in a spiraling regress so many layers deep that even Linus probably doesn’t know where it ends, if it ends at all.
    >
    > The way Linux hackers use the phrase “world domination” is what science-fiction fans call a “ha-ha-only-serious”. Some visions are so audacious, they can be expressed only as ironic jokes, lest the speaker be accused of pomposity or megalomania.

    Linux’s share of PC gaming is still only a bit over 5%, but that’s one of the last remaining computing environments where Linux hasn’t become the dominant OS choice; it’s already taken over embedded systems and servers and suchlike. Things have changed a lot.





    I mean, it’s probably a good idea to have the requirements higher, given that someone may want to use it with typical out-of-the-box desktop settings, which isn’t unreasonable. But while I haven’t looked at the Ubuntu installer for a while, I strongly suspect that it permits a minimal install, and all the software in the Debian family is also there, so you can do a lightweight desktop based on Ubuntu.

    My current desktop environment has sway, blueman-applet, waybar, and swaync-client running. I’m sure that you could replicate the same thing on an Ubuntu box. Sway is the big one there, at an RSS of 189MB, 148MB of which is shared (probably essentially all shared libraries). That’s the basic “desktop graphical environment” memory cost.

    I use foot as a terminal (not in daemon mode, which would shrink memory further, though be less amenable to use of multiple cores). That presently has an RSS of 40MB, 33MB of which is shared. It’s running tmux, at 16MB RSS, 4MB of which is shared. GNU screen, which I’ve also used and could get by on, would be lighter, but it has an annoying patch that makes it take a moment to terminate.

    Almost the only other graphical app I ever have active is Firefox, which is presently at an RSS of 887MB, of which 315MB is shared. That can change, based on what Firefox has open, but I think that use of a web browser is pretty much the norm everywhere, and if anything, the Firefox family is probably on the lighter side in 2026 compared to the main alternative of the Chrome family.

    I’m pretty sure that one could run that same setup pretty comfortably on hardware from the late 2000s, especially if you have SSD swap available to handle any spikes in memory usage. Firefox would feel sluggish, but if you’re talking memory usage… *shrugs* I used an i3/Xorg-based variant of that setup on an eeePC with 2GB of memory, mostly as a web-browser-plus-terminal thin client to a “real machine,” just to see if I could, and did that for an extended period of time. The browser could feel sluggish on some websites, but other than that… *shrugs*
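If you want to collect those RSS/shared numbers yourself, they come straight out of /proc on Linux; this sketch reads the counters I’m quoting (shared is roughly RssFile + RssShmem on kernels new enough to report the split):

```python
def mem_fields(pid="self"):
    """Read resident-memory counters (in kB) from /proc/<pid>/status (Linux)."""
    fields = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in ("VmRSS", "RssAnon", "RssFile", "RssShmem"):
                fields[key] = int(rest.split()[0])  # value is in kB
    return fields
```

Pass a PID string to inspect another process; RssAnon is the process’s own private memory, while the file-backed and shmem parts are what show up as “shared.”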

    Now, if you want to be, I don’t know, playing some big 3D video game, then that is going to crank up the requirements on hardware. But that’s going to be imposed by the game. It’s not overhead from your basic graphical environment.

    I’d also be pretty confident that you could replicate that setup using the same packages on any Debian-family system, and probably on pretty much any major Linux distro with a bit of tweaking to the installed packages.


  • I assume so. Here’s a video of someone floating a boat (apparently in air) in it, and then sinking it by pouring cups of sulfur hexafluoride over it:

    https://www.youtube.com/watch?v=ee2NaYRnRGo

    If it avoids diffusing into air to the degree that you can scoop it up and pour it, I’d imagine that it’d pour out of one’s lungs the same way.

    But if you just want to get most of it out of your lungs — like, you’ve been breathing it and don’t want to asphyxiate — I imagine that exhaling all the air you can and inhaling air and doing that a few times would probably do a pretty good job, the way the Mythbusters video above did with the helium.



  • I’d guess that most industrial users of helium don’t consume it and could theoretically recover it from whatever process it’s involved in rather than just releasing it.

    EDIT: Hard drives are an exception, as apparently some ship helium-filled; there, the helium actually is consumed during manufacture.

    EDIT2: I’d also point out that in the long run, we probably do have to be more conservative with our helium supply. We get it from pockets in the earth, and it’s actually not all that common; it just happens that we go to a lot of effort to extract natural gas, and that sometimes brings up helium with it, so we get that supply. But because helium isn’t reactive, it doesn’t bond to anything — it stays in gas form. When we release it, it rises to near the top of our atmosphere and eventually gets lost to the solar wind. Many users who today just release it — because why not, the natural gas people will be providing more, and it’s cheaper that way — will probably need to capture what they’re using if we want helium to continue to be available.