Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 13 Posts
  • 3.27K Comments
Joined 2 years ago
Cake day: October 4th, 2023





  • I’d call good and evil human concepts, along with the moral or ethical codes and ideas that define them. They aren’t going to exist in a vacuum: what is called good and what is called evil depends on the social norms of a time and place.

    Is charging interest to lend money evil? Is homosexuality evil? Is not taking on your brother’s widow as a second wife and impregnating her evil? Those are all things that would have been considered wrong by different cultures.

    So I’d say that any question about “good” or “evil” kind of requires asking “good in the eyes of whom” or “evil in the eyes of whom”.

    If you want to ask “did evil exist in the universe 4 billion years ago”, I’d probably say “we don’t have evidence that life existed in the universe 4 billion years ago, and I think that most conventional meanings of good or evil entail some kind of beings with a thought process being involved.”

    If you want to ask “were the first humans good and then became evil”, I’d probably say that depends a great deal on your moral code, but I imagine that early humans probably violated present-day social norms very substantially.

    If what you’re really working towards is something like the problem of evil:

    https://en.wikipedia.org/wiki/Problem_of_evil

    The problem of evil, also known as the problem of suffering, is the philosophical question of how to reconcile the existence of evil and suffering with an omnipotent, omnibenevolent, and omniscient God.[1][2][3][4]

    The problem of evil possibly originates from the Greek philosopher Epicurus (341–270 BCE).[38] Hume summarizes Epicurus’s version of the problem as follows:

    “Is [god] willing to prevent evil, but not able? then is he impotent. Is he able, but not willing? then is he malevolent. Is he both able and willing? whence then is evil?”[39][40]

    So if what you’re asking is “how could good give birth to evil”, I think that I’d probably say that I wouldn’t call humans or the universe at large “purely good” for any extended period of time in any conventional sense of the word. Maybe in some very limited, narrow, technical sense: if you decide that only humans, or something capable of that level of thought, can engage in actions that we’d call good and evil, and if the first second of activity of the first being that qualified as human happened to be something that we’d call “good”, then okay. But I assume that that’s not what you’re thinking of.

    I think that you had pre-existing human behavior at one point in the past, and ethical systems that developed later which might be used to classify that behavior, and not in any consistent way.


  • Maybe check with the tenants and if it’s not theirs, look and see what’s on it.

    I can imagine two scenarios:

    • Whoever put it there was hoping to record someone else. I think that @PonyOfWar@pawb.social brings up a good point: if the battery life is as limited as Pony believes it to be, that seems like it’d seriously limit what one could hope to get. If someone did want more recording time, it’d be really easy to hook the recorder up to a USB powerbank, let it run for far longer, and I assume hide it better, which seems like it’d kinda argue against that. If OP is the landlord, maybe the tenants were curious as to what he’d say when he was near the box, if they knew that he was going to be in the house. Or maybe it was someone else who wanted to record the tenants, especially if the box is in a living area.

    • Whoever put it there was using it in the way one traditionally uses a voice recorder, to take notes hands-free. If that’s the case, I imagine that it may start with snippets from whoever put it there and forgot it. Like, say that you’re trying to figure out what the switches in the breaker box are linked to. I will admit that it seems odd, if the battery life is so short, that someone would have left it and OP happened to find it before the battery ran out. But I don’t know why OP was looking at the box in the first place. If it was because there was some electrical problem and OP was coming out to look at it, it could be a tenant trying to look into the situation themselves.

      I’d note down something like that in text, but there are people who prefer to use devices like that or voice recorder apps on phones, and if it’s voice-activated, it’s hands-free, unlike jotting down text.


  • [continued from parent]

    The decline of spam

    A number of forums ran into problems with spam at various points. Usenet particularly had problems with it. I rarely see it today, at least not in an obvious form. I think that the decline is in part due to something that many users here often complain about — the centralization of social media. When there were many small sites dedicated to, say, bass fishing or golfing or whatever, admins had limited resources. But on Facebook or Reddit or whatnot, the anti-spam resources are basically pooled at the site level. Also, the site admins have visibility into activity spanning the entire site. Instead of writing a bot to spam, say, forum system X and then hitting each of many different sites using that forum software, one has to spam many different subreddits on Reddit, say, and that’s a lot more visible to someone like the Reddit staff, who can see all of it.


  • There have been a number of major changes that I’ve noticed over my time. Some good, some bad.

    The geographic scale has increased

    Discussion has become international, global. The Internet was not used by the general population in the US in the 1980s and 1990s. Electronic forums tended to be local, on things like BBSes, in an era when local calls and long-distance calls had very different pricing. They’d more-often deal with matters of local interest, whereas that’s less-likely today. In the 2000s, uptake of Internet-based forums was still limited, even as Internet use grew.

    The average level of technology knowledge among users has decreased

    Until maybe the late 1990s or so, I’d say that personal computer ownership was somewhat-unusual. Certainly many older people didn’t own and use a personal computer. Many people who did were hobbyists or worked in technology-related fields.

    I think that relative to most non-technology-specific platforms, the Threadiverse in 2026 is something of a partial throwback here, probably just because (a) there’s a bit of a technical bar to understanding and getting started with it that acts as a filter and (b) some people who use it are into open-source.

    Also, the level of knowledge that formed a barrier to access has come down, as software has become much easier to use and requires less configuration. In the 1980s, it wouldn’t have been unexpected to need to manually set Hayes modem configuration strings. On the Mac, even obtaining executable software from the Internet was a major hurdle; Macs didn’t ship with software capable of downloading a file and then converting it into an executable, so one had to bootstrap the process by obtaining, offline, software capable of doing so. Setting up a smartphone for Internet access in 2026 mostly involves turning it on and plonking in a credit card number; the user experience doesn’t involve seeing terms like “MTU”, “SLIP”, “PPP”, or “netmask”.

    The level of wealth as a proportion of surrounding society among users has dropped

    One thing that I think a number of people don’t appreciate in the 2020s is how staggeringly much more affordable telecommunications and computing devices have become. I’d say that the personal computer era in the US really kicked off in the late 1970s. At one point, I had a Macintosh 512K, a computer released in 1984. Now, sure, that wasn’t the cheapest platform out there even at the time, but it was stupendously expensive by the expectations of most users today:

    https://en.wikipedia.org/wiki/Macintosh_512K

    Introductory price: US$3,195 (equivalent to $9,670 in 2024)[1]

    That’s not including any storage media, a modem, the cost of a phone line, or the cost of your (almost-certainly-time-limited) access to your Internet service provider. And it was a lot harder to justify Internet access at that point, given what was out there and the degree to which computer use was present in typical life.

    The DOS world was somewhat-more economical, but hardly comparable to the situation today:

    https://en.wikipedia.org/wiki/IBM_Personal_Computer

    Introductory price: US$1,565 (equivalent to $5,410 in 2024)

    Prices like those priced a lot of people out.

    Today, you can get an Internet-capable device, one that includes a great deal of hardware that back then would have had to be purchased separately, for well over an order of magnitude less. That’s tremendously expanded the income range of people who have Internet access. It means that there’s a much broader range of perspectives.

    The level of education relative to society as a whole has dropped

    For a substantial amount of time, a disproportionate chunk of the people who had Internet access were university students who got access via their university; in the US, this was government-subsidized. That meant that users from higher education were disproportionately represented; users on something like Usenet weren’t a very representative sample of society as a whole (even aside from any income/education correlation). The Internet reaches just about everyone now, rather than being something focused on academia and engineering.

    The use of pictorial elements has increased

    It’s not uncommon to see images (or even video) occupying a substantial amount of eyeball space in discussions. Bandwidth limitations once made sticking images in-line painful. But there were also technical bars that dropped, like forum-specific inline images and then emojis entering Unicode. I might have seen the occasional emoticon in the 1990s, but that was about it.

    That being said, ASCII art is something that I rarely see now, but which was more-common in an era when many people were viewing all discussion in monospaced typefaces.

    Messages are much shorter

    Some of this has been due to technical impositions; Twitter imposed extremely short message lengths. But even aside from that, I’d say that Usenet tended towards much longer messages, more akin to letters.

    Rise and fall of advanced markup

    The early systems that I can think of were text-based and didn’t support styling. Then an increasing number of forums started supporting things like BBCode or HTML. But I think that the modern consensus has come mostly back to text, though maybe with embedded images and Unicode, with some supporting Markdown. Lower barrier to use, and I think that in practice, a lot of styling just isn’t all that important to get points across.

    The rise of data-harvesting and profiling

    I think that many people used to view messages as more ephemeral. You could say something and it might go away when a BBS died or the like. But with sites like archive.org and scrapers and large-scale efforts to profile, electronic forums have more of a permanence than they once did. The Trump administration demands that visitors to the US hand over social media identities, for example. I don’t know how much that weighs on discussion in society as a whole, but it certainly alters how I think about electronic discussion to some degree.

    The rise of generative-AI text in posts

    Probably one of the most-recent changes. Bots aren’t new, but the ability to make extended, really human-sounding text is.

    Cadence of discussion has increased

    Many discussion forums historically didn’t have a mechanism to push notifications to a user that there was activity in a discussion. A user might find out about new activity only the next time they happened to stop by a forum. Today, social media software on a smartphone, wherever a user is, might play an alert sound within seconds of new activity.

    Long-tail forums have become more common

    https://en.wikipedia.org/wiki/Long_tail

    The spread of Internet use and the enormous expansion of the potential userbase have made very niche forums far more viable than they historically were. Because the pool of users is so large, even if only a very tiny percentage of that pool is interested in a given topic, the number of topics with a viable number of interested users becomes much greater.

    The Threadiverse today is also something of a throwback here, as the userbase is much smaller than something like Twitter or present-day Reddit.

    The rise of deletion

    Many systems in the past didn’t support deletion of messages, or maybe only permitted administrators to do so.

    Part of the problem is that it’s not generally practical to ensure that deletion of a message actually occurs, once that message has been made visible to the broader world. And generally, it’s considered to be a bad idea in computer security to give a user the impression that they have the ability to do something if there are ways to defeat it.

    But in practice, a lot of people seem to want to have the ability to delete (or modify) messages, and I think that consensus is that there’s value there, even if it rests upon a general convention to not go digging for deleted messages.

    The rise and fall of trolling

    I’m talking about trolling in the traditional sense of the word, where someone posts a message that looks plausibly innocent, but contains intentional errors or just enough outrageous material to spur many people to respond and start a long thread or argument.

    The idea is that a user trying to do this would “troll” for fish to try to get as many bites as possible.

    Maybe this is just the forums I use, but I remember that showing up quite a bit in the forums I used in the 2000s, like Slashdot. A Usenet tactic was to cross-post — unlike the Threadiverse, responses to a crosspost would typically go to all newsgroups — to forums likely to have users who would argue with each other, like a newsgroup dedicated to Macs and one to Windows PCs. But I’ve seen less and less of it over time. I’m not sure why. Maybe it’s just that engagement-seeking algorithms and news media have institutionalized ragebait so much that we already have so many generated arguments that it just gets lost in the noise.

    [continued in child]



  • I had one before, but it was stupid cheap, connection was frail and didn’t have good range.

    I’m not sure whether this was due to protocol improvements or some other form of implementation improvement (antenna location?), but, yeah, I’ve found that the popular Sony WH-1000XM6 headphones have much better ability to talk to my Bluetooth transceiver at range than do a number of older earbuds and headphones I have. The range is closer to, say, a cordless phone or WiFi.

    Depending upon your use case, that may not matter; for a smartphone, more range probably doesn’t matter much. But if you’re talking to a desktop computer, it can be handy.


  • [Moving this text to a separate response, as it really deals with a separate set of issues]

    There’s also the issue not just of US law, but of international law, and I think that that’s where more of the interesting questions come up. Under treaties that the US is party to, as the UN rules go, to engage in military conflict other than individual self-defense or defense of an ally, the US should seek approval from the UNSC (which it would not get on Venezuela; Russia or China would presumably block this). The US has certainly stretched things; its legal argument that the UNSC authorized action against Saddam Hussein is very questionable, for example. But what one sees is a steady erosion of willingness to follow UN rules. Russia and the US are two of the permanent seat holders on the UNSC. Russia didn’t bother to try to get authorization to invade Ukraine (which other members would obviously have vetoed), and I suspect that the Trump administration won’t on Venezuela.

    The five permanent UNSC seat holders are the US, China, Russia, France, and the UK. Outside of nuclear weapons, Russia’s military power has substantially declined from the Cold War era, and its economy is of limited size. China is much more militarily powerful than it once was, and today, France and the UK are substantially less militarily-capable in most regards than China and the US. Prior to Brexit, I had thought that the EU would federalize and the French and UK seats would then become an EU seat, which would do something to restore the degree to which seat-holders had the ability to exert military force. But as things stand, the UNSC, which was crafted to include the major military powers in the world, is now substantially out-of-whack with actual military ability. There is sense in participating in a legal system to avoid conflict if it reflects what would happen in an actual conflict — e.g. instead of having to fight a war because Party 2 would fight you over the matter, you just have a vote that would produce a comparable outcome at far less cost. I think in practice, though, the major military powers increasingly don’t care what the UNSC says, for two reasons:

    • In some cases, a permanent seat holder may use a veto to increase the political cost of a country engaging in war even when it would not actually go to war against the country wanting to use military force. This degrades the stability of the system and encourages parties to disregard it. I think that this is probably the largest flaw in the system as it stands, and it may be fundamental to it.

    • Secondly, in 2026, China and the US in particular are, in most regards, much more militarily powerful than the other permanent seat members, and may simply not be willing to extend them a veto over their military activity. All countries holding a permanent seat are nuclear powers with some form of second-strike capability, which means that war with them is, at least in theory, quite risky. In practice, though, actually using nuclear weapons comes with a lot of drawbacks; they are not a terribly usable weapon. Countries might well be willing to engage in conflict even expecting strong opposition from permanent seat holders, betting that it will not rise to the use of nuclear weapons. The UK is, absent playing nuclear hardball, going to have very limited ability to militarily oppose China if China wants to conduct a conventional land invasion in Asia. Playing nuclear hardball with China is probably going to be pretty risky.

      https://en.wikipedia.org/wiki/Handover_of_Hong_Kong

      During talks with Thatcher, China planned to seize Hong Kong if the negotiations set off unrest in the colony. Thatcher later said that Deng told her bluntly that China could easily take Hong Kong by force, stating that “I could walk in and take the whole lot this afternoon”, to which she replied that “there is nothing I could do to stop you, but the eyes of the world would now know what China is like”.[35]

      In theory, the UK could veto such an action at the UNSC. In practice, China was willing to ignore whether-or-not it had UNSC approval, because it knew that the UK lacked the ability and/or will to back up that veto with military force.

    This isn’t to say that the UNSC system has always been perfect, but the less it maps to actual ability and will to use military force, the more I expect it to be viewed as irrelevant by the major powers. Trump’s action here will probably further weaken it.


  • Not all military actions require Congressional approval.

    Going to Congress takes some time, and so you don’t always have that time.

    I don’t think that case law has precisely hammered out the division at the level of the US Constitution, but in legal terms, a major element is the War Powers Resolution:

    https://en.wikipedia.org/wiki/War_Powers_Resolution

    The War Powers Resolution (also known as the War Powers Resolution of 1973 or the War Powers Act) (50 U.S.C. ch. 33) is a federal law intended to check the U.S. president’s power to commit the United States to an armed conflict without the consent of the U.S. Congress. The resolution was adopted in the form of a United States congressional joint resolution. It provides that the president can send the U.S. Armed Forces into action abroad only by Congress’s “statutory authorization”, or in case of “a national emergency created by attack upon the United States, its territories or possessions, or its armed forces”.

    The bill was introduced by Clement Zablocki, a Democratic congressman representing Wisconsin’s 4th district. The bill had bipartisan support and was co-sponsored by a number of U.S. military veterans.[1] The War Powers Resolution requires the president to notify Congress within 48 hours of committing armed forces to military action and forbids armed forces from remaining for more than 60 days, with a further 30-day withdrawal period, without congressional authorization for use of military force (AUMF) or a declaration of war by the United States. The resolution was passed by two-thirds each of the House and Senate, overriding the veto of President Richard Nixon.

    In practice, the US has not declared war since World War II, though it has engaged in many military conflicts since then. What has happened, for major conflicts, is that Congress has passed some form of military authorization permitting continued combat operations in line with the above act.

    In part, I believe it was the practical pressures of the nuclear weapons era that gave the President more freedom to act. There would be no time to obtain approval from Congress in a number of nuclear weapons scenarios, and if you let the Executive make use of the nuclear arsenal — a really big stick — without Congress’s approval, it seems a bit odd to restrict use of conventional force.

    I also suspect that one factor is that war is politically risky; if it becomes unpopular, a Congressman — who may be around for a lot longer than a President, who will be out after two terms — may not want a declaration of that unpopular war on his record, and would prefer to minimize involvement. So Congress is generally not, for political reasons, averse to reducing its exposure.

    The issue here is that there’s a certain assumption in that legislation: the President is given that freedom because he requires a great deal of leeway to respond rapidly to unexpected dangers.

    In this case, that assumption didn’t hold, though I suspect one could make a fair argument that an operation of the sort taken required operational secrecy, and Congressional debate would be at serious odds with that.

    I think that it’s fair to say that the US took the position some time before now that ejecting Maduro is okay and a goal — that’s why Venezuela has been under sanctions, for example, to create political pressure. That hasn’t worked. The question is really whether that policy can or should be shifted from economic pressure to military force.


  • I’m a bit out of date, but I’ve used Kobos before and been happy with them, if you want an e-Ink device.

    I don’t currently carry one. Yes, I like having an e-Ink display for a number of scenarios. The problem is that I’m just not willing to carry both the other computing devices that I lug around and a dedicated e-reader. Sure, there are benefits (much easier to see in bright light, like outdoors, and very limited battery usage), but for me, not great enough to warrant another device.

    EDIT: I can’t personally vouch for them, but IIRC Boox devices have higher specifications, if you aren’t concerned about cost.

    You may or may not want color e-Ink; if you have not looked into this and are considering an e-Ink display, you might want to do so and seriously consider whether you will get enough of a return from color to warrant the tradeoffs. Color e-Ink is not a straight upgrade to grayscale e-Ink displays and a number of people prefer grayscale.


  • Back to the topic at hand - doesn’t it seem strange that only CPU4 finds issues in memtest86? It could be a CPU or even motherboard that got damaged and not the DRAM itself, no?

    I noticed that, but OP said that he ran the thing in three different systems, so I’m assuming that he’s seen the same problems with multiple CPUs. It may be — I don’t know — that memtest86 doesn’t, at least as he’s running it, necessarily try to hit each byte of memory with each CPU, or at least that the order in which it does so doesn’t make errors from other CPUs visible.

    I also wondered if it might be a 13th or 14th gen Intel CPU, the ones that destroyed themselves over time. But (a) it’s a mobile CPU, and only the desktop CPUs had the problem there, and (b) it’s 11th gen.


  • I am aware of people saying they see great results with the paid search engine “kagi”. Do you use any other better ways to query the fediverse?

    Kagi has a search lens for “Fediverse Forums”, which AFAICT builds a list of Threadiverse instances and searches them.

    There isn’t a great way to replicate that on other search engines, but most of the communities exist on a relatively-small number of instances, and if you’re willing to settle for an incomplete search, you might do all right with a site: search that includes the major Threadiverse instances.

    Note that a number of Threadiverse hosts have shifted to disallowing anonymous access, due to heavy load from webspiders being run by people scraping content for AI; that makes them unusable without an account, and probably means that search engines aren’t indexing them either. It looks like piefed.social is back to providing anonymous access, which I believe it had off for a while, but fedia.io, the main Mbin instance, still has anonymous access off.

    https://lemmyverse.net/instances has a list of instances “smart-sorted”, which puts the major ones up top.

    The top ones are lemmy.world, sh.itjust.works, lemmy.ml, lemmy.dbzer0.com, lemmy.zip, lemmy.ca, programming.dev, feddit.org, sopuli.xyz, and beehaw.org.

    Google supports Boolean search operators, so to search for tigers, search for:

    site:lemmy.world OR site:sh.itjust.works OR site:lemmy.ml OR site:lemmy.dbzer0.com OR site:lemmy.zip OR site:lemmy.ca OR site:programming.dev OR site:feddit.org OR site:sopuli.xyz OR site:beehaw.org tiger

    https://www.google.com/search?q=site%3Alemmy.world+OR+site%3Ash.itjust.works+OR+site%3Alemmy.ml+OR+site%3Alemmy.dbzer0.com+OR+site%3Alemmy.zip+OR+site%3Alemmy.ca+OR+site%3Aprogramming.dev+OR+site%3Afeddit.org+OR+site%3Asopuli.xyz+OR+site%3Abeehaw.org+tiger
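
    If you want to rebuild that link for a different search term or instance list, a few lines of Python will reproduce the URL above. This is just a sketch; the instance list is the lemmyverse.net top ten from earlier:

    from urllib.parse import quote_plus

    # Major Threadiverse instances, per the lemmyverse.net "smart-sorted" list.
    instances = [
        "lemmy.world", "sh.itjust.works", "lemmy.ml", "lemmy.dbzer0.com",
        "lemmy.zip", "lemmy.ca", "programming.dev", "feddit.org",
        "sopuli.xyz", "beehaw.org",
    ]

    term = "tiger"
    # Build the "site:a OR site:b ... term" Boolean query and URL-encode it.
    query = " OR ".join(f"site:{host}" for host in instances) + f" {term}"
    print("https://www.google.com/search?q=" + quote_plus(query))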

    EDIT: Also note that while I don’t use the Lemmy Web UI’s search engine, instances won’t see posts unless at least one local user is subscribed to the community in question. So if you want to use that search engine, you might have more luck searching on lemmy.world, which is the largest Threadiverse host and the one most likely to have seen a given post, than on mander.xyz, your home instance.

    If you’re using the Instance Assistant for Lemmy & Kbin in Firefox, it adds a link to the right sidebar on remote instances to view the current post on your home instance, which might be useful if you’re doing that.


  • The good news is that single-player games tend to age well. Down the line, the bugs are as fixed as they’re gonna be. Any expansions are done. Prices may be lower. Mods may have been created. Wikis may have been created. You have a pretty good picture of what the game looks like in its entirety. While there are cases where games are no longer available for some reason or break on newer OSes with no way to make them run, that’s rare.

    With (non-local) multiplayer games, one has a lot less flexibility, since once the crowd has moved on, it’s moved on.



  • Ah, fair enough. Long shot, but thought I’d at least mention it on the off chance that maybe it would work and maybe you hadn’t yet tried it. Sorry.

    tries to think of anything else that could be done

    Are you using Linux? Linux has a patch that was added many years back with the ability to map around damaged regions in memory. I mean, if your memory is completely hosed and you can’t even boot the kernel, then that won’t work, but if you can identify specific areas that fail, you can hand that off to the kernel and it can just avoid them. Obviously decreases usable memory by a certain amount, but…shrugs

    I’ve never needed to do it myself, but let me go see if I can find some information. Think it was the “badram” feature.

    searches

    Okay. You’re running memtest86. It looks like that has the ability to generate the string you need, and you hand that off to GRUB, which hands it off to the kernel.

    https://www.memtest86.com/blacklist-ram-badram-badmemorylist.html

    MemTest86 Pro (v9 or later) supports automatic generation of BadRAM string patterns from detected errors in the HTML report, that can be used directly in the GRUB2 configuration without needing to manually calculate address/mask values by hand.

    To enter the address ranges to blacklist manually, do the following:

    Edit /etc/default/grub and add the following line:

    GRUB_BADRAM=addr,mask[,addr,mask...]
    

    where the list of addr,mask pairs specifies the memory ranges to block using address bit matching, e.g. GRUB_BADRAM=0x7ddf0000,0xffffc000 shall exclude the memory range 0x7DDF0000-0x7DDF4000.

    Open a terminal and run the following command:

    sudo update-grub
    

    Reboot the system

    If you can’t even boot the system sufficiently to get update-grub to run, then you might need to do a fancier dance (swap drive to another machine or something), but that’s probably a good first thing to try. I’d try booting to “rescue mode” or whatever if your distro has an option like that in GRUB, something that doesn’t start the graphical environment, as it’ll touch less memory.

    EDIT: If your distro doesn’t have something like that “rescue mode” set up — all the distros I’ve used do, but that doesn’t mean that all of them do — or if it can’t even bring “rescue mode” up because your memory is too hosed for that, then you probably want to do something like hit “edit kernel parameters” in GRUB and boot while adding “init=/bin/bash” to the end of the kernel command line. That’ll start your system up in a mode where virtually nothing is running — no systemd or other init system, no graphics, no virtual consoles, no anything. Bash running on a bare-metal Linux kernel. Control-C won’t work because your terminal won’t be in cooked mode, and everything will be very super-duper minimal…but you should be able to bring up bash. From there, you’ll want to manually bring your root filesystem, which the kernel will have mounted read-only, as it does during boot, up to read-write, with:

    # mount / -o remount,rw
    

    Once that’s done, edit the GRUB config file in vi or whatever, then run the update-grub command.

    Then run:

    # sync
    

    Because you don’t have an init system running, nothing is gonna flush the disk on shutdown, and your normal power-down commands aren’t gonna work, since there’s no init system to talk to.

    Go ahead and manually reboot the system by killing its power, and hopefully that’ll let it boot up with badram mapping around your damaged region of memory.

    EDIT2: It occurs to me that someone could make a utility that runs entirely inside Linux, doing memory testing to the extent possible there using something like memtester instead of memtest86, generating the badram string, and then writing it out for GRUB. That’s less bulletproof than memtest86, because memtester can’t touch every bit of memory, but it’s also easier for a user to do than the above stuff, and if you additionally added it to the install media for a distro, it’d make it easier to run Linux on broken hardware without a whole lot of technical knowledge. I guess it’d be pretty niche, though — doubt that there are a lot of systems with damaged memory floating around.

    EDIT3: Oh, it’s only the commercial version of memtest86 that will auto-generate the string. Well, if you know how to do a bitmask and you can get a list of affected addresses from memtest86, then you can probably just do it manually. If not, post the list of addresses here and someone can probably work out a base address and bitmask that covers the addresses in question for you. Stick the memory back into your computer first, though, since the order of the DIMMs is gonna affect the addresses.
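
    For anyone wanting to do that computation themselves: an addr,mask pair works by bit-matching, blacklisting every page whose address agrees with addr on every bit that is set in mask. Here’s a minimal sketch of the arithmetic in Python; the badram_pair helper and the failing addresses are hypothetical, purely for illustration, not anything memtest86 ships:

    #!/usr/bin/env python3
    # Hypothetical helper (not part of memtest86 or GRUB): given a list of
    # failing addresses, compute one GRUB_BADRAM addr,mask pair that covers
    # all of them. GRUB blacklists whole pages, so work at 4 KiB granularity.

    PAGE_MASK = 0xFFFFF000  # keep only the bits that select a 4 KiB page

    def badram_pair(addresses):
        base = addresses[0] & PAGE_MASK
        mask = PAGE_MASK  # start by requiring every page-address bit to match
        for a in addresses:
            # Wherever an address differs from the base, clear that mask bit,
            # turning it into a "don't care" bit.
            mask &= ~((a & PAGE_MASK) ^ base)
        return base & mask, mask & 0xFFFFFFFF

    if __name__ == "__main__":
        bad = [0x7DDF0123, 0x7DDF2408, 0x7DDF3FF0]  # made-up failing addresses
        addr, mask = badram_pair(bad)
        print(f"GRUB_BADRAM={addr:#010x},{mask:#010x}")
        # Prints GRUB_BADRAM=0x7ddf0000,0xffffc000 for these addresses, the
        # same 16 KiB range as the example from the memtest86 page above.

    Note that a single pair covers the whole matching bit pattern, which can take out pages that never actually failed; that’s safe, in that you just lose a bit more usable memory. If the failing addresses are badly scattered, emitting one addr,mask pair per bad page instead will waste less. This is also roughly the core that the hypothetical memtester-based utility from EDIT2 would need.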