

!badmusic@lemmy.world hasn’t seen posts in two years, but it exists.
Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.




A more active (>1 post per three months) Valheim community.
If you’re talking about !valheim@sh.itjust.works, it looks like !valheim@lemmy.world clears your one-post-per-three-months bar, though not by much.


You need a DAC somewhere, though there are headphones that incorporate one so that you don’t need one independent of the headphones. I just don’t know whether the parent commenter is referring to ones that incorporate one or not — whether his corner-store three-pound ones are 3.5 mm or USB-C.
Very bullish long term. I think that I can say with pretty good confidence that it’s possible to achieve human-level AI, and that doing so would be quite valuable. I think that this will very likely be transformational, on the order of the economic and social change that occurred when the bulk of what society did moved from the primary sector of the economy to the secondary, or from the secondary to the tertiary. Each of those shifts changed the fundamental “limiting factor” on production and produced great change in human society.
Hard to estimate which companies or efforts might do well, and the near term is a lot less certain.
In the past, we’ve had useful and successful technologies that we now use everyday that we’ve developed using machine learning. Think of optical character recognition (OCR) or the speech recognition that powers computer phone systems. But they’ve often taken some time to polish (some here may remember “egg freckles”).
There are some companies promising the stars on time and with their particular product, but that’s true of every technology.
I don’t think that we’re going to directly get an advanced AI by scaling up or tweaking LLMs, though maybe such a thing could internally make use of LLMs. The thing that made neural net stuff take off in the past few years and suddenly have a lot of interesting applications wasn’t really fundamental research breakthroughs on the software side. It was scaling up on hardware what we’d done in the past.
I think that generative AI can produce things of real value now, and people will, no doubt, continue R&D on ways to do interesting things with it. I think that the real impact here is not so much technically interesting as it is economic. We got a lot of applications in a short period of time and we are putting the infrastructure in place now to use more-advanced systems in place of them.
I generally think that the output of pure LLMs or diffusion models is more interesting when it comes to producing human-consumed output like images. We’re tolerant of a lot of errors there; our brains just need to be cued with approximately the right thing. I’m more skeptical about using LLMs to author computer software — I think that the real problems there are going to need AGI, with a deeper understanding of the world and of the thinking process, to automate reasonably. I understand why people want to automate it now — software that can code better software might be a powerful positive feedback loop — but I’m dubious that it’s going to be a massive win there, not without more R&D producing more-sophisticated forms of AI.
On “limited AI”, I’m interested to see what will happen with models that can translate to and work with 3D models of the world rather than 2D. I think that that might open a lot of doors, and I don’t think that the technical hump to getting there is likely all that large.
I think that generative AI speech synth is really neat — the quality relative to level of effort to do a voice is already quite good. I think that one thing we’re going to need to see is some kind of annotated markup that includes things like emotional inflection, accent, etc…but we don’t have a massive existing training corpus of that the way we do plain text.
Some of the big questions I have on generative AI:
Will we be able to do sparser, MoE-oriented models that have few interconnections among themselves? If so, that might radically change what hardware is required. Instead of needing highly-specialized AI-oriented hardware from Nvidia, maybe a set of smaller GPUs might work.
Can we radically improve training time? Right now, the models that people use are trained by running a lot of compute-expensive backpropagation, and we get a “snapshot” of that that doesn’t really change. The human brain is in part a neural net, but it is much better at learning new things at low computational cost. Can we radically improve here? My guess is yes.
Can we radically improve inference efficiency? My guess is yes, that we probably have very, very inefficient use of computational capacity today relative to a human. Nvidia hardware runs at a gigahertz clock, the human brain at about 90 Hz.
Can we radically improve inference efficiency by using functions in the neural net other than a sum-of-products, which I believe is what current hardware uses? CPU-based neural nets used to tend to use a sigmoid activation function. I don’t know whether the GPU-based ones of today do; I haven’t read up on the details. If not, I assume that they will. But the point is that introducing it was a win for neural-net efficiency: having access to that function reduces how many neurons are required to reasonably model a lot of things we’d like to do, like approximating a Boolean function. Maybe we can use a number of different functions and tie those to neurons in the neural net, rather than having to approximate all of them via the same function. For example, a computer already has silicon to do integer arithmetic efficiently. Can we provide direct access to that hardware and, using general techniques, train a neural net to incorporate it where doing so is efficient? Learn to use the arithmetic unit to, say, solve arithmetic problems like “What is 1+1?” Or, more interestingly, do so for all other problems that make use of arithmetic?
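To illustrate the Boolean-function point above — this is my own minimal sketch, not anything from a particular framework — a tiny two-layer network of sigmoid units with hand-picked (untrained) weights can compute XOR, something no single sum-of-products layer can do on its own:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hand-picked weights (illustrative, not trained): hidden unit 0 behaves
# like OR, hidden unit 1 like AND, and the output unit computes
# OR AND NOT(AND) -- i.e. XOR.
W1 = np.array([[20.0, 20.0],   # OR-ish unit
               [20.0, 20.0]])  # AND-ish unit
b1 = np.array([-10.0, -30.0])
W2 = np.array([20.0, -20.0])
b2 = -10.0

def xor_net(x1, x2):
    h = sigmoid(W1 @ np.array([x1, x2], dtype=float) + b1)
    return float(sigmoid(W2 @ h + b2))

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"{a} XOR {b} -> {round(xor_net(a, b))}")
```

The large weight magnitudes just push the sigmoids close to 0 or 1, which is exactly the “cheaply approximate a Boolean function” win that a pure weighted sum doesn’t give you.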


I mean, in that kind of timeframe, there were pretty major shifts in transportation.
For a long, long time, ships going up rivers and along coasts were the way serious transportation happened.
Then we had the canal-building era in the US. I assume that the UK did the same.
searches
https://en.wikipedia.org/wiki/Canal_age
Technology archaeologists and industrial historians date the American Canal Age from 1790 to 1855[1] based on momentum and new construction activity, since many of the older canals, although limited by locks that restricted boat sizes below the most economic capacities[b], nonetheless continued in service well into the twentieth century.[c]
By 1855, canals were no longer the civil engineering work of first resort, for it was nearly always better—cheaper to build a railroad above ground than it was to dig a watertight ditch 6–8 feet (2–3 m) deep and provide it with water and make annual repairs for ice and freshet damages—even though the cost per ton mile on a canal was often cheaper in an operational sense, canals couldn’t be built along hills and dales, nor backed into odd corners, as could a railroad siding.
So that was maybe sixty, seventy years before rail was really displacing it.
EDIT: I guess what I’m trying to get at is that I don’t think that rail had a uniquely short era where it was the prime, go-to option compared to other transportation technologies…and I don’t think I’d say that the golden era was short enough to make the technology not a worthwhile investment, even if it was later, in significant part, superseded. A hundred years is a long time to wait around without engine-driven transportation, which would have been the alternative.


Like, the automobile? It looks like the boom in the UK they were talking about was in the 1840s.
https://en.wikipedia.org/wiki/Railway_Mania
Railway Mania was a stock market bubble in the railway industry of the United Kingdom of Great Britain and Ireland in the 1840s.
There were primitive automobiles earlier, but the mass-market automobile didn’t come around for a long time after that, and then it took longer still to get substantial market penetration.
searches
https://www.bbc.com/news/uk-42182497
It runs a bit off the edge — I don’t know how far back they had licensing and mileage data.

But extrapolating from those lines, I’d guess that annual distance traveled in the UK in autos on roads surpassed rail only in the 1940s or so, about a hundred years later.
That’s probably outside the investment horizon of people investing in the 1840s — in evaluating whether an investment is worthwhile, they won’t be considering returns a century hence.
That said, one might consider freight rail separately, and it’s possible that that works out differently. The US doesn’t use much passenger rail in 2025, but it does do quite a bit of freight rail; the two can be decoupled.
EDIT: Road traffic couldn’t have overtaken rail much earlier, though, since mass-market motor vehicles weren’t around much before that.


In general, a lot of services have absurdly tight time requirements on email validation. A lot of users have graymail setups or other things that will delay email, not to mention polling intervals.
I get making temporary credentials sent in email expire, on the general principle of not leaving credentials lying around, but make it 24 hours or something, not minutes. There’s minimal added risk, and it avoids a ton of problems.
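To sketch what a 24-hour validation token might look like — illustrative Python; the HMAC-over-timestamp scheme and all the names here are my assumption, not any particular provider’s implementation:

```python
import hashlib
import hmac
import secrets
import time

SECRET = secrets.token_bytes(32)  # hypothetical server-side signing key
TTL_SECONDS = 24 * 60 * 60        # 24 hours, rather than minutes

def make_token(email, now=None):
    """Issue a signed, expiring validation token for an email address."""
    ts = str(int(time.time() if now is None else now))
    sig = hmac.new(SECRET, f"{email}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{ts}:{sig}"

def check_token(email, token, now=None):
    """Accept the token only if the signature matches and it is unexpired."""
    ts, sig = token.split(":", 1)
    expected = hmac.new(SECRET, f"{email}:{ts}".encode(), hashlib.sha256).hexdigest()
    unexpired = (time.time() if now is None else now) - int(ts) <= TTL_SECONDS
    return unexpired and hmac.compare_digest(sig, expected)
```

Because the timestamp is signed rather than stored, bumping the TTL from minutes to a day is a one-constant change with no extra server-side state.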


Good news!
https://en.wikipedia.org/wiki/Ultimate_fate_of_the_universe
The ultimate fate of the universe is a topic in physical cosmology, whose theoretical restrictions allow possible scenarios for the evolution and ultimate fate of the universe to be described and evaluated.
The heat death of the universe, also known as the Big Freeze (or Big Chill), is a scenario under which continued expansion results in a universe that asymptotically approaches absolute zero temperature.[17] Under this scenario, the universe eventually reaches a state of maximum entropy in which everything is evenly distributed and there are no energy gradients—which are needed to sustain information processing, one form of which is life. This scenario has gained ground as the most likely fate.[18]
In this scenario, stars are expected to form normally for 10¹² to 10¹⁴ (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. Eventually black holes will dominate the universe, but they will disappear over time as they emit Hawking radiation.[19]


You can even get shitty wired earphones for like 3 quid on amazon or at a corner shop
To be fair, if you’re talking 3.5mm headphones/earbuds, you’re likely going to need to add a USB DAC with that too, as most current smartphones don’t have a 3.5mm jack.
That doesn’t fundamentally change the point on cost — you’re talking maybe $6+ more on Amazon — but it does cost more.


I suppose it could, if you wanted to carry a foldable USB or Bluetooth keyboard or keypad.
I think this is a case of “knows enough to be dangerous”.
https://en.wikipedia.org/wiki/Jamie_Zawinski
At Netscape, he developed the Unix release of Netscape Navigator 1.0,[4][5] and later, Netscape Mail, the first mail reader (or Usenet reader) to natively support HTML.[6]
I mean, you can draw the line wherever you want, but I expect that he probably knows more about mail than the average bear.
JWZ is probably one of the people best positioned to set up a commercial mail provider that provides whatever services would address the issue.


The version on F-Droid works on my system.
checks
Looks like the F-Droid project hasn’t seen a new Android build in a decade, and Maxima has seen updates, but…I mean, it’s a pretty mature program. I have a newer version on desktop, but all the stuff I use was added in the first couple decades or so of its life.
EDIT: The git repo is here:
https://github.com/YasuakiHonda/Maxima-on-Android-AS
So I expect that if someone wanted to do builds of newer versions, they could.
But for perspective, the calculator that the article is talking about that the author liked is also a decade-old calculator.
I would not try to buy a product with a built-in battery for life. If you want a battery-powered speaker and want to keep it for a long time, I’d go for something that has standardized, removable batteries.
EDIT: It looks like a lot of the products out there (a) are Bluetooth, which I don’t know if you want or not (but I wouldn’t consider that something that will be a buy-it-for-life feature either, since Bluetooth has steadily seen protocol change over time and I doubt that twenty years down the road, the state-of-the-art in Bluetooth will be what it is today) and (b) have built-in lithium batteries. What you might consider doing is getting a radio with a built-in speaker with aux-in, as it’s easy to find those with removable batteries and without Bluetooth.


It’s all of Nvidia’s customers that are mini-Enrons.
I don’t think that those are faking profit. Well, I don’t have a comprehensive list, and maybe somewhere someone is, but not the majority of them. It’s possible that they won’t wind up being long-term profitable, but there’s no fraud involved in that.
I don’t think that the Enron analogy is very applicable in general. Like, what people who are critical of the extent of AI investment are worried about is analogous to the dot-com bubble, where the returns to investors from companies didn’t warrant the level of investment and stock prices for many companies shot way up and then fell back down.
EDIT: It’s fair to say that Nvidia is driving demand for its products. But…that may be quite sensible, since if you have a killer app or two explode, it can drive massive demand for the hardware that runs it. If you have capital available and control the best hardware out there, it may well make more sense to use that capital on building more demand for your product than to go and try and improve the product more. There’s only so many chip engineers available that Nvidia can hire, and unless they want to get into the “writing AI software” game themselves, which would have them compete with their customers, I’d think that that’s potentially a reasonable place to put their capital if they’re trying to improve their business potential.
Nvidia sold a lot of hardware when cryptocurrency became popular. I think that it’s probably fairly safe to say that AI applications have considerably more potential to provide utility than does cryptocurrency.


Scientific calculators are an amazing invention, taking pocket calculators from being merely basic arithmetic machines to being pocket computers that can handle everything from statistics to algebra.
I use the FOSS maxima on Linux and Android.


Stick a hedge made of cypress trees or something else dense and tall there?



Ah, gotcha.


If I see an old hand-made axe head for sale with or without a handle for a decent price I’ll buy it because they’re not making more of them.
Doing a quick search, I see multiple companies that say that they hand-make axeheads. Now, it’s probably cheaper to get them secondhand, but I don’t think that the supply of new ones will run dry, as a business making handmade axeheads probably doesn’t see a lot of economy of scale.
https://www.lehmans.com/category/axes
We specialize in offering professionally hand-forged axes by makers who have been in the business for decades, so you find the last wood-cutting axe or wood hatchet you ever buy.
Looking at the brands they’re selling:
Gränsfors Bruk, Sweden.
Snow & Nealley, Maine, US.
Hults Bruk, Sweden.
Lehmans, US. “The letter is the last initial of blacksmith who tempered the axe.”, says that the axe is “Maine made”, the handle is made by “an Amish family a few miles from our store”, that the steel comes “from New York”, but doesn’t explicitly say where the blacksmith is.
Maybe there’s a lack of posting activity, but there are a lot of communities.
Looking at https://lemmyverse.net/communities :
!cat@lemmy.world
!aww@lemmy.world
!cats@sh.itjust.works
!oneorangebraincell@lemmy.world
!illegallysmolcats@lemmy.world
!meow_irl@sopuli.xyz
!catsstandingup@lemmy.world
!scrungycats@lemmy.world
!blurrypicturesofcats@lemmy.world
!catswithjobs@lemmy.world
!tuckedinkitties@lemmy.world
!standardissuecat@lemmy.world
!voidcats@lemmy.world
!pallascats@lemmy.world
!catloaf@lemmy.world
!chonkers@lemmy.world
!floof@lemmy.world
!ooobigstretch@lemmy.world
!cattorneys_at_paw@lemmy.world
!russianblue@lemmy.world
!murdermittens@lemmy.world
!britishshorthairs@lemmy.world
!piebaldcats@lemmy.world
!crabcats@lemmy.world
!piratekitties@lemmy.world
!greebles@lemmy.world
!catsnamedtoothless@lemmy.world
!catsonstereos@lemmy.world
!meow@lemmy.world
!torties@lemmy.world
!catsvideos@lemmy.world
!JellyBeanToes@lemmy.world
!floaf@lemmy.world
!kittycrew@lemmy.world
!aww@sh.itjust.works
!awwanimals@sh.itjust.works
!sillycats@lemmy.dbzer0.com
!space_cats@sopuli.xyz
!petdiscussion@sh.itjust.works
!cats@eviltoast.org
!stolen_dog_beds@sh.itjust.works
!mainecoons@lemmy.ca
!stiltycats@lemmy.dbzer0.com
!torties@reddthat.com
!cats@lemmy.cixoelectronic.pl
!Cats@lemmy.org