

He filed a lawsuit in 2024 against his former employer, PrimeFlight, alleging unpaid wages and missed break times.


It’s an unrelated Andor reference.
Great show, if you haven’t seen it.


forcing many users to consider the unthinkable
Use a Firefox clone and uBlock?
(and Linux, btw)


I have friends everywhere.


I’d also add:
Sometimes you just want to send someone a random file without needing to create an account for them or walk them through installing an app. You can use Filebrowser to generate a link they can use to browse that specific file/directory without credentials. You can set these links to time out after minutes/hours/days/never.
Useful to have a link to all of your services in a portable and shareable form. Very customizable and useful for most homelab assets.


CFD = Computational Fluid Dynamics.
It is kind of what they said, you’re right. I was more pointing out how they could ‘sense the vibes’ of a CFD result to determine whether it’s accurate or whether the model decided to do something weird. Since it’s a chaotic process, and an artificial one, the starting conditions can yield results that are impossible/not based in reality.
If you look at enough of them you start to notice the kinds of things that go wrong. They would also have a pretty good idea about how their design should perform and if the simulation shows different they’d first want to troubleshoot the simulation before attempting to re-design whatever system they’re creating.


That is an insightful question.
The answer is that we actually understand chaos, in a way. It isn’t unpredictable in general; it’s just hard to say how any given situation will evolve, but we can understand a lot about how such systems evolve overall.
I’m not good at explaining, but some content creators cover this topic pretty well. If you’re interested, here’s a video about it from Veritasium: https://www.youtube.com/watch?v=fDek6cYijxI
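The “predictable overall, unpredictable in detail” idea is easy to see in a toy system. Here’s a minimal Python sketch (my own illustration, not from the video) using the logistic map, a standard example of a chaotic system:

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n); r = 4.0 is fully chaotic.
def orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = orbit(0.2)
b = orbit(0.2 + 1e-10)  # nudge the start by one part in ten billion

# Early on the two trajectories are indistinguishable...
print(abs(a[1] - b[1]))  # still on the order of 1e-10
# ...but the tiny error roughly doubles every step, so after ~50
# iterations the two runs have nothing to do with each other.
print(max(abs(x - y) for x, y in zip(a, b)))
```

The big-picture behavior (values bouncing around the same interval, error growing at a predictable average rate) is well understood, even though no individual trajectory can be forecast for long.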


It’s a super exciting time for so many fields of science. Transformers are really the key discovery that’s made modern AI what it is today and we’re only barely scratching the surface of possible applications.
The future is going to be weird in ways we can’t even imagine.


Anfinsen won the Nobel in 1972 for showing that the amino acid sequence is what is responsible for the 3D structure of proteins.
Since then we’ve been able to determine proteins’ structures using X-ray crystallography, but that is a painstaking process. Accurately predicting a protein’s structure from its amino acid sequence remained an unsolved problem until very recently.
It wasn’t until 2024 that Hassabis, Jumper and Baker won the Nobel for their work in predicting protein structure (using an AI called AlphaFold) and computationally designing new proteins.
The ability to create arbitrary proteins is new and will revolutionize some fields of medicine (like cancer treatment) and, to me, is a much more impressive use of AI.
LLMs are interesting but they are incredibly over-hyped as far as ‘changing the world’ goes, imo.


Those kinds of simulations are inherently chaotic: tiny changes to the initial conditions can produce wildly different outcomes, sometimes to the point of being nonsensical. Also, since they’re simulating a limited volume, the boundary conditions can cause weird artifacts in some cases.
If you run a simulation of air over an aircraft wing and the end result is a mess of turbulence instead of smooth flow, you can assume the simulation was acting weird, not that your wing design is suddenly breaking the laws of physics. When the simulation breaks, it usually does so in ways that are obvious from previous testing with physical models.


I’m failing to see why the creative writing machine is better than a simulation set to ‘rough’.
The problem is that you saw AI and thought LLM.
Machine Learning is a big field; neural networks are a subset of that field, and LLMs are only a single application of a specific type of neural network (the Transformer) to a specific task (next-token prediction).
The only reason LLMs and image generation models are the most visible is that training neural networks requires a large amount of data, and the largest repository of public data, the Internet, is primarily text and images. So text and image models were the first large models to be trained.
The most exciting and potentially impactful uses of AI are not LLMs. Things like protein folding and robotics will have more of an impact on the world than chatbots.
In this case, generating fast approximations for physical modeling can save a ton of compute time for engineering work.
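The idea behind that last point can be sketched in a few lines (my own toy example, not anything from a real solver): run the expensive model a limited number of times up front, then answer new queries with a cheap approximation. Here the approximation is a simple interpolation table; in real engineering work it would be a trained neural network, but the compute-saving logic is the same:

```python
import bisect
import math

# Stand-in for an expensive simulation we pretend takes minutes per call.
def expensive_sim(x):
    return math.sin(3 * x) * math.exp(-0.5 * x)

# Run the real model at a limited set of design points up front...
xs = [i * 0.01 for i in range(201)]  # 0.00 .. 2.00
ys = [expensive_sim(x) for x in xs]

# ...then answer new queries with cheap interpolation instead of
# re-running the full simulation each time.
def surrogate(x):
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

print(abs(surrogate(1.234) - expensive_sim(1.234)))  # tiny approximation error
```

The trade-off is the usual one: you pay for accuracy up front (the sampled runs / training data), then every subsequent query is nearly free.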


There’s software that adds achievements to emulated ROMs: https://retroachievements.org/


Yeah, it’s almost certainly a VM.


They’re doing it to mess with the delivery robots, which would have a perfect record if not for the bus stops constantly being placed in their path.


Going a million different ways is more in the FOSS spirit than starting off by saying ‘everyone will use this list of software’. They can’t know what the best fit for everyone will be, so they’re approaching it flexibly.
They can iterate in the future or come up with standards as they need, but in the beginning it’s better to try a lot of different things to see what people discover.


What’s crazy is, bus stops and their associated shelters DO NOT MOVE.
TIL


I haven’t used it in a few years; I use a certain anonymous rodent to get my books now.


If you’re using Mullvad as your VPN, Tailscale supports it right out of the box. You could use Tailscale only and use Mullvad as an exit node. This is probably the easiest and most out-of-the-box solution.
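A rough sketch of what that looks like with the Tailscale CLI (assumes the Mullvad add-on is enabled on your tailnet; the node hostname below is illustrative, pick one from the actual list):

```shell
# List available exit nodes; with the Mullvad add-on enabled,
# Mullvad's servers show up here alongside your own machines.
tailscale exit-node list

# Route all traffic through one of the listed Mullvad nodes.
tailscale set --exit-node=us-nyc-wg-301.mullvad.ts.net

# Stop using the exit node.
tailscale set --exit-node=
```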


There are 7 million people who would agree with you, if they could.