

Yeah, I daily drive spacemacs. 🙂


The vim key bindings are a lot better.
KDE can probably do this with enough time and effort.
I used to work at a company where this was in the KB. 😐
Zsh is, but more for interactivity. The extended file globbing, extended autocompletion, and loadable modules are the main reasons I like it. The features really shine when used with a configuration framework like ohmyzsh.
Supposedly, Zsh has a more comprehensive shell scripting syntax, but that’s not a plus since I don’t want to write shell scripts.
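Here’s a taste of the globbing I mean (just a sketch; the `~` exclusion needs `setopt extendedglob`):

```zsh
setopt extendedglob

ls *.log~debug*      # every .log file except those starting with "debug"
ls **/*(.)           # regular files only, searched recursively
ls *(om[1,5])        # the five most recently modified entries
ls *(Lm+10)          # anything larger than 10 MiB
```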
Exactly. I’ve been up for 27 hours, but I finally have a booting Gentoo install now. 😃
Gentoo installs are not that bad these days. However, back in 2005, it would take like a day to compile the kernel on my old Pentium M ThinkPad. I would run through the install, start compiling the kernel, and go to sleep/work/whatever. I would check on it periodically to see if anything went wrong, and eventually it would get to the point where I could reboot and find out I had messed something up and had to start over. That cycle went on for about a week, and then I installed Ubuntu. 😂


FreeIPA covers most scenarios: Kerberos, DNS with dynamic updates, LDAP.
GPO equivalency would need a config management tool on top. Ansible is what Red Hat would suggest, but something with an agent would probably be better.
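To make that split concrete, here’s a sketch of what each half looks like (every name in it is made up):

```sh
# identity and access policy live in FreeIPA
ipa user-add jdoe --first=John --last=Doe
ipa group-add-member sysadmins --users=jdoe
ipa hbacrule-add allow_sysadmins_ssh
ipa hbacrule-add-user allow_sysadmins_ssh --groups=sysadmins

# the GPO-ish part falls to config management, e.g. an ad-hoc Ansible push
ansible all -b -m ansible.builtin.template \
  -a "src=sshd_config.j2 dest=/etc/ssh/sshd_config"
```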


The most stable system is one that is out of support. No updates == No breakage! 😄


I wasn’t clear, and that seems to have caused some confusion. I was talking about the Linux kernel itself, and only the Linux kernel.
There are two sides to the Linux kernel’s interface: the internal APIs exposed to drivers and such, and the external syscalls exposed to user space. That’s what I was talking about.
All bets are off with 3rd party software. That’s just a general problem in software development. It’s not specific to Linux, and it’s why vendoring libraries is recommended.
This is why all the 3rd party software is frozen at a point-in-time with fixes backported in distros like Debian or RHEL. It fixes the problems of devs being mercurial. The distro is the SDK. It creates a stable base, and it works rather well.
Unfortunately, most software relies on libc and a compiler, both of which can be problems, and both of which are external to the Linux kernel. There’s not much that relies only on kernel syscalls.
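You can see that dependency for yourself (a sketch; paths and output vary by distro):

```sh
ldd /bin/ls          # lists libc.so.6 among the shared libraries

# a statically linked build bakes libc in instead of loading it at runtime
gcc -static -o hello hello.c
ldd ./hello          # prints "not a dynamic executable"

# strace shows the raw kernel syscalls, the one interface with the stability guarantee
strace -c ./hello    # per-syscall summary: write, exit_group, etc.
```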


Basically. Out-of-tree drivers are annoying without an LTS kernel.
There are also out-of-tree drivers which don’t get mainlined for one reason or another even though they are FOSS. OpenZFS has this problem, and now so does bcachefs.
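DKMS papers over it, but you end up rebuilding on every kernel bump (a sketch; the module and version numbers are just examples):

```sh
dkms status                  # e.g. "zfs/2.2.4, 6.8.0, x86_64: installed"
dkms autoinstall -k 6.9.1    # rebuild all registered modules for the new kernel
modprobe zfs                 # fails if the module didn't build against the new kernel APIs
```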


The user land API/ABI is stable to a fault in Linux. The kernel API/ABI is unstable.
Companies are cheap. They hired web devs and then tasked them with building desktop applications rather than hiring people to write native apps. They had a hammer and used it to fix every problem they had.
macOS is just as affected by Electron apps as Linux is.
Electron is horrible, but it does bring apps to many an OS once Chromium is ported.
Open protocols or open APIs from the company would fix the non-native app problem.


It is. It’s just not particularly good outside of an X11/Wayland environment.
I think this is being worked on, though.


How is this applicable to the comment? Companies never figured out how to charge rent for those.
Devs see home computers as a free resource, and the burden is on the consumer to buy a computer which runs their software.


Nah. Web devs will create even more bloated web pages to keep home computing in business.
For real though, most people don’t need that much computing power, and we reached the plateau 12 years ago. That’s why we’re seeing crypto and AI grifts happen. They recentralize decentralized systems. The elites are striking back.
You know the saying “information wants to be free; information wants to be expensive”? This is the expensive part, where people try to hoard knowledge by making it inaccessible to everyday people.


Awesome! Previously, Evolution was the only Linux client which supported Exchange, and Evolution is… well… 😕


That’s going on the list. My heart says I don’t need this, but my brain says I do.
The A-series would already be at a disadvantage compared to the M-series, since it’s designed for iPhones and the design parameters that entails.
The M-series Mac Pro was always for companies which were going to rack them and use them in render farms. Normal people were never its intended market. It was more of an Xserve successor.
Apple would need to design a different CPU for the Mac Pro, and the limited market doesn’t make it feasible. Deriving the M-series CPUs from the A-series limits what the designs can do.
There are rumors of a CPU split in the Apple lineup: the iPhone, iPad, iMac, Mac Mini, and MacBook get the A-series, while the MacBook Pro, Mac Studio, and Mac Pro get the M-series. That would make sense, and it might give them some room to expand the “Pro” procs.
I’ve thought about Doom, but I haven’t gotten around to trying it out. Finding the time to sit down and learn it hasn’t been a high priority.