

I also use KeepassXC, and it’s great. I’m interested in setting up Syncthing between my Android, Linux desktop, and NAS. Do you have any tips or articles/resources that you used to set it up?


My employer leases our computers for 3-5 years. I get a new model when my lease runs out. I don’t really mind the guaranteed refresh except having to move all my stuff over. I would be way more pissed if they moved to BYOD.


I love it, but it does not work for everyone
I have my own separate office in my basement with plenty of privacy. I stick to a normal work schedule. And perhaps often overlooked: my team is all remote as well.
The last point is important: if your team is both on and off site, it can be difficult to make sure everyone is included in all the casual information sharing. My team uses a shared Teams chat as a low-friction water cooler, which works great for us.
We often jump on a voice call with screen sharing to work together. It works even better than in person, because we can each have our own computer instead of one person looking over the other’s shoulder.
If you have a good manager, they may be able to mitigate this, but it’s more difficult than it sounds. If not handled correctly, this can lead to team segmentation and isolation. Working hybrid can sometimes get around this while still being flexible enough that people can wfh when they need to. For any business it needs to be the decision of the direct managers so they can decide what is right for their team.
That all said, I love not having the 1.5hr commute anymore, no walk-in interruptions, being able to run errands or go to appointments without taking the whole day off etc. It’s a major part of my job satisfaction.
If your commute is reasonable and you get satisfaction from going to the office then maybe you’re happier on site or hybrid. Full time wfh can be lonely at times.
If you hate going in to the office, make sure your environment at home is set up so you can focus and work as effectively as at the office and give it a shot. Talk to your manager. You may need to convince them it’s a good idea first.


This was the first thing that came to mind when they mentioned a program. I very rarely create programs that don’t need to be updated later, unless they’re single use throwaways.
I’ve inherited support for programs that we had lost the source code for, though, and that sucks.
So that’s a no from me.
I see. My concern was with the security scanning tools often put on computers by enterprise IT departments, but it sounds like that’s not the case here.
In your situation, assuming you’re not finding what you seek with journalctl, I think I would use a tool like vmstat or sar to collect periodic snapshots of CPU, memory, and I/O. You can tell it to collect data every X seconds and tee that to a file. After you reboot, you can see what happened leading up to the crash. You should be able to import the data into a spreadsheet or something for analysis, but it’s not very intuitive and you’ll need to consult the man pages for the options and how to interpret them.
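A minimal sketch of that vmstat-to-file approach (the log path, interval, and sample count are just examples):

```shell
#!/bin/sh
# Log timestamped vmstat samples for post-crash analysis.
# -t adds a timestamp column (procps-ng vmstat).
# The path is just an example; in practice use a longer interval and
# drop the sample count so it keeps logging until the machine goes down.
LOG="$HOME/vmstat-$(date +%Y%m%d).log"
vmstat -t 1 3 | tee -a "$LOG"   # 1-second interval, 3 samples
```

After the crash, the tail of the log shows the last readings before the machine went down.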
There are a lot of good suggestions in this thread. I would lean towards a hardware or driver issue, maybe bad RAM. Unfortunately these things take a lot of trial and error to figure out.
It may not be the raw RAM usage.
My first suspect is the Windows VM, especially if it’s running enterprise security software. 4GB is probably not enough for modern Windows, and it could be trying to use its page file, thrashing your disk in the process.
Are you able to collect some data from System Monitor on paging and disk activity? That could help you narrow it down. You can use btop for a quick terminal option if your GUI is unresponsive (assuming you could switch to a console). vmstat is another option that you can run in the background to collect stats over time, but it’s not user friendly.
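For example, a quick way to check for swap thrashing from a text console (assuming the procps-ng tools are installed):

```shell
# From a text console (Ctrl-Alt-F3 or similar):
free -h        # how much swap is in use right now
vmstat 2 5     # five samples, two seconds apart; sustained non-zero
               # values in the si/so columns mean pages are constantly
               # moving to and from disk, i.e. the machine is thrashing
```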


I can only think of two ways off the top of my head:
Both sound pretty brittle to me, though, and I haven’t tested this specifically.
100% this. I was going to post the same thing. But I will add that in the US, if you use 24-hour time, most people just refer to it as military time. If you tell them the difference, they don’t really care.
In the US 24h is virtually never used in a civil context, but in scientific, engineering, and medical contexts it is ubiquitous.
This is what source maps are for. With the right tools you can debug the original source instead of the minified version.
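For example, with TypeScript the compiler can emit source maps next to the generated JavaScript (a minimal tsconfig.json sketch; the outDir is just an example):

```jsonc
{
  "compilerOptions": {
    // Emit a .js.map next to each compiled .js so debuggers can
    // map stack traces and breakpoints back to the .ts source.
    "sourceMap": true,
    "outDir": "dist"
  }
}
```

Browsers locate the map through the `//# sourceMappingURL=` comment appended to the compiled output, and dev tools then show the original source in the debugger.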
I have recovered many times from a broken graphical session in Linux by switching to a console with Ctrl-Alt-Fn, logging in, and either killing the offending program or just rebooting gracefully.
In Windows my last resort before the nuclear power button is Task Manager, via Ctrl-Shift-Esc or Ctrl-Alt-Delete.


Minor correction: IPv6 uses 128-bit addresses.


Both MySQL and MariaDB are named after the developer’s daughters (My and Maria).
I read this as Doge Ram at first.
Tie it to your internet bandwidth usage, so that the bulb starts dimming when utilization goes up and maybe flicker a bit, as if you’re drawing too much power off the grid when you’re downloading stuff.
Thank you for the recommendation. I would consider it again if my day job switched to Linux (unlikely).
I did try Rider on Linux a while back, but just couldn’t get my head around it. I’ve become too used to Visual Studio on Windows (with Resharper).
I don’t do a lot of C# outside of my day job, though, so VS Code is fine for my uses.
Unfortunately I can’t help you there. I just use plain old KDE Plasma on Fedora. If your favorite code editor supports the Language Server Protocol (LSP), you can probably get it to do code completion for C# one way or another. Vim, Neovim, Kate, and many others do.
I just use VS Code with C# extensions on Linux. It works fine. I also use Vim with LSP support for C# sometimes.
If you want more, you may also want to check out Rider from JetBrains.


Same. I’m lucky enough to have two within driving distance. I’m genuinely worried about them staying in business if PC building takes a nosedive thanks to the RAM/SSD prices.


To add to that: health checks in Docker containers are mostly for self-healing. Think of a system where you have a web app running in many separate containers across some number of nodes. You want to know when one container has become too slow or unresponsive so it can be restarted before the rest of the containers are overwhelmed, causing more serious downtime. A health check lets the container platform (Docker Swarm, or an auto-heal watcher) restart the container without manual intervention. You can configure it to give up if it restarts too many times, and then you would have other systems (like a load balancer) direct traffic away from the failed subsystems.
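A minimal sketch of such a check in a Dockerfile (the endpoint, port, and timings are just example values):

```dockerfile
# Probe the app every 30s; after 3 consecutive failures the container
# is marked unhealthy. /healthz on port 8080 is a hypothetical endpoint.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost:8080/healthz || exit 1
```

Note that a plain `docker run` only flips the container’s status to unhealthy; it takes an orchestrator (e.g. Docker Swarm) or a watcher container to act on that status and actually restart it.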
It’s useful to remember that containers are “cattle not pets”, so a restart or shutdown of a container is a “business as usual” event and things should continue to run in a distributed system.
That is helpful, thank you! I will look into the master server option. I can spin up Docker containers on the NAS.