• 1 Post
  • 1.03K Comments
Joined 3 years ago
Cake day: June 12th, 2023

  • I think you chose a poor example.

    When I said “long names” I wasn’t implying meaningless ones.

    Sooo, that example wasn’t exactly “contrived” - it’s based on a standard I see where I work.

    DB - it's a database!
    DW - and a data warehouse at that!
    ORCL - It's an Oracle database!
    HHI - Application or team using / managing this database
    P - Production (T for Test - love the 1 char difference between names!)
    01 - There may be more than one.
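
    To make that concrete, here’s a hypothetical sketch (the field names and widths here are my assumptions, not the actual standard) showing that every part of such a name is just structured metadata packed into a string:

    ```python
    # Hypothetical parser for names like "DBDWORCLHHIP01".
    # Field names and widths are assumptions based on the breakdown above.
    FIELDS = [
        ("type", 2),     # DB   - it's a database!
        ("subtype", 2),  # DW   - a data warehouse at that!
        ("engine", 4),   # ORCL - it's an Oracle database!
        ("team", 3),     # HHI  - application/team managing this database
        ("env", 1),      # P    - Production (T for Test)
        ("index", 2),    # 01   - there may be more than one
    ]

    def parse_name(name):
        """Split a fixed-width server name into its metadata fields."""
        out, pos = {}, 0
        for field, width in FIELDS:
            out[field] = name[pos:pos + width]
            pos += width
        return out

    print(parse_name("DBDWORCLHHIP01"))
    ```

    All of which could just as well live in tags on the resource instead of the hostname.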
    

    This is more what I’m arguing against - embedding metadata about the thing into its name. Especially when all of that information is available in AWS metadata.

    [Site][service][Rack] makes sense for on-premise stuff - no argument there.

    I’m just saying long names don’t have to be obtuse or confusing.

    Agree


  • In a business with tens of thousands of servers, it makes sense to have long complicated names.

    I’m actually not convinced of this approach. It’s one of those things that makes perfect logical sense when you say it - but in practice “DBDWWHORCLHHIP01” is just as meaningless as “Hercules”. And it’s a lot more difficult to say, remember and differentiate from “DBDWWHORCLHHID01”. You may as well just use UUIDs at that point.

    Humans are really good at associating names with things. It’s why people have names. We don’t call people “AMCAM601W” for a reason. Even in conversations you don’t rattle off the long initialism names of systems - you say “The <product> database”.



  • atzanteol@sh.itjust.works to Selfhosted@lemmy.world: How do you document your setup?
    1 day ago

    I get that - it’s difficult to see the point in it until you’ve struggled along without it. Especially as a beginner, since you don’t have a strong sense of what problems you will encounter and how these tools solve them.

    At some point the learning curve for IaC becomes worth the investment. I actually put off learning k8s myself for some time because it was “too complicated” and docker-compose worked just fine for me. And now that I’ve spent time learning it I converted over very quickly and wouldn’t go back… It’s much easier to maintain, monitor and set up new services now.

    Depending on your environment something like Ansible might be a good place to start. You can begin with just a simple playbook that does an “apt update && apt upgrade” on all your systems, and then start using it to push out standard configurations, install software, create users, etc. The benefit pays off in time.

    For example - recently (yesterday) I was able to install Grafana Alloy on a half-dozen systems trivially because I have a set of Ansible scripts that manage my systems. It literally took 10 minutes. All servers have the app installed, running, and using the same configuration. And I can modify that configuration across all those systems just as easily. It’s very powerful and reduces “drift”, where some systems are configured incorrectly and over time you forget “which one is the correct one?” For me the “correct one” is the one in source control.
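
    For reference, that simple “apt update && apt upgrade” starting playbook might look something like this (a minimal sketch - the file name is my placeholder, and “all” just means every host in your inventory):

    ```yaml
    # upgrade.yml - minimal starting playbook: apt update && apt upgrade
    - name: Update and upgrade all Debian/Ubuntu hosts
      hosts: all
      become: true
      tasks:
        - name: Update apt cache and upgrade all packages
          ansible.builtin.apt:
            update_cache: true
            upgrade: dist
    ```

    Run it with something like `ansible-playbook -i inventory upgrade.yml` and you’ve got every machine patched in one command.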







  • So going back to the title, what to study? Maybe some specific book? Private classes/courses?

    Networking. If you want to understand the reasoning behind things this is where you start. A good foundation in TCP/IP, the seven-layer OSI model, as well as basic network protocols (DNS, DHCP, HTTP, etc.) will go a long way toward helping you troubleshoot when things go wrong.
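
    If you want a hands-on feel for a couple of those layers, a few lines of Python’s standard socket library can demo a TCP connection carrying a bare-bones HTTP exchange entirely on localhost - just an illustrative sketch, no real web server or network required:

    ```python
    import socket
    import threading

    # Minimal TCP "server": accept one connection, read the request,
    # answer with a canned HTTP response. Runs entirely on localhost.
    def tiny_server(listener):
        conn, _addr = listener.accept()
        request = conn.recv(1024)  # the raw HTTP request bytes
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))  # port 0 = pick any free port
    listener.listen(1)
    port = listener.getsockname()[1]

    t = threading.Thread(target=tiny_server, args=(listener,))
    t.start()

    # TCP layer: open a connection; HTTP layer: send a request line + headers.
    client = socket.create_connection(("127.0.0.1", port))
    client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
    response = client.recv(1024)
    client.close()
    t.join()
    listener.close()

    print(response.decode().split("\r\n")[0])  # the HTTP status line
    ```

    Once you’ve seen the request/response bytes yourself, tools like curl, dig and tcpdump stop feeling like magic.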

    Maybe throw in some operating systems study as well for when you start to use docker.


  • atzanteol@sh.itjust.works to Selfhosted@lemmy.world: Cheapest 16x4tb NAS
    23 days ago

    You’re talking about a lot of storage - it might be worth investing in some low-end server hardware. A Dell tower or something, maybe one off eBay if you’re looking to cut costs.

    I picked up a PowerEdge T110II a long time ago and it’s been… flawless. Just a simple server with a 4x4TB RAID5. No hardware problems (aside from occasional disk failures over the years), easy to manage. It costs a bit more - but server hardware is often just more reliable and for a NAS that’s job #1. This server just runs.

    I just upgraded the memory in it to 32GB for ~$100USD. Before that it had 8GB. I needed more for restic doing backups. I probably could have gotten away with 16GB but I figured I’d max it out for that price.