I hardly noticed. When I look at the shit companies that are affected I’m happy I’m not their customer.
I keep seeing all this stuff all over Lemmy and FB about the AWS outage, and being in the northeast I don’t understand how it hasn’t affected me in the slightest, but I’ll take the W, I guess!
I’m in the northeast as well, I couldn’t watch TV last night thanks to this. Count your blessings I guess.
Lol no kidding. All my friends complaining about the outage are in RI/MA; I’m in VT. Not sure if I’m on the same servers as the rest of the East Coast? Don’t even know how to check, honestly.
I have ZERO sympathy for companies whose services are affected by this. Because seriously, fuck Amazon.
I can hear the smug grins on homelabbers’/self-hosters’ faces from here.
Oh look, the fediverse is still working.
You can share in the smug grin
Double grins in self-hosted Fedi instances
Not like their systems never have downtime.
The difference is I can do something about my downtime and fix it.
lol yeah I definitely have more downtime than AWS.
The main differences are:
- I usually control when that downtime is.
- I can inform literally every single person that uses it exactly what is down, why, and approximately for how long.
They’d type up a really angry reply, but they need to fix their router config real quick
Um, akshually it’s a DNS issue not a router issue.
I think.
It looks like a router issue. But it’s always a DNS issue
Saw a wonderful joke on bsky this morning in response to this
ICANN hear it cooooming in the aiiir tonight
I had surgery today and couldn’t pick up my meds at the pharmacy because my insurance uses AWS somewhere in the billing process. We had to pay out of pocket and pray we get reimbursed because they’re expensive. This took 6 phone calls to find out and overall, sucked. I didn’t think AWS going down could affect my damn insurance.
In any sane place, with sane laws, they would just have to approve everything until they fix their own shit.
Jeez, that’s messed up. I’m so sorry! I hope you’re able to get reimbursed without much more trouble, and I hope your recovery goes well!
It’s almost like we shouldn’t rely on just a few central sites. And that everything should be democratized and federated.
I2P
Basically, what if the entire internet was torrents, everyone was seeding/routing to everyone else, oh and also it’s more private/secure than Tor or VPNs?
Downside is it’s quite slow… but if it caught on more widely, that could be alleviated somewhat.
I think I’d really need to know how “somewhat” alleviated it is to have any interest, given the status quo is like, a second.
Uh also, double post but whatever:
You know what works fine on I2P?
Just oldschool HTML, with no fucking javascript, no fucking broken meta-media containers, no bazillions of advertisement systems that constitute 80% of the website’s actual ‘size’.
Yep, everything basically looks like MySpace or the ’90s web.
This is a good thing.
GIFs caught on initially because they are a very lightweight and efficient way to add a simple animated looping element to a webpage.
Now, everything is built for maximum webdev ease, which is also maximally bandwidth inefficient for the end user, so now we need mega server clusters everywhere, for everything.
But the existence of video hosting websites that work via p2p streaming shows that other ways of doing things like this are fundamentally viable… maybe not quite as snappy, but so, so much more efficient and less wasteful from a totally top-down perspective.
The internet was always meant to be decentralized, imo, and then the corpos ruined everything.
If only more companies copied McMaster-Carr’s approach https://dev.to/svsharma/the-surprising-tech-behind-mcmaster-carrs-blazing-fast-website-speed-bfc
So, roughly, the webdev equivalent of a gamedev properly ordering and optimizing how a scene is rendered.
You know, the kind of thing you would think is an industry standard, but it turns out the industry doesn’t optimize for optimization, it optimizes for current quarter profits.
Uh, no clue, the math on that would be very difficult to calculate, exceedingly complicated.
Have you ever been able to accurately predict the actual speed at which you’ll download a torrent that’s the size of a whole day’s worth of your regular internet usage?
It’s basically a dynamic mesh network; you could run the math on 100 different scenarios, get 100 different results, and also have no clue which scenario is more or less realistic.
The way I2P works is: step one, encrypt your traffic; step two, bundle that into a bigger packet along with the traffic of nodes network-near you, wrap that whole bundle in its own layer of encryption, and then send it somewhere else.
So, upside is, even if your packets are intercepted… it’s basically impossible to figure out which subpart of the bigger packet is whose.
You only have the keys to your part of that bigger packet.
Downside of all this is that all that packet bundling takes time, and routing is dynamically reconfigured, so… yeah, doing a first-principles estimate is… I dunno, find a chaos theory specialist for a more precise answer?
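If it helps, here’s a toy Python sketch of that layered-bundling idea (I2P calls it garlic routing). Everything here — the names, the symmetric Fernet cipher, the two-layer structure — is invented for illustration; the real protocol uses asymmetric crypto and multi-hop tunnels and is far more involved:

```python
# Toy illustration of layered "garlic" bundling -- NOT the real I2P protocol.
# Uses symmetric Fernet keys for simplicity; I2P uses asymmetric crypto,
# multi-hop tunnels, and per-hop key negotiation.
from cryptography.fernet import Fernet

# Step one: each participant encrypts their own message with their own key.
alice_key, bob_key = Fernet.generate_key(), Fernet.generate_key()
alice_msg = Fernet(alice_key).encrypt(b"alice's traffic")
bob_msg = Fernet(bob_key).encrypt(b"bob's traffic")

# Step two: a nearby node bundles several encrypted messages together
# and wraps the whole bundle in another layer of encryption.
tunnel_key = Fernet.generate_key()
bundle = Fernet(tunnel_key).encrypt(alice_msg + b"||" + bob_msg)

# An eavesdropper who captures 'bundle' sees one opaque blob. Even a node
# holding tunnel_key only recovers two more opaque blobs -- it can't read
# either message without the per-sender keys.
inner = Fernet(tunnel_key).decrypt(bundle)
print(Fernet(alice_key).decrypt(inner.split(b"||")[0]))  # only Alice can do this
```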
Possibly also worth mentioning: you can use I2P as basically something like Tor or a VPN to access the non-I2P net, the normal internet; you do this by using what is called an outproxy.
Theoretically an outproxy could just be handing all its packets right over to the NSA, but again, you’ve got that encrypted packet sausage going on, and the network is much more complex and distributed than the Tor network’s smaller number of more centralized nodes.
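If you want to poke at an outproxy yourself: as far as your software is concerned, it’s just a local HTTP proxy. A minimal sketch, assuming the requests package and a local I2P/i2pd router already running (127.0.0.1:4444 is the common default for the Java router’s HTTP proxy; adjust to your config):

```python
# Minimal sketch: send a clearnet request out through a running I2P
# router's local HTTP proxy. Port 4444 is an assumption -- check your setup.
import requests

resp = requests.get(
    "http://example.com",
    proxies={"http": "http://127.0.0.1:4444"},
    timeout=120,  # outproxy round-trips are slow; be generous
)
print(resp.status_code)
```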
Could possibly run a benchmark if you can convince a few friends to use it: all visit the same site at the same time a few times, see how long it takes, average it, then do the same without I2P to compare.
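The timing half of that is simple enough; a rough sketch using the same local-proxy assumption as above (URL and port are placeholders):

```python
# Rough benchmark sketch for the comparison described above: average
# fetch time for one URL, direct vs. through a local I2P HTTP proxy.
import time
import requests

URL = "http://example.com"               # placeholder: whatever site you pick
I2P = {"http": "http://127.0.0.1:4444"}  # assumed default proxy port

def avg_seconds(proxies=None, runs=5):
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        requests.get(URL, proxies=proxies, timeout=300)
        total += time.perf_counter() - start
    return total / runs

print(f"direct: {avg_seconds():.2f}s  via I2P: {avg_seconds(I2P):.2f}s")
```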
Uh, no, unless by ‘a few friends’ you mean something like 500,000 people.
I2P is significantly slower than not-I2P at visiting not-I2P websites, something roughly between 10 and 100x slower.
What was being asked was how much more agile the I2P network itself would become, if many more people used it.
With I2P, more users = more nodes to route packets through = smoother operation for the whole network.
This is the inverse of the typical mass-network paradigm, where more users = you have to throw more and more servers at handling requests, or you get… well, basically what is still going on with Amazon’s us-east-1 right now: functionally, internet brownouts.
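A toy back-of-the-envelope just to illustrate the direction of that scaling — all the numbers are made up, and this says nothing about real I2P throughput:

```python
# Toy model of the scaling *direction* only -- every number is invented.
# In a p2p overlay, each new user adds relay capacity along with demand,
# so per-node load stays flat; a fixed server fleet's per-server load
# grows linearly with users instead.
HOPS = 3        # assumed relays per message
SERVERS = 100   # assumed fixed fleet size

def per_node_p2p(users, traffic=1.0):
    return users * traffic * HOPS / users   # constant: capacity scales with demand

def per_server_fixed(users, traffic=1.0):
    return users * traffic / SERVERS        # grows without bound

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10,} users -> p2p node: {per_node_p2p(n):.0f}, "
          f"server: {per_server_fixed(n):,.0f}")
```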
Precisely how much faster I2P would get with, say, a million more daily users — that’s very complex to even try to estimate.
But visiting a normal internet website via I2P is always going to be much slower than via a VPN or a direct connection, because when I2P does that, it’s basically just acting as a giant maze of chained proxies, plus a way of encrypting packets that nothing else does.
On the flip side of that, you can’t access a .i2p site unless you are using I2P, like how you can’t access a .onion site unless you are using Tor.
Analogous to how… you can’t download a torrent by just downloading the .torrent file itself… you need a whole program that can use that file to connect to peers, and that program is what downloads the data.
What do you recommend to read up on it more? I’ve read the wiki and this… I’m wondering if there’s more to understand about browsing or connecting with content or like-minded people.
I am not sure about further ‘reading’, but the youtuber Mental Outlaw has a number of videos that do a pretty decent job of introducing and explaining I2P as a concept, as well as some videos that walk you through at least one way to do the actual setup process.
Though some of those may be slightly out of date by now; those vids are, I think, a few years old at this point.
There’s also the difference between I2P proper, which I believe is still done wholly in Java, and i2pd, which is basically the same I2P but rewritten in C++.
And, depending on how you’re going to want to use I2P on your system… as in uh, just a shunt or mode for a specific program, vs trying to reroute your whole OS’s traffic through it… that gets messy and complicated fast, depending on your setup, and exactly what you want to do.
Also, depending on your ISP / router situation, you may or may not have to futz with opening a port for I2P on your router firewall.
I’m wondering the same thing! If you get an answer, would you mind letting me know, if it’s not too much trouble? I’ve read a lot about it, but it still feels like I’m missing/not understanding most of it. That may just be a ‘me & my crappy brain’ issue, though.
I literally noticed zero difference. But it sounds bad. Have they tried shoving more AI in there to fix the problem?
75%? Those are rookie numbers. Gotta get that up around 200, 250%, then we’ll really get things started.
You’re like one of those movie guys who gets days into a zombie apocalypse before realizing anything’s wrong
My DNS is a Lightsail instance out west, no issue.
As someone that works at another AWS-dependent org, it also took us out. It was awful. Nothing I could do on my end. Why the fuck didn’t it get rolled back immediately? Why did it spread to a second region? Fucking idiots on the big team’s side.
I got paged 140 times between 12 and 4 am PDT. Then there was another incident where I had to hand it off at 7am because I needed fucking sleep, and they handled it until 1pm. I love my team, but it’s so awful that this was even able to happen. All of our fuck-ups take 5-30 mins to roll back or manually intervene on. This took them 2+ hours, and it was painful. Then it HAPPENED AGAIN! Like, what the fuck.
This is a good reason to start investing in multi region architecture at some point.
Not trying to be smug here or anything, but we updated a single config value, made a PR, merged the change, and we were switched over to a different region in a few minutes. Smooth sailing after that.
(This is still dependent to some degree on AWS in order to actually execute the failover, something we’re mulling over how to solve)
Now, our work demands we invest in such things; we’re even investing in multi-cloud (an actual nightmare). Not everyone can do this, and some systems are just not built to be able to, but if it’s within reach it’s probably worth it.
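For flavor, the sort of thing that “single config value” change can look like — everything here is a hypothetical sketch, not our actual code; the names (ACTIVE_REGION, client) are invented:

```python
# Hypothetical sketch of a one-value region failover -- illustration only.
import boto3

ACTIVE_REGION = "us-west-2"  # the one-line change: was "us-east-1"

def client(service: str):
    # Every service client reads the region from one place, so flipping
    # ACTIVE_REGION (and redeploying) moves all calls to the new region.
    return boto3.client(service, region_name=ACTIVE_REGION)

s3 = client("s3")  # now talks to us-west-2 endpoints
```

(As noted above, the failover execution itself can still depend on the cloud provider being healthy enough to serve the new region.)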
Last night from 12-4am, almost every region was impacted, so it didn’t help that much.
But we do have failovers that customers need to activate to just start working in another region.
But our canaries and infrastructure alarms can’t do that, since they exist to alert on that specific region.
Oof.
your favorite sites
looks at list
nop
I get what you’re saying, and agree, but there were many more. Ancestry.com and findagrave.com were also down (while I’m in the middle of an ancestry fact-finding trip). It really was massive.
Ancestry.com and findagrave.com are kinda the funniest examples that could be picked from the sites affected today. Obviously there are the parallels with AWS being dead today, but I also can’t imagine those sites get so many updates that being away from them for a while means missing anything timely. I totally hate being in the groove when something out of my control impedes my workflow, don’t get me wrong, and can totally see how the outages would be annoying.
Only site that got me was Riverside ☹️
I don’t even want to hear an argument for moving back on prem with how badly Broadcom/VMware ripped our eyes out this year. 350% increase over 2 years ago, and I still have to buy the hardware, secure it in a room, power it, buy redundant Internet and networking equipment, get a backup facility, buy and test a generator/UPS, and condition the damn air. Oh then every few years we have to switch out all the hardware when it stops getting vendor support.
At least everyone was all in the same boat today, and we all know what was broken.
Moving to Nutanix soon. Love their product. Proxmox looks good on paper too, just not mature enough in the enterprise to bet my paycheck on it.
Cloud infrastructure is expensive.
We’re a year or so into our AWS migration, but will have a presence on prem for years to come. Broadcom has seen our last dollar, so for what remains on prem, we will be moving to a different hypervisor. I was kinda hoping that Proxmox could be the one, but it may not shake out that way.
Lucky me needs Proxmox only for self-hosting and loves it :)
AWS salespeople, meeting customers today.
Today should have been easy for them tbh. “See? That’s why you should pay us more money to have active infra on our other regions to failover to!”
Can’t even do any of my work for college until the outage is over
Same. I’m sitting in my college’s library right now trying to work, and the outage threw a wrench in all of my plans. I’m thankful I downloaded the files to my hard drive, though, so I can do most of the work on pen and paper.
Yeah. I’m really glad that I already finished all my work that was due today. Otherwise I’d be screwed. I was trying to get a head start on some stuff due Wednesday, but I guess I can’t until AWS is back up.
Worst part for me was when the rewards app at the tea shop wouldn’t work.
I hated it when I was trying to brake to avoid hitting that pedestrian at the crosswalk, and the brake pedal input does a roundtrip to AWS before activating the wheel brakes. For user statistics, for my safety. Not at all for AI training, we swear.
Oh well, had no choice but to drive-by that old hag.
What kind of tea did you get?
Iced combo of lemonade, plain, and blueberry. Crazy refreshing.
I think that’s called juice lol
sounds good though
“Sir. Sir? Sir. (sigh) have you tried turning the internet off and on again?”