Laboratory planner by day, toddler parent by night, enthusiastic everything-hobbyist in the thirty minutes a day I get to myself.

  • 2 Posts
  • 54 Comments
Joined 1 year ago
Cake day: July 31st, 2023

  • In that case (as is the case with most games), the near-worst-case scenario is that you’re no worse off trusting Valve with the management of item data than you would be if it lived on a public blockchain. Why? Because those items are valueless outside the context of the commercial game they’re used in. If Valve shut down CS:GO tomorrow, owning your skins as digital assets on a blockchain wouldn’t give you any more protection than the current status quo, because those skins depend entirely on the game itself to be used and viewed. It’d be akin to holding stock certificates for a company that’s already gone bankrupt and been liquidated: you have a token proving ownership of something that no longer exists.

    Sure, there’s the edge case that if your Steam account got nuked from orbit by Gaben himself, along with all its purchase and trading history, you could still cash out on your skin collection. Conversely, having Valve – which, early VAC-ban wonkiness notwithstanding, has proven itself a generally trustworthy operator of a digital games storefront for a couple of decades now – hold the master database means that if your account got hacked and your stuff was shifted off to other accounts for profit, it’s much easier for Valve support to simply unwind those transactions and return your items to you. Infamously, reversing a fraudulent transaction on a blockchain ledger often requires forking the blockchain.


  • The idea has merit, in theory – but in practice, in the vast majority of cases, having a trusted regulator managing the system, who can proactively step in to block or unwind suspicious activity, turns out to be vastly preferable to the “code is law” status quo of most blockchain implementations. Not to mention most potential applications really need a mechanism for transactions to clear in seconds, rather than minutes to days, and it’d be preferable if they didn’t need to boil the oceans dry in the process of doing so.

    If I were really reaching, I could maybe imagine a valid use case for, say, a hypothetical federated open-source game that needed a trusted way for every node to validate the creation and trading of loot and items – something that could protect against cheating nodes duping items, along the lines of the toy sketch below. But that’s insanely niche, and for nearly every other use case a database held by a trusted entity is faster, simpler, safer, more efficient, and easier to manage.
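    To be clear about what I’m picturing (this is my own toy illustration, not any real project – every name in it is made up): each node keeps an append-only, hash-chained log of item events and replays it independently to reject duplicate mints and trades of items that were never minted.

```python
# Toy sketch of federated item validation: a hash-chained, append-only
# event log that any node can replay on its own. Hypothetical design
# for illustration only.
import hashlib
import json
from dataclasses import dataclass

@dataclass
class ItemEvent:
    kind: str       # "mint" (create item) or "trade" (change owner)
    item_id: str
    owner: str      # owner after this event applies
    prev_hash: str  # digest of the previous event, linking the chain

    def digest(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def validate(chain: list[ItemEvent]) -> bool:
    """Replay the log: reject broken links, duplicate mints,
    and trades of items that don't exist."""
    owners: dict[str, str] = {}
    prev = "genesis"
    for ev in chain:
        if ev.prev_hash != prev:
            return False  # log was tampered with or reordered
        if ev.kind == "mint" and ev.item_id in owners:
            return False  # a cheating node tried to dupe an item
        if ev.kind == "trade" and ev.item_id not in owners:
            return False  # trading an item that was never minted
        owners[ev.item_id] = ev.owner
        prev = ev.digest()
    return True

# Honest history validates; a second mint of the same item doesn't.
e1 = ItemEvent("mint", "sword-001", "alice", "genesis")
e2 = ItemEvent("trade", "sword-001", "bob", e1.digest())
assert validate([e1, e2])
assert not validate([e1, e2, ItemEvent("mint", "sword-001", "mallory", e2.digest())])
```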





  • I agree, this is a good use of the live-service model to improve the gameplay experience. Previous entries in the Flight Simulator series had people purchase and download static map data for selected regions, and it was a real pain in the butt – and expensive, too. Even with FS2020 there is a burgeoning market for airport and scenery packs that offer more detail and verisimilitude than Asobo’s (admittedly still pretty good) approach of augmenting aerial and satellite imagery with AI can provide.

    Bottom line, though, simulator hobbyists have a much different sense of what kind of costs are reasonable for their games. If you’re already several grand deep on your sim rig, a couple hundred for more RAM or a few bucks a month for scenery updates isn’t any big deal to you.





  • I did a little digging, and it seems there’s a tiny kernel of fact at the core of this giant turd of a hype piece: they electrified a little spur line from Berlin to the new German Tesla factory using a battery-electric trainset. That’s not a terrible solution for a very short branch line that presumably doesn’t need frequent all-day service, even if it’s a bit janky compared to overhead lines. But hand that off to the overworked, underpaid twenty-two-year-old gig worker they’ve got doing “editing” at Yahoo for two bucks an article, and I guess it turns into “world-first electric wonder train amazes!”

    For a second, though, I read the headline and wondered if Musk and co. had finally looped all the way around to reinventing commuter rail from first principles after all these years of trying to “disrupt” it with bullshit ideas like Hyperloop and Tunnels, But Dumber.




  • Right now Intel and AMD have less to fear from Apple than they do from Qualcomm. The people who can do what they need to do on a Mac, and want to, are already doing it; it’s businesses locked into the Windows ecosystem that drive the bulk of their laptop sales right now, and ARM laptops running Windows are the main threat to that in the short term.

    If going wider and integrating more coprocessors gets them closer to matching Apple Silicon in performance per watt, that’s great, but Apple snatching up their traditional PC market sector is a fairly distant threat in comparison.



  • Quite sure – one game I’ve been playing lately (and the exception to the lack of shooters in my portfolio) is Selaco, so I ought to have noticed by now.

    There’s a very slight difference in smoothness when I’m rapidly waving a mouse cursor around on one screen versus the other, but it’s hardly the night-and-day difference that going from 30fps to 60fps was back in Ye Olden Days, and watching a small, fast-moving, high-contrast object doesn’t make up the bulk of gameplay in anything I play these days.



  • At launch the 360 was on par graphically with contemporary high-end GPUs, you’re right. By the midpoint of its seven-year lifespan, though, it was getting outclassed by midrange PC hardware. You’ve got to factor in the insanely long refresh cycles of consoles, starting with the sixth and seventh generations, when you talk about processing power. Sony and Microsoft have tried to fix this with mid-cycle refresh consoles, but I think that has honestly hurt more than helped, since it breaks the basic promise of console gaming: you buy the hardware and you’re promised a consistent experience with it for the whole lifecycle. Giving developers multiple performance targets to aim for complicates development and takes away from the consumer appeal.


  • Eh… consoles used to be horribly crippled compared to a dedicated gaming PC of the same era, but people were more lenient about it because TVs were low-res and the hardware was vastly cheaper. Do you remember Perfect Dark multiplayer on the N64, for instance? I do, and it was a slideshow – which didn’t stop the game from being lauded as the apex of console shooters at the time. I remember Xbox 360 flagship titles upscaling from sub-720p resolutions in order to maintain a consistent 30fps.

    The console model has always been cheap hardware masked by lenient output resolutions and a less discerning player base. Only in the era of 4K televisions and ubiquitous crossplay with PC has that become a problem.


  • Might just be my middle-aged eyes, but I recently went from a 75Hz monitor to a 160Hz one and I’ll be damned if I can see the difference in motion. Granted, I don’t play much in the way of twitch-style shooters anymore, but for me the threshold of visual smoothness is closer to 60Hz than the bonkers 240Hz+ refresh rates that current OLEDs are pushing.

    I’ll agree that 30fps is pretty marginal for any sort of action gameplay, though historically console players have been more forgiving of mediocre performance in service of more eye candy.
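    To put rough numbers on that (quick napkin math on my part, nothing rigorous): what the eye actually registers is the change in frame time, not the refresh number, and the gains shrink fast as the rate climbs.

```python
# Napkin math: the perceptual gap between refresh rates is the
# difference in how long each frame sits on screen.
def frame_time_ms(hz: float) -> float:
    """Milliseconds each frame is displayed at a given refresh rate."""
    return 1000.0 / hz

for low, high in [(30, 60), (60, 75), (75, 160), (160, 240)]:
    delta = frame_time_ms(low) - frame_time_ms(high)
    print(f"{low:>3}Hz -> {high:>3}Hz: frames shorten by {delta:.2f} ms")

# Prints:
#  30Hz ->  60Hz: frames shorten by 16.67 ms
#  60Hz ->  75Hz: frames shorten by 3.33 ms
#  75Hz -> 160Hz: frames shorten by 7.08 ms
# 160Hz -> 240Hz: frames shorten by 2.08 ms
```

    Going 30 to 60 buys you almost 17ms per frame; 75 to 160 buys well under half that, which squares with it being hard to spot.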


  • The problem is that the private sector faces the same pressures about the appearance of failure. Imagine if Boeing adopted the SpaceX approach now and started blowing up Starliner prototypes on a monthly basis to see what they could learn. How badly would that play in the press? How quickly would their stock price tank? How long would the people responsible for that direction be able to hold on to their jobs before the board forced them out in favor of somebody who’d take them back to the conservative approach?

    Heck, even SpaceX got suddenly cagey about their first-stage return attempts failing the moment they started offering stakes to outside investors, whereas previously they’d celebrated the attempts that didn’t quite work. Look as well at how the press has reacted to Starship’s failures, even though the program has been making progress from launch to launch at a much greater pace than Falcon did initially. The fact of the matter is that SpaceX’s initial success-through-informative-failure approach only worked because it was bankrolled entirely by one weird dude with cubic dollars to burn and a personal willingness to accept those failures. That’s not the case for many others.


  • NASA in-house projects were historically expensive because they took the approach that they were building single-digit quantities of everything – very nearly every vehicle was bespoke – and because failure was a death sentence politically, they couldn’t blow things up and iterate quickly. Everything had to be studied and reviewed and re-reviewed and then non-destructively tested and retested and integration-tested and dry-rehearsed and wet-rehearsed and debriefed and revised and retested, ad infinitum. That’s arguably what you want in something like a billion-dollar space telescope that you only need one of and that has to work right the first time, but the lesson of SpaceX is that as long as you aren’t afraid of failure you can start cheap and cheerful, make mistakes, and learn more from those mistakes than you would by packing a dozen layers of bureaucracy into a QC program and having them all spitball hypothetical failure modes for months.

    Boeing, ULA, and the rest of the old-space crew are so used to doing things the old way that they struggle culturally to make the adaptations needed to compete with SpaceX on price – and then, in Boeing’s case, the MBAs also decided that if they stopped doing all that pesky engineering analysis and QA/QC work, they could spend all that labor cost on stock buybacks instead.