

Because it’s not actually about the kids, the kids are just the excuse. That’s all they ever are.


While this might not all be explicit lobbying, they are being lobbied, and there’s an exceptional amount of money being put into it in many ways. And if you don’t think millions and millions of dollars are going into things like super PACs to support candidates who will back it via side conversations, or other campaign finance tricks around the world, then you just don’t know how things work in the US.
Here’s an example of Yoti spending money supporting the Texas laws.
https://www.supremecourt.gov/DocketPDF/23/23-1122/332630/20241122163435761_23-1122 Amicus Brief.pdf
Yoti’s response to COPPA:
https://downloads.regulations.gov/FTC-2024-0003-0192/attachment_1.pdf
You really think they spend all that time writing these up and they aren’t having side conversations or nudge-nudge-wink-wink deals going on? The US has a Supreme Court justice, Clarence Thomas, blatantly taking money from people, and nothing happens.
This is a worldwide campaign to support it by the companies who built the tools. There’s an official industry lobby group as well: https://avpassociation.com/
Edit: This is an actual lobbying record relating to the Kids Online Safety Act: https://lda.senate.gov/filings/public/filing/edf57428-13bb-4f91-89af-137bbe898e91/print/
Edit: Here’s Onfido (now Entrust) with an explicit lobbying filing: https://lda.senate.gov/filings/public/filing/251cf218-5db4-49bd-8b12-cf3cfa950d15/print/
Edit: Onfido lobbying again in the EU over many years: https://www.lobbyfacts.eu/datacard/onfido?rid=193458043836-76&sid=157476
Edit: This is ID.me lobbying the US government. I’m still getting the hang of the search tool (which isn’t great), but it already shows millions of dollars in identity verification lobbying. And just think of how much money is being spent beyond what they legally disclosed. Lots.


I think you underestimate how much lobbying these firms are doing. It’s a tens-of-billions-of-dollars industry, and it’s up and coming, which is why this is suddenly starting to happen everywhere. It’s not happening because it’s suddenly a good idea all over the world.
Huge money involved.


It absolutely is a direct cause.
These ID verification companies have been lobbying governments to mandate this so they can earn $$$$


I’ve done some climbing before (with ropes), and it’s kinda fun being on a slab like that where you can lean in and not always need to have a hand on the wall. But fuck that guy and no ropes. Even watching recorded videos of him doing things makes me incredibly nervous and uncomfortable. I don’t think we should be encouraging what he does the way we do.


“Most opinions on this topic are very much so based on vibes rather than real experience.”
Very much so. You can tell from the way certain people talk about it that they’ve never actually used it in any meaningful way.
I don’t think LLMs will be doing all the programming in a few years. They do keep getting better, but hallucinations are baked into how the system is designed, and unless they can solve that, it does feel like they are starting to reach a plateau. If they can solve it, I don’t think it would be a token-based LLM as we know it today either; it would be a wholly different thing that we would need to reassess.
Also, some employers won’t want to use it for fear of copyright infringement issues; others won’t want to use it as a means to stay pure. Did you see any of the Clair Obscur: Expedition 33 stuff over one AI-generated placeholder texture that was accidentally left in the game and promptly removed? They’ve now said they just won’t use AI at all so they can remain pure.
I think learning to program is still a really good option, even if it might be a little harder in the near future to get a job than before. In an ideal situation, hopefully you’ve found something you want to build for yourself so you can just keep learning off of that while benefiting from it; I find that usually works better motivation-wise than building something random you don’t have an attachment to.
That also gives you a project to talk about in interviews: how you built it, what decisions you made while building it, problems you encountered, how you tackled those problems, the steps to make it publicly available, and so on.
Just don’t be too reliant on AI-generated code while learning; like I said with the website it helped me make, I didn’t learn much. You want to build your skills so you know how to use it as a tool, but don’t need to use it at all.


So I’m a developer, I do mobile apps, and I do use Claude/GPT.
I could be wrong, but I don’t foresee any imminent collapse of developer jobs, though it does have its uses. If anything, I think it’ll mean fewer lower-end positions, but if you don’t hire and teach new devs, that’s going to have repercussions down the road.
For example, I needed to make a webpage, and I’m not a web dev, but it helped me create a static landing page. I can tell that the code is pretty shitty, but it does work for its purposes. This either replaced a significant amount of time spent learning how to do it, or replaced hiring a contractor to do it. But I’m also not really any better at writing a webpage if I needed to make a second one, as I didn’t learn much in the process.
Setting it all up also meant I had to work on the infrastructure behind it. The AI was able to help guide me through that as well, but it did less of the work there. That part I did learn, and I’d be able to leverage it for future work.
When it comes to my actual mobile work, I don’t like asking it to do anything substantial, as the quality is usually pretty low. I might ask it to build a skeleton of something that I can fill out; I’ll often ask its opinion on a small piece of code I wrote and look for a better way to write it, and in that case it has helped me learn new things. I’ll also talk to it about planning something out and get some insights on the topic before I write any code.
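To give a concrete (completely made-up) example of the kind of skeleton I mean, something like this SwiftUI sketch is about the level I’d ask for. The screen, the types, and the names are all hypothetical, and the real logic behind the TODO is the part I’d write myself:

```swift
// Hypothetical sketch only: the kind of skeleton I'd ask an AI for, then fill out myself.
// SwiftUI is just an example; the screen, names, and data are all made up.
import Foundation
import SwiftUI

struct Setting: Identifiable {
    let id = UUID()
    let title: String
    var isEnabled: Bool
}

final class SettingsViewModel: ObservableObject {
    @Published var settings: [Setting] = []

    func loadSettings() {
        // TODO: the real loading/persistence logic is the part I write myself.
        settings = [Setting(title: "Notifications", isEnabled: true)]
    }
}

struct SettingsView: View {
    @StateObject private var viewModel = SettingsViewModel()

    var body: some View {
        // Generic list of toggles; the AI lays out the shape, I fill in the behavior.
        List($viewModel.settings) { $setting in
            Toggle(setting.title, isOn: $setting.isEnabled)
        }
        .onAppear { viewModel.loadSettings() }
    }
}
```

The value is in having it lay out the boring structure like that; the substantial parts are where the quality drops off, like I said.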
It gives almost as many wrong/flawed answers as right answers if there’s even a tiny bit of complexity, so you need to know how to sift through the crap, which you won’t be able to do if you aren’t a developer. It will tell you APIs exist that don’t. It will recommend APIs that were deprecated years ago. The list goes on and on. This also happened while I was making the webpage, so my developer skills were still required to get to the end product I wanted.
I can’t see how it will replace a sizeable chunk of developers yet, but I think if used properly, it could enhance existing devs and lead to fewer hires needed.
When I hear things like 30% of Microsoft code is now written by AI, it makes sense why shit is breaking all the time and quality is going down. They’re forcing it to do what it can’t do yet.


So the link I posted was about proving Waymo truly can remotely control them if needed, even though they deny it. I would be pretty surprised if Tesla said it wasn’t possible, because their cars have the “summon” feature and literally any owner can remotely drive their car with a forward/back button. So regardless of whether they do or don’t, they clearly can.


Ya, I don’t buy the hype around AGI. Like, a Waymo drove into a telephone pole because of something they had to fix in their code. I’m not doubting there’s AI involved (neural nets, machine learning, whatever), but this isn’t an AGI-level development. Nor do I think they need AGI to do this.
I’m also not convinced this LLM stuff can ever lead to AGI. I think it can do some pretty impressive things, with some very real drawbacks/caveats, and there is definitely room to keep improving, but the whole architecture is flawed if the goal is AGI.


I’m not fully up to speed on Waymo and whether they have ever released remote-assistance-per-mile details, but when Cruise went through that shitstorm a year or two ago, it came out that the cars were asking for help every few miles.
Cruise was essentially all smoke and mirrors.


From the description, it’s really not meant to solve that. In a situation like that they’d have to send someone, but they would be able to get the car out of the middle of the lane and off to the side, even if that only gives an extra foot or two of space to pass the vehicle.
Edit: And that’s assuming their remote helpers couldn’t direct the car to drive itself out using their other tool, where the AV drives itself based on their suggestions.


For future reference, here’s your proof that it’s possible for them to be remotely moved, which means a hacker could exploit it.
“In very limited circumstances such as to facilitate movement of the AV out of a freeway lane onto an adjacent shoulder, if possible, our Event Response agents are able to remotely move the Waymo AV under strict parameters, including at a very low speed over a very short distance.”


(not op) Right here. It’s the only place they’ve ever admitted it’s possible.
“In very limited circumstances such as to facilitate movement of the AV out of a freeway lane onto an adjacent shoulder, if possible, our Event Response agents are able to remotely move the Waymo AV under strict parameters, including at a very low speed over a very short distance.”


For anyone who is curious, Waymo actually is capable of remotely moving the vehicles despite what they say. They do their best not to admit it’s possible, but it’s right in the CPUC filings as a footnote, and that’s probably the only place they’ll ever admit it.
“In very limited circumstances such as to facilitate movement of the AV out of a freeway lane onto an adjacent shoulder, if possible, our Event Response agents are able to remotely move the Waymo AV under strict parameters, including at a very low speed over a very short distance.”
I’m not opposed to it, and I’m not knocking the fact that they can do this, but they are lying to or misleading people when they say it can’t be done.


This is how it generally behaves, but they are capable of taking direct control in more difficult situations. It’s only very slow maneuvers though; it’s not like they would be driving it down the street. They could move it off the road onto the shoulder if needed.
Edit: I am trying to find the source, but I’m having problems. It was only ever mentioned in one official Waymo document that I’ve seen that it’s technically possible. My guess is they say their remote helpers can’t/don’t do it because those helpers truly can’t, and it’s some highly restricted type of person who can, who isn’t classified like these other employees. The whole misleading-but-technically-true kind of speak. I’ll keep looking though, because I was really surprised to see them admit it when I saw it in an official document.
Found it
“In very limited circumstances such as to facilitate movement of the AV out of a freeway lane onto an adjacent shoulder, if possible, our Event Response agents are able to remotely move the Waymo AV under strict parameters, including at a very low speed over a very short distance.”
Looks like I was right on the terminology as well: it’s not the remote operators who can do it, it’s the “Event Response” team that can.
As far as I know this is the only official acknowledgement it’s possible. Everywhere else they say it isn’t, and this is a footnote in that document.


They just stop moving when that happens. It’s been the cause of many traffic jams.


I didn’t know someone was trying a different approach like that; their animated graphics were really cool.
Eventually someone has gotta figure this out. I just hope I’m alive to see it and the outcome of it.


Just put the CPU by all the ports?


Ugh, that is bonkers.
Bought it, didn’t know it was out!