  • I’m not scrolling through the thread to police people. I’m just responding to you as one person to another. I’m certainly not trying to shut you up.

    I didn’t call you or anyone else in this thread a tankie, because I actually agree with the premise that Lemmy could benefit from a more collaborative culture. I’m not identifying and harping on which political issues we disagree on here, because there are plenty of other threads on Lemmy for that.

    I’m just letting you know that I honestly believe that you and I are on the same side. We agree on 90% of the most important issues in the world right now. I don’t think there is any daylight between us when it comes to Trump or unions or healthcare or LGBTQ+ or protecting online speech or most other things. I don’t consider you an enemy at all, and I hope you think the same about me. We can and should (and do) discuss our disagreements in the threads related to the topics where we differ.

    Neglecting the majority of topics where we support one another does not benefit us, Lemmy, or the world.

  • Here’s an interesting post that gives a pretty good quick summary of when an LLM may be a good tool.

    Here’s one key:

    Machine learning is amazing if:

    • The problem is too hard to write a rule-based system for or the requirements change sufficiently quickly that it isn’t worth writing such a thing and,
    • The value of a correct answer is much higher than the cost of an incorrect answer.

    The second of these is really important.

    So if your math problem is unsolvable by conventional tools, or sufficiently complex that designing an expression is more effort than the answer is worth… AND ALSO having an answer at all is worth more than the risk of it being wrong (an incorrect answer carries no real cost), THEN go ahead and trust it.

    If it is important that the answer is correct, or if another tool can be used, then you’re better off without the LLM.

    The bottom line is that the LLM is not making a calculation. It may land on the right answer, and different models may even land on the same answer, but that agreement doesn’t validate the result; it’s very unclear how much underlying technology is shared between models anyway.

    For example, if the problem is something like, “here is all of our sales data and market indicators for the past 5 years. Project how much of each product we should stock in the next quarter,” then sure, an LLM may come acceptably close to a professional analysis.

    If the problem is more like “given these bridge schematics, what grade steel do we need in the central pylon?” then, well, you are probably going to end up testifying in front of Congress one day.
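
    To put that cost/value trade-off into rough numbers, here’s a minimal back-of-the-envelope sketch (the figures and the function name are hypothetical, not from the linked post):

    ```python
    # Hypothetical expected-value check: is an unreliable answer still worth acting on?
    def worth_trusting(p_correct, value_correct, cost_wrong):
        """True if the expected payoff of using the answer beats doing nothing."""
        return p_correct * value_correct - (1 - p_correct) * cost_wrong > 0

    # Inventory forecast: a rough answer is useful, a wrong one just costs some stock.
    print(worth_trusting(p_correct=0.8, value_correct=10_000, cost_wrong=2_000))        # True

    # Bridge steel spec: a wrong answer is catastrophic, so even 99% accuracy fails.
    print(worth_trusting(p_correct=0.99, value_correct=10_000, cost_wrong=10_000_000))  # False
    ```

    The real numbers are never that clean, of course, but the second bullet from the post is basically that inequality.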

  • Some companies put an accelerometer in the device itself. This one is just a BT chip; it pulls the location and acceleration data from your phone itself, along with a ton of other permissions and invasive data trackers, plus an agreement that the insurance co can share the collected data with third-party partners. Unfortunately, these days that’s pretty boilerplate stuff.


  • gedaliyah@lemmy.world to memes@lemmy.world · Nopin’ Right Outta There · 27 days ago

    I got one of those “safe driver” transceivers to save money on insurance. It required an app, which I figured would be like a simple bridge that uploads the data from the device to the server.

    Holy mackerel, was it full of additional spyware. I was already a little uneasy sending driving data to a private company, but this would potentially mean constant phone tracking. No thanks!