The Framework Desktop is a deep disappointment to me. Framework, the company that entered the business with the explicit purpose of building modular, repairable computers, went into a space where that is already the norm (desktops) and introduced a PC that is none of those things, at an exorbitant price. When they debuted it, it was marketed specifically as a gaming machine. As much as I want to support them, I cannot reward them for this product, as it abandons their fundamental tenets.

Here’s the build. You can see similar builds featured on many YouTube channels at this point, using the new NV10 case and a low-profile RTX 5060 GPU.

Here’s one from ETA Prime

And another from “MRGUI on PC”

This build: ~$1100

Comparable Framework build: ~$1700

I will concede the Framework is still better at a few things:

  • Efficiency (though I’m not sure the difference is big enough to be worth factoring in)
  • Because it’s more efficient, it’s also quieter
  • Local LLMs (which no one should care about or be using)
  • A bit thinner, due to not having a dGPU
  • papertowels@mander.xyz · 2 months ago

    claiming that working with free, open-source, self-hosted, privacy-focused local LLMs means you’ve drunk the corporate Kool-Aid is only further discrediting yourself on this topic…

    • artyom@piefed.socialOP · 2 months ago

      Right, because FOSS LLMs couldn’t possibly be influenced by the non-FOSS ones. That’s stupid. That wouldn’t make any sense at all…

      • papertowels@mander.xyz · 2 months ago

        because FOSS LLMs couldn’t possibly be influenced by the non-FOSS ones.

        Okay, so I have a question for you. If we replace “LLMs” with “operating systems” in that statement, the same logic would argue that using Linux is drinking the corporate Kool-Aid, because it’s been influenced by non-FOSS operating systems.

        Do you see how embarrassing of an argument that is to try and make?

        Let’s shine some light on the elephant in the room: how much experience do you actually have with self-hosted LLMs? It’s sounding like zero, in which case I’d speak a lot less authoritatively.

        • artyom@piefed.socialOP · 2 months ago

          It would be embarrassing, if that’s what I had said. But Linux and other operating systems actually serve a purpose. They’re not just bullshit machines.

          • papertowels@mander.xyz · 2 months ago

            Before we continue the conversation, I’m going to ask you again, how much experience do you actually have with local LLMs? I’m going to assume you just haven’t found a use for it, and for some reason you have the gall to think that applies to everyone else, without ever touching it yourself.

            I’ll provide a use-case I like. You can connect video feeds from the self-hosted FOSS Frigate NVR to a locally hosted LLM to generate searchable descriptions of tagged events. For example, you can search your events for “parked blue van”, tagged by a local LLM, instead of trying to look through hundreds of events yourself, and without sending your private video feeds to any corporation.
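            For anyone curious, the pipeline above maps to a small piece of Frigate configuration. A rough sketch (field names from memory of Frigate’s GenAI docs; the model choice and URL here are assumptions, so check the docs before using):

            ```yaml
            # Sketch only: enables LLM-generated event descriptions via a
            # local ollama server, plus semantic search over those
            # descriptions. "llava" and the URL are assumed values.
            genai:
              enabled: true
              provider: ollama
              base_url: http://localhost:11434
              model: llava

            semantic_search:
              enabled: true
            ```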

            Boom, local AI serves a purpose.

            Boom, your earlier argument now reads as “using Linux is drinking the corporate Kool-Aid”.

            Are we done here?

            • artyom@piefed.socialOP · 2 months ago

              I’m going to ask you again, how much experience do you actually have with local LLMs?

              I’m not answering because I don’t keep a log of time spent with LLMs. I can tell you it’s many hours.

              I’m going to assume you just haven’t found a use for it, and for some reason you have the gall to think that applies to everyone else, without ever touching it yourself.

              Well you know what happens when you assume. I haven’t found a use for it because there aren’t any, other than writing shitty social media posts.

              You can connect video feeds from self-hosted FOSS frigate NVR to a locally hosted LLM to provide searchable descriptions of tagged events, for example, you can search your events for “parked blue van”, which was tagged by a local LLM

              LOOOOOLOL I see the problem here. You think LLM = AI. That’s nothing to do with an LLM, that’s just object detection/recognition.

              Are we done here?

              Yes, I’m glad I was able to clear that up for you!

              • papertowels@mander.xyz · 2 months ago

                LOOOOOLOL I see the problem here. You think LLM = AI. That’s nothing to do with an LLM, that’s just object detection/recognition.

                …it’s literally using ollama to produce textual descriptions of scenes. Maybe my example was too basic. Per the Frigate documentation, you can search for “red sedan driving down a residential street on a sunny day”. Object detection will not provide that context; that’s why I added “parked” to “blue van”. Common object detection algorithms like YOLO will only say “blue car” without any context of the scene it belongs in. Hell, I think YOLO itself will only say “car”, IIRC.

                Frigate does not provide that capability without LLMs.

                • artyom@piefed.socialOP · 2 months ago

                  The only thing Ollama is doing is parsing your natural language into search queries for tags like “red car”, “sedan”, “residential street”, etc. You could do the same thing by just typing in those tags. Like I said, pointless. And probably inaccurate.

                  • papertowels@mander.xyz · 2 months ago

                    The only thing Ollama is doing is parsing your natural language into search queries for tags like “red car”, “sedan”, “residential street” etc.

                    No, ollama is using models to provide further context on thumbnails containing tracked objects, in textual form that can be searched via semantic embeddings later on.
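                    To make “searched via semantic embeddings” concrete, here’s a toy sketch of the retrieval step. The three-number “embeddings” are hand-picked stand-ins for real model output (Frigate would use an actual embedding model over the LLM-written descriptions); all names and values here are illustrative:

                    ```python
                    import math

                    # Hand-picked toy vectors standing in for real embedding-model
                    # output. In practice, the descriptions and the search query
                    # would each be embedded by a model; the principle is the same.
                    events = {
                        "blue van parked at the curb":       [0.9, 0.1, 0.0],
                        "red sedan driving down the street": [0.1, 0.9, 0.2],
                        "person walking a dog":              [0.0, 0.2, 0.9],
                    }

                    def cosine(a, b):
                        """Cosine similarity between two vectors."""
                        dot = sum(x * y for x, y in zip(a, b))
                        na = math.sqrt(sum(x * x for x in a))
                        nb = math.sqrt(sum(y * y for y in b))
                        return dot / (na * nb)

                    def search(query_vec, k=1):
                        """Return the k stored descriptions closest to the query embedding."""
                        ranked = sorted(events, key=lambda d: cosine(query_vec, events[d]),
                                        reverse=True)
                        return ranked[:k]

                    # A query embedding near the "blue van" vector retrieves that
                    # event, even though the query string never matched any tag.
                    print(search([0.85, 0.15, 0.05]))
                    ```

                    The point is that retrieval ranks by meaning-proximity in the embedding space rather than by exact tag match, which is what distinguishes this from plain tag search.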

                    It blows my mind that you’re trying to speak so authoritatively on something you’ve clearly misunderstood. I’m going to guess you haven’t worked with frigate before?

                    Are you trying to understand me, or are you just defending yourself at all costs?