I’d like to set up a local coding assistant so that I can stop using Google to ask complex questions for search results.

I really don’t know what I’m doing or if there’s anything that’s available that respects privacy. I don’t necessarily trust search results for this kind of query either.

I want to run it on my desktop: Ryzen 7 5800XT + Radeon RX 6950 XT + 32 GB of RAM. I don’t need or expect data center performance out of this thing. I’m also a strict Sublime user, so I’d like to avoid VS Code suggestions as much as possible.

My coding laptop is an oooooold MacBook Air, so I’d like something that can be run on my desktop and used from my laptop if possible. No remote access needed, just use from the same home network.

Something like LM Studio and Qwen sounds like what I’m looking for, but since I’m unfamiliar with what exists, I figured I’d ask for Lemmy’s opinion.

Is LM Studio + Qwen a good combo for my needs? Are there alternatives?

I’m on Lemmy Connect and can’t see comments from other instances when I’m logged in, but to whoever melted down over this question, your relief is in my very first sentence:

to ask complex questions for search results.

  • Coolcoder360@lemmy.world · 18 hours ago

    I’ve not found them useful for more than basic things yet. I tried Ollama; it lets you run models locally, has a simple setup, and stays out of the way.
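For the OP’s desktop-server/laptop-client setup, a minimal sketch of what that looks like with Ollama (the LAN IP address and model tag below are placeholders, not anything from this thread — pick a model that fits your VRAM):

```shell
# On the desktop: bind Ollama to all interfaces instead of localhost only,
# so other machines on the home network can reach it (default port is 11434).
OLLAMA_HOST=0.0.0.0:11434 ollama serve &

# Pull a coding-oriented model; the tag here is just an example.
ollama pull qwen2.5-coder:14b

# On the laptop: talk to the desktop by its LAN IP (placeholder address).
curl http://192.168.1.50:11434/api/generate \
  -d '{"model": "qwen2.5-coder:14b", "prompt": "Explain Python decorators briefly."}'
```

Ollama also exposes an OpenAI-compatible endpoint under /v1, so most editor plugins that accept a custom base URL can point at the desktop the same way.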

  • ryokimball@infosec.pub · 18 hours ago

    I have heard good things about LM Studio from several professional coders and tinkerers alike. I haven’t tried it myself yet, but I might have to bite the bullet, because I can’t seem to get Ollama to perform how I want.

    TabbyML is another thing to try.

    • wasp_eggs@midwest.social (OP) · 18 hours ago

      Thanks for the reply!

      I had noticed TabbyML, but something about their wording made me rethink it. Then the next day I saw a post on here about that same phrasing, so I decided to leave it alone after that.

      • Scrubbles@poptalk.scrubbles.tech · 17 hours ago

        Yeah, I tried Tabby too, and they had a mandatory "we share your code" line, so I noped out. If you’re going to do that, I might as well just use Claude.

  • TomAwezome@lemmy.world · 18 hours ago

    I get good mileage out of the Jan client and the Void editor. Various models will work, but Jan-4B tends to do OK; a Meta Llama model could do alright too. The Jan client has settings to start up a local OpenAI-compatible server, and Void can be configured to point at that localhost URL + port and specific models. If you want to go the extra mile for privacy and you’re on a Linux distro, install firejail from your package manager and run both Void and Jan inside the same namespace with outside networking disabled, so they can only talk over localhost. E.g.: firejail --noprofile --net=none --name=nameGoesHere Jan and firejail --noprofile --net=none --join=nameGoesHere void, where one of them sets up the namespace (--name=) and the other joins it (--join=).
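A local OpenAI-compatible server like the one described above can be smoke-tested with a plain curl call before wiring up an editor. The port and model name here are assumptions — check your server’s settings for the actual values:

```shell
# Query a local OpenAI-compatible chat endpoint.
# Port 1337 and the model name are placeholders; substitute the values
# your local server actually reports.
curl http://127.0.0.1:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "jan-4b",
        "messages": [{"role": "user", "content": "Write a hello world in Python."}]
      }'
```

If this returns a JSON response with a `choices` array, any client that speaks the OpenAI API can be pointed at the same base URL.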

    • IMALlama@lemmy.world · 17 hours ago

      Straight-up vibe coding is a horrible idea, but I’ll happily take tools that reduce mundane tasks.

      The project I’m currently working on leans on Temporal for durable execution. We define the activities and workflows in protobufs and use codegen for all the boring boilerplate. The project has a number of HTTP endpoints that are again defined in protos, along with their inputs and outputs; again, lots of codegen. Is codegen making me less creative or degrading my skills? I don’t think so. It sure makes the output more consistent and reduces the opportunity for errors.

      If I engage gen AI during development, which isn’t very often, my prompts are very targeted and the scope is narrow. However, I’ve found that gen AI is great for writing and modifying tests, and with a little prompting you can get pretty solid unit test coverage for a variety of different scenarios. In the case of the software I write at work, the creativity is in the actual code, and the unit tests are often pretty repetitive (happy path, bad input 1…n, no result, mock an error at this step, etc.). Once you know how to do that, there’s no reason not to offload it IMO.

    • Shimitar@downonthestreet.eu · 13 hours ago

      While you are correct, like all tools, AI is not bad per se.

      If you use AI to replace lengthy documentation searches and write your own code, it works out pretty well and speeds up your work without degrading your coding. Granted, I got plainly incorrect answers as well, but at least I managed to be much more efficient.

      Treat LLMs/AI as a glorified documentation aggregator; that’s how you correctly use that tool.

      Like, use a knife to cut and cook meat, not to cut another person’s body; that’s how you correctly use that tool too.

      • poVoq@slrpnk.net · edited 6 hours ago

        In my experience, using AI for that replaces lengthy documentation searches with reading lengthy AI output that turns out to be full of hallucinations. Net time saved: usually negative.

      • Artwork@lemmy.world · edited 8 hours ago

        No, thank you. Sorry, never.

        Not only that, but the huge probability of mistakes is just deafening. The last time I used an LLM was in 2023, when someone recommended one for a paperwork task, and I got a literal headache within 10 minutes… Since then, I will never use that sorry thing for anything that isn’t black-box pentesting or generating experimental, unverified data of the kind you might find in isolated medical or military solutions.

        That deafening feeling that every single bit of output from that LLM or void machine may contain a mistake no soul is accountable for… A generated bit of someone’s work you just cannot verify, since neither a source nor a human is available… How would you trace the rationale that resulted in the output shown?

        Faster? Is that so? Doesn’t verifying every output require even more time: to test it and consider it stable, to prove it correct, to stay accountable for the knowledge and actions you perform as a developer, artist, researcher… human?

        Your mind is meant to be trained to do research and to remember, not to depend on someone’s service to the point of predominance or replacement.
        Meanwhile, the effort, passion, creativity, empathy, and love you carry, in turn, support you in the long term.

        You may not care now, but you do you. It’s your own mind and memory you’re developing.

        • Shimitar@downonthestreet.eu · 8 hours ago

          Maybe you should check that again. I never used an LLM before 2025, but it proved itself useful for a few tasks. Yes, checking and verification are still needed, but it has indeed made my life easier and gotten tasks done faster. The quality was still as good as what I could do myself. Maybe that doesn’t speak well of me, I don’t know.

      • Appoxo@lemmy.dbzer0.com · 11 hours ago

        I like it for generating my git commit messages.
        Sometimes I just don’t know how to actually describe what I did.