• mstrk@lemmy.world · 13 points · 2 days ago

    I usually combine both to unblock myself. Lately, SO, repository issues, or just going straight to the documentation of the package/crate seem to give me faster outcomes.

    People have suggested that my prompts might not be optimal for the LLM. One even recommended I take a prompt engineering boot camp. I’m starting to think I’m too dumb to use LLMs to narrow my research sometimes. I’m fine with navigating SO toxicity, though it’s not much different from social media in general. It’s just how people are. You either take the best you can from it or let other people’s bad days affect yours.

    • marcos@lemmy.world · 9 points · 2 days ago

      If SO doesn’t have the answer to your question, LLMs won’t either. You can’t improve that by prompting “better”.

      They are just an easier way to search for it. They don’t make answers up (or rather, they do, but when they do that, they are always wrong).

    • If you’re planning on using LLMs for coding advice, may I recommend self-hosting a model and adding the documentation and repositories as context?

      I use a 1.5b qwen model (mega dumb), but with no context limit I can attach the documentation for the language I’m using and the files from the repo I’m working in (always a local repo in my case). I can usually explain what I’m doing, what I’m trying to accomplish, and what I’ve tried, and the LLM will generate snippets that at the very least point me in the right direction, but more often than not solve the problem (after minor tweaks, because a dumb model isn’t so good at coding).
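      Roughly, the workflow looks like this. A minimal sketch, assuming the Ollama Python client and a pulled qwen2.5:1.5b tag; the file paths and the question are made-up placeholders:

      ```python
      # Sketch: stuff local docs and repo files into the prompt for a small local model.
      from pathlib import Path

      import ollama  # assumes the Ollama Python client and a running local Ollama server


      def build_context(paths):
          """Concatenate the given files into one labelled context blob."""
          parts = []
          for p in paths:
              text = Path(p).read_text(encoding="utf-8", errors="ignore")
              parts.append(f"--- {p} ---\n{text}")
          return "\n\n".join(parts)


      context = build_context([
          "docs/language_reference.md",  # hypothetical documentation file
          "src/main.rs",                 # hypothetical files from the local repo
          "src/config.rs",
      ])

      question = (
          "What I'm doing: parsing a config file.\n"
          "What I'm trying to accomplish: support nested sections.\n"
          "What I've tried: a hand-rolled loop that chokes on comments.\n"
          "Suggest a snippet that points me in the right direction."
      )

      response = ollama.chat(
          model="qwen2.5:1.5b",  # mega dumb, but cheap to run locally
          messages=[
              {"role": "system", "content": "Answer using the attached documentation and repo files."},
              {"role": "user", "content": f"{context}\n\n{question}"},
          ],
      )
      print(response["message"]["content"])
      ```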

      • MonkeMischief@lemmy.today · 5 points · 2 days ago (edited)

        That’s a really cool idea, actually. I never considered that you could use such a crazy low quant to, it sounds like, temporarily “train” it for the task at hand instead of having to use up countless watt-hours training the model itself!

        That’s how I use these things, too. Not to “help me code”, but as a fancy search engine that can generally nudge me towards a solution I can work out myself.

      • mstrk@lemmy.world · 1 point · 2 days ago

        I do use the 1.5b of whatever the latest Ollama model is, with Open WebUI as the frontend, for my personal use. Although I can upload files and search the web, it’s too slow on my machine.

        • If you’ve got a decent Nvidia GPU and are on Linux, look into the Kobold-cpp Vulkan backend; in my experience it works far better than the CUDA backend and is astronomically faster than the CPU-only backend.
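          Once it’s running, any local script can talk to it. A rough sketch from memory, assuming Kobold-cpp’s OpenAI-compatible endpoint on its default port (5001); double-check the address against the server log:

          ```python
          # Sketch: query a locally running Kobold-cpp server over its
          # OpenAI-compatible endpoint (port/path from memory; verify locally).
          import requests

          resp = requests.post(
              "http://localhost:5001/v1/chat/completions",
              json={
                  "model": "local",  # Kobold-cpp answers with whatever model it was launched with
                  "messages": [
                      {"role": "user", "content": "Summarize what this stack trace means."},
                  ],
                  "max_tokens": 256,
              },
              timeout=120,
          )
          resp.raise_for_status()
          print(resp.json()["choices"][0]["message"]["content"])
          ```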

            • When/if you do, an RTX 3070 LHR (about $300 new) is just about the BARE MINIMUM for GPU inferencing. It’s what I use, and it gets the job done, but I often find context limits too small to be usable with larger models.

              If you wanna go team red, Vulkan should still work for inferencing, and you’ll have access to options with significantly more VRAM, letting you use larger models more effectively. I’m not sure about speed, though; I haven’t personally used AMD’s GPUs since around 2015.

    • smh@slrpnk.net · 2 points · 2 days ago

      I’ve been having good luck with Kimi K2 for CSS/bootstrap stuff, and boilerplate API calls (example: update x to y, pulling x and y from this .csv). I appreciate that it cites its sources because then I can go read more and hopefully become more self-reliant when looking up documentation.
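      For reference, the “update x to y, pulling x and y from this .csv” boilerplate I mean looks roughly like this; the endpoint and column names are hypothetical placeholders:

      ```python
      # Sketch: read old/new value pairs from a CSV and push updates to an API.
      # Endpoint and CSV columns (id, x, y) are made up for illustration.
      import csv

      import requests

      API = "https://example.com/api/items"  # hypothetical endpoint

      with open("updates.csv", newline="") as f:
          for row in csv.DictReader(f):
              r = requests.patch(
                  f"{API}/{row['id']}",
                  json={"value": row["y"]},  # update x to y
                  timeout=30,
              )
              r.raise_for_status()
              print(f"{row['id']}: {row['x']} -> {row['y']}")
      ```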