- cross-posted to:
- steamdeck@sopuli.xyz
Source: https://www.gog.com/en/work/senior-software-engineer-c-gog-galaxy
We already knew that GOG were looking more seriously at Linux, as covered previously here on GamingOnLinux. Now they're going a step further, with the job listing noting that "Linux is the next major frontier".



None of the things you brought up as positives are things an LLM does. Most of them existed before modern transformer-based LLMs were even a thing.
LLMs are glorified text prediction engines, and nothing about their nature makes them excel at formal languages. An LLM doesn't know any rules and has no internal logic. For example, if the training data consistently exhibits the same flawed piece of code, an LLM will spit out that same flawed piece of code, because it's the most likely continuation of its current "train of thought". You would have to fine-tune the model around all those flaws and then hope that no combination of prompts leads it back into that flawed data.
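To illustrate with a hypothetical example: a flawed pattern like the one below shows up constantly in tutorials and Q&A answers, so a model trained on them is likely to keep reproducing it. (This is my own made-up example, not something pulled from any specific model's output.)

```sql
-- Common flawed pattern: NOT IN against a subquery that can return NULL.
-- If blacklist.customer_id contains even one NULL, the predicate evaluates
-- to UNKNOWN for every row and the query silently returns nothing.
SELECT *
FROM customers
WHERE id NOT IN (SELECT customer_id FROM blacklist);

-- The safe formulation:
SELECT *
FROM customers c
WHERE NOT EXISTS (SELECT 1 FROM blacklist b WHERE b.customer_id = c.id);
```

A linter with semantic checks might flag this, but a text predictor has no notion that the two queries differ at all.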
I’ve used LLMs to generate SQL, which according to you is something they should excel at, and I’ve had to fix literal syntax errors that would have prevented the statement from executing at all. A regular SQL linter instantly picks up that the SQL is wrong, but an LLM can’t catch those errors, because an LLM does not understand the syntax.
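The errors were of the trivially machine-checkable kind. Something along these lines (a made-up stand-in, not the actual statement I got):

```sql
-- Rejected by any parser or linter before it ever runs:
SELECT user_id, COUNT(*) AS order_count
FROM orders
GROUP BY user_id
WHERE COUNT(*) > 5;   -- syntax error: WHERE cannot follow GROUP BY

-- What it should have been:
SELECT user_id, COUNT(*) AS order_count
FROM orders
GROUP BY user_id
HAVING COUNT(*) > 5;
```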
I’ve seen humans generate code with syntax errors, try to run it, then fix it. I’ve seen LLMs do the same thing; they just do it faster than the human.
But that saved time is wasted again, because humans still have to review the code an LLM generates and fix all the other logical errors it makes; at best, an LLM does exactly what you tell it to do. I’ve worked with a developer who did exactly what the ticket said and nothing more, and it was a pain in the ass, because their code always needed double-checking to make sure their narrow focus on one very specific problem hadn’t broken the domain as a whole. I don’t think you gain any productivity with LLMs; you only shift the work from writing code to reviewing code. And I’ve yet to meet a developer who enjoys reviewing code more than writing code, which means the code will get less attention and thus become more prone to bugs.