Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 15 Posts
  • 3.34K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • “Write code without learning it!” I get it. I’ve struggled learning to program for 10 years. But every time I hear a programmer talk about AIGen code, it’s never good, and my job’s software has gotten less stable as AIGen code has been added in.

    I’m similarly dubious about using LLMs to do code. I’m certainly not opposed to automation — software development has seen massive amounts of automation over the decades. But software is not very tolerant of errors.

    If you’re using an LLM to generate text for human consumption, then an error here or there often isn’t a huge deal. We get cued by text; “approximately right” is often pretty good for the way we process language. Same thing with images. It’s why, say, an oil painting works; it’s not a perfect depiction of the world, but it’s enough to cue our brain.

    There are situations where “approximately right” might be more-reasonable. There are some where it might even be pretty good — instead of manually-writing commit messages, which are for human consumption, maybe we could have LLMs describe what code changes do, and as LLMs get better, the descriptions improve too.
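
    As a concrete sketch of that commit-message idea: a git prepare-commit-msg hook could draft the message from the staged diff. This is a rough illustration, and summarize_diff() is a hypothetical stand-in for whatever model call you’d actually use:

    ```python
    #!/usr/bin/env python3
    """Sketch of an LLM-drafted commit message via a prepare-commit-msg hook.
    summarize_diff() is hypothetical; swap in a real model call."""
    import subprocess
    import sys

    def summarize_diff(diff: str) -> str:
        # Hypothetical model call. A trivial heuristic keeps the sketch runnable:
        files = [line.split()[-1] for line in diff.splitlines()
                 if line.startswith("+++ ")]
        return "Update " + ", ".join(files) if files else "Empty commit"

    def main() -> None:
        msg_file = sys.argv[1]  # git passes the commit-message file path first
        diff = subprocess.run(["git", "diff", "--cached"],
                              capture_output=True, text=True).stdout
        with open(msg_file, "w") as f:
            f.write(summarize_diff(diff) + "\n")

    if __name__ == "__main__":
        main()
    ```

    Since the output is just a draft that you review in your editor, an occasional bad summary costs a moment of rewriting rather than a broken build, which is the “approximately right is fine” case.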

    This doesn’t mean that I think that AI and writing code can’t work. I’m sure that it’s possible to build an AGI that does fantastic things. I’m just not very impressed by using a straight LLM, and I think that the limitations are pretty fundamental.

    I’m not completely willing to say that it’s impossible. Maybe we could develop, oh, some kind of very-strongly-typed programming language aimed specifically at this job, where LLMs are a good heuristic to come up with solutions, and the typing system is aimed at checking that work. That might not be possible, but right now, we’re trying to work with programming languages designed for humans.
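
    A minimal sketch of the loop I’m imagining, with propose_solution() as a hypothetical model call and mypy standing in for that hypothetical stricter type system:

    ```python
    """Generate-then-verify sketch: the model proposes, the type checker disposes.
    propose_solution() is hypothetical; mypy --strict is a stand-in verifier."""
    import subprocess
    import tempfile

    def propose_solution(spec: str, feedback: str) -> str:
        # Hypothetical: ask a model for candidate source code for the spec,
        # feeding back the previous round of checker errors.
        raise NotImplementedError("wire up a model of your choice here")

    def typechecks(source: str) -> tuple[bool, str]:
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source)
        result = subprocess.run(["mypy", "--strict", f.name],
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout

    def solve(spec: str, max_attempts: int = 5) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            candidate = propose_solution(spec, feedback)
            ok, feedback = typechecks(candidate)
            if ok:
                return candidate  # the checker, not the model, accepts the code
        return None
    ```

    The important property is that acceptance is decided by the checker rather than the model; the stronger the type system, the more of the correctness burden that check can carry.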

    Maybe LLMs will pave the way to getting systems in place that have computers do software engineering, and then later we can just slip in more-sophisticated AI.

    But I don’t think that the current approach will wind up being the solution.

    “Summarize a book!” I am doing this for fun; why would I want to?

    Summarizing text — probably not primarily books — is one area that I think might be more useful. It is a task that many people do spend time doing. Maybe it’s combining multiple reports from subordinates, say, and then pushing a summary upwards.

    “Generate any image!” I get the desire, but I can’t ignore the broader context of how we treat artists. Also the images don’t look that great anyway.

    I think that in general, quality issues are not fundamental.

    There are some things that we want to do that I don’t think that the current approaches will do well, like producing consistent representations of characters. There are people working on it. Will they work? Maybe. I think that for, say, editorial illustration for a magazine, it can be a pretty decent tool today.

    I’ve also been fairly impressed with voice synth done via genAI, though it’s one area that I haven’t dug into deeply.

    I think that there’s a solid use case for voice query and response on smartphones. On a desktop, I can generally sit down and browse webpages, even if an LLM might combine information more quickly than I can manually. Someone, say, driving a car or walking somewhere can ask a question and have an LLM spit out an answer.

    I think that image tagging can be a pretty useful case. It doesn’t have to be perfect — just a lot cheaper and more universal than it would be to have humans doing it.
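
    As a sketch of how cheap and imperfect-but-useful this can be, zero-shot tagging with an open CLIP checkpoint (the model name is a real public one; the tag list and threshold are arbitrary choices of mine):

    ```python
    """Zero-shot image tagging sketch using CLIP via Hugging Face transformers."""
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    TAGS = ["a cat", "a dog", "a landscape", "a screenshot of text", "a person"]

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def tag_image(path: str, threshold: float = 0.3) -> list[str]:
        image = Image.open(path)
        inputs = processor(text=TAGS, images=image,
                           return_tensors="pt", padding=True)
        # Softmax over the candidate tags; keep the ones that score well.
        probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
        return [tag for tag, p in zip(TAGS, probs.tolist()) if p >= threshold]
    ```

    It will mislabel some images, but for making an otherwise-untagged photo collection searchable, “mostly right and nearly free” beats “nothing”.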

    Some of what we’re doing now, on the part of both implementers and the R&D people working on the core technologies, is understanding what the fundamental roadblocks are, and quantifying strengths and weaknesses. That’s part of the process for anything you do. I can see an argument that more-limited resources should be put on implementation, but a company is going to have to go out and try something and then say “okay, this is what does and doesn’t work for us” in order to know what to require in the next iteration.

    And that’s not new. Take, oh, the Macintosh. Apple tried to put out the Lisa. It wasn’t a market success. But taking what did work and correcting what didn’t was a lot of what led to the Macintosh, which was a much larger success and closer to what the market wanted. It’s going to be an iterative process.

    I also think that some of that is laying the groundwork for more-sophisticated AI systems to be dropped in. Like, if you think of, say, an LLM now as a placeholder for a more-sophisticated system down the line, the interfaces being built into other software now will be able to make use of those later systems. You just change out the backend. So some of that is going to be positioning not just for the current crop of systems, but for tomorrow’s crop.
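
    In code terms, that positioning is just an interface boundary. A toy sketch (the names are mine, not any particular library’s API):

    ```python
    """Swappable-backend sketch: the application codes against a small
    interface, and today's LLM is just one implementation of it."""
    from typing import Protocol

    class Assistant(Protocol):
        def answer(self, question: str) -> str: ...

    class CurrentLLMBackend:
        def answer(self, question: str) -> str:
            # Today: wrap whatever hosted or local model you're using.
            raise NotImplementedError("call your current model here")

    class FutureBackend:
        def answer(self, question: str) -> str:
            # Later: a more sophisticated system drops in behind the
            # same interface, and nothing above this line changes.
            raise NotImplementedError

    def handle_user_query(assistant: Assistant, question: str) -> str:
        # Application code depends only on the Assistant interface.
        return assistant.answer(question)
    ```

    Everything above the interface (UI, plumbing, data access) keeps working when the backend below it improves.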

    If you remember the Web around the late 1990s, the websites that companies did have were often pretty amateurish-looking. They were often not very useful. The teams that made them didn’t have a lot of resources. The tools to work with websites were still limited, and best practices weren’t yet developed.

    https://www.webdesignmuseum.org/gallery/year-1997

    But what they did was get a website up, start people using them, and start building the infrastructure for what, some years later, was a much-more-important part of the company’s interface and operations.

    I think that that’s where we are now regarding use of AI. Some people are doing things that won’t wind up ultimately working (e.g. Web portals, which never really took over). Some important things, like widespread encryption, weren’t yet deployed. The languages and toolkits for doing development didn’t really yet exist. Stuff like Web search, which today is a lot more approachable and something that we simply consider pretty fundamental to use of the Web, wasn’t all that great. If you looked at the Web in 1997, it had a lot of deficiencies compared to brick-and-mortar companies. But…that also wasn’t where things stayed.

    Today, we’re making dramatic changes to how models work, like the rise of MoEs. I don’t think that there’s much of a consensus on what hardware we’ll wind up using. Training is computationally expensive. Just using models on a computer yourself still involves a fair amount of technical knowledge, sort of the way the MS-DOS era on personal computers prevented a lot of people from being able to do a lot with computers. There are efficiency issues, and basic techniques for doing things like condensing knowledge are still being developed. The LLMs people are building today have very little “mutable” memory — you’re taking a snapshot of information at training time and making something that can do very little learning at runtime. But if I had to make a guess, a lot of those things will be worked out.


  • You could also just only use Macs.

    I actually don’t know what the current requirement is. Back in the day, Apple used to build some of the OS — like QuickDraw — into the ROMs. So unless you had a physical Mac, not just a purchased copy of MacOS, you couldn’t legally run MacOS: the ROM contents were copyrighted, and running the OS on other hardware would have meant copying them. Apple obviously doesn’t care about this most of the time, but I imagine that if it becomes institutionalized at places that make real money, they might.

    But I don’t know if that’s still the case today. I’m vaguely recalling that there was some period where part of Apple’s EULA for MacOS prohibited running MacOS on non-Apple hardware, which would have been a different method of trying to tie it to the hardware.

    searches

    This is from 2019, and it sounds like at that point, Apple was leveraging the EULAs.

    https://discussions.apple.com/thread/250646417?sortBy=rank

    Posted on Sep 20, 2019 5:05 AM

    The widely held consensus is that it is only legal to run virtual copies of macOS on a genuine Apple made Apple Mac computer.

    There are numerous packages to do this but as above they all have to be done on a genuine Apple Mac.

    • VMware Fusion - this allows creating VMs that run as windows within a normal Mac environment. You can therefore have a virtual Mac running inside a Mac. This is useful to either run simultaneously different versions of macOS or to run a test environment inside your production environment. A lot of people are going to use this approach to run an older version of macOS which supports 32bit apps as macOS Catalina will not support old 32bit apps.
    • VMware ESXi aka vSphere - this is a different approach known as a ‘bare metal’ approach. With this you use a special VMware environment and then inside that create and run virtual machines. So on a Mac you could create one or more virtual Mac but these would run inside ESXi and not inside a Mac environment. It is more commonly used in enterprise situations and hence less applicable to Mac users.
    • Parallels Desktop - this works in the same way as VMware Fusion but is written by Parallels instead.
    • VirtualBox - this works in the same way as VMware Fusion and Parallels Desktop. Unlike those it is free of charge. Ostensibly it is ‘owned’ by Oracle. It works but at least with regards to running virtual copies of macOS is still vastly inferior to VMware Fusion and Parallels Desktop. (You get what you pay for.)

    Last time I checked Apple’s terms you could do the following.

    • Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of doing software development
    • Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of testing
    • Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of being a server
    • Run a virtualised copy of macOS on a genuine Apple made Mac for personal non-commercial use

    No. Apple spells this out very clearly in the License Agreement for macOS. Must be installed on Apple branded hardware.

    They switched to ARM in 2020, so unless their legal position changed around ARM, I’d guess that they’re probably still relying on the EULA restrictions. That being said, EULAs have also been thrown out for various reasons, so…shrugs

    goes looking for the actual license text.

    Yeah, this is Tahoe’s EULA, the most-recent release:

    https://www.apple.com/legal/sla/docs/macOSTahoe.pdf

    Page 2 (of 895 pages):

    They allow running it only on Apple-branded hardware for individual purchases unless you buy from the Mac App Store. For Mac App Store purchases, they allow up to two virtual instances of MacOS to be executed on Apple-branded hardware that is also running the OS, and only under certain conditions (like for software development). And for volume purchase contracts, they say that the terms are whatever the purchaser negotiated. I’m assuming that there’s no chance that Apple is going to grant some “go use it as much as you want whenever you want to do CI tests or builds for open-source projects targeting MacOS” license.

    So for the general case, the EULA prohibits you from running MacOS on non-Apple hardware at all.





  • Yes. For a single change. Like having an editor with a 2-minute save lag, pushing a commit using a program running on cassette tapes, or playing chess over snail-mail. It’s 2026 for Pete’s sake, and we won’t tolerate this behavior!

    Now of course, in some Perfect World, GitHub could have a local runner with all the bells and whistles. Or maybe something that would allow me to quickly check for progress upon the push, or even something like a “scratch commit”, i.e. a way that I could test-bed different runs without polluting the history of both Git and Action runs.

    For the love of all that is holy, don’t let GitHub Actions manage your logic. Keep your scripts under your own damn control and just make the Actions call them!

    I don’t use GitHub Actions and am not familiar with it, but if you’re using it for continuous integration or build stuff, I’d think that it’s probably a good idea to have that decoupled from GitHub anyway, unless you want to be unable to do development without an Internet connection and access to GitHub.

    I mean, I’d wager that someone out there has already built some kind of system to do this for git projects. If you need some kind of isolated, reproducible environment, maybe Podman or similar, and just have some framework to run it?
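
    Something like this is the shape I have in mind: one entry point that owns the pipeline, which a GitHub Actions workflow (or a cron job, or you at a shell) just invokes. The steps are illustrative; I’m assuming a Rust project since that’s what’s under discussion:

    ```python
    #!/usr/bin/env python3
    """Sketch of a CI entry point that is independent of any hosted service.
    The step commands are illustrative, assuming a Rust project."""
    import subprocess
    import sys

    STEPS = [
        ["cargo", "fmt", "--check"],
        ["cargo", "test"],
        ["cargo", "build", "--release"],
    ]

    def main() -> int:
        for step in STEPS:
            print("ci>", " ".join(step))
            if subprocess.run(step).returncode != 0:
                return 1  # fail fast, the same way a hosted CI job would
        return 0

    if __name__ == "__main__":
        sys.exit(main())
    ```

    The workflow file then shrinks to a single step that runs python3 ci.py, and the same command works offline, in Podman, or on any other runner.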

    like macOS builds that would be quite hard to get otherwise

    Does Rust not do cross-compilation?

    searches

    It looks like it can.

    https://rust-lang.github.io/rustup/cross-compilation.html
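
    For what it’s worth, the flow in those docs is basically rustup target add x86_64-apple-darwin followed by cargo build --target x86_64-apple-darwin. The catch, as I understand it, is that linking for Apple targets still needs Apple’s SDK and a suitable linker (e.g. via osxcross), so it isn’t entirely turnkey from Linux.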

    I guess maybe MacOS CI might be a pain to do locally on a non-MacOS machine. You can’t just freely redistribute MacOS.

    goes looking

    Maybe this?

    https://www.darlinghq.org/

    Darling is a translation layer that lets you run macOS software on Linux

    That sounds a lot like Wine

    And it is! Wine lets you run Windows software on Linux, and Darling does the same for macOS software.

    As long as that’s sufficient, I’d think that you could maybe run MacOS CI in Darling in Podman? Podman can run on Linux, MacOS, Windows, and BSD, and if you can run Darling in Podman, I’d think that you’d be able to run MacOS stuff on whatever.


  • I think that it’s going to be hard to provide a meaningful answer. There are a wide range of fields that use the scientific process, the stuff that you’d call “science”.

    Some of those, no doubt, produce a strong return on investment. You could say, purely in financial terms, that research there makes a lot of sense. Producing, say, the integrated circuit is something that transformed the world.

    I am sure that if you looked, you could find some areas that don’t do that.

    In some of these latter cases — say, cosmology — I doubt that there are direct financial returns, but if we want to understand where the universe has been and where it’s going, we have to place some kind of value on that and fund it to that value.

    But…science isn’t a single entity that you fund or don’t fund to a given amount. It’s people working in a wide range of fields. It’s like saying “should we fund sysadmins more” or “should we fund human resource departments more”. The answer is almost certainly going to be “it depends on the specific case”.









  • There was a similar project that the UK was going to do: running an HVDC submarine line from the UK down to Africa.

    searches

    https://en.wikipedia.org/wiki/Xlinks_Morocco–UK_Power_Project

    The Xlinks Morocco-UK Power Project is a proposal to create 11.5 GW of renewable generation, 22.5 GWh of battery storage and a 3.6 GW high-voltage direct current interconnector to carry solar and wind-generated electricity from Morocco to the United Kingdom.[1][2][3][4] Morocco has been hailed as a potential key power generator for Europe as the continent looks to reduce reliance on fossil fuels.[5]

    If built, the 4,000 km (2,500 miles) cable would be the world’s longest undersea power cable, and would supply up to 8% of the UK’s electricity consumption.[6][7][8] The project was projected to be operational within a decade.[9][10] The proposal was rejected by the UK government in June 2025.
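
    The “up to 8%” figure passes a quick sanity check, if you assume UK electricity consumption of roughly 300 TWh/year (my ballpark, not a number from the article):

    ```python
    # Back-of-the-envelope check of the quoted "up to 8%" figure.
    uk_consumption_twh_per_year = 300  # rough assumption on my part
    avg_uk_demand_gw = uk_consumption_twh_per_year * 1000 / 8760  # ~34 GW
    link_gw = 3.6                      # interconnector capacity, from the quote
    capacity_factor = 0.75             # assumed utilization of the link
    share = link_gw * capacity_factor / avg_uk_demand_gw
    print(f"{share:.0%}")              # -> 8%
    ```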






  • I think another major factor for Linux gaming beyond Valve was a large shift by game developers to using widely-used game engines. A lot of the platform portability work happened at that level, so it was spread across many games. Writing games that could run on both personal computers and personal-computer-like consoles with less porting work became a goal. And today, some games also have releases on mobile platforms.

    When I started using Linux in the late 1990s, the situation was wildly different on that front.


  • Context:

    https://en.wikipedia.org/wiki/Ultra-mobile_PC

    An ultra-mobile PC,[1] or ultra-mobile personal computer (UMPC), is a miniature version of a pen computer, a class of laptop whose specifications were launched by Microsoft and Intel in Spring 2006. Sony had already made a first attempt in this direction in 2004 with its Vaio U series, which was only sold in Asia. UMPCs are generally smaller than subnotebooks, have a TFT display measuring (diagonally) about 12.7 to 17.8 centimetres (5.0 to 7.0 in), are operated like tablet PCs using a touchscreen or a stylus, and can also have a physical keyboard.