You could also just use only Macs.
I actually don’t know what the current requirement is. Back in the day, Apple built parts of the OS, like QuickDraw, into the ROMs, so unless you had a physical Mac (not just a purchased copy of MacOS), you couldn’t legally run MacOS: the ROM contents were copyrighted, and running the OS elsewhere meant infringing that copyright. Apple obviously doesn’t care about this most of the time, but I imagine that if it became institutionalized at places that make real money, they might.
But I don’t know if that’s still the case today. I vaguely recall that there was a period when Apple’s EULA for MacOS prohibited running MacOS on non-Apple hardware, which would have been a different way of tying it to the hardware.
searches
This is from 2019, and it sounds like at that point, Apple was leveraging the EULAs.
https://discussions.apple.com/thread/250646417?sortBy=rank
Posted on Sep 20, 2019 5:05 AM
The widely held consensus is that it is only legal to run virtual copies of macOS on a genuine Apple made Apple Mac computer.
There are numerous packages to do this but as above they all have to be done on a genuine Apple Mac.
- VMware Fusion - this allows creating VMs that run as windows within a normal Mac environment. You can therefore have a virtual Mac running inside a Mac. This is useful to either run simultaneously different versions of macOS or to run a test environment inside your production environment. A lot of people are going to use this approach to run an older version of macOS which supports 32bit apps as macOS Catalina will not support old 32bit apps.
- VMware ESXi aka vSphere - this is a different approach known as a ‘bare metal’ approach. With this you use a special VMware environment and then inside that create and run virtual machines. So on a Mac you could create one or more virtual Mac but these would run inside ESXi and not inside a Mac environment. It is more commonly used in enterprise situations and hence less applicable to Mac users.
- Parallels Desktop - this works in the same way as VMware Fusion but is written by Parallels instead.
- VirtualBox - this works in the same way as VMware Fusion and Parallels Desktop. Unlike those it is free of charge. Ostensibly it is ‘owned’ by Oracle. It works but at least with regards to running virtual copies of macOS is still vastly inferior to VMware Fusion and Parallels Desktop. (You get what you pay for.)
Last time I checked Apple’s terms you could do the following.
- Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of doing software development
- Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of testing
- Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of being a server
- Run a virtualised copy of macOS on a genuine Apple made Mac for personal non-commercial use
No. Apple spells this out very clearly in the License Agreement for macOS. Must be installed on Apple branded hardware.
They switched to ARM in 2020, so unless their legal position changed around ARM, I’d guess that they’re probably still relying on the EULA restrictions. That being said, EULAs have also been thrown out for various reasons, so…shrugs
goes looking for the actual license text.
Yeah, this is Tahoe’s EULA, the most-recent release:
https://www.apple.com/legal/sla/docs/macOSTahoe.pdf
Page 2 (of 895 pages):
For individual purchases, they only allow it on Apple-branded hardware. For Mac App Store purchases, they additionally allow up to two virtual instances of MacOS to run on Apple-branded hardware that is also running the OS, and only under certain conditions (like for software development). And for volume purchase contracts, they say that the terms are whatever the purchaser negotiated. I’m assuming that there’s no chance that Apple is going to grant some “go use it as much as you want, whenever you want, for CI tests or builds for open-source projects targeting MacOS” license.
So in the general case, the EULA prohibits you from running MacOS on non-Apple hardware, period.

I’m similarly dubious about using LLMs to write code. I’m certainly not opposed to automation; software development has seen massive amounts of automation over the decades. But software is not very tolerant of errors.
If you’re using an LLM to generate text for human consumption, then an error here or there often isn’t a huge deal. We get cued by text; “approximately right” is often pretty good for the way we process language. Same thing with images. It’s why, say, an oil painting works; it’s not a perfect depiction of the world, but it’s enough to cue our brain.
There are situations where “approximately right” might be more reasonable. There are some where it might even be pretty good: instead of manually writing commit messages, which are for human consumption, maybe we could have LLMs describe what code changes do, and as LLMs get better, the descriptions improve too.
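As a rough sketch of that commit-message idea, with assumptions labeled: the `git diff --cached` call is real, but `draft_commit_message` is a hypothetical stand-in for whatever model you’d actually send the diff to.

```python
# Sketch: draft a commit message from the staged diff.
# draft_commit_message() is a hypothetical stand-in for a real model call.
import subprocess


def staged_diff() -> str:
    # "git diff --cached" shows what would go into the next commit.
    return subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    ).stdout


def draft_commit_message(diff: str) -> str:
    # Placeholder: a real version would send the diff to a model and return
    # its summary. A trivial heuristic keeps the sketch runnable.
    changed = [line[len("+++ b/"):] for line in diff.splitlines()
               if line.startswith("+++ b/")]
    return "Update " + ", ".join(changed) if changed else "(no staged changes)"


if __name__ == "__main__":
    print(draft_commit_message(staged_diff()))
```

The output is for human eyes, so an imperfect draft is still useful; a reviewer can fix it in seconds.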
This doesn’t mean that I think that AI and writing code can’t work. I’m sure that it’s possible to build an AGI that does fantastic things. I’m just not very impressed by using a straight LLM, and I think that the limitations are pretty fundamental.
I’m not completely willing to say that it’s impossible. Maybe we could develop, oh, some kind of very strongly typed programming language aimed specifically at this job, where LLMs are a good heuristic for coming up with solutions and the type system is aimed at checking that work. That might not be possible, but right now, we’re trying to work with programming languages designed for humans.
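A minimal sketch of that “propose, then mechanically check” shape, with big assumptions: `propose_candidates` is a made-up stand-in for a model, and the checker here is just a property spec rather than a real type system.

```python
# Sketch: treat the model as a heuristic proposer and only accept candidates
# that pass a mechanical check. propose_candidates() is a hypothetical stub.
from typing import Callable, Iterable, Optional

SortFn = Callable[[list[int]], list[int]]


def propose_candidates() -> Iterable[SortFn]:
    # Stand-in for "ask the model for several implementations of sort()".
    yield lambda xs: xs                         # wrong: doesn't sort
    yield lambda xs: sorted(xs, reverse=True)   # wrong: descending
    yield lambda xs: sorted(xs)                 # right


def satisfies_spec(impl: SortFn) -> bool:
    # The mechanical checker: the candidate must agree with sorted() on a set
    # of cases, which covers both ordering and preserving the contents.
    cases = [[], [3, 1, 2], [5, 5, 1], list(range(10, 0, -1))]
    return all(impl(list(xs)) == sorted(xs) for xs in cases)


def first_verified() -> Optional[SortFn]:
    for candidate in propose_candidates():
        if satisfies_spec(candidate):
            return candidate
    return None


if __name__ == "__main__":
    impl = first_verified()
    print(impl([4, 2, 7]) if impl else "no candidate passed the checker")
```

The interesting part isn’t the toy spec; it’s the division of labor, where the model only has to be a decent guesser because something deterministic gets the final say.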
Maybe LLMs will pave the way to getting systems in place that have computers do software engineering, and then later we can just slip in more-sophisticated AI.
But I don’t think that the current approach will wind up being the solution.
Summarizing text — probably not primarily books — is one area that I think might be more useful. It is a task that many people do spend time doing. Maybe it’s combining multiple reports from subordinates, say, and then pushing a summary upwards.
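That “roll reports upward” pattern is easy to sketch. `summarize` below is a hypothetical placeholder (plain truncation, so the sketch runs); the point is the two-stage shape: condense each report, then condense the summaries.

```python
# Sketch: hierarchical summarization. Each report is condensed on its own,
# then the per-report summaries are condensed into one upward-facing brief.
# summarize() is a hypothetical stand-in for a real model call.

def summarize(text: str, max_words: int = 50) -> str:
    # Placeholder: truncate instead of calling a model, so the sketch runs.
    return " ".join(text.split()[:max_words])


def roll_up(reports: dict[str, str]) -> str:
    per_report = {name: summarize(body, max_words=40)
                  for name, body in reports.items()}
    combined = "\n".join(f"{name}: {summary}"
                         for name, summary in per_report.items())
    return summarize(combined, max_words=120)


if __name__ == "__main__":
    reports = {
        "ops": "Deployed the new build to staging; two regressions found so far...",
        "support": "Ticket volume is down week over week; the top issue is login errors...",
    }
    print(roll_up(reports))
```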
I think that in general, quality issues are not fundamental.
There are some things that we want to do that I don’t think the current approaches will do well, like producing consistent representations of characters. There are people working on it. Will they succeed? Maybe. I think that for, say, editorial illustration for a magazine, it can be a pretty decent tool today.
I’ve also been fairly impressed with voice synth done via genAI, though it’s one area that I haven’t dug into deeply.
I think that there’s a solid use case for voice query and response on smartphones. On a desktop, I can generally sit down and browse webpages, even if an LLM might combine information more quickly than I can manually. Someone, say, driving a car or walking somewhere can ask a question and have an LLM spit out an answer.
I think that image tagging can be a pretty useful case. It doesn’t have to be perfect, just a lot cheaper and more universal than it would be to have humans do it.
Some of what we’re doing now, both on the part of implementers and of the R&D people working on the core technologies, is understanding what the fundamental roadblocks are, and quantifying strengths and weaknesses. That’s part of the process for anything you do. I can see an argument that more-limited resources should be put on implementation, but a company is going to have to go out and try something and then say “okay, this is what does and doesn’t work for us” in order to know what to require in the next iteration. And that’s not new. Take, oh, the Macintosh. Apple tried to put out the Lisa. It wasn’t a market success. But taking what did work and correcting what didn’t was a lot of what led to the Macintosh, which was a much larger success and closer to what the market wanted. It’s going to be an iterative process.
I also think that some of that is laying the groundwork for more-sophisticated AI systems to be dropped in. If you think of, say, today’s LLM as a placeholder for a more-sophisticated system down the line, the interfaces being built into other software now will still be there; you just change out the backend. So some of that is going to be positioning not just for the current crop of systems, but for tomorrow’s crop.
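One way to picture that “swap the backend later” idea, with the interface as the stable part; every name below is made up for illustration.

```python
# Sketch: application code targets a narrow interface, so the model behind it
# can be replaced without touching the callers. All names here are invented.
from typing import Protocol


class Assistant(Protocol):
    def answer(self, prompt: str) -> str: ...


class CurrentLLMBackend:
    def answer(self, prompt: str) -> str:
        # Today: call whatever LLM you have. Stubbed so the sketch runs.
        return f"[llm draft] {prompt[:40]}..."


class FutureBackend:
    def answer(self, prompt: str) -> str:
        # Later: drop in something more sophisticated; callers don't change.
        return f"[better system] {prompt[:40]}..."


def handle_request(assistant: Assistant, prompt: str) -> str:
    # The rest of the application only ever sees the Assistant interface.
    return assistant.answer(prompt)


if __name__ == "__main__":
    for backend in (CurrentLLMBackend(), FutureBackend()):
        print(handle_request(backend, "Summarize the incident reports from this week"))
```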
If you remember the Web around the late 1990s, the websites that companies did have were often pretty amateurish-looking. They were often not very useful. The teams that made them didn’t have a lot of resources. The tools for building websites were still limited, and best practices hadn’t been developed yet.
https://www.webdesignmuseum.org/gallery/year-1997
But what those companies did was get websites up, get people using them, and start building the infrastructure for what, some years later, became a much-more-important part of the company’s interface and operations.
I think that that’s where we are now regarding use of AI. Some people are doing things that won’t ultimately wind up working (the way Web portals never really took over, in the Web’s case). Back then, some important things, like widespread encryption, weren’t yet deployed. The languages and toolkits for doing development didn’t really exist yet. Stuff like Web search, which today is a lot more approachable and something that we simply consider pretty fundamental to use of the Web, wasn’t all that great. If you looked at the Web in 1997, it had a lot of deficiencies compared to brick-and-mortar companies. But…that also wasn’t where things stayed.
Today, we’re making dramatic changes to how models work, like the rise of mixture-of-experts models (MoEs). I don’t think that there’s much of a consensus on what hardware we’ll wind up using. Training is computationally expensive. Just running models on your own computer still involves a fair amount of technical knowledge, much the way the MS-DOS era on personal computers kept a lot of people from being able to do much with computers. There are efficiency issues, and basic techniques for doing things like condensing knowledge are still being developed. The LLMs people are building today have very little “mutable” memory: you’re taking a snapshot of information at training time and producing something that can do very little learning at runtime. But if I had to make a guess, a lot of those things will be worked out.