It was never free by your definition, so you were never going to use it in the first place. No need to say goodbye when you were never here.
If you’re going to use it, you’d be paying for it one way or another, either with money or with privacy. Par for the course.
Everything eventually dies off, or transforms into something that no longer serves our needs and the legacy version dies off; free, paid, proprietary, or open source, it doesn’t matter. The only thing we can do is position ourselves so that when it happens (not if), we are ready to take what we need to the next solution that serves our needs.
iOS supports VPNs out of the box; apps just make them easier to configure. Please don’t spread divisive misinformation, whether it’s out of ignorance or otherwise.
This is Apple; they value different things than most people do… sometimes that’s warranted, results in a much better experience, and pushes the whole industry forward (see MagSafe -> Qi2 for a recent example); other times it just makes them late adopters. The visual blemish of a folding crease is apparently one of the things they care about.
Amazing stuff. Thank you so much!
The LM password hash (the predecessor to NTLM) was calculated in two blocks of 7 characters from a password truncated to 14 characters. That meant the rainbow table for it was much smaller than it would otherwise need to be, and if your password is shorter than 14 characters, part of the hash becomes much easier to brute force, because the missing characters are just padded with nulls.
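For the curious, here’s a rough sketch of the scheme in Python (assuming pycryptodome for the DES primitive; constants are from the published LM algorithm):

```python
# Rough sketch of the LM hash scheme, assuming pycryptodome
# (pip install pycryptodome) for DES. Each 7-character half is
# hashed independently, so an attacker can crack them separately.
from Crypto.Cipher import DES

def _expand_des_key(half: bytes) -> bytes:
    """Spread 7 key bytes (56 bits) across 8 bytes; DES ignores the low parity bit."""
    bits = int.from_bytes(half, "big")
    return bytes(((bits >> (49 - 7 * i)) & 0x7F) << 1 for i in range(8))

def lm_hash(password: str) -> bytes:
    # Uppercase, then truncate/null-pad to exactly 14 bytes.
    pw = password.upper().encode("ascii", "replace")[:14].ljust(14, b"\x00")
    # Hash each 7-byte half independently against a fixed plaintext.
    return b"".join(
        DES.new(_expand_des_key(pw[i : i + 7]), DES.MODE_ECB).encrypt(b"KGS!@#$%")
        for i in (0, 7)
    )

# A password of 7 or fewer characters leaves the second half all nulls,
# which always hashes to the well-known constant AAD3B435B51404EE.
print(lm_hash("short").hex().upper())
```

That tell-tale second-half constant is exactly why password auditors could instantly spot short passwords in old SAM dumps.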
If memory serves, 175B parameters is the GPT-3 model, not even the 3.5 model that caught the world by surprise; and they have not disclosed parameter counts for GPT-4, 4o, or o1 yet. If memory also serves, GPT-3 was primarily English and considered only a relatively small vocabulary (roughly 50K tokens, I think) as next-token candidates. Now that it works across multiple languages and is multimodal, the parameter space must be much, much larger.
The number of things it can do now is incredible, but the incremental improvements we perceive in LLMs will probably slow down (the pace fits the predicted scaling lines in log space, so each equal gain costs exponentially more)… until the next big thing (neural nets > expert systems > deep learning > LLMs > ???). Such an exciting time we’re in!
Edit: found it. Roughly 50K tokens in the input/output embedding for GPT-3. 3Blue1Brown has a really good explanation here for anyone interested: https://youtu.be/wjZofJX0v4M
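If you want to check the vocabulary size yourself, the tiktoken library ships the r50k_base encoding used by the GPT-3 models (a small sketch; assumes tiktoken is installed):

```python
# Small sketch using OpenAI's tiktoken library (pip install tiktoken)
# to confirm the vocabulary size of GPT-3's "r50k_base" encoding.
import tiktoken

enc = tiktoken.get_encoding("r50k_base")  # encoding used by GPT-3 models
print(enc.n_vocab)                        # 50257 next-token candidates
print(enc.encode("Hello, world!"))        # text -> token IDs
```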
The models are not wrong. They are nothing but statistical models that are really good at predicting the next word likely to follow, based on the prior information given. They have no understanding of the context of the words, just that statistically they’re likely to follow. As such, all LLM outputs are correct with respect to their design.
The users’ assumption/expectation that the output is factual is what is wrong. “Hallucination” is a fancy word meant to make users feel less upset when the output passage doesn’t match their assumptions/expectations.
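To make that concrete, here’s a toy next-word predictor (very much not a real LLM, just an illustration of the principle): it only knows which word tends to follow which, and nothing in it can be “wrong”, it just samples what is statistically likely:

```python
# Toy next-word predictor: counts which word follows which in a tiny
# corpus, then samples from those counts. It has no notion of truth,
# only of statistical likelihood -- the same idea LLMs scale up.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    counts = follows[prev]
    return random.choices(list(counts), weights=list(counts.values()))[0]

word = "the"
for _ in range(6):
    print(word, end=" ")
    if not follows[word]:   # dead end: no observed continuation
        break
    word = next_word(word)
print()
```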
The network effect is too strong. The minority whining here isn’t going to make a dent. Next time you’re out, look at how many people are using ad-ridden apps instead of paying $0.99 or whatever to remove the ads. The users have already decided their time and privacy are worthless and would rather get the service for “free”.
4o does perform web searches, give summaries from a couple of pages, and include links to those pages when prompted properly.
However, as most people know, the first couple of results don’t always tell the full picture and further actual research is required… but most “AI assistant” users (also including users of things like the voice assistants in smart speakers) tend to take the first response as fact…
¯\_(ツ)_/¯
Reducing ad spend on one platform, albeit often the elephant in the room for most companies’ online marketing departments, isn’t going to reduce prices at the till. Companies will either reallocate the ad spend elsewhere, thereby spamming more ads in front of everyone, or pocket the difference to pad their profit margins.
Google did not make RCS; RCS was created by the GSM Association as the successor to SMS. Google extended it with some extra features such as end-to-end encryption (but only when messages are routed through Google’s servers).
China mandated that 5G phones sold in China must support RCS, which is why Apple added support for it. Since Google is basically banned in China, you can pretty much bet that RCS going into or out of China will be unencrypted.
So you’re basically stuck between getting inferior unencrypted messages, or routing everything through Google.
Avoid RCS like the plague.
Sure. But the capacitors in these devices do make a pop, and the fragments/shrapnel from a damaged device depart from their physical location at a pace I would not be comfortable with.
If I’m dealing with a spicy pillow situation, the technical definition of whether or not something counts as an “explosion” is the last of my concerns.
Most portable electronics today use some variation of the lithium-ion battery, which can combust or explode if mishandled once it becomes unstable. However, devices generally have thermal-management software and hardware, as well as a multitude of other safety mechanisms, like power-management systems that handle charge regulation. Unless you intentionally puncture your battery, it’s not likely to cause any problems on its own.
It is easier to think of the SSL termination in legs. With the orange cloud (Cloudflare’s proxy) enabled, visitors terminate TLS at Cloudflare’s edge against Cloudflare’s certificate, and Cloudflare makes a separate, second-leg connection to your server, which you can secure with a Cloudflare origin certificate on your reverse proxy.
If, however, you want to directly expose your service without the orange cloud (running a game server on the same subdomain, for example), then you’d disable the orange cloud and use Let’s Encrypt or deploy your own certificate on your reverse proxy.
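A quick way to see which leg you’re looking at is to check who issued the certificate a host presents (standard-library sketch; example.yourdomain.com is a placeholder):

```python
# Standard-library sketch: connect to a host and print the certificate
# issuer. With the orange cloud on you'll typically see a Cloudflare
# edge cert; with it off, your own / Let's Encrypt cert on the origin.
import socket
import ssl

def cert_issuer(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return issuer.get("organizationName", "unknown")

print(cert_issuer("example.yourdomain.com"))  # placeholder hostname
```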
Looking great! I think it would be amazing if there were filters for processor generation as well as form factor. Thanks for sharing this tool!
Using Ollama to try a couple of models right now for an idea. I’ve tried Llama 3.2 and Qwen 2.5 3B, both of which fit in my 3050’s 6 GB of VRAM. I’ve also tried, for fun, Qwen 2.5 32B, which fits in my RAM (I’ve got 128 GB), but it could only generate a couple of tokens per second, making it very much a non-interactive experience. I’ll need to explore the response-time piece a bit further to see if there are ways I can still lean on larger models despite the longer delays.
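Here’s a rough sketch of how the throughput can be measured (assumes the `ollama` Python package and a local Ollama server with the models already pulled; counting streamed chunks only approximates tokens):

```python
# Rough throughput check, assuming the `ollama` Python package
# (pip install ollama) and a local Ollama server with the models pulled.
# Each streamed chunk is roughly one token's worth of text, so chunks
# per second approximates tokens per second.
import time
import ollama

def tokens_per_second(model: str, prompt: str) -> float:
    stream = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    start = time.monotonic()
    chunks = sum(1 for _ in stream)  # drain the stream, counting chunks
    return chunks / (time.monotonic() - start)

for model in ("llama3.2", "qwen2.5:3b", "qwen2.5:32b"):
    rate = tokens_per_second(model, "Describe RAID 5 in one paragraph.")
    print(f"{model}: {rate:.1f} tok/s")
```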