Yes, genetic algorithms are something different. They are sometimes used in training or in architecting NNs, but not at the scale of modern LLMs.
FYI, you can have all-or-nothing outputs from a perceptron or other network; it all depends on the activation function. Most LLMs don't use that kind of activation function, but it is possible. Have you heard of BitNet? It constrains the values in an LLM to one of just three states (-1, 0, +1). It's interesting stuff.
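A minimal sketch of both ideas, assuming NumPy and function names of my own choosing (the ternary quantizer loosely mimics the absmean scaling described for BitNet b1.58, not the actual implementation):

```python
import numpy as np

# All-or-nothing output: a Heaviside step activation fires 1 or 0,
# nothing in between -- the classic perceptron behavior.
def step_perceptron(x, w, b):
    return 1 if np.dot(w, x) + b > 0 else 0

# BitNet-style ternary quantization: map each value to one of three
# states {-1, 0, +1} after scaling by the mean absolute value.
def ternary_quantize(w):
    w = np.asarray(w, dtype=float)
    scale = np.mean(np.abs(w)) + 1e-8  # avoid division by zero
    return np.clip(np.round(w / scale), -1, 1).astype(int)

x = np.array([0.2, -1.0, 0.5])
w = np.array([1.0, 0.5, -0.3])
print(step_perceptron(x, w, b=0.1))         # -> 0 (did not cross threshold)
print(ternary_quantize([0.9, -0.05, -1.2])) # -> [ 1  0 -1]
```

Swapping the step function for a smooth one (sigmoid, GELU, etc.) is what gives modern networks their graded, differentiable outputs.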
Then again, my knowledge of machine learning is three decades old (so from before recurrent neural networks were invented, much less attention), plus some more recent reading up on LLMs from an implementation point of view, to understand at least a bit of how they work. (It's funny how much of the modern stuff is still anchored in concepts three decades old.)
I haven't heard of BitNet.