• Aceticon@lemmy.dbzer0.com

    Yeah, if you’re supposedly in AI/ML and don’t recognize a (stupidly simplified) diagram of a Neural Network, you don’t really make stuff with it; you’re just another user (probably a “prompt engineer”).

    Even people creating Machine Learning solutions with other techniques would recognize that as representing a Neural Network.

    That should be as recognizable to a professional in that domain as a long string of 0s and 1s would be recognizable as binary to a programmer - even if you’re not working at that level anymore, you still recognize the building blocks of your trade.

    • NotANumber@lemmy.dbzer0.com

      To be more specific, this is an MLP (Multi-Layer Perceptron). Neural Network is a catch-all term that also covers things such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), diffusion models and, of course, Transformers.

      What you are arguing with online is some variant of a Generative Pre-trained Transformer. GPTs do have MLP or MoE layers, but that’s only one part of what they are; they also have multi-headed attention mechanisms and embedding + unembedding vectors.
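
      Roughly, a single decoder block’s forward pass looks something like this - a toy numpy sketch with one attention head, no causal mask and made-up sizes, not any particular model’s implementation:

      ```python
      # Illustrative only: one attention head, no causal mask, random weights.
      import numpy as np

      def softmax(x, axis=-1):
          e = np.exp(x - x.max(axis=axis, keepdims=True))
          return e / e.sum(axis=axis, keepdims=True)

      d_model, vocab = 16, 100                          # toy sizes, picked arbitrarily
      rng = np.random.default_rng(0)

      W_embed = rng.normal(size=(vocab, d_model))       # embedding vectors, one per token
      W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))  # one attention head
      W_mlp1 = rng.normal(size=(d_model, 4 * d_model))  # the MLP inside the block
      W_mlp2 = rng.normal(size=(4 * d_model, d_model))
      W_unembed = rng.normal(size=(d_model, vocab))     # unembedding back to token logits

      tokens = np.array([3, 17, 42])                    # a toy input sequence
      x = W_embed[tokens]                               # (seq, d_model)

      # attention: positions mix information via soft weights
      scores = (x @ W_q) @ (x @ W_k).T / np.sqrt(d_model)
      x = x + softmax(scores) @ (x @ W_v)

      # the per-position MLP
      x = x + np.maximum(0, x @ W_mlp1) @ W_mlp2

      probs = softmax(x @ W_unembed)                    # one distribution over the vocab per position
      print(probs.shape)                                # (3, 100)
      ```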

      I know all this and wouldn’t call myself a machine learning expert. I just use the things. Though I did once train a simple MLP like the one in the picture. I think it’s quite bad calling yourself a machine learning expert and not knowing all of this stuff and more.

      • Aceticon@lemmy.dbzer0.com

        Right, if I understood it correctly, what you see as “IF” is the multi-headed attention stuff. I was under the impression that you can’t actually have discontinuous functions there, so even the multi-headed attention stuff involves functions whose first derivative never goes to +/- infinity - they can boost or suppress inputs, but they don’t have the hard YES/NO transitions of a logical IF.
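
        Something like this toy contrast is what I mean - just an illustration of soft weighting versus a hard IF, not how any real model is written:

        ```python
        import numpy as np

        def hard_select(scores):
            # a logical IF: pick exactly one input and ignore the rest (discontinuous)
            out = np.zeros_like(scores)
            out[np.argmax(scores)] = 1.0
            return out

        def soft_select(scores):
            # softmax, as used in attention: smooth weights that boost or suppress but never snap
            e = np.exp(scores - scores.max())
            return e / e.sum()

        scores = np.array([1.0, 1.1, -2.0])
        print(hard_select(scores))  # [0. 1. 0.] - a tiny score change can flip it completely
        print(soft_select(scores))  # ~[0.46 0.51 0.02] - varies smoothly with the scores
        ```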

        However, the Genetic Algorithms stuff is something completely different from Neural Networks: it’s basically an Evolutionary method of finding the best “formula” for turning inputs into the desired outputs. You assess different variants of the “formula” against the training data, pick the best ones, generate a new generation of “formula” variants from those, assess them in turn, and keep going until the error rate drops below a certain value - it’s basically a way of using “Natural” Selection on mathematical formulas.
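
        As a sketch of that loop (a made-up toy where the “formula” is just the three coefficients of a polynomial - real setups differ a lot in how they encode, cross over and mutate candidates):

        ```python
        # Toy GA: evolve the coefficients of y = a*x^2 + b*x + c to fit some training data.
        import numpy as np

        rng = np.random.default_rng(0)
        xs = np.linspace(-2, 2, 50)
        ys = 3 * xs**2 - 1 * xs + 0.5                   # the target the GA has to rediscover

        def error(candidate):
            a, b, c = candidate
            return np.mean((a * xs**2 + b * xs + c - ys) ** 2)

        population = rng.normal(size=(100, 3))          # initial random generation
        for generation in range(200):
            scores = np.array([error(ind) for ind in population])
            best = population[np.argsort(scores)[:20]]  # selection: keep the fittest
            if scores.min() < 1e-6:                     # stop once the error is low enough
                break
            parents = best[rng.integers(0, 20, size=(100, 2))]       # pick pairs of parents
            children = parents.mean(axis=1)                          # crossover (here: averaging)
            children += rng.normal(scale=0.1, size=children.shape)   # mutation
            population = children

        print(best[0])   # should end up close to [3, -1, 0.5]
        ```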

        As far as I can tell, Genetic Algorithms can’t really scale to the size of something like an LLM (the training requirements would be even more insane), though I guess the technique could be used to train part of a Neural Network or to create functional blocks that work together with NNs.

        And yeah, MLPs trained via simple Backpropagation are exactly what I’m familiar with, having learned that stuff 3 decades ago as part of my degree, when that was the pinnacle of NN technology and model architectures were still stupidly simple. That’s why I would be shocked if a so-called ML “expert” didn’t recognize it: it’s the most basic form of Neural Network there is and it’s been doing the rounds for ages (that stuff was literally used for automated postal code recognition on letters for automated mail sorting back in the 90s).
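
        For reference, the whole thing fits in a few lines - a toy sketch of one hidden layer with sigmoid activations learning XOR, roughly the textbook setup from back then:

        ```python
        # One hidden layer, sigmoid activations, plain backprop, learning XOR.
        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # input -> hidden
        W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # hidden -> output
        lr = 1.0

        for _ in range(5000):
            # forward pass
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            # backward pass: gradients of the squared error, pushed back layer by layer
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
            W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

        print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
        ```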

        I would expect that, for people doing ML, a simple MLP is as recognizable as binary is for programmers - sure, people don’t work at that level anymore, but they should at least recognize it.

        • NotANumber@lemmy.dbzer0.com

          Yes, genetic algorithms are something different, though they are sometimes used in training or architecting NNs - just not at the scale of modern LLMs.

          FYI, you can have all-or-nothing outputs from a perceptron or other network; it all depends on the activation function. Most LLMs don’t use that kind of activation function, but it is possible. Have you heard of BitNet? It restricts an LLM’s weights to just three states (-1, 0, +1). It’s interesting stuff.
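
          Roughly the difference - the same toy weighted sum pushed through a hard step versus a sigmoid, plus a crude nod to the BitNet-style ternary weights (none of this is any specific model’s code):

          ```python
          import numpy as np

          w = np.array([2.0, -1.0, 0.5])     # toy weights, picked arbitrarily
          x = np.array([0.3, 0.8, -0.2])     # toy inputs
          z = w @ x                          # the perceptron's weighted sum

          step_out = 1.0 if z > 0 else 0.0           # all-or-nothing (hard YES/NO) activation
          sigmoid_out = 1.0 / (1.0 + np.exp(-z))     # graded output, as in most modern nets
          ternary_w = np.clip(np.round(w), -1, 1)    # weights snapped to -1/0/+1, BitNet-style

          print(step_out, round(sigmoid_out, 3), ternary_w)
          ```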

          • Aceticon@lemmy.dbzer0.com

            I haven’t heard of BitNet.

            Then again, my knowledge of Machine Learning is 3 decades old (so from before Recurrent Neural Networks became common, much less Attention), plus some more recent reading up on LLMs from an implementation point of view to understand at least a bit of how they work (it’s funny how much of the modern stuff is still anchored in 3-decade-old concepts).