• Aceticon@lemmy.dbzer0.com · 5 points · 3 hours ago

      Yeah, if you’re supposedly in AI/ML and don’t recognize a (stupidly simplified) diagram of a Neural Network, you don’t really make stuff with it - you’re just another user (probably a “prompt engineer”).

      Even people creating Machine Learning solutions with other techniques would recognize that as representing a Neural Network.

      That should be as recognizable to a professional in that domain as a long string of 0s and 1s would be to a programmer - even if you’re not working with it at that level, you recognize the building blocks of your trade.

      • NotANumber@lemmy.dbzer0.com · 2 points · 2 hours ago

        To be more specific, this is an MLP (Multi-Layer Perceptron). Neural Network is a catch-all term that includes other things such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Diffusion models and, of course, Transformers.

        What you are arguing with online is some variant of a Generative Pre-trained Transformer. Those do have MLP or MoE layers, but that’s only one part of what they are; they also have multi-headed attention mechanisms and embedding + unembedding vectors.

        I know all this and I wouldn’t call myself a machine learning expert - I just use the things, though I did once train a simple MLP like the one in the picture. I think calling yourself a machine learning expert while not knowing all of this stuff and more is quite bad.

        • Aceticon@lemmy.dbzer0.com · 1 point · 2 hours ago

          Right, if I understood it correctly, what you see as “IF” is the multi-headed attention stuff. I was under the impression that you can’t actually have discontinuous functions there, so even the multi-headed attention stuff involves functions whose first derivative never goes to +/- infinity - they can boost or suppress inputs, but they don’t have the hard YES/NO transitions of a logical IF.
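
          To make that concrete: here’s a rough single-head scaled dot-product attention sketch in plain numpy (toy dimensions, my own illustration, not any particular model’s code) - the “gating” is a softmax, which is smooth, not a hard IF:

              import numpy as np

              def softmax(x, axis=-1):
                  e = np.exp(x - x.max(axis=axis, keepdims=True))
                  return e / e.sum(axis=axis, keepdims=True)

              def attention(Q, K, V):
                  # soft, continuous weighting of the values - no hard YES/NO branch anywhere
                  scores = Q @ K.T / np.sqrt(K.shape[-1])
                  weights = softmax(scores, axis=-1)   # every weight lands strictly between 0 and 1
                  return weights @ V

              Q = np.random.randn(4, 8)   # 4 tokens, toy head dimension of 8
              K = np.random.randn(4, 8)
              V = np.random.randn(4, 8)
              out = attention(Q, K, V)    # shape (4, 8)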

          However, the Genetic Algorithms stuff is something completely different from Neural Networks: it’s basically an Evolutionary method of finding the best “formula” for turning inputs into the desired outputs. You assess different variants of the “formula” against the training data, pick the best ones, generate a new generation of “formula” variants from those, assess them, and keep going until the error rate drops below a certain value - it’s basically “Natural” Selection applied to mathematical formulas.
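
          A minimal sketch of that loop (my own toy Python, where fitness, random_formula and mutate are placeholders, not any real library):

              import random

              def evolve(fitness, random_formula, mutate, generations=100, pop_size=50):
                  # start from a random population of candidate "formulas"
                  population = [random_formula() for _ in range(pop_size)]
                  for _ in range(generations):
                      # assess every variant against the training data (via the fitness callback)
                      population.sort(key=fitness, reverse=True)
                      survivors = population[:pop_size // 5]   # keep the best ones
                      # breed the next generation from the survivors
                      children = [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
                      population = survivors + children
                  return max(population, key=fitness)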

          As far as I can tell, Genetic Algorithms can’t really scale to the size of something like an LLM (the training requirements would be even more insane), though I guess the technique could be used to train part of a Neural Network or to create functional blocks that work together with NNs.

          And yeah, MLPs trained via simple Backpropagation are exactly what I’m familiar with, having learned that stuff 3 decades ago as part of my degree, back when that was the pinnacle of NN technology and model architectures were still stupidly simple. That’s why I would be shocked if a so-called ML “expert” didn’t recognize it: it’s the most basic form of Neural Network there is and it’s been doing the rounds for ages (that stuff was literally used for automated postal code recognition on letters for automated mail sorting back in the 90s).
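
          That kind of network still fits in a few lines - a minimal sketch of a tiny MLP trained with plain backpropagation on XOR (numpy only, my own toy example):

              import numpy as np

              rng = np.random.default_rng(0)
              X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
              y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

              W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)          # hidden layer
              W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)          # output layer
              sigmoid = lambda z: 1 / (1 + np.exp(-z))

              for _ in range(5000):
                  # forward pass: weighted sums + smooth activations
                  h = sigmoid(X @ W1 + b1)
                  out = sigmoid(h @ W2 + b2)
                  # backward pass: chain rule, then a small gradient step
                  d_out = (out - y) * out * (1 - out)
                  d_h = (d_out @ W2.T) * h * (1 - h)
                  W2 -= 0.5 * h.T @ d_out
                  b2 -= 0.5 * d_out.sum(axis=0)
                  W1 -= 0.5 * X.T @ d_h
                  b1 -= 0.5 * d_h.sum(axis=0)

              print(out.round(2))   # should end up close to [[0], [1], [1], [0]]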

          I would expect that for people doing ML a simple MLP is as recognizable as binary is for programmers - sure, people don’t work at that level anymore, but they should at least recognize it.

            • NotANumber@lemmy.dbzer0.com · 2 points · 16 minutes ago

            Yes, genetic algorithms are something different. They are sometimes used in training or architecting NNs, but not at the scale of modern LLMs.

            FYI, you can have all-or-nothing outputs from a perceptron or other network; it all depends on the activation function. Most LLMs don’t use that kind of activation function, but it is possible. Have you heard of BitNet? It restricts each weight in an LLM to one of just three states. It’s interesting stuff.
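
            Quick illustration of the difference (my own toy code, nothing LLM-specific) - the classic all-or-nothing activation is a step function, versus a smooth one like the sigmoid:

                import numpy as np

                def step(z):        # classic perceptron: hard YES/NO output
                    return np.where(z >= 0, 1.0, 0.0)

                def sigmoid(z):     # smooth alternative: output varies continuously between 0 and 1
                    return 1 / (1 + np.exp(-z))

                z = np.array([-2.0, -0.1, 0.0, 0.1, 2.0])
                print(step(z))      # [0. 0. 1. 1. 1.]
                print(sigmoid(z))   # roughly [0.12 0.48 0.5 0.52 0.88]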

              • Aceticon@lemmy.dbzer0.com · 1 point · 10 minutes ago

              I haven’t heard of bitnet.

              Then again my knowledge of Machine Learning is 3 decades old (so, even before Recurrent Neural Networks were invented, much less Attention) and then some more recent reading up on LLMs from an implementation point of view to understand at least a bit how they work (it’s funny how so much of the modern stuff is still anchored in 3 decades old concepts).

            • NotANumber@lemmy.dbzer0.com · 1 point · 2 hours ago

            Not all machine learning is AI. There are plenty of Machine Learning algorithms like Random Forests that are not neural networks. Deep learning would be big neural networks.
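
            For example, a random forest in scikit-learn involves no neural network at all (quick sketch using sklearn’s built-in iris dataset):

                from sklearn.datasets import load_iris
                from sklearn.ensemble import RandomForestClassifier
                from sklearn.model_selection import train_test_split

                X, y = load_iris(return_X_y=True)
                X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

                # an ensemble of decision trees - no neurons, no backpropagation
                model = RandomForestClassifier(n_estimators=100, random_state=0)
                model.fit(X_train, y_train)
                print(model.score(X_test, y_test))   # accuracy on the held-out split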

              • Aceticon@lemmy.dbzer0.com · 1 point · 3 hours ago

              I haven’t really done Neural Networks in 2 decades, and was under the impression that NNs pretty much dominate Machine Learning nowadays, whilst stuff like Genetic Algorithms is way less popular or not used at all anymore.

              Is that the case?

                • Nikls94@lemmy.world · 1 point · 2 hours ago

                I don’t know if it is the case for the world today, but all those Models behave like genetic algorithms and IF-functions with a little RNG sprinkled on top of them.

                  • Aceticon@lemmy.dbzer0.com · 3 points · 2 hours ago

                  You mean that they’re actually competing multiple variants of a model against each other to see which ones get closer to generating the expected results, and picking the best ones to create the next generation?

                  Because that’s how Genetic Algorithms work and get trained, which is completely different from how Neural Networks work and get trained.

                  Also, the links in Neural Networks don’t use IF-functions at all: the output of a neuron is just a mathematical operation on the values of all its inputs (basically a sum of the results of a function applied to the input numbers, though nowadays there are also cyclic elements). The whole thing is just floating-point values being passed down the network (or back up the network during training) whilst being transformed by some continuous function or other, with none of the discontinuities you would get with IF involved.
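
                  In code a single neuron is literally just this (toy numpy sketch, no branching anywhere):

                      import numpy as np

                      def neuron(inputs, weights, bias):
                          # weighted sum of the inputs, squashed by a continuous activation (tanh here)
                          return np.tanh(np.dot(inputs, weights) + bias)

                      print(neuron(np.array([0.2, -1.5, 3.0]), np.array([0.4, 0.1, -0.7]), 0.05))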

                • NotANumber@lemmy.dbzer0.com · 2 points · 2 hours ago

                It’s only one type of neural network. A dense MLP. You have sparse neural networks, recurrent neural networks, convolutional neural networks and more!

                • Tamo240@programming.dev · 4 points · 6 hours ago

                It’s an abstraction for neural networks. Different individual networks might vary in the number of layers (columns), nodes (circles), or loss function (lines), but the concept is consistent across all.

                  • NotANumber@lemmy.dbzer0.com · 2 points · 16 minutes ago

                  Kinda but also no. That’s specifically a dense neural network or MLP. It gets a lot more complicated than that in some cases.

  • NeedyPlatter@lemmy.ca · 33 points · 20 hours ago

    Broke: Falling for ragebait and getting into arguments online

    Woke: Assuming everyone with strong opinions on something you disagree with is a bot /j

  • Una@europe.pub · 4 points · 21 hours ago

    101111001001010010001111001110101001101010001111110101011010101010101010110101010101101010110101101001101010001011111101010110101011010101010011010101010101010101010101001010110101010101010101010101010101010101010100110101010101010101010101