You can take “justifiable” to mean whatever you feel it means in this context, e.g., morally, artistically, environmentally, etc.
I think we should be building localized, smaller, more finely-tuned LLMs.
I used AI to help with debugging and coding, as well as to explore a theory I came up with a long time ago. With the framework, notes, research papers, and everything else I’d collected to support that theory, I was able to put it into practice in an AI cybersecurity product I’ve developed.
We’ve created 26,000 new cyber-threat datasets. Because I had access to an LLM that could help me work through the frameworks, notes, and research I’d gathered while trying to build this out, within a couple of months I had something that blew my original prototype out of the water.
My cybersecurity startup’s product uses less than 1 GB of RAM and, at peak, maybe 30% of a single CPU core, and it was built with ethics and safeguards in mind. It’s not an LLM but real machine learning plus reinforcement learning.
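To make that concrete, here’s a minimal sketch of the kind of resource-light, classical-ML approach described above (not the author’s actual system). The feature names, the synthetic traffic, and the choice of an IsolationForest are all my assumptions for illustration:

```python
# Sketch: classical ML (no LLM) scoring network events on commodity hardware.
# Features, data, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features:
# bytes out, bytes in, duration (s), distinct ports touched.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[5e4, 2e5, 30, 2],
                            scale=[1e4, 5e4, 10, 1],
                            size=(5000, 4))

# A forest this size fits in a few MB of RAM and scores events in
# microseconds -- consistent with a sub-1GB, fraction-of-a-core footprint.
detector = IsolationForest(n_estimators=100, contamination=0.01,
                           random_state=0)
detector.fit(normal_traffic)

suspicious = np.array([[9e5, 1e3, 2, 40]])  # exfil-like burst across many ports
print(detector.predict(suspicious))          # -1 = anomaly, 1 = normal
```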
To me, ethics also means resource awareness. If I’m poisoning the planet and its people, then it’s not a good product.
Building smaller, more specialized local models is not only better from a cybersecurity perspective; smaller local LLMs also mean new startups to build them, a race to innovate and improve resource usage, more data privacy, a smaller attack surface, and no obscenely expensive API calls and overage fees…
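As a quick illustration of the local-first point, here’s a minimal sketch of running a small open model entirely on your own machine via Hugging Face transformers. The model name is just an example; swap in whatever small model fits your hardware. The point is that no prompt ever leaves the box, so there are no API calls or per-token fees:

```python
# Sketch: fully local inference with a small open model.
# distilgpt2 is a few hundred MB on disk and runs comfortably on CPU.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
out = generator("Summarize this alert: repeated failed SSH logins from one IP.",
                max_new_tokens=40)
print(out[0]["generated_text"])
```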
What we should have is a symbiotic approach to AI: a partnership sort of understanding.
LLMs helped me with debugging and with putting this research and theory together, in a fraction of the time it took me to build the original framework.
I pushed for autonomous operation because I felt it was about giving people their time back. Providing freedom. If my cybersecurity product can take care of 94.1% of all threats before they reach an analyst, that analyst doesn’t have to wake up at 2 AM to sift through 10,000 false positives. We do it for them.
Now that analyst can do what they got a degree to do: actually defend a network. Build and explore threat research and databases. Find their purpose again.
We require that a human is always in the loop, and we help protect cybersecurity jobs by ensuring that human input is always the final decision. Let our AI do the heavy lifting so you can take care of the shit that matters and what you really want to do.
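Here’s a minimal sketch of that human-in-the-loop triage flow, under my own assumptions (the 0.95 cutoff, the Alert fields, and the auto-containment action are all hypothetical, not the author’s implementation):

```python
# Sketch: auto-handle high-confidence detections, escalate the rest.
# The human's verdict on escalated alerts is always final.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    score: float        # model confidence that this is a real threat
    description: str

AUTO_HANDLE_THRESHOLD = 0.95  # hypothetical cutoff for autonomous action

def triage(alert: Alert) -> str:
    if alert.score >= AUTO_HANDLE_THRESHOLD:
        # High confidence: contain automatically, log for later human review.
        return f"auto-contained: {alert.description}"
    # Everything else goes to an analyst, whose decision is final.
    return f"escalated to analyst: {alert.description} (score={alert.score:.2f})"

alerts = [
    Alert("10.0.0.5", 0.99, "known C2 beacon signature"),
    Alert("10.0.0.9", 0.60, "unusual login time"),
]
for a in alerts:
    print(triage(a))
```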
Sorry, I think my ADHD took control of this conversation.