• 7 Posts
  • 1.5K Comments
Joined 2 years ago
Cake day: January 8th, 2024










  • Diplomjodler@lemmy.world to Memes@sopuli.xyz · Valid question · 2 points · edited · 3 days ago

    Yes, I’m aware of that. But you don’t seem to be aware of the fact that most cultural output (and yes, that includes Reddit shitposts) is produced by a small minority of people. Most people never contribute anything. So however shit the AI slop may be (which I’m not in any way denying), it’s still better than what the majority of people can manage. Just look at the percentage of people that are functionally illiterate.



  • Diplomjodler@lemmy.world to Memes@sopuli.xyz · Valid question · 2 up / 2 down · 3 days ago

    A person without training data would be a blathering idiot. Any word you’ve ever heard, anything you’ve ever seen, smelled or otherwise perceived is data that was used to train the neural network in your head. And that’s not counting the billions of years it took to hardwire the hardwired bits.




  • Most of those features are implemented in the scan software on the PC, not on the scanner itself. There is a tendency, though, to integrate more and more features into the firmware, which is not always a good idea. Also, if you’re scanning low volumes, I’d say doing the separation before the scan is generally more efficient. At least that’s how I do it. But that’s just me, of course. I wasn’t in any way trying to criticize your approach. If it works for you, it’s great.



  • You can use e.g. barcodes, patch codes or separator sheets (which usually carry the patch code). Sometimes you can also separate documents by recognising some feature on the first page, e.g. a logo or a barcode that’s already there. And of course it’s a good idea to put single-page documents in a separate batch, so you can separate them by page count alone. That also works if all documents in a batch are two or three pages long. A rough sketch of the separator-sheet approach is below.
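
    As a minimal illustration, here’s a Python sketch of separator-sheet splitting, assuming the pages have already been scanned to image files. It uses the Pillow and pyzbar libraries to read the barcodes; the DOC-SEP payload and the scans/ folder are made-up examples for this sketch, not any standard.

    ```python
    from pathlib import Path

    from PIL import Image             # pip install Pillow
    from pyzbar.pyzbar import decode  # pip install pyzbar

    # Hypothetical payload printed on the separator sheets.
    SEPARATOR_TEXT = "DOC-SEP"

    def is_separator(page: Path) -> bool:
        """True if the page image carries the separator barcode."""
        symbols = decode(Image.open(page))
        return any(s.data.decode("utf-8") == SEPARATOR_TEXT for s in symbols)

    def split_batch(pages: list[Path]) -> list[list[Path]]:
        """Group scanned pages into documents, dropping the separator sheets."""
        documents: list[list[Path]] = []
        current: list[Path] = []
        for page in pages:
            if is_separator(page):
                if current:
                    documents.append(current)
                current = []
            else:
                current.append(page)
        if current:
            documents.append(current)
        return documents

    if __name__ == "__main__":
        batch = sorted(Path("scans").glob("*.png"))
        for i, doc in enumerate(split_batch(batch), start=1):
            print(f"Document {i}: {len(doc)} pages")
    ```

    The page-count case from the end of the comment is even simpler: skip the barcode check entirely and slice the sorted page list into fixed-size chunks.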