

I can’t remember how I installed Whisper in the first place
Typically however you prefer; and if not, there are prebuilt releases at https://github.com/ggml-org/whisper.cpp/releases
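If it was built from source, the usual whisper.cpp route looks roughly like this; a sketch only, with the model choice and paths taken as examples from the project's README:

```shell
# One way to set up whisper.cpp from source (model and paths are examples)
git clone https://github.com/ggml-org/whisper.cpp
cd whisper.cpp
cmake -B build && cmake --build build -j        # builds the CLI into build/bin/
sh ./models/download-ggml-model.sh base.en      # fetch a small English model
./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav
```

The prebuilt releases at the link above skip the cmake steps entirely.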




I don’t. That’s the entire point about having different mailboxes in the first place : they stay isolated and I manage notifications (or not) exactly how I want, when I want.


suddenly it hit me. Im on linux I can do a lot of this easier with the command line.
Nice, you get it! You have so much to learn, so don’t be afraid of taking notes. The CLI and the UNIX philosophy are very powerful. They remain powerful decades later (from desktop to mobile with e.g. adb on Android to the “cloud” with shell via e.g. ssh) so IMHO it still is a good investment. Still, discovery can be tricky, so be gentle with yourself.
Also a few tricks that can help you go further faster:
- keep notes (a .md file or a wiki page, entirely up to you)
- use your .bashrc to keep your shortcuts and compose them
- Ctrl-r to search your command history
Anyway, enjoy, it’s an adventure!
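The .bashrc idea could look like this; a minimal sketch where the alias and function names are made up:

```shell
# ~/.bashrc snippet: personal shortcuts (names are just examples)
alias ll='ls -lh'                      # shorter, human-readable listing
alias gs='git status'                  # saves keystrokes many times a day
mkcd() { mkdir -p "$1" && cd "$1"; }   # create a directory and jump into it
# Ctrl-r at the prompt then searches this growing history interactively.
```

Reload with `source ~/.bashrc` and the shortcuts are available in every new shell.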


I’ll be checking over the subtitles anyway, generating just saves a bunch of time before a full pass over it. […] The editing for the subs generation looks to be as much work as just transcribing a handful of frames at a time.
Sorry I’m confused, which is it?
doing this as a favour […] Honestly I hate the style haha
I’m probably out of line for saying this but I recommend you reconsider.


Exactly, and it works quite well, thanks for teaching me something new :)


There’s no getting around using AI for some of this, like subtitle generation
Eh… yes there is, you can pay actual humans to do that. In fact if you do “subtitle generation” (whatever that might mean) without any editing you are taking a huge risk. Sure it might get 99% of the words right, but if it fucks up on the main topic… well, good luck.
Anyway, if you do still want to go down that road, you could try:
- working from the audio track or the .mkv directly (depends on context obviously)
- the *.srt *.ass *.vtt *.sbv output formats
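If the tool ends up being whisper.cpp, a rough pipeline could look like this; the file names and model are placeholders, and note that whisper.cpp expects 16 kHz mono WAV input:

```shell
# Extract 16 kHz mono audio from the video (the format whisper.cpp expects)
ffmpeg -i input.mkv -ar 16000 -ac 1 -c:a pcm_s16le audio.wav
# Transcribe and emit subtitles in several formats at once
./build/bin/whisper-cli -m models/ggml-base.en.bin -f audio.wav \
  --output-srt --output-vtt
```

Then the actual work starts: reviewing and editing the generated .srt/.vtt files.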

Thanks for sharing and for the clarifications. I do think both the philosophy behind this and the technological choices are right, but it’s also true that “how many people can it handle?” is important for people who want to actually try and onboard others. It’s one thing to try alone, but as soon as we ask others to join, knowing what the limits are makes everybody more understanding.


much processing could it handle though? If it is only a handful of friends then what makes it better than Signal?
I don’t actually know the project but I think your mindset here is (and correct me if I’m wrong) “Does it scale?” whereas the mindset of this project, based on the name itself and the “small scale” in the description, is “no, it does not scale and that’s A-OK”.


Resistance to power outage? Isn’t a phone just a server without a keyboard and with an integrated UPS? /s


Oh, that’s neat, thanks!
So in my use case, where I made a template for prototype metadata, an additional menu action could be to generate the file via the Exec= field instead of creating it from the template. This would prepopulate the metadata file with e.g. the list of selected files thanks to %F.
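For reference, a service-menu sketch of that idea, assuming Plasma 5 paths; the file name, action name and script path are hypothetical:

```ini
# ~/.local/share/kservices5/ServiceMenus/prototype-metadata.desktop
[Desktop Entry]
Type=Service
X-KDE-ServiceTypes=KonqPopupMenu/Plugin
MimeType=all/allfiles;
Actions=generatePrototype;

[Desktop Action generatePrototype]
Name=Generate prototype metadata
Icon=document-new
# %F expands to the list of selected files
Exec=/path/to/generate-prototype.sh %F
```

The script behind Exec= then writes the prepopulated metadata file from its arguments.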



Neat! Thanks, I tried:
- ~/.local/share/templates
- /usr/share/templates for .desktop files
- ~/Templates
and finally now getting:
- fixed icon



Sorry, typo, it’s indeed ~/Templates (I even verified to be sure, and it even has a specific icon); it was already there, I didn’t create it. So unfortunately it still does not work.


Sad but unsurprising.
I did read quite a lot on the topic, including “Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass.” (2019) and saw numerous documentaries e.g. “Invisibles - Les travailleurs du clic” (2020).
What I find interesting here is that it seems the tasks go beyond dataset annotation. In a way it is still annotation (as in you take data in, e.g. a photo, and you circle part of it to label it, e.g. “cat”) but here it seems to be 2nd order, i.e. what are the blind spots in how this dataset is handled. It still doesn’t mean anything produced is more valuable or that the expected outcome is feasible with solely larger datasets and more compute, yet maybe it does show a change in the quality of tasks to be done.
Enforcing GDPR fines would be a great start, only adding more if need be.
I feel like we could add more laws, but if they are not enforced it’s pointless, maybe even worse because it gives the illusion of privacy while in reality nothing changes.


Creating a .prototype file in ~/Templates didn’t work for me on KDE Plasma version 5.27.5.
Because it’s my own filetype I added it in “File Associations” to the known types in the text category, to open with Kate, Gvim, etc., just in case, but it didn’t help.
Is something else required?
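In case it helps debugging, registering a custom filetype usually boils down to a shared-mime-info entry; a sketch where the MIME type name text/x-prototype is an assumption:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- ~/.local/share/mime/packages/prototype.xml -->
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="text/x-prototype">
    <comment>Prototype metadata</comment>
    <glob pattern="*.prototype"/>
  </mime-type>
</mime-info>
```

Followed by running `update-mime-database ~/.local/share/mime` so the desktop picks it up.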
None of your requirements are distribution specific. I do all of that (Steam, non-Steam, Kdenlive, Blender/OpenSCAD, vim/Podman, LibreOffice, Transmission) and I’m running Debian with an NVIDIA GPU. Consequently I can personally recommend it.


some people experience crash on linux where stable on windows.
FUD much? I’m not saying it’s not true… but the opposite has to be true too. So without actual data showing it’s significant, it really doesn’t help much in any way, it just creates doubt.
very few games represent the majority of gametime, and a lot of them do not run on linux.
Same, which ones? What’s your dataset?


Nice, the hinge always stressed me a bit but for GBA looks perfect.
Well my use is mostly tinkering :P but the realization that on top of being a neat little handheld it’s also a proper computer… I just can’t help myself and want to get “more” out of it. Silly habit!
Yeah… it’s not you. I’m a professional developer and have been using Linux for decades. It’s still hard for me to install specific environments. Sometimes it just works… but often I give up. Sometimes it’s my mistake, but sometimes it’s also because the packaging is not actually reproducible. It works on the setup that the developer used, great for them, but a slight variation throws you right into dependency hell.