• 6 Posts
  • 996 Comments
Joined 1 year ago
Cake day: October 4th, 2023



  • looks dubious

    The problem here is that if this is unreliable – and I’m skeptical that Google can produce a system that will work across the board – then you have a synthesized image that Google is now attesting is non-synthetic.

    Maybe they can make it clear that this is a best-effort system, and that they will only flag some of them.

    There are a limited number of ways that I’m aware of to detect whether an image is edited.

    • If the image has been previously compressed via lossy compression, there are ways to modify the image to make differences in compression artifacts across different parts of the image more visible, or – I’m sure – to search for such artifacts statistically.

    • If an image has been previously indexed by something like Google Images and Google has an index sufficient to permit Google to do fuzzy search for portions of the image, then they can identify an edited image because they can find the original.

    • It’s possible to try to identify light sources based on shading and specular highlights in an image, and to try to find points of the image that don’t match. There are complexities to this; for example, a surface might simply be shaded in such a way that it looks like light is shining on it, like if you have a realistic poster on a wall. For generation rather than photomanipulation, better generative AI systems will also probably tend to make this go away as they improve; it’s a flaw in the image.

    But none of these is a surefire mechanism.
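    As a toy illustration of the first bullet – this is only a sketch of the idea, with a made-up quantizing “codec” standing in for real JPEG compression, so don’t mistake it for actual forensics tooling:

    ```python
    def toy_compress(pixels, step):
        # Stand-in for a lossy codec: quantize each 0-255 value to a multiple of `step`.
        return [min(255, round(p / step) * step) for p in pixels]

    def error_levels(pixels, step):
        # Re-"compress" at a known setting and measure per-pixel error.  A region
        # that already went through this codec at this setting re-quantizes with
        # zero error; a region spliced in from elsewhere generally doesn't.
        return [abs(p - q) for p, q in zip(pixels, toy_compress(pixels, step))]

    # A "photo" compressed at step=8, with a foreign region (step=5) spliced in.
    background = toy_compress(list(range(0, 128)), step=8)
    spliced = toy_compress(list(range(128, 256)), step=5)
    image = background + spliced

    errors = error_levels(image, step=8)
    # errors[:128] are all zero; errors[128:] mostly aren't, exposing the splice.
    ```

    Real error-level analysis works on JPEG quantization blocks rather than individual values, but the principle – differing compression histories leave differing residues – is the same.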

    For AI-generated images, my guess is that there are some other routes.

    • Some images are going to have metadata attached. That’s trivial to strip, so not very good if someone is actually trying to fool people.

    • Maybe some generative AIs will try doing digital watermarks. I’m not very bullish on this approach. It’s a little harder to remove, but invariably, any kind of lossy compression is at odds with watermarks that aren’t very visible. As lossy compression gets better, it either automatically tends to strip watermarks – because lossy compression tries to remove data that doesn’t noticeably alter an image, and watermarks rely on hiding data there – or watermarks have to visibly alter the image. And that’s before people actively develop tools to strip them. And you’re never gonna get all the generative AIs out there adding digital watermarks.

    • I don’t know what the right terminology is, but my guess is that latent diffusion models try to approach a minimum error for some model during the iteration process. If you have a copy of the model used to generate the image, you can probably measure the error from what the model would predict – basically, how much one iteration would change an image or part of it. I’d guess that that only works well if you have a copy of the model in question or a model similar to it.

    I don’t think that any of those are likely surefire mechanisms either.
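    To make the watermark-fragility point concrete, here’s a toy least-significant-bit watermark of my own devising – real schemes are cleverer than this, but they’re under the same pressure. Even crude quantization, the simplest possible model of lossy compression, erases it:

    ```python
    def embed_watermark(pixels, bits):
        # Hide one watermark bit in the least-significant bit of each pixel value.
        return [(p & ~1) | b for p, b in zip(pixels, bits)]

    def extract_watermark(pixels, n):
        return [p & 1 for p in pixels[:n]]

    def quantize(pixels, step=4):
        # Crude stand-in for lossy compression: discard "imperceptible" low-order
        # detail by rounding each value to a multiple of `step`.
        return [min(255, round(p / step) * step) for p in pixels]

    pixels = [37, 120, 53, 201, 88, 14, 230, 99]
    bits = [1, 0, 1, 1, 0, 0, 1, 0]

    marked = embed_watermark(pixels, bits)
    assert extract_watermark(marked, 8) == bits   # survives a lossless copy

    degraded = quantize(marked)
    # After "compression", every value is a multiple of 4, so every LSB reads 0.
    assert extract_watermark(degraded, 8) != bits
    ```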






  • YouTube desperately needs to fix the recommendations for music.

    I mean, I guess if someone has a YouTube account, there’s nothing wrong with using YouTube as a music recommendation system, but it isn’t really the first thing I’d think of. Music isn’t really what YouTube was designed for.

    And YouTube doesn’t know what a user would listen to offline, so unless all their music-listening is from YouTube tracks…I’m not sure how representative the listening data would be of what a user would listen to.

    I don’t use them, because I don’t really want to hand them a profile of me, but if I wanted to get music recommendations, I’d probably use something like Audioscrobbler, which was designed for building a profile on someone’s music-listening habits and then handing them recommendations based on that.


  • This Popsie Funk channel is upfront that the music is AI-generated.

    goes looking

    Yeah, the description reads:

    Popsie Funk is a fictitious creation. The tracks are A.I. generated from lyrics and musical compositions that I have created. The A.I. samples are then mixed and edited by me.

    I am adding this disclaimer due to repeated questions about the genuine authenticity of Popsie Funk and his music.

    I don’t think that the artist in question is faking this.

    All that being said, while this particular case isn’t one, I suppose one could imagine such a “trying to pretend to be human” artist existing. That is, if you think about all the websites out there with AI-generated questions and answers that do try to appear human-generated, you gotta figure that someone is thinking about doing the same with musicians…and at mass scale, not manually doing one or two.





  • We’ve genetically engineered differently-colored foods before, like golden rice.

    We’ve genetically engineered many bioluminescent plants and animals.

    kagis

    We’ve genetically engineered blue flowers:

    https://www.science.org/content/article/scientists-genetically-engineer-world-s-first-blue-chrysanthemum

    We all think we’ve seen blue flowers before. And in some cases, it’s true. But according to the Royal Horticultural Society’s color scale—the gold standard for flowers—most “blues” are really violet or purple. Florists and gardeners are forever on the lookout for new colors and varieties of plants, but making popular ornamental and cut flowers, like roses, vibrant blue has proved quite difficult. “We’ve all been trying to do this for a long time and it’s never worked perfectly,” says Thomas Colquhoun, a plant biotechnologist at the University of Florida in Gainesville who was not involved with the work.

    True blue requires complex chemistry. Anthocyanins—pigment molecules in the petals, stem, and fruit—consist of rings that cause a flower to turn red, purple, or blue, depending on what sugars or other groups of atoms are attached. Conditions inside the plant cell also matter. So just transplanting an anthocyanin from a blue flower like a delphinium didn’t really work.

    Naonobu Noda, a plant biologist at the National Agriculture and Food Research Organization in Tsukuba, Japan, tackled this problem by first putting a gene from a bluish flower called the Canterbury bell into a chrysanthemum. The gene’s protein modified the chrysanthemum’s anthocyanin to make the bloom appear purple instead of reddish. To get closer to blue, Noda and his colleagues then added a second gene, this one from the blue-flowering butterfly pea. This gene’s protein adds a sugar molecule to the anthocyanin. The scientists thought they would need to add a third gene, but the chrysanthemum flowers were blue with just the two genes, they report today in Science Advances.

    “That allowed them to get the best blue they could obtain,” says Neil Anderson, a horticultural scientist at the University of Minnesota in St. Paul who was not involved with the work.

    Chemical analyses showed that the blue color came about in just two steps because the chrysanthemums already had a colorless component that interacted with the modified anthocyanin to create the blue color. “It was a stroke of luck,” Colquhoun says. Until now, researchers had thought it would take many more genes to make a flower blue, Nakayama adds.

    The next step for Noda and his colleagues is to make blue chrysanthemums that can’t reproduce and spread into the environment, making it possible to commercialize the transgenic flower. But that approach could spell trouble in some parts of the world. “As long as GMO [genetically modified organism] continues to be a problem in Europe, blue [flowers] face a difficult economic future,” predicts Ronald Koes, a plant molecular biologist at the University of Amsterdam who was not involved with the work. But others think this new blue flower will prevail. “It’s certainly an advance for the retail florist,” Anderson says. “It would have a lot of market value worldwide.”

    I imagine that it’s quite possibly within the realm of what we could do.




  • tal@lemmy.today to Ask Lemmy@lemmy.world · App Server for phone apps

    If you want to get deals for the grocery store you need their app

    That’s because they want to get their app on your phone so that they can perform data-mining using the data that the app can get from the phone environment.

    I mean, I don’t think that it’s worth bothering with trying to game the system. I’m not going to give them my data, and I don’t really care about the discount that they’re offering for it. But if you want to do so, you can probably run an Android environment on a server and use the equivalent of RDP or VNC or something to reach it remotely.

    grabs a random example

    https://waydro.id/

    A container-based approach to boot a full Android system on regular GNU/Linux systems running Wayland based desktop environments.

    Need to connect that up to VNC or RDP somehow if it doesn’t have native support.
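    One conceivable shape for that setup, assuming a wlroots-based headless compositor plus wayvnc – I haven’t run this exact stack, so treat the commands as a starting point rather than a recipe, and the APK filename is just a placeholder:

    ```shell
    # One-time setup: install and initialize Waydroid (Debian/Ubuntu-style).
    sudo apt install waydroid
    sudo waydroid init

    # Start a headless Wayland compositor on the server...
    WLR_BACKENDS=headless sway &

    # ...export that session over VNC with wayvnc...
    wayvnc 0.0.0.0 5900 &

    # ...then boot the Android container and bring up its UI in that session.
    waydroid session start &
    waydroid show-full-ui

    # Install the store's app into the container (placeholder filename).
    waydroid app install grocery-app.apk
    ```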

    EDIT: I think that I’d take a hard look at how much it’s likely to save you relative to how much time and effort you’re going to spend on setting up and maintaining this, though.


  • For me, video is rarely the form that I want to consume any content in. It’s also very obnoxious if I’m on a slow data link (e.g. on a slower or saturated cell phone link).

    However, sometimes it’s the only form that something is available in. For major news items, you can usually get a text-form article, but that isn’t true of all content. I submitted a link to a Ukraine community the other day: a YouTube video of a Michael Kofman interview talking about military aid. I also typed up a transcript, but the video was something like an hour and a half long, and I don’t know whether that’s a reasonable bar to expect people to meet.

    I think that some of this isn’t that people actually want video, but that YouTube gives content creators an easy way to monetize video. I don’t think that there’s a good equivalent for independent creators of text, sadly enough.

    And there are a few times that I do want video.

    And there may be some other people that prefer video.

    Video doesn’t actually hurt me much at this point, but it would be kind of nice to have a way to filter it out for people who don’t want it. Moving all video to another community seems like overkill, though. I think it might be better to add some mechanism to Threadiverse clients to permit content-filtering rules; that’s probably a better way to meet everyone’s wants. It’d also be nice if there were some way to clearly indicate that a link is video content, so that I can tell before clicking on it.
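    As a sketch of what such a client-side rule could look like – the host list, extension list, and post structure here are all assumptions for illustration, not anything a real Threadiverse client currently does:

    ```python
    from urllib.parse import urlparse

    # Hosts and extensions treated as "video" - an assumption; a real client
    # would want this to be user-configurable.
    VIDEO_HOSTS = {"youtube.com", "www.youtube.com", "youtu.be", "vimeo.com"}
    VIDEO_EXTENSIONS = (".mp4", ".webm", ".mkv")

    def is_video_link(url):
        # Classify a link as video by its hostname or file extension.
        parsed = urlparse(url)
        host = (parsed.hostname or "").lower()
        return host in VIDEO_HOSTS or parsed.path.lower().endswith(VIDEO_EXTENSIONS)

    def apply_filter(posts, hide_video):
        # `posts` is assumed to be a list of dicts with a "url" key.
        return [p for p in posts if not (hide_video and is_video_link(p["url"]))]
    ```

    The same predicate could just as easily badge video links instead of hiding them, which would cover the “tell before clicking” wish too.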




  • You can still get a few phones with built-in headphone jacks. They tend to be lower-end and small.

    I was just looking at phones with very long battery life yesterday, and I noticed that the phone currently at the top of the list I was looking at – a high-end, large gaming phone – also had a headphone jack. The article also commented on how unusual that was.

    Think it was an Asus ROG something-or-other.

    kagis

    https://rog.asus.com/us/phones/rog-phone-8-pro/

    An Asus ROG Phone 8 Pro.

    That’s new and current. Midrange-and-up phones with audio jacks aren’t common, but they are out there.

    Honestly, I’d just get a USB-C audio interface with pass-through PD – so that you can still charge with it plugged in – and leave it plugged into your headphones if you want to use 1/8-inch headphones. It’s slightly more to carry around, but not that much more.

    Plus, the last smartphone I had with a built-in audio DAC would spill noise into the headphones output when charging. Very annoying; it needed better power circuitry. I don’t know whether any given USB-C audio interface avoids the issue, but if the DAC is built into the phone, there’s a limited amount you can do about it. If it’s external, you can swap it, and there’s hope that the looser space constraints mean better power-supply circuitry.