  • Dasus@lemmy.world · 7 hours ago

    Monolingual people should be reminded that machine translation is still only good for fairly basic conversation.

    Until they manage to autogenerate even correct English subs for English-language videos on YouTube, there's really not much trust I will have in it.

    So yeah, cool feature, definitely helpful, but machine translation isn't dependable if you need to be accurate with your language.

    I have a few problems with this episode, but it's also one of my favourites, because it actually tries to work through the problems tech like that would have; language is sometimes incredibly contextual.

    For one, AI is shit with idioms.

    For things like the UN, you simply must have an actual person translating, someone proficient at a native level.

  • yesman@lemmy.world · 1 day ago

    I hate AI as much as the next lemming, but nobody is going to tell me a babble fish for real isn’t cool AF.

    • Halcyon@discuss.tchncs.de · 3 hours ago

      Though "babble fish" is a funny term, Douglas Adams named the creature "Babel fish", after the biblical story of the Tower of Babel.

    • 𝕸𝖔𝖘𝖘@infosec.pub · 18 hours ago

      Except, unlike the real babble fish that feed on our thought waves, this one feeds on our environment and our planet’s future.

      • Womble@lemmy.world · 8 hours ago

        I hope you don't play video games or stream HD video, given that they use more electricity for less social benefit than this would.

  • gedaliyah@lemmy.world · 23 hours ago

    Anyone who likes this idea might also be interested in checking out RTranslator, an open-source, on-device app with some similar functionality. You can connect two Bluetooth devices through the app so that two people speaking different languages can talk to each other.

    It can't translate multiple speakers simultaneously or clone voices, but it's very useful for traveling or communicating with friends and family in multiple languages. Since it works entirely offline, it also comes in handy on the road when you might not have a reliable connection.

  • Psythik@lemm.ee · 16 hours ago

    I need this for the nail salon. When is it hitting stores and for how much? (I didn’t see any mention of cost/availability in the article.)

  • stoy@lemmy.zip · 1 day ago

    Ok, so this concept is cool, but it has a few problems…

    1. Privacy: this is far too complex to run on the headphones themselves, so the system will need to connect to a server to do the heavy lifting. What happens to the data once it has been used? For legal purposes I suspect it will need to be saved, meaning that anything recorded could be analyzed or monitored.
    2. Trust: AI models have rules in place to make them act in specific ways, and the owner of the AI system could tweak it to change what is spoken or how it is said. This could push political agendas into everyday conversations.
    3. Reduced lingual skills: an AI like this would reduce the incentive to learn another language, reducing people's direct international communication, increasing dependency on the AI service, and further eroding our lingual skills.

    This is scary…

    • lakemalcom10@lemm.ee · 1 day ago

      For 1 they actually addressed that: "The system then translates the speech and maintains the expressive qualities and volume of each speaker's voice while running on a device, such mobile devices with an Apple M2 chip like laptops and Apple Vision Pro. (The team avoided using cloud computing because of the privacy concerns with voice cloning.) Finally, when speakers move their heads, the system continues to track the direction and qualities of their voices as they change."

      • stoy@lemmy.zip · 1 day ago

        If that hardware is powerful enough, and you can run it without any internet access, then yes, it would probably address point 1.

      • Ilovethebomb@lemm.ee · 23 hours ago

        The fact that all this can run on a phone is incredible; it sounds very processor-intensive.

        I wonder what it would do to your battery life?

    • themoken@startrek.website · 1 day ago

      I'm with you on 1 and 2, but I think "reduced lingual skills" is a bit of a stretch. Becoming fluent in another language takes a lot of effort, and people only do it if they have a good long-term reason.

      I think it's more likely this would cover the vacation and short-term business cases that are already handled by human interpreters (or existing apps).

    • Psythik@lemm.ee · 16 hours ago

      Next time, try reading the article before you comment.

      • stoy@lemmy.zip · 15 hours ago

        This is an utterly idiotic comment. I'll break it down into bullet points to make it easier to understand.

        1. The comment assumes that I didn't read the article. This is only half right: I skimmed it, and found nothing of what I wrote covered in the article.
        2. The comment provides ZERO additional information. It is pure snark and does nothing to tell me what I supposedly missed.
        3. The comment assumes that everyone else reads the article as well, which is not the case.
        4. The comment ignores the value of summarizing for others: even if my points were covered in the article, it is still a good thing to restate them in a more accessible way.

        With these points in mind, I believe you can make an effort to write a better comment next time.

    • iturnedintoanewt@lemm.ee · 20 hours ago (edited)

      Check out the Whisper APK on F-Droid. The thing runs entirely locally and does just this: the model gets audio in an undetermined language, figures out which one automatically, transcribes it, translates it to English (only English atm), and then speaks it out. It's not using any acceleration and it's a very early build; my Pixel 9 gets about a 3-second delay from input to output.

      It’s doable.
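      For anyone curious what that pipeline actually involves, here's a rough desktop sketch of the same detect → transcribe → translate → speak flow. It assumes the openai-whisper and pyttsx3 Python packages and a hypothetical speech.wav input; the app itself presumably runs something like whisper.cpp on-device, so this only illustrates the steps, not its code.

```python
# Rough sketch, not the app's actual code: detect language, translate to
# English, and speak it, all offline.
# Assumes `pip install openai-whisper pyttsx3` and an input file speech.wav.
import whisper
import pyttsx3

model = whisper.load_model("base")  # small multilingual model, runs locally

# task="translate" makes Whisper auto-detect the source language and
# return English text in one pass
result = model.transcribe("speech.wav", task="translate")
print("detected language:", result["language"])
print(result["text"])

# speak the English translation with an offline TTS engine
tts = pyttsx3.init()
tts.say(result["text"])
tts.runAndWait()
```

      Whisper's translate task only outputs English, which lines up with the "only English atm" limitation mentioned above.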

        • Sandbar_Trekker@lemmy.today · 18 hours ago

          It’s not sending the audio to an unknown server. It’s all local. From the article:

          "The system then translates the speech and maintains the expressive qualities and volume of each speaker's voice while running on a device, such mobile devices with an Apple M2 chip like laptops and Apple Vision Pro. (The team avoided using cloud computing because of the privacy concerns with voice cloning.)"