14 votes

You can now translate sign language automatically with these amazing Raspberry Pi glasses

7 comments

  1.
    tanglisha
    Link
    I know this sounds really awesome, and I'm sure the person working on it put in a lot of effort. However. It's a toy. This 2017 article does a pretty good job explaining the history of...

    I know this sounds really awesome, and I'm sure the person working on it put in a lot of effort.

    However. It's a toy.

    This 2017 article does a pretty good job explaining the history of fingerspelling tools.

    This is something that happens every few years. Someone makes an awesome prototype which can recognize textbook fingerspelling, then we never hear about it again. The folks who create these things either don't work with the Deaf community, or they do but then drop the project after they graduate/win a prize/lose funding/get bored. The Deaf community has been burned over and over by this.

    It will someday be possible to translate, and maybe even interpret, signed languages. The driving force behind a working translation or interpretation tool must be the Deaf community; they're the ones who use it as a native language. A dead giveaway that the community wasn't involved is that this looks at hands, but nothing about faces or body language is mentioned. Without that information, you're missing half the language. Aside from that, fingerspelling isn't quite as straightforward as letter-to-letter; you'll see something similar to a contraction happen with some words. Here's an example of lexicalized fingerspelling for the sign "style".

    Deaf people aren't helpless; some are engineers and computer scientists. They have cell phones that they use every day to communicate with folks who don't know any sign. It's great that folks want to help out, but no software project is going to do well without working side by side with the intended users.

    Disclaimer: I am not part of the Deaf community and do not mean to speak for them out of turn. The information here is based on discussions I've had on the topic with several people who are part of the Deaf community and are very tired of explaining how these tools can end up hurting the very people they're meant to help.

    19 votes
    1. burkaman

      It seems like a real sign language recognition tool would need an enormous amount of video to train on, just like voice recognition needs a huge corpus of audio. Language is too complicated to just program the rules in directly; you have to get enough data and train a complex enough machine learning model to get something that really works.

      It looks like there have been some attempts at building a dataset like this (https://www.cs.bu.edu/groups/ivc/pubs/Athitsos_cslt2010_a4.pdf for example), but none of them are nearly large enough, and they all include a lot of unrealistically simple videos that don't represent what you'd see in a real-world conversation.
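      To make the scale problem concrete, here's a minimal sketch (not anything from the linked project) of the "easy" end of the problem: a static fingerspelled-letter classifier in PyTorch. The data here is random noise standing in for a real labelled dataset, which is exactly what's missing at scale.

      ```python
      # Hypothetical sketch: classify 26 static fingerspelled letters from
      # single-frame hand crops. Random tensors stand in for real data.
      import torch
      import torch.nn as nn

      images = torch.randn(256, 1, 64, 64)    # fake 64x64 grayscale hand crops
      labels = torch.randint(0, 26, (256,))   # fake letter labels (A-Z)

      model = nn.Sequential(
          nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
          nn.Flatten(),
          nn.Linear(32 * 16 * 16, 26),        # one logit per letter
      )
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()

      for epoch in range(5):
          optimizer.zero_grad()
          loss = loss_fn(model(images), labels)
          loss.backward()
          optimizer.step()

      # Note what this model never sees: motion, faces, body posture, or the
      # lexicalized "contractions" mentioned above -- single frames only.
      ```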

      2 votes
  2.
    Hello

    Not sure who the intended users of this would be. This apparently only works with fingerspelled letters, which is a tiny part of sign language, so it could only work if somebody communicated entirely by fingerspelling every single word, which pretty much nobody ever does because ain't nobody got time for that. Saying that you can "translate sign language" with these glasses is about as exaggerated as saying you can "translate Chinese" if you know pinyin.

    After watching the demo video, it looks even more useless, because it looks like the hand has to be held in position rather carefully and held for a fairly long duration, which will be hopeless for understanding Deaf people who tend to fingerspell very rapidly and use a lot of shortcuts.

    13 votes
    1. moocow1452

      This is not an end-all, be-all solution to ASL translation, and it can't really be done in real time for standard vocabulary, but it is a good first step in an unexplored area on a project not intended to be an end product. I look forward to seeing where the devs of this are five years from now.

      4 votes
  3. Johz

    This is a cool project, and one that a lot of people seem to do as a uni project with various focuses; it's interesting to see how differently people approach it. I like the idea of using a Pi to get this to work in a relatively affordable way.

    That said, it's not really feasible that any of these "recognise individual symbols" approaches will produce good or useful results, because that's not really how sign languages work. It's like trying to translate English to Chinese by converting each word individually and expecting the result to make sense.

    My guess is that a big blocker here is the relative lack of written sign language. There are some written forms of different sign languages, but I don't believe they're widely used. A lot of translation tools use public domain documents written in multiple languages to train on, for example EU legal documents which are often translated into all official languages within the EU. But that corpus of data doesn't exist for written forms of sign languages, so the training data for converting whole sentences from one form to another just isn't there.
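    To illustrate what that missing training data looks like, here's a hedged sketch (the sentence pairs are invented examples in the EU-document style): text translation models learn from aligned pairs, and for signed languages one whole side of every pair is absent.

    ```python
    # Illustration only: what a parallel corpus looks like for text-to-text
    # translation. Millions of aligned pairs like these exist for EU languages;
    # no comparable corpus pairs English with a written form of ASL or BSL.
    parallel_corpus = [
        ("The committee shall meet twice a year.",
         "Der Ausschuss tritt zweimal jährlich zusammen."),
        ("Member States shall inform the Commission.",
         "Die Mitgliedstaaten unterrichten die Kommission."),
    ]

    # A translation model is trained to predict the target given the source:
    for source, target in parallel_corpus:
        print(f"{source!r} -> {target!r}")  # model.train_step(source, target)
    ```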

    A better approach might be to take sign-translated, subtitled TV broadcasts and use those as the two sides of the training data. But I suspect those would be fairly complex to analyse and train on. And at that point, you'd probably start running into the limitations of computer vision for quick, complex movements. But I suspect it's a more viable path in general to translating semantic meaning, rather than just letter-for-letter or word-for-word translation.
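    A rough sketch of how that pairing could start, assuming broadcasts with an in-vision interpreter and standard SRT subtitles (the example cue is invented, and the labels would be noisy, since interpreters lag the audio):

    ```python
    # Turn an SRT subtitle file into (start, end, caption) spans that could be
    # cut from the matching broadcast video of the in-vision interpreter,
    # yielding weakly aligned (sign video clip, English text) training pairs.
    import re

    SRT_CUE = re.compile(
        r"\d+\s+(\d\d:\d\d:\d\d,\d\d\d) --> (\d\d:\d\d:\d\d,\d\d\d)\s+(.+?)(?:\n\n|\Z)",
        re.S,
    )

    def to_seconds(ts: str) -> float:
        h, m, rest = ts.split(":")
        s, ms = rest.split(",")
        return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000

    def aligned_spans(srt_text: str):
        """Yield (start, end, caption) for each subtitle cue."""
        for start, end, text in SRT_CUE.findall(srt_text):
            yield to_seconds(start), to_seconds(end), " ".join(text.split())

    example = "1\n00:00:01,000 --> 00:00:03,500\nGood evening.\n\n"
    for start, end, caption in aligned_spans(example):
        print(start, end, caption)  # each span could then be cut with ffmpeg
    ```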

    5 votes
  4. Habituallytired

    I like the idea of this project, and it would be cool for someone like me who is learning sign language. I just think they didn't do much research into the area. There absolutely are really good resources and apps to learn ASL (or other signed languages). I'm currently using one and it's been great! I've been learning for two months now, and I had a very rudimentary signed conversation with a HoH person last week after learning on the app.

    I would love to see this work one day, but with all languages, that would be so cool. As of now, it doesn't really work and the article is not well-researched.

    1 vote
  5. Omnicrola

    This is an article (XDA-Devs) about an article (Tom's Hardware) about an actual project page (hackster.io).