Move Mirror AI matches your pose with one in 80,000 images | Computing
Artificial intelligence has become sophisticated enough to identify objects and, in Amazon's case, even help you order them. But stationary objects with fixed shapes are one thing; moving bodies with articulated parts in uncommon positions are another. Identifying your pose and matching it against a set of photos with similar poses is something of a holy grail of pose estimation, and it is exactly what Google is demonstrating with its Move Mirror AI Experiment. Best of all, all you need is a web browser and a webcam.
While our brains have an innate ability to identify body parts and discern poses from their positions and orientations, computers are not as talented. To close that gap, Google developed PoseNet, a neural network model that can extract pose data from images regardless of image quality. Google's TensorFlow team built Move Mirror to show off what PoseNet can do, but it quickly ran into some practical problems.
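Under the hood, matching your pose to one of 80,000 photos can be sketched as a nearest-neighbor search over pose keypoints. The snippet below is a minimal illustration, not Google's actual pipeline: it assumes each pose has already been reduced to a flat array of normalized (x, y) keypoint coordinates (PoseNet reports 17 keypoints per person) and compares poses by cosine similarity. The image names and tiny two-keypoint "poses" are hypothetical.

```javascript
// Illustrative sketch only — not Google's actual matching pipeline.
// A pose is a flat array of normalized keypoint coordinates: [x1, y1, x2, y2, ...].

// Cosine similarity between two pose vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Linear scan for the database pose most similar to the query pose.
function bestMatch(query, database) {
  let best = null, bestScore = -Infinity;
  for (const entry of database) {
    const score = cosineSimilarity(query, entry.pose);
    if (score > bestScore) {
      bestScore = score;
      best = entry;
    }
  }
  return best;
}

// Hypothetical two-image database with two-keypoint poses.
const database = [
  { image: 'dancer.jpg', pose: [0.1, 0.9, 0.5, 0.5] },
  { image: 'runner.jpg', pose: [0.9, 0.1, 0.5, 0.5] },
];

const query = [0.12, 0.88, 0.5, 0.52]; // close to the dancer pose
console.log(bestMatch(query, database).image); // → dancer.jpg
```

A real matcher searching 80,000 images would replace the linear scan with a fast nearest-neighbor index, but the core idea of comparing pose vectors is the same.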
The team wanted to share Move Mirror with the world. The experiment already ran in the browser because it used PoseNet's web API; the machine learning part, however, relied on beefy hardware and software libraries that most users don't have access to. The team could have sent each user's webcam feed to its servers for processing, but that would open a can of worms as far as privacy goes.
Move Mirror may seem like a frivolous but fun AI demo, yet it has positive implications for AI in general. It's impressive that such sophisticated machine learning can now run entirely in a web browser, and it's reassuring to know that you don't always have to send your data, much less your photos, to some computer in the cloud just to reap the benefits of AI.