Tuesday, April 9, 2013

Yandex, Russia’s ‘Homegrown Google’, Trials Gesture-Based Interfaces To Power Future Apps


Russian search giant Yandex has collaborated on developing an experimental gesture-based interface to explore how similar technology could be incorporated into future social apps and mobile products. The company already offers digital services beyond search, launching and expanding mapping services and translation apps, for instance, in a bid to drive growth as its domestic search share (60.5% as of Q4 2012) has not grown significantly in recent quarters. Future business growth for Yandex looks likely to depend on its ability to produce a pipeline of innovative products and services; hence its dabbling with gestures.


Yandex Labs, the division that came up with its voice-powered social search app Wonder (an app that was quickly blocked by Facebook), has been working with Carnegie Mellon University on a research project to create a gesture-based social interface designed for an Internet-connected TV. The interface, demoed in a video, pulls in data from Facebook, Instagram and Foursquare to display personalised content that the TV viewer navigates from the comfort of their armchair using a range of hand gestures.
Here’s how Yandex describes the app on its blog:
The application features videos, music, photos and news shared by the user’s friends on social networks in a silent ‘screen saver’ mode. As soon as the user notices something interesting on the TV screen, they can easily play, open or interact with the current media object using hand gestures. For example, they can swipe their hand horizontally to flip through featured content, push a “magnetic button” to play music or video, move hands apart to open a news story for reading and then swipe vertically to scroll through it.
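To make that interaction model concrete, here is a minimal sketch of how recognised gestures might be dispatched to the media actions described above. The gesture names and the MediaScreen class are illustrative stand-ins, not part of Yandex’s actual prototype:

    # Hypothetical gesture-to-action dispatch for the TV 'screen saver' app.
    # Gesture names and the MediaScreen API are illustrative assumptions.

    class MediaScreen:
        """Stand-in for the on-screen feed of friends' social content."""

        def flip(self, direction):
            print(f"Flipping to the {direction} item in the feed")

        def play_current(self):
            print("Playing the current music or video item")

        def open_story(self):
            print("Opening the news story for reading")

        def scroll(self, direction):
            print(f"Scrolling the open story {direction}")

    # Which hand gesture triggers which action, per the description above.
    GESTURE_ACTIONS = {
        "swipe_left":  lambda s: s.flip("next"),
        "swipe_right": lambda s: s.flip("previous"),
        "push":        lambda s: s.play_current(),   # the 'magnetic button'
        "hands_apart": lambda s: s.open_story(),
        "swipe_up":    lambda s: s.scroll("up"),
        "swipe_down":  lambda s: s.scroll("down"),
    }

    def handle_gesture(screen, gesture):
        """Run the action bound to a recognised gesture, if any."""
        action = GESTURE_ACTIONS.get(gesture)
        if action is not None:
            action(screen)

    if __name__ == "__main__":
        screen = MediaScreen()
        for g in ["swipe_left", "push", "hands_apart", "swipe_down"]:
            handle_gesture(screen, g)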
The app, which was built on a Mac OS X platform using Microsoft’s Kinect peripheral for gesture recognition, remains a prototype/research project, with no plans to make it into a commercial product. But Yandex is clearly probing the potential of gestures to power future apps.
Asked what sort of applications could be suitable for the tech, Grigory Bakunov, Director of Technologies at Yandex, told TechCrunch that mobile apps are a key focus. “Almost any [Yandex services] that are available on mobiles now: search (to interact with search results, to switch between different search verticals, like search in pictures/video/music), probably maps apps and so forth [could incorporate a gesture-based interface],” he said.
Bakunov stressed these suggestions are not concrete plans as yet, just “possible” developments as the company figures out how gesture interfaces can be incorporated into its suite of services in future. “We chose social newsfeeds to test the system [demoed in the video] as it can bring different types of content on TV screen like music listened by friends, photo they shared or just status updates. Good way to check all types in one app,” he added.
As well as researching the potential use-cases for gesture interfaces, Yandex also wanted to investigate alternatives to using Microsoft’s proprietary Kinect technology.
“Microsoft Kinect has its own gesture system and machine learning behind it. But the problem is that if you want to use it for other, non-Microsoft products you should license it (and it costs quite a lot), plus it has been controlling by Microsoft fully. So, one of the target was to find out more opened alternative with accessible APIs, better features and more cost-effective,” said Bakunov.
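The article does not name the alternatives Yandex evaluated, but one example of a more open route to the same hardware is libfreenect, the OpenKinect project’s open-source Kinect driver, which runs on Mac OS X and exposes raw sensor data through accessible APIs. A minimal sketch using its Python bindings (assuming the freenect module and a connected Kinect):

    # Illustrative only: the article does not say which stack Yandex tried.
    # libfreenect (OpenKinect) reads Kinect depth frames without Microsoft's SDK.
    import freenect
    import numpy as np

    def get_depth_frame():
        """Grab one 640x480 depth frame of raw 11-bit values from the Kinect."""
        depth, _timestamp = freenect.sync_get_depth()
        return depth

    frame = get_depth_frame()
    # Nearer objects have smaller raw depth values; a crude hand segmenter
    # could threshold around the closest region of the frame.
    print("closest point (raw depth units):", int(np.min(frame)))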
Yandex worked with Carnegie Mellon students and Professor Ian Lane to train gesture recognition models and evaluate several machine learning techniques, including Neural Networks, Hidden Markov Models and Support Vector Machines. The SVM-based approach showed accuracy improvements of around 20% over the other evaluated systems, according to Yandex.
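As a rough illustration of the SVM approach, the sketch below trains a classifier on fixed-length gesture sequences with scikit-learn. The 90-frame sequence length comes from Yandex’s description; the joint count, feature layout and synthetic data are assumptions for the example:

    # Sketch of SVM-based gesture classification (scikit-learn). The 90-frame
    # length is from the article; joints, features and the random data are assumed.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    FRAMES_PER_GESTURE = 90
    JOINTS = 6                        # assumed: hands, elbows, shoulders
    FEATURES_PER_FRAME = JOINTS * 3   # x, y, z per tracked joint

    # Synthetic stand-in data; real training would use the labelled recordings.
    rng = np.random.default_rng(0)
    n_samples, n_classes = 600, 6
    X = rng.normal(size=(n_samples, FRAMES_PER_GESTURE * FEATURES_PER_FRAME))
    y = rng.integers(0, n_classes, size=n_samples)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))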
On the training data, Yandex’s blog adds:
They [students] put a lot of effort in building a real training set – they collected 1,500 gesture recordings, each gesture sequenced into 90 frames, and manually labeled from 4,500 to 5,600 examples of each gesture. By limiting the number of gestures to be recognized at any given moment and taking into account the current type of content, the students were able to significantly improve the gesture recognition rate.
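The second idea in that passage, narrowing the set of candidate gestures based on what is currently on screen, can be sketched as a simple filter over the classifier’s output. The content types, gesture names and probabilities here are illustrative:

    # Sketch of context-dependent recognition: only gestures that make sense
    # for the current content are eligible. Names and values are illustrative.
    GESTURES = ["swipe_left", "swipe_right", "push",
                "hands_apart", "swipe_up", "swipe_down"]

    # Gestures that are meaningful while each kind of content is on screen.
    ALLOWED = {
        "music": {"swipe_left", "swipe_right", "push"},
        "video": {"swipe_left", "swipe_right", "push"},
        "news":  {"swipe_left", "swipe_right", "hands_apart"},
        "story": {"swipe_up", "swipe_down"},  # an opened news story
    }

    def recognise(probabilities, content_type):
        """Return the most probable gesture that is valid in this context."""
        allowed = ALLOWED[content_type]
        scored = [(p, g) for g, p in zip(GESTURES, probabilities) if g in allowed]
        return max(scored)[1] if scored else None

    # Raw scores favour 'swipe_up', but with music on screen the context
    # restricts the choice and 'swipe_left' wins instead.
    probs = [0.20, 0.05, 0.15, 0.10, 0.40, 0.10]
    print(recognise(probs, "music"))   # -> swipe_left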
