British Boffins Make Touchless Computing Tech On The Cheap

Researchers at University College London have developed software that lets users control a computer using voice, facial expressions, hand gestures, eye movements, and larger body motions. All that's required is a regular webcam — no special hardware needed.

The software, called MotionInput, was developed to enable touchless computing for a variety of purposes, from making computers easier to use for people with disabilities, to aiding professionals who have their hands full, like surgeons.

The software, now in its third version, is available to download for free for non-commercial use. It currently only supports Windows, though there are plans to extend support to Linux, macOS, and Android.

MotionInput can be customized to translate a variety of movements and vocal commands into mouse, keyboard, and joystick signals, meaning users can employ anything from their nose to their hands to browse the internet, compose a document, or play a game.
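
The UCL team hasn't published its pipeline in this article, but the core idea of mapping a webcam-tracked body landmark to cursor coordinates is simple to sketch. Below is a minimal, illustrative Python example that steers the mouse with an index fingertip seen by a webcam; MediaPipe and pyautogui are stand-ins chosen for the sketch, not necessarily what MotionInput itself uses.

```python
# Illustrative sketch only: move the mouse cursor with the index fingertip
# tracked by a regular webcam. MediaPipe and pyautogui are assumptions made
# for this example, not MotionInput's actual implementation.
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.6)

cap = cv2.VideoCapture(0)              # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)         # mirror so movement feels natural
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip; coordinates are normalised 0-1.
        tip = results.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(int(tip.x * screen_w), int(tip.y * screen_h))
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == 27:    # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Swapping the tracked landmark (a nose tip, an eye, a shoulder) or emitting keyboard and joystick events instead of cursor moves follows the same pattern.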

[Embedded YouTube video]

What's particularly neat is that it doesn't require an internet connection or hooks into the cloud, since all computing is handled locally on the CPU, and it doesn't even need a high-end device with loads of compute and memory. The boffins recommend, at minimum, an Intel Core processor from 2017 and 4GB of RAM, although more CPU cores and the use of an SSD will lead to a better experience.

The software can interpret movements and voice thanks to a mix of computer vision models, which have been optimized to run well on regular CPUs using Intel's OpenVINO toolkit, according to a press release from Intel. The chipmaker provided technical assistance to the researchers alongside mentors from Microsoft and IBM.
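
For a sense of what local, CPU-only inference with OpenVINO looks like, here is a minimal sketch. The model file name and input shape are placeholders for illustration, not the models MotionInput actually ships with.

```python
# Illustrative sketch of running a pre-trained vision model locally on the CPU
# with Intel's OpenVINO runtime. "face-detection-model.xml" and the input
# shape below are hypothetical placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("face-detection-model.xml")      # hypothetical IR file
compiled = core.compile_model(model, device_name="CPU")  # no GPU or cloud needed

# A single dummy frame shaped to the model's expected input (placeholder shape).
frame = np.zeros((1, 3, 384, 672), dtype=np.float32)
result = compiled([frame])[compiled.output(0)]
print("Output tensor shape:", result.shape)
```

The point of the toolkit is that the same pre-trained model can be compiled for a plain laptop CPU, which is what lets MotionInput meet its modest hardware requirements.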

Costas Stylianou, a technical specialist at Intel, said these optimizations will enable more people to use MotionInput since OpenVINO gives it "several orders of magnitude improvements in efficiency and an architecture for supporting the growth of touchless computing apps as an ecosystem."

The researchers didn't even need to train the models on the expensive systems typically required for such work, because the models come pre-trained by Intel and are made available through OpenVINO.

"What makes this software so special is that it is fully accessible," said Phillippa Chick, global account director for health and life science at Intel.

"The code does not require expensive equipment to run. It works with any standard webcam, including the one in your laptop. It's just a case of downloading and you are ready to go." ®
