#computer vision


Octi

This iOS video app employs computer vision and machine learning to recognize full-body forms and apply various effects, ranging from chroma-key filters to gesture actions, as well as 3D characters that can imitate your movements:

Octi is an AI video platform that sees and understands humans in your videos, allowing you to do new, exciting things with your phone camera.

* Create custom, interactive stickers of you and your friends. Insert them in cool videos to share.
* Change your body with stunning visual effects. Turn your body into diamonds!
* Learn unique body moves that will trigger instant effects in your videos. Say hello with a rainbow or make it rain with dollars.
* Collaborate on shared augmented videos with your connections: create the ultimate expression with video.
* With Octi Dittos you have your own mini clone that will copy your body moves exactly!
* Octi is at the beginning of an Augmented Video revolution: more coming soon!
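The "body moves that trigger instant effects" feature amounts to mapping a tracked keypoint trajectory to an effect name. As a hedged illustration only (Octi's actual pipeline is not public), the sketch below assumes a pose estimator already supplies per-frame wrist x-coordinates and detects a wave by counting direction changes; all names and thresholds here are invented:

```python
# Hypothetical sketch: trigger a video effect when body tracking
# reports a waving hand. Assumes an upstream pose estimator already
# provides the wrist's x-coordinate for each frame.

def count_direction_changes(xs):
    """Count sign changes in frame-to-frame wrist motion."""
    deltas = [b - a for a, b in zip(xs, xs[1:]) if b != a]
    return sum(1 for d1, d2 in zip(deltas, deltas[1:]) if d1 * d2 < 0)

def detect_effect(wrist_x_per_frame, min_oscillations=3):
    """Map a wrist trajectory to an effect name, or None."""
    if count_direction_changes(wrist_x_per_frame) >= min_oscillations:
        return "rainbow"  # e.g. "say hello with a rainbow"
    return None

# A wave: the wrist oscillates left-right across frames.
wave = [100, 140, 105, 145, 102, 143]
# A mostly still hand with pixel jitter should not trigger anything.
still = [100, 101, 100, 101]
```

A real implementation would run this per-frame on live pose-estimation output and debounce triggers, but the core idea is the same: a small trajectory classifier gating a rendering effect.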

More Here

EDIT:


One of the developers of this app, Sam Loeschen, shares potential graphical effects that could be added in the future [source].



Making Amazon Alexa respond to Sign Language using AI

The latest project from Abhishek Singh uses machine learning and computer vision to enable sign language as an input to smart-speaker technology:

If voice is the future of computing what about those who cannot speak or hear? I used deep learning with TensorFlow.js to make Amazon Echo respond to sign language. 

Link
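Singh's demo pairs a browser webcam with TensorFlow.js, classifying signed gestures and relaying the recognized text to the Echo. As a hedged sketch of just the recognition step, the snippet below substitutes a tiny nearest-neighbour classifier over pre-extracted feature vectors; the embeddings and labels are invented for illustration:

```python
# Minimal sketch of the recognition step, assuming feature vectors
# (e.g. image embeddings) have already been extracted per webcam frame.
import math

def nearest_sign(query, examples):
    """Return the label of the closest training example."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    label, _ = min(
        ((lbl, dist(query, vec)) for lbl, vec in examples),
        key=lambda pair: pair[1],
    )
    return label

# Invented 3-D "embeddings" for two signed phrases.
training = [
    ("weather", (0.9, 0.1, 0.2)),
    ("weather", (0.8, 0.2, 0.1)),
    ("music",   (0.1, 0.9, 0.8)),
]
text = nearest_sign((0.85, 0.15, 0.2), training)
```

In the full pipeline, the recognized text would then be spoken to Alexa via speech synthesis and the spoken reply transcribed back to on-screen text, closing the loop for users who cannot speak or hear.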



Novel view synthesis extrapolates a scene to a different camera viewpoint, while video prediction extrapolates it to a future point in time.
