Body Labs SOMA PR
Body Labs, provider of advanced technology for analysing body shape and motion, has announced the launch of SOMA, a human-aware 3D artificial intelligence (AI) platform.
SOMA enables businesses and developers to predict 3D human motion and shape from everyday photos or videos.
Body Labs believes the technology can be harnessed to measure the size and shape of customers for clothing; to detect and predict pedestrian behaviour for mobility applications using conventional cameras; and to capture motion for transfer into interactive games or virtual environments. The company says SOMA can also connect consumers for peer-to-peer communication and recommendations centred around the products they are interested in. When detecting motion, it enables hardware or software to understand body gestures without the need for controllers or voice prompts.
“At Body Labs, we deeply believe that the future of computer vision and AI hinges on understanding people,” said Eric Rachlin, chief technology officer and co-founder at Body Labs. “Today most of what computers know about people comes from speech, faces, and their behaviors online. In contrast, Body Labs' goal is to provide the world's most comprehensive set of tools for understanding the human form to bridge these physical and digital worlds. Being able to create highly realistic 3D models of the human body is essential for making meaningful progress in areas such as personalised shopping, autonomous vehicles, mixed reality, and smart homes.”
Bill O’Farrell, CEO and co-founder of Body Labs, added: “With SOMA, we’ve created the first human-aware computing platform that understands the way we move and how we’re shaped to make our world more personal. Today, products and services are getting increasingly more personalised, and now SOMA is putting the human body at the centre of both design and development.”
SOMA’s human-aware AI is trained using body data and computer vision to accurately understand both 3D human motion and shape from any input. It uses neural networks to predict major joints, landmarks, facial features, and 3D shape from just photos or videos of people.
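As a rough illustration of the kind of prediction pipeline described above (SOMA's actual models are proprietary, so every name, layer size, and joint count below is hypothetical), a keypoint predictor maps image pixels to 2D body-joint coordinates:

```python
import numpy as np

# Hypothetical sketch only: a tiny two-layer network that maps a
# flattened image to (x, y) coordinates for a set of body joints.
# SOMA's real models are not public; sizes here are illustrative.
N_JOINTS = 17          # e.g. a COCO-style body-joint set (assumption)
IMG_H, IMG_W = 32, 32  # toy input resolution

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.01, size=(IMG_H * IMG_W, 64))  # input -> hidden
W2 = rng.normal(scale=0.01, size=(64, N_JOINTS * 2))   # hidden -> joints

def predict_joints(image):
    """Forward pass: image array -> (N_JOINTS, 2) array of predictions."""
    x = image.reshape(-1)           # flatten pixels into a vector
    h = np.maximum(W1.T @ x, 0.0)   # hidden layer with ReLU activation
    return (W2.T @ h).reshape(N_JOINTS, 2)

joints = predict_joints(rng.random((IMG_H, IMG_W)))
print(joints.shape)  # one (x, y) pair per predicted joint
```

In a production system the toy layers above would be replaced by a deep convolutional network trained on large annotated body datasets, but the input/output contract — pixels in, joint and shape estimates out — is the same idea the company describes.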