BodyPix 2.0 – Person Segmentation in the Browser #tensorflow #machinelearning #deeplearning @tylerzhu3 @oveddan

BodyPix 2.0 segmentation of @tannewt and @minichre with the live browser demo.


The release of BodyPix 2.0 was announced on the TensorFlow blog this week. It boasts improved accuracy and multi-person support. The BodyPix model is open source and can segment a person's body into 24 parts in real time. It is based on ResNet-50, a convolutional neural network trained on images from the ImageNet database.

BodyPix 2.0 has applications in augmented reality, photography, and video editing. The new release includes a live in-browser demo built with TensorFlow.js. Take a look at the example above and give it a try! Running in the browser increases accessibility by removing the need for costly hardware. All you need is an internet connection and a webcam.

Why would you want to do this in the browser? As with PoseNet, real-time person segmentation was previously only possible with specialized hardware or hard-to-install software with steep system requirements. In contrast, both BodyPix and PoseNet can be used with no installation and just a few lines of code. You don't need any specialized lenses to use these models — they work with any basic webcam or mobile camera.
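To give a sense of how few lines are involved, here is a minimal sketch of a page that loads BodyPix from a CDN and segments a person in a still image. The image filename and element id are placeholders, and the exact CDN paths and model options may differ from what the official demo uses — check the project README for the current setup.

```html
<!-- Load TensorFlow.js and the BodyPix model from a CDN (versions are illustrative) -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/body-pix@2.0"></script>

<!-- "person.jpg" is a placeholder image of your own -->
<img id="person" src="person.jpg" />

<script>
  async function run() {
    // Load the model; options such as {architecture: 'ResNet50'}
    // trade speed for accuracy.
    const net = await bodyPix.load();
    const img = document.getElementById('person');

    // segmentation.data is a per-pixel array: 1 for person, 0 for background.
    const segmentation = await net.segmentPerson(img);
    console.log(segmentation.width, segmentation.height);
  }
  run();
</script>
```

Swapping `segmentPerson` for a part-level method yields the 24-part body map mentioned above, and the same calls work on a webcam `<video>` element for real-time use.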

If you would like to learn more, check out the project repo on GitHub!


Written by Rebecca Minich, Product Analyst in Data Science at Google. Opinions expressed are solely my own and do not express the views or opinions of my employer.