Google has been knee-deep in machine learning for a number of years, and the company isn’t afraid to share some of its knowledge with the world for the greater good. It has been particularly skillful at building robust AI-powered image manipulation technologies, which is one of the reasons its Pixel 2 and Pixel 2 XL boast such impressive photo capabilities.
This week, the big G open sourced its DeepLab-v3+ “semantic image segmentation” model, implemented in TensorFlow. With it, AI can be used to separate people (or other objects) from the background, allowing users to create really stunning photos with minimal effort. Bokeh effects are all the rage (and for good reason), and today’s smartphones are powerful enough to bring the effect to everyone – not just those with high-end equipment.
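To make the idea concrete, here is a minimal sketch of how a segmentation mask can drive a bokeh-style effect: the foreground stays sharp while the background is blurred, then the two are blended per pixel. The `box_blur` helper and the hard-coded mask are illustrative stand-ins; in practice the mask would come from a model such as DeepLab-v3+, and a real pipeline would use a softer blur and feathered mask edges.

```python
import numpy as np

def box_blur(image, radius):
    """Naive box blur: average each pixel over a (2r+1) x (2r+1) window,
    replicating edge pixels at the borders. Stand-in for a real lens blur."""
    h, w = image.shape[:2]
    padded = np.pad(image.astype(float),
                    ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    k = 2 * radius + 1
    out = np.zeros((h, w, image.shape[2]), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def bokeh_composite(image, mask, blur_radius=5):
    """Blend a sharp foreground with a blurred background.

    `mask` is a binary per-pixel map (1 = subject, 0 = background) --
    exactly the kind of output a semantic segmentation model produces.
    """
    blurred = box_blur(image, blur_radius)
    mask3 = mask[..., None].astype(float)  # broadcast mask over RGB channels
    return mask3 * image.astype(float) + (1.0 - mask3) * blurred
```

The compositing step is deliberately simple: once the model has done the hard work of deciding which pixels belong to the person, applying the effect is just a per-pixel blend.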
Google notes that the Pixel 2 doesn’t use DeepLab-v3+ verbatim, so the company’s in-house tech likely has extra polish it didn’t want to give away. Instead, the goal is to provide a baseline that others can build on and optimize. Some may even take different approaches entirely, ultimately teaching Google something in return. That’s the point here: to make this piece of technology even better.
Since introducing DeepLab three years ago, Google has steadily improved the convolutional neural network (CNN) behind it, reaching a level of accuracy that the company’s own engineers would have been skeptical of five years ago. That says a lot, and it bodes well for the future of AI and machine learning, since in the grand scheme of things, we really are still in the early days.