New release of BrainChip Studio, the AI-powered video analysis software, provides a dramatic increase in face classification accuracy
BrainChip Holdings has announced the BrainChip Studio 2018.3 update for its BrainChip Studio AI-powered video analysis software. The latest update boasts a powerful new mode that improves the software’s face classification accuracy by 10-30 percent.
The company claims that BrainChip Studio’s unique facial classification technology works in environments where traditional biometric-based face recognition systems fail, including low-light, low-resolution, and visually noisy environments. The software is used primarily by law enforcement, intelligence, and counter-terrorism agencies working with existing CCTV infrastructure.
“We are always looking for ways to continually improve our products by listening to our customer requests,” said Bob Beachler, BrainChip’s Senior Vice President of Marketing and Business Development. “Not surprisingly, improving accuracy is typically at the top of the list for video analytic software. With BrainChip Studio 2018.3 we were able to provide a dramatic increase in accuracy.”
How the software improves facial classifications
To date, BrainChip Studio has utilized spiking neural networks to enable facial classification on partial faces. This partial-face mode is useful in situations where the probe image or the extracted faces may be obscured by hats, masks, scarves, or camera angle. BrainChip Studio 2018.3 adds a full-face mode for performing facial classifications. In situations where the entire face is visible in the probe image or in the extracted faces, this new mode provides a significant increase in facial classification accuracy.
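BrainChip Studio's actual API is proprietary and not described in the release; purely as an illustration of the dispatch logic described above, the following minimal Python sketch shows how a pipeline might route a detected face to a full-face classifier when the whole face is visible and fall back to the occlusion-tolerant partial-face mode otherwise. All names (`Face`, `classify_full_face`, `classify_partial_face`) are hypothetical stand-ins, not the product's real interfaces.

```python
from dataclasses import dataclass


@dataclass
class Face:
    """Hypothetical detected face crop.

    `occluded` marks faces obscured by hats, masks, scarves,
    or an unfavorable camera angle.
    """
    occluded: bool


def classify_full_face(face: Face) -> str:
    # Stub standing in for the higher-accuracy full-face classifier
    # introduced in the 2018.3 update.
    return "full-face match"


def classify_partial_face(face: Face) -> str:
    # Stub standing in for the original occlusion-tolerant
    # partial-face classifier.
    return "partial-face match"


def classify(face: Face) -> str:
    """Use full-face mode when the entire face is visible;
    fall back to partial-face mode when it is obscured."""
    if face.occluded:
        return classify_partial_face(face)
    return classify_full_face(face)


print(classify(Face(occluded=False)))  # prints "full-face match"
print(classify(Face(occluded=True)))   # prints "partial-face match"
```

The design point is simply that the two modes coexist: the new full-face path raises accuracy on unobscured faces without removing the partial-face capability the product was built around.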
“Depending on the dataset used, testing indicates this mode provides a 10-30 percent improvement in accuracy, without impacting throughput,” the company said.
According to MarketsandMarkets, the facial recognition market is expected to be over $7 billion by 2022.
BrainChip Studio 2018.3 is currently available.