Facebook's AI Model Is 5 Times Faster Than Google's On GPUs

Published On: 13 Apr, 2020 11:37 AM | Madhurima

SAN FRANCISCO: A team from Facebook AI Research (FAIR) has developed a new low-dimensional design space called 'RegNet' that produces models which outperform established architectures such as Google's and run up to five times faster on GPUs.

RegNet produces simple, fast and versatile networks, and in experiments it outperformed Google's SOTA EfficientNet models, the researchers said in a paper titled 'Designing Network Design Spaces', published on the pre-print repository arXiv.


The researchers aimed for "interpretability and to find general design principles that describe networks that are simple, work well, and generalize across settings".

The Facebook AI team conducted controlled comparisons with EfficientNet, with no training-time enhancements and under an identical training setup.

Introduced in 2019, Google's EfficientNet uses a combination of neural architecture search (NAS) and model scaling rules and represents the current SOTA.
With comparable training settings and FLOPs, RegNet models outperformed EfficientNet models while being up to 5× faster on GPUs.

Rather than designing and developing individual networks, the team focused on designing network design spaces: huge, possibly infinite, populations of model architectures.

Design space quality is analyzed using the error empirical distribution function (EDF), which gives the fraction of sampled models whose error falls below each error threshold.
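To make the idea concrete, here is a minimal sketch (not the paper's actual tooling; the sampled error values and design-space names below are made up purely for illustration) of how an error EDF can be computed and compared across two design spaces:

```python
import numpy as np

def error_edf(errors, thresholds):
    """Empirical distribution function of model errors.

    For each threshold e, returns the fraction of sampled models
    whose error is below e: F(e) = (1/n) * sum_i 1[error_i < e].
    """
    errors = np.asarray(errors)
    return np.array([(errors < e).mean() for e in thresholds])

# Hypothetical top-1 errors (%) of models sampled from two design spaces.
space_a = [34.1, 31.8, 36.5, 30.2, 33.0]
space_b = [38.7, 35.9, 40.1, 37.2, 36.4]

thresholds = np.linspace(28, 42, 8)
print("space A EDF:", error_edf(space_a, thresholds))
print("space B EDF:", error_edf(space_b, thresholds))
# A design space whose EDF rises faster (i.e. contains a larger fraction
# of low-error models) is the better design space.
```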

Analyzing the RegNet design space also provided researchers with other unexpected insights into network design.

They noticed, for instance, that the depth of the most effective models is stable across compute regimes with an optimal depth of 20 blocks (60 layers).

"While it's common to ascertain modern mobile networks employ inverted bottlenecks, researchers noticed that using inverted bottlenecks degrades performance. the best models don't use either a bottleneck or an inverted bottleneck, said the paper.

The Facebook AI research team recently developed a tool that tricks facial recognition systems into wrongly identifying an individual in a video.

The "de-identification" system, which also works in live videos, uses machine learning to alter key facial features of an issue during a video.

FAIR is advancing the state-of-the-art in AI through fundamental and applied research in open collaboration with the community.

The social networking giant created the Facebook AI Research (FAIR) group in 2014 to advance the state of the art of AI through open research for the benefit of all.


Since then, FAIR has grown into an international research organization with labs in Menlo Park, New York, Paris, Montreal, Tel Aviv, Seattle, Pittsburgh, and London.