Intel AI at Baidu Create: AI Camera, FPGA-based Acceleration and Xeon Scalable Optimizations for Deep Learning

Intel Vice President Gadi Singer announces a series of collaborations with Baidu on artificial intelligence at Baidu Create on Wednesday, July 4, 2018, in Beijing. (Credit: Intel Corporation)
What’s New: Today at Baidu* Create in Beijing, Intel Vice President Gadi Singer shared a series of collaborations with Baidu on artificial intelligence (AI), including powering Baidu’s Xeye*, a new AI retail camera, with Intel® Movidius™ vision processing units (VPUs); highlighting Baidu’s plans to offer workload acceleration as a service using Intel® FPGAs; and optimizing PaddlePaddle*, Baidu’s deep learning framework, for Intel® Xeon® Scalable processors.

“From enabling in-device intelligence, to providing data center scale on Intel Xeon Scalable processors, to accelerating workloads with Intel FPGAs, to making it simpler for PaddlePaddle developers to code across platforms, Baidu is taking advantage of Intel’s products and expertise to bring its latest AI advancements to life.”
–Gadi Singer, vice president and architecture general manager, Artificial Intelligence Products Group, Intel

How the Camera Works: Baidu’s Xeye camera uses Intel® Movidius™ Myriad™ 2 VPUs to deliver low-power, high-performance visual intelligence for retailers. Thanks to Intel’s purpose-built VPU solutions coupled with Baidu’s advanced machine learning algorithms, the camera can analyze objects and gestures, while also detecting people to provide personalized shopping experiences in retail settings.

How Acceleration Works: Baidu is developing a heterogeneous computing platform based on Intel’s latest field-programmable gate array (FPGA) technology. Intel FPGAs will accelerate performance and energy efficiency, add flexibility to data center workloads, and enable workload acceleration as a service on Baidu Cloud.

How PaddlePaddle is Updated: With PaddlePaddle now optimized for Intel Xeon Scalable processors, developers and data scientists can use the same hardware that powers the world’s data centers and clouds to advance their AI algorithms.

PaddlePaddle is optimized for Intel technology at several levels, including compute, memory, architecture and communication. For example:

  • Optimized math operations through Advanced Vector Extensions (AVX) intrinsics, BLAS libraries (e.g., Intel® MKL, OpenBLAS) or customized CPU kernels.
  • Optimized convolutional neural networks (CNNs) through the Intel® MKL-DNN library.
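To illustrate why routing math operations through a BLAS library matters, the sketch below compares a naive triple-loop matrix multiply with NumPy’s `@` operator, which delegates to whatever BLAS the interpreter was built against (Intel MKL, OpenBLAS, etc.) and so uses vectorized AVX kernels where available. This is a generic illustration, not PaddlePaddle’s actual kernel code.

```python
# Illustrative only: NumPy dispatches matmul to its linked BLAS
# (Intel MKL, OpenBLAS, ...), the same class of libraries that
# optimized deep learning kernels call into.
import numpy as np

def naive_matmul(a, b):
    """Triple-loop reference implementation (no vectorization)."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    out = [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
           for i in range(n)]
    return np.array(out)

rng = np.random.default_rng(0)
a = rng.standard_normal((32, 32))
b = rng.standard_normal((32, 32))

fast = a @ b              # BLAS-backed, vectorized path
slow = naive_matmul(a, b) # pure-Python reference path
print(np.allclose(fast, slow))  # both paths agree numerically
```

On typical CPUs the BLAS path is orders of magnitude faster at realistic matrix sizes, which is the gain the bullet points above describe.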

Intel and Baidu are also exploring integration of PaddlePaddle and nGraph, a framework-neutral, deep neural network (DNN) model compiler that can target a variety of devices. Intel open-sourced nGraph in March. With nGraph, data scientists can write a model once, without worrying about how to adapt their DNN models to train and run efficiently on different hardware platforms.
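The “write once, target many devices” idea can be sketched in miniature: a small framework-neutral graph of operations that is lowered separately for each backend. Everything below (`Node`, `compile_for`, the backend table) is a hypothetical illustration of the concept, not the actual nGraph API.

```python
# Conceptual sketch (not the real nGraph API): one graph, many backends.
import numpy as np

class Node:
    """A device-independent operation in the compute graph."""
    def __init__(self, op, inputs=()):
        self.op, self.inputs = op, inputs

# One op table per backend; a real compiler would emit device code here.
BACKENDS = {
    "cpu": {"add": np.add, "mul": np.multiply},
    # a "gpu" or "fpga" entry would map the same op names to device kernels
}

def compile_for(backend, node):
    """Lower the same graph for the chosen backend."""
    ops = BACKENDS[backend]
    def run(env):
        def ev(n):
            if isinstance(n, str):      # leaf: a named input tensor
                return env[n]
            return ops[n.op](*map(ev, n.inputs))
        return ev(node)
    return run

# y = (x * w) + b, written once, independent of framework and device
graph = Node("add", (Node("mul", ("x", "w")), "b"))
fn = compile_for("cpu", graph)
print(fn({"x": np.array([1.0, 2.0]), "w": np.array([3.0, 4.0]),
          "b": np.array([0.5, 0.5])}))  # -> [3.5 8.5]
```

Targeting new hardware then means adding a backend entry, not rewriting the model, which is the portability benefit the paragraph above attributes to nGraph.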

Intel, the Intel logo, Intel FPGA, Intel Movidius, Myriad, and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

Source: newsroom.intel.com