FeatherCNN

FeatherCNN is a high performance inference engine for convolutional neural networks.


Introduction

FeatherCNN is a high-performance lightweight CNN inference library, developed by the Tencent AI Platform Department.
FeatherCNN originated from our game AI project for King of Glory (Chinese: 王者荣耀), in which we aimed to build a neural model for MOBA game AI and run it on mobile devices.
FeatherCNN currently targets ARM CPUs.
We will extend it to cover other architectures in the near future.

Compared with other libraries, FeatherCNN has the following features:

  • High Performance FeatherCNN delivers state-of-the-art inference computing performance on a wide range of devices, including mobile phones (iOS/Android), embedded devices (Linux) as well as ARM-based servers (Linux).

  • Easy Deployment FeatherCNN packs everything in a single code base to get rid of third-party dependencies. Hence, it facilitates deployment on mobile platforms.

  • Featherweight The compiled FeatherCNN library is small-sized (hundreds of KBs).

Please kindly open an issue in this repo for bug reports and enhancement suggestions. We are grateful for user feedback and will actively polish this library.

Citation

FeatherCNN: Fast Inference Computation with TensorGEMM on ARM Architectures (TPDS September 2019, In press, DOI:10.1109/TPDS.2019.2939785)

Clone hints

The FeatherCNN repository carries a heavy development history, so please clone only the master branch as follows:

git clone -b master --single-branch https://github.com/tencent/FeatherCNN.git

Detailed Instructions for iOS/Android/Linux

Build From Source

iOS Guide

Android Guide

Android ADB Guide

Usage

Model Format Conversion

FeatherCNN accepts Caffe models. It merges the structure file (.prototxt) and the weight file (.caffemodel) into a single binary model (.feathermodel). The conversion tool requires protobuf, but the library itself does not.

Model Convert Guide.
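
As a rough sketch, a conversion run looks like the following. The binary name and argument order here are assumptions, not the documented interface; please check the Model Convert Guide for the exact invocation.

# Hypothetical converter invocation; verify the binary name and arguments
# against the Model Convert Guide before use.
./feather_convert_caffe network.prototxt network.caffemodel network.feathermodel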

Runtime Interfaces

The basic user interfaces are declared in feather/net.h. Currently, data is passed through raw pointers; we may provide more convenient interfaces in the near future.

Before inference, FeatherCNN requires two steps to initialize the network.

feather::Net forward_net(num_threads);
forward_net.InitFromPath(FILE_PATH_TO_FEATHERMODEL);

The net can also be initialized from raw buffers and FILE pointers.
Forward computation can then be performed with a raw float* buffer.

forward_net.Forward(PTR_TO_YOUR_INPUT_DATA);

The output can be extracted from the net by blob name. Blob names are kept consistent with the Caffe prototxt.

forward_net.ExtractBlob(PTR_TO_YOUR_OUTPUT_BUFFER, BLOB_NAME);

You can also query a blob's data size by calling

size_t data_size = 0;
forward_net.GetBlobDataSize(&data_size, BLOB_NAME);
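
Putting these calls together, a minimal end-to-end sketch looks like the following. The include path, model path, blob name, and input dimensions are placeholders rather than values from this repository, and error handling is omitted.

#include <vector>
#include "feather/net.h"

int main()
{
    const size_t num_threads = 1;
    feather::Net forward_net(num_threads);

    // Load the converted model (path is a placeholder).
    forward_net.InitFromPath("model.feathermodel");

    // Input buffer sized to the network's input blob (dimensions are placeholders).
    std::vector<float> input(1 * 3 * 224 * 224, 0.0f);
    forward_net.Forward(input.data());

    // Query the output blob's size, then extract it (blob name is a placeholder).
    size_t data_size = 0;
    forward_net.GetBlobDataSize(&data_size, "prob");

    std::vector<float> output(data_size);
    forward_net.ExtractBlob(output.data(), "prob");
    return 0;
}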

Performance Benchmarks

We have tested FeatherCNN on a bunch of devices, see this page for details.

User Groups

Telegram: https://t.me/FeatherCNN

QQ: 728147343
