layer

layer - neural network inference from the command line

layer is a program for doing neural network inference the Unix way. Many
modern neural network operations can be represented as sequential,
unidirectional streams of data processed by pipelines of filters.
The computations at each layer in these neural networks are equivalent to an
invocation of the layer program, and multiple invocations can be chained
together to represent the entirety of such networks.

For example, performing inference on a neural network with two fully-connected
layers might look something like this:

cat input | layer full -w w.1 --input-shape=2 -f tanh | layer full -w w.2 --input-shape=3 -f sigmoid

layer applies the Unix philosophy to neural network inference. Each type of
neural network layer is a distinct subcommand. Simple text streams of
delimited numeric values serve as the interface between different layers of a
neural network. Each invocation of layer does one thing: it feeds the numeric
input values forward through an instantiation of a neural network layer, then
emits the resulting output numeric values.
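
To make that contract concrete, here is a rough Python sketch of what a
single "full" invocation does. It is illustrative only: layer itself is
written in CHICKEN Scheme, and the run_full_layer helper, the NumPy
dependency, and the input-times-weights orientation are assumptions of the
sketch, not layer's actual code.

    import sys
    import numpy as np

    def run_full_layer(W, b, activation):
        # Read one comma-delimited input vector per line of stdin, feed it
        # forward through a single dense layer, and print the result as a
        # comma-delimited line, ready to be piped into the next invocation.
        for line in sys.stdin:
            x = np.array([float(v) for v in line.strip().split(",")])
            y = activation(x @ W + b)
            print(",".join(str(float(v)) for v in y))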

Usage

Example: a convolutional neural network for CIFAR-10.

$ cat cifar10_x.csv \
    | layer convolutional -w w0.csv -b b0.csv --input-shape=32,32,3  --filter-shape=3,3 --num-filters=32 -f relu \
    | layer convolutional -w w1.csv -b b1.csv --input-shape=30,30,32 --filter-shape=3,3 --num-filters=32 -f relu \
    | layer pooling --input-shape=28,28,32 --filter-shape=2,2 --stride=2 -f max
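
The shape flags chain together: a 3x3 filter applied without padding trims
each spatial dimension by two, so the 32x32x3 input becomes 30x30x32 after
the first convolution (one output channel per filter) and 28x28x32 after the
second, and the 2x2 max pooling with stride 2 then halves the spatial
dimensions to 14x14x32. Each stage's --input-shape is simply the previous
stage's output shape.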

Example: a multi-layer perceptron for XOR.

$ # Fully connected layer with three neurons
$ echo "-2.35546875,-2.38671875,3.63671875,3.521484375,-2.255859375,-2.732421875" > layer1.weights
$ echo "0.7958984375,0.291259765625,1.099609375" > layer1.biases

$ # Fully connected layer with one neuron
$ echo "-5.0625,-3.515625,-5.0625" > layer2.weights
$ echo "1.74609375" > layer2.biases

$ # Compute XOR for all possible binary inputs
$ echo -e "0,0\n0,1\n1,0\n1,1" \
    | layer full -w layer1.weights -b layer1.biases --input-shape=2 -f tanh \
    | layer full -w layer2.weights -b layer2.biases --input-shape=3 -f sigmoid
0.00129012749948779
0.99147053740106
0.991243357927591
0.0111237568184365
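
The arithmetic behind those four outputs can be cross-checked outside of
layer. The following NumPy sketch (illustrative, not part of layer) reshapes
each weight file row-major into an inputs-by-neurons matrix, which reproduces
the values printed above:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # layer1.weights / layer1.biases: 2 inputs, 3 neurons
    W1 = np.array([-2.35546875, -2.38671875, 3.63671875,
                   3.521484375, -2.255859375, -2.732421875]).reshape(2, 3)
    b1 = np.array([0.7958984375, 0.291259765625, 1.099609375])

    # layer2.weights / layer2.biases: 3 inputs, 1 neuron
    W2 = np.array([-5.0625, -3.515625, -5.0625]).reshape(3, 1)
    b2 = np.array([1.74609375])

    for x in np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float):
        h = np.tanh(x @ W1 + b1)      # layer full ... -f tanh
        y = sigmoid(h @ W2 + b2)      # layer full ... -f sigmoid
        print(y.item())               # ~0.0013, ~0.9915, ~0.9912, ~0.0111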

Installation

Requirements: BLAS 3.6.0+

  1. Download a release
  2. Install BLAS 3.6.0+
     • On Debian-based systems: apt-get install -y libblas3
     • On RPM-based systems: yum install -y blas
     • On macOS 10.3+, BLAS is pre-installed as part of the
       Accelerate framework
  3. Unzip the release and run [sudo] ./install.sh, or manually relocate the
     binaries to the path of your choice.

About

layer is currently a proof-of-concept and supports a limited set of layer
types: only feed-forward layers that can be modeled as sequential,
unidirectional pipelines.

Input values, weights and biases for parameterized layers, and output values
are all read and written in row-major order,
based on the shape parameters specified for each layer.
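
In the XOR example above, for instance, the six values in layer1.weights are
consumed row-major as a 2x3 matrix, one row per input value and one column
per neuron; this is the same layout the NumPy cross-check in the Usage
section assumes.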

layer is implemented in CHICKEN Scheme.

License

Copyright © 2018-2019 cloudkj

Released under the MIT License.
