node-feature-discovery

Node feature discovery for Kubernetes

Overview

This software enables node feature discovery for Kubernetes. It detects
hardware features available on each node in a Kubernetes cluster, and advertises
those features using node labels.

NFD consists of two software components:

  1. nfd-master is responsible for labeling Kubernetes node objects
  2. nfd-worker detects features and communicates them to nfd-master.
    One instance of nfd-worker should be run on each node of the cluster

Command line interface

You can run NFD in stand-alone Docker containers, e.g. for testing
purposes. This is useful for checking feature detection.

NFD-Master

When running as a standalone container, labeling is expected to fail because
the Kubernetes API is not available. Thus, it is recommended to use the
--no-publish command line flag. E.g.

$ docker run --rm --name=nfd-test <NFD_CONTAINER_IMAGE> nfd-master --no-publish
2019/02/01 14:48:21 Node Feature Discovery Master <NFD_VERSION>
2019/02/01 14:48:21 gRPC server serving on port: 8080

Command line flags of nfd-master:

$ docker run --rm <NFD_CONTAINER_IMAGE> nfd-master --help
...
nfd-master.

  Usage:
  nfd-master [--no-publish] [--label-whitelist=<pattern>] [--port=<port>]
     [--ca-file=<path>] [--cert-file=<path>] [--key-file=<path>]
     [--verify-node-name] [--extra-label-ns=<list>]
  nfd-master -h, --help
  nfd-master --version

  Options:
  -h --help                       Show this screen.
  --version                       Output version and exit.
  --port=<port>                   Port on which to listen for connections.
                                  [Default: 8080]
  --ca-file=<path>                Root certificate for verifying connections
                                  [Default: ]
  --cert-file=<path>              Certificate used for authenticating connections
                                  [Default: ]
  --key-file=<path>               Private key matching --cert-file
                                  [Default: ]
  --verify-node-name              Verify worker node name against CN from the TLS
                                  certificate. Only has effect when TLS authentication
                                  has been enabled.
  --no-publish                    Do not publish feature labels
  --label-whitelist=<pattern>     Regular expression to filter label names to
                                  publish to the Kubernetes API server. [Default: ]
  --extra-label-ns=<list>         Comma separated list of allowed extra label namespaces
                                  [Default: ]

NFD-Worker

In order to run nfd-worker as a "stand-alone" container against your
standalone nfd-master you need to run them in the same network namespace:

$ docker run --rm --network=container:nfd-test <NFD_CONTAINER_IMAGE> nfd-worker
2019/02/01 14:48:56 Node Feature Discovery Worker <NFD_VERSION>
...

If you just want to try out feature discovery without connecting to nfd-master,
pass the --no-publish flag to nfd-worker.
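For example (using the flags documented below):

$ docker run --rm <NFD_CONTAINER_IMAGE> nfd-worker --no-publish --oneshot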

Command line flags of nfd-worker:

$ docker run --rm <NFD_CONTAINER_IMAGE> nfd-worker --help
...
nfd-worker.

  Usage:
  nfd-worker [--no-publish] [--sources=<sources>] [--label-whitelist=<pattern>]
     [--oneshot, --sleep-interval=<seconds>] [--config=<path>]
     [--options=<config>] [--server=<server>] [--server-name-override=<name>]
     [--ca-file=<path>] [--cert-file=<path>] [--key-file=<path>]
  nfd-worker -h, --help
  nfd-worker --version

  Options:
  -h --help                   Show this screen.
  --version                   Output version and exit.
  --config=<path>             Config file to use.
                              [Default: /etc/kubernetes/node-feature-discovery/nfd-worker.conf]
  --options=<config>          Specify config options from command line. Config
                              options are specified in the same format as in the
                              config file (i.e. json or yaml). These options
                              will override settings read from the config file.
                              [Default: ]
  --ca-file=<path>            Root certificate for verifying connections
                              [Default: ]
  --cert-file=<path>          Certificate used for authenticating connections
                              [Default: ]
  --key-file=<path>           Private key matching --cert-file
                              [Default: ]
  --server=<server>           NFD server address to connect to.
                              [Default: localhost:8080]
  --server-name-override=<name> Name (CN) to expect from the server certificate,
                              useful in testing
                              [Default: ]
                              [Default: ]
  --sources=<sources>         Comma separated list of feature sources.
                              [Default: cpu,iommu,kernel,local,memory,network,pci,storage,system]
  --no-publish                Do not publish discovered features to the
                              cluster-local Kubernetes API server.
  --label-whitelist=<pattern> Regular expression to filter label names to
                              publish to the Kubernetes API server. [Default: ]
  --oneshot                   Label once and exit.
  --sleep-interval=<seconds>  Time to sleep between re-labeling. Non-positive
                              value implies no re-labeling (i.e. infinite
                              sleep). [Default: 60s]

NOTE Some feature sources need certain directories and/or files from the
host mounted inside the NFD container. Thus, you need to provide Docker with the
correct --volume options in order for them to work correctly when run
stand-alone directly with docker run. See the
template spec
for up-to-date information about the required volume mounts.
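As an illustrative sketch only (the host paths and in-container mount points
below are assumptions; the template spec is authoritative for the actual
mounts), the invocation might look something like:

$ docker run --rm --network=container:nfd-test \
    -v /sys:/host-sys:ro \
    -v /boot:/host-boot:ro \
    -v /etc/os-release:/host-os-release:ro \
    <NFD_CONTAINER_IMAGE> nfd-worker --no-publish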

Feature discovery

Feature sources

The current set of feature sources is the following:

  • CPU
  • IOMMU
  • Kernel
  • Memory
  • Network
  • PCI
  • Storage
  • System
  • Local (hooks for user-specific features)

Feature labels

The published node labels encode a few pieces of information:

  • Namespace, i.e. feature.node.kubernetes.io
  • The source for each label (e.g. cpu).
  • The name of the discovered feature as it appears in the underlying
    source (e.g. cpuid.AESNI from cpu).
  • The value of the discovered feature.

Feature label names adhere to the following pattern:

<namespace>/<source name>-<feature name>[.<attribute name>]

The last component (i.e. attribute name) is optional, and is only used if a
feature logically has a sub-hierarchy, e.g. sriov.capable and
sriov.configured from the network source.
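For example, the configured attribute of the sriov feature from the network
source is published as:

feature.node.kubernetes.io/network-sriov.configured=true

The labels produced by the built-in sources take the following general forms: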

{
  "feature.node.kubernetes.io/cpu-<feature-name>": "true",
  "feature.node.kubernetes.io/iommu-<feature-name>": "true",
  "feature.node.kubernetes.io/kernel-<feature name>": "<feature value>",
  "feature.node.kubernetes.io/memory-<feature-name>": "true",
  "feature.node.kubernetes.io/network-<feature-name>": "true",
  "feature.node.kubernetes.io/pci-<device label>.present": "true",
  "feature.node.kubernetes.io/storage-<feature-name>": "true",
  "feature.node.kubernetes.io/system-<feature name>": "<feature value>",
  "feature.node.kubernetes.io/<file name>-<feature name>": "<feature value>"
}

The --sources flag controls which sources to use for discovery.
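For example, to restrict discovery to the cpu and kernel sources (a sketch
using the flags documented above):

$ docker run --rm <NFD_CONTAINER_IMAGE> nfd-worker --no-publish --sources=cpu,kernel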

Note: Consecutive runs of nfd-worker will update the labels on a
given node. If features are not discovered on a consecutive run, the corresponding
label will be removed. This includes any restrictions placed on the consecutive run,
such as restricting discovered features with the --label-whitelist option.

CPU Features

| Feature name            | Attribute      | Description                                                           |
| ----------------------- | -------------- | --------------------------------------------------------------------- |
| cpuid                   | <cpuid flag>   | CPU capability is supported                                            |
| hardware_multithreading |                | Hardware multithreading, such as Intel HTT, enabled (number of logical CPUs is greater than physical CPUs) |
| power                   | sst_bf.enabled | Intel SST-BF (Intel Speed Select Technology - Base frequency) enabled  |
| pstate                  | turbo          | Set to 'true' if turbo frequencies are enabled in Intel pstate driver, set to 'false' if they have been disabled. |
| rdt                     | RDTMON         | Intel RDT Monitoring Technology                                        |
|                         | RDTCMT         | Intel Cache Monitoring (CMT)                                           |
|                         | RDTMBM         | Intel Memory Bandwidth Monitoring (MBM)                                |
|                         | RDTL3CA        | Intel L3 Cache Allocation Technology                                   |
|                         | RDTL2CA        | Intel L2 Cache Allocation Technology                                   |
|                         | RDTMBA         | Intel Memory Bandwidth Allocation (MBA) Technology                     |

The (sub-)set of CPUID attributes to publish is configurable via the
attributeBlacklist and attributeWhitelist cpuid options of the cpu source.
If whitelist is specified, only whitelisted attributes will be published. With
blacklist, only blacklisted attributes are filtered out. attributeWhitelist
has priority over attributeBlacklist. For examples and more information
about configurability, see Configuration Options.
By default, the following CPUID flags have been blacklisted:
BMI1, BMI2, CLMUL, CMOV, CX16, ERMS, F16C, HTT, LZCNT, MMX, MMXEXT, NX, POPCNT,
RDRAND, RDSEED, RDTSCP, SGX, SSE, SSE2, SSE3, SSE4.1, SSE4.2 and SSSE3.
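For example, a whitelist could be set via the nfd-worker --options flag
(a sketch; the option names follow the example config, see Configuration
Options for the authoritative syntax):

--options='{"sources": { "cpu": { "cpuid": { "attributeWhitelist": ["AVX", "AVX2", "AVX512F"] } } } }'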

NOTE The cpuid features advertise supported CPU capabilities, that is, a
capability might be supported but not enabled.

X86 CPUID Attributes (Partial List)

| Attribute | Description                                                  |
| --------- | ------------------------------------------------------------ |
| ADX       | Multi-Precision Add-Carry Instruction Extensions (ADX)       |
| AESNI     | Advanced Encryption Standard (AES) New Instructions (AES-NI) |
| AVX       | Advanced Vector Extensions (AVX)                             |
| AVX2      | Advanced Vector Extensions 2 (AVX2)                          |

Arm64 CPUID Attributes (Partial List)

| Attribute | Description                                                        |
| --------- | ------------------------------------------------------------------ |
| AES       | Announcing the Advanced Encryption Standard                        |
| EVSTRM    | Event Stream Frequency Features                                    |
| FPHP      | Half Precision (16bit) Floating Point Data Processing Instructions |
| ASIMDHP   | Half Precision (16bit) Asimd Data Processing Instructions          |
| ATOMICS   | Atomic Instructions to the A64                                     |
| ASIMRDM   | Support for Rounding Double Multiply Add/Subtract                  |
| PMULL     | Optional Cryptographic and CRC32 Instructions                      |
| JSCVT     | Perform Conversion to Match Javascript                             |
| DCPOP     | Persistent Memory Support                                          |

IOMMU Features

| Feature name | Description                                |
| ------------ | ------------------------------------------ |
| enabled      | IOMMU is present and enabled in the kernel |

Kernel Features

| Feature | Attribute     | Description                                                                               |
| ------- | ------------- | ----------------------------------------------------------------------------------------- |
| config  | <option name> | Kernel config option is enabled (set 'y' or 'm'). Default options are NO_HZ, NO_HZ_IDLE, NO_HZ_FULL and PREEMPT |
| selinux | enabled       | Selinux is enabled on the node                                                             |
| version | full          | Full kernel version as reported by /proc/sys/kernel/osrelease (e.g. '4.5.6-7-g123abcde')  |
|         | major         | First component of the kernel version (e.g. '4')                                           |
|         | minor         | Second component of the kernel version (e.g. '5')                                          |
|         | revision      | Third component of the kernel version (e.g. '6')                                           |

The kernel config file to use, and the set of config options to be detected,
are configurable. See configuration options for more information.

Memory Features

| Feature | Attribute | Description                                           |
| ------- | --------- | ----------------------------------------------------- |
| numa    |           | Multiple memory nodes i.e. NUMA architecture detected |
| nv      | present   | NVDIMM device(s) are present                          |
| nv      | dax       | NVDIMM region(s) configured in DAX mode are present   |

Network Features

| Feature | Attribute  | Description                                                                                |
| ------- | ---------- | ------------------------------------------------------------------------------------------ |
| sriov   | capable    | Single Root Input/Output Virtualization (SR-IOV) enabled Network Interface Card(s) present |
|         | configured | SR-IOV virtual functions have been configured                                               |

PCI Features

| Feature        | Attribute     | Description                                                                 |
| -------------- | ------------- | ---------------------------------------------------------------------------- |
| <device label> | present       | PCI device is detected                                                       |
| <device label> | sriov.capable | Single Root Input/Output Virtualization (SR-IOV) enabled PCI device present |

<device label> is composed of raw PCI IDs, separated by underscores.
The set of fields used in <device label> is configurable, valid fields being
class, vendor, device, subsystem_vendor and subsystem_device.
Defaults are class and vendor. An example label using the default
label fields:

feature.node.kubernetes.io/pci-1200_8086.present=true

The set of PCI device classes that the feature source detects is also
configurable. By default, device classes (0x)03, (0x)0b40 and (0x)12, i.e.
GPUs, co-processors and accelerator cards, are detected.
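A sketch of tweaking both settings via the nfd-worker --options flag (option
names per the example config; verify against Configuration Options):

--options='{"sources": { "pci": { "deviceClassWhitelist": ["03", "12"], "deviceLabelFields": ["class", "vendor", "device"] } } }'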

See configuration options
for more information on NFD config.

Storage Features

| Feature name      | Description                                           |
| ----------------- | ----------------------------------------------------- |
| nonrotationaldisk | Non-rotational disk, like SSD, is present in the node |

System Features

| Feature    | Attribute        | Description                                      |
| ---------- | ---------------- | ------------------------------------------------ |
| os_release | ID               | Operating system identifier                      |
|            | VERSION_ID       | Operating system version identifier (e.g. '6.7') |
|            | VERSION_ID.major | First component of the OS version id (e.g. '6')  |
|            | VERSION_ID.minor | Second component of the OS version id (e.g. '7') |

Feature Detector Hooks (User-specific Features)

NFD has a special feature source named local which is designed for getting
labels from user-specific feature detectors. It provides a mechanism for users
to implement custom feature sources in a pluggable way, without modifying the
NFD source code or Docker images. The local feature source can be used to
advertise new user-specific features and to override labels created by the
other feature sources.

The local feature source gets its labels in two different ways:

  • It tries to execute files found under the
    /etc/kubernetes/node-feature-discovery/source.d/ directory. The hook files
    must be executable and they are expected to print all discovered features
    to stdout, one feature per line. With ELF binaries, static linking is
    recommended as the selection of system libraries available in the NFD
    release image is very limited. Other runtimes currently supported by the
    NFD stock image are bash and perl.
  • It reads files found under the
    /etc/kubernetes/node-feature-discovery/features.d/ directory. The file
    content is expected to be similar to the hook output (described above).

These directories must be available inside the NFD container, so Volumes and
VolumeMounts must be used if the standard NFD images are used. The provided
template files mount the source.d and features.d directories by default from
/etc/kubernetes/node-feature-discovery/source.d/ and
/etc/kubernetes/node-feature-discovery/features.d/ on the host, respectively.
You should update them to match your needs.

In both cases, the labels can be binary or non-binary, using either the
<name> or the <name>=<value> format.

Unlike with the other feature sources, the name of the file, instead of the
name of the feature source (which would be local in this case), is normally
used as a prefix in the label name. However, if the <name> of the label starts
with a slash (/), it is used as the label name as is, without any additional
prefix. This makes it possible for the user to fully control the feature label
names, e.g. for overriding labels created by other feature sources.

You can also override the default namespace of your labels using this format:
<namespace>/<name>[=<value>]. You must whitelist your namespace using the
--extra-label-ns option on the master. In this case, the name of the file will
not be added to the label name. For example, if you want to add the label
my.namespace.org/my-label=value, your hook output or feature file must contain
my.namespace.org/my-label=value and you must add
--extra-label-ns=my.namespace.org to the master command line.

The stderr output of the hooks is propagated to the NFD log, so it can be used
for debugging and logging.

Injecting Labels from Other Pods

One use case for the hooks and/or feature files is detecting features in other
Pods outside NFD, e.g. in Kubernetes device plugins. It is possible to mount
the source.d and/or features.d directories shared with the NFD Pod and
deploy the custom hooks/features there. NFD will periodically scan the
directories and run any hooks and read any feature files it finds. The
example nfd-worker deployment template
contains hostPath mounts for the source.d and features.d directories. By
using the same mounts in the secondary Pod (e.g. a device plugin) you have
created a shared area for delivering hooks and feature files to NFD.

A Hook Example

Assume a user has a shell script
/etc/kubernetes/node-feature-discovery/source.d/my-source which has the
following stdout output:

MY_FEATURE_1
MY_FEATURE_2=myvalue
/override_source-OVERRIDE_BOOL
/override_source-OVERRIDE_VALUE=123
override.namespace/value=456

which, in turn, will translate into the following node labels:

feature.node.kubernetes.io/my-source-MY_FEATURE_1=true
feature.node.kubernetes.io/my-source-MY_FEATURE_2=myvalue
feature.node.kubernetes.io/override_source-OVERRIDE_BOOL=true
feature.node.kubernetes.io/override_source-OVERRIDE_VALUE=123
override.namespace/value=456
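A minimal my-source hook producing the output above could be a plain shell
script along these lines (an illustrative sketch; remember to make the file
executable):

#!/bin/sh
# One feature per line on stdout; NFD turns each line into a node label.
echo "MY_FEATURE_1"
echo "MY_FEATURE_2=myvalue"
# A leading slash suppresses the file-name prefix in the label name.
echo "/override_source-OVERRIDE_BOOL"
echo "/override_source-OVERRIDE_VALUE=123"
# A namespaced label; the namespace must be whitelisted with --extra-label-ns.
echo "override.namespace/value=456"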

A File Example

Assume a user has a file
/etc/kubernetes/node-feature-discovery/features.d/my-source which contains the
following lines:

MY_FEATURE_1
MY_FEATURE_2=myvalue
/override_source-OVERRIDE_BOOL
/override_source-OVERRIDE_VALUE=123
override.namespace/value=456

which, in turn, will translate into the following node labels:

feature.node.kubernetes.io/my-source-MY_FEATURE_1=true
feature.node.kubernetes.io/my-source-MY_FEATURE_2=myvalue
feature.node.kubernetes.io/override_source-OVERRIDE_BOOL=true
feature.node.kubernetes.io/override_source-OVERRIDE_VALUE=123
override.namespace/value=456

NFD tries to run any regular files found in the hooks directory. Any
additional data files your hook might need (e.g. a configuration file) should
be placed in a separate directory to avoid NFD unnecessarily trying to execute
them. You can use a subdirectory under the hooks directory, for example
/etc/kubernetes/node-feature-discovery/source.d/conf/.

NOTE! NFD will blindly run any executables placed/mounted in the hooks
directory. It is the user's responsibility to review the hooks for e.g.
possible security implications.

NOTE! Be careful when creating and/or updating hook or feature files while
NFD is running. To avoid race conditions, you should write into a temporary
file (outside the source.d and features.d directories) and then atomically
create/update the original file with a filesystem move operation.
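A sketch of such an atomic update for a feature file (paths are illustrative;
the temporary file is kept in the parent directory so the final mv is a
same-filesystem rename, which is atomic):

# Write the new content outside features.d, then move it into place.
TMP=/etc/kubernetes/node-feature-discovery/my-source.tmp
TARGET=/etc/kubernetes/node-feature-discovery/features.d/my-source
printf 'MY_FEATURE_1\nMY_FEATURE_2=myvalue\n' > "$TMP"
mv "$TMP" "$TARGET"   # rename(2) is atomic within one filesystem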

Getting started

For a stable version with ready-built images see the
latest released version (release notes).

If you want to use the latest development version (master branch) you need to
build your own custom image.

System requirements

  1. Linux (x86_64/Arm64)
  2. kubectl (properly set up and configured to work with your
    Kubernetes cluster)
  3. Docker (only required to build and push docker images)

Usage

nfd-master

Nfd-master runs as a DaemonSet, by default on the master node(s)
only. You can use the template spec provided to deploy nfd-master, or
use the nfd-master.yaml generated by the Makefile. The latter includes
image: and namespace: definitions that match the latest built
image. Example:

make IMAGE_TAG=<IMAGE_TAG>
docker push <IMAGE_TAG>
kubectl create -f nfd-master.yaml

Nfd-master listens for connections from nfd-worker(s) and connects to the
Kubernetes API server to add node labels advertised by them.

If you have RBAC authorization enabled (as is the default e.g. with clusters
initialized with kubeadm) you need to configure the appropriate ClusterRoles,
ClusterRoleBindings and a ServiceAccount in order for NFD to create node
labels. The provided template will configure these for you.

nfd-worker

Nfd-worker is preferably run as a Kubernetes DaemonSet. There is an
example spec (nfd-worker-daemonset.yaml.template) that can be used
as a template, or as is when just trying out the service. Similarly
to nfd-master above, the Makefile also generates
nfd-worker-daemonset.yaml from the template that you can use to
deploy the latest image. Example:

make IMAGE_TAG=<IMAGE_TAG>
docker push <IMAGE_TAG>
kubectl create -f nfd-worker-daemonset.yaml

Nfd-worker connects to the nfd-master service to advertise hardware features.

When run as a daemonset, nodes are re-labeled at an interval specified using
the --sleep-interval option. In the template the default interval is set to
60s, which is also the default when no --sleep-interval is specified.
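For example, re-labeling every two minutes could be requested as follows (in a
DaemonSet deployment this flag would go into the Pod's args):

nfd-worker --sleep-interval=120s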

Feature discovery can alternatively be configured as a one-shot job. There is
an example script in this repo that demonstrates how to deploy the job in the cluster.

./label-nodes.sh [<IMAGE_TAG>]

The label-nodes.sh script tries to launch as many jobs as there are Ready
nodes. Note that this approach does not guarantee running once on every node.
For example, if some node is tainted NoSchedule or fails to start a job for
some other reason, another node will run extra job instance(s) to satisfy the
request, and the tainted/failed node does not get labeled.

nfd-master and nfd-worker in the same Pod

You can also run nfd-master and nfd-worker inside a single pod (skip the sed
part if running the latest released version):

sed -E s',^(\s*)image:.+$,\1image: <YOUR_IMAGE_REPO>:<YOUR_IMAGE_TAG>,' nfd-daemonset-combined.yaml.template > nfd-daemonset-combined.yaml
kubectl apply -f nfd-daemonset-combined.yaml

Similar to the nfd-worker setup above, this creates a DaemonSet that schedules
an NFD Pod on all worker nodes, with the difference that the Pod also
contains an nfd-master instance. In this case no nfd-master service is run on
the master node(s), but the worker nodes are able to label themselves.

This may be desirable e.g. in single-node setups.

TLS authentication

NFD supports mutual TLS authentication between the nfd-master and nfd-worker
instances. That is, nfd-worker and nfd-master both verify that the other end
presents a valid certificate.

TLS authentication is enabled by specifying --ca-file, --key-file and
--cert-file args, on both the nfd-master and nfd-worker instances.
The template specs provided with NFD contain (commented out) example
configuration for enabling TLS authentication.

The Common Name (CN) of the nfd-master certificate must match the DNS name of
the nfd-master Service of the cluster. By default, nfd-master only checks that
the nfd-worker certificate has been signed by the specified root certificate
(--ca-file). Additional hardening can be enabled by specifying
--verify-node-name in the nfd-master args, in which case nfd-master verifies
that the NodeName presented by nfd-worker matches the Common Name (CN) of its
certificate. This means that each nfd-worker requires an individual
node-specific TLS certificate.
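As a rough sketch of creating such certificates with openssl (illustrative
only; any PKI tooling works, and the subject names here are assumptions):

# Self-signed root CA (to be passed via --ca-file on both ends)
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -subj "/CN=nfd-ca" -keyout ca.key -out ca.crt
# Per-node worker key and CSR; CN must equal the node name for --verify-node-name
openssl req -newkey rsa:4096 -nodes \
    -subj "/CN=<node-name>" -keyout worker.key -out worker.csr
# Sign the worker certificate with the CA
openssl x509 -req -in worker.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out worker.crt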

Configuration options

Nfd-worker supports a configuration file. The default location is
/etc/kubernetes/node-feature-discovery/nfd-worker.conf, but this can be
changed by specifying the --config command line flag. The file is read inside
the container, and thus Volumes and VolumeMounts are needed to make your
configuration available to NFD. The preferred method is to use a ConfigMap.
For example, create a config map using the example config as a template:

cp nfd-worker.conf.example nfd-worker.conf
vim nfd-worker.conf  # edit the configuration
kubectl create configmap nfd-worker-config --from-file=nfd-worker.conf

Then, configure Volumes and VolumeMounts in the Pod spec (just the relevant
snippets shown below):

...
  containers:
      volumeMounts:
        - name: nfd-worker-config
          mountPath: "/etc/kubernetes/node-feature-discovery/"
...
  volumes:
    - name: nfd-worker-config
      configMap:
        name: nfd-worker-config
...

You could also use other types of volumes; for example, a hostPath volume
could be used if different configuration for different nodes is required.

The (empty-by-default)
example config
is used as a config in the NFD Docker image. Thus, this can be used as a default
configuration in custom-built images.

Configuration options can also be specified via the --options command line
flag, in which case no mounts need to be used. The same format as in the config
file must be used, i.e. JSON (or YAML). For example:

--options='{"sources": { "pci": { "deviceClassWhitelist": ["12"] } } }'

Configuration options specified from the command line will override those read
from the config file.

Currently, the only available configuration options are related to the
CPU, PCI and Kernel
feature sources.

Building from source

Download the source code:

git clone https://github.com/kubernetes-sigs/node-feature-discovery

Build the container image:
See customizing the build below for altering the
container image registry, for example.

cd <project-root>
make

Push the container image:
Optional; this example uses Docker.

docker push <IMAGE_TAG>

Change the job spec to use your custom image (optional):

To use your published image from the step above instead of the
quay.io/kubernetes_incubator/node-feature-discovery image, edit the image
attribute in the spec template(s) to point to the new location
(<quay-domain-name>/<registry-user>/<image-name>[:<version>]).

Customizing the Build

There are several Makefile variables that control the build process and the
name of the resulting container image.

| Variable               | Description                                   | Default value                 |
| ---------------------- | --------------------------------------------- | ----------------------------- |
| IMAGE_BUILD_CMD        | Command to build the image                    | docker build                  |
| IMAGE_BUILD_EXTRA_OPTS | Extra options to pass to build command        | empty                         |
| IMAGE_PUSH_CMD         | Command to push the image to remote registry  | docker push                   |
| IMAGE_REGISTRY         | Container image registry to use               | quay.io/kubernetes_incubator  |
| IMAGE_NAME             | Container image name                          | node-feature-discovery        |
| IMAGE_TAG_NAME         | Container image tag name                      | <nfd version>                 |
| IMAGE_REPO             | Container image repository to use             | <IMAGE_REGISTRY>/<IMAGE_NAME> |
| IMAGE_TAG              | Full image:tag to tag the image with          | <IMAGE_REPO>:<IMAGE_TAG_NAME> |
| K8S_NAMESPACE          | nfd-master and nfd-worker namespace           | kube-system                   |
| KUBECONFIG             | Kubeconfig for running e2e-tests              | empty                         |

For example, to use a custom registry:

make IMAGE_REGISTRY=<my custom registry uri>

Or to specify a build tool different from Docker:

make IMAGE_BUILD_CMD="buildah bud"

Testing

Unit tests are automatically run as part of the container image build. You can
also run them manually in the source code tree by simply running:

make test

End-to-end tests are built on top of the e2e test framework of Kubernetes, and
they require a cluster to run on. For running the tests on your test cluster
you need to specify the kubeconfig to be used:

make e2e-test KUBECONFIG=$HOME/.kube/config

Targeting Nodes with Specific Features

Nodes with specific features can be targeted using the nodeSelector field. The
following example shows how to target nodes with Intel TurboBoost enabled.

apiVersion: v1
kind: Pod
metadata:
  labels:
    env: test
  name: golang-test
spec:
  containers:
    - image: golang
      name: go1
  nodeSelector:
    feature.node.kubernetes.io/cpu-pstate.turbo: 'true'

For more details on targeting nodes, see node selection.

References

Github issues

Design proposal

Governance

This is a SIG-node subproject, hosted under the
Kubernetes SIGs organization on GitHub. The project was established in 2016
as a Kubernetes Incubator project and migrated to Kubernetes SIGs in 2018.

License

This is open source software released under the Apache 2.0 License.

Demo

A demo on the benefits of using node feature discovery can be found in demo.
