CNI (Container Network Interface)

Container Network Interface - networking for Linux containers


Overview

Name with owner: containernetworking/cni
Primary language: Go
Program language: Shell (language count: 3)
Platform: Linux, Windows
License: Apache License 2.0
Release count: 47
Last release: v1.2.0
First release: v0.1.0 (2015-07-22)
Created at: 2015-04-05
Pushed at: 2024-04-22
Stargazers count: 5.3k
Watchers count: 220
Fork count: 1.1k
Commits count: 1.2k
Issues count: 384 (105 open)
Pull requests count: 505 (19 open, 167 closed)




CNI at KubeCon / CloudNativeCon

The CNI maintainers hosted two sessions at KubeCon / CloudNativeCon 2019; recordings of both are available.


CNI Slack

The CNI Slack has been sunset - please join us in #cni and #cni-dev on the CNCF Slack.


CNI - the Container Network Interface

What is CNI?

CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins.
CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted.
Because of this focus, CNI has a wide range of support and the specification is simple to implement.

As well as the specification, this repository contains the Go source code of a library for integrating CNI into applications and an example command-line tool for executing CNI plugins. A separate repository contains reference plugins and a template for making new plugins.
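
As a sketch of the library side (not part of this README's examples): the snippet below shows how an application might invoke a plugin through libcni. The API names follow a recent release and can differ between versions; the container ID and netns path are illustrative placeholders.

package main

import (
	"context"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Load a network configuration (the netconf format is shown later
	// in this README).
	netconf, err := libcni.ConfFromFile("/etc/cni/net.d/10-mynet.conf")
	if err != nil {
		log.Fatal(err)
	}

	// Directories to search for plugin executables (the CNI_PATH idea).
	cniConfig := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

	rt := &libcni.RuntimeConf{
		ContainerID: "example",                // illustrative placeholder
		NetNS:       "/var/run/netns/example", // illustrative placeholder
		IfName:      "eth0",
	}

	// ADD attaches the container to the network ...
	result, err := cniConfig.AddNetwork(context.TODO(), netconf, rt)
	if err != nil {
		log.Fatal(err)
	}
	// ... and the plugin reports the allocated IPs and routes as JSON.
	if err := result.Print(); err != nil {
		log.Fatal(err)
	}

	// DEL releases the allocated resources when the container is deleted.
	if err := cniConfig.DelNetwork(context.TODO(), netconf, rt); err != nil {
		log.Fatal(err)
	}
}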

The template code makes it straightforward to create a CNI plugin for an existing container networking project.
CNI also makes a good framework for creating a new container networking project from scratch.
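
For the plugin side, a do-nothing plugin built on this repository's pkg/skel package might look like the sketch below; exact signatures vary between CNI releases, so treat this as an outline rather than a drop-in implementation.

package main

import (
	"errors"

	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/version"
)

// cmdAdd handles CNI_COMMAND=ADD: a real plugin would configure an
// interface inside args.Netns and print a JSON result on stdout.
func cmdAdd(args *skel.CmdArgs) error {
	// args.StdinData carries the netconf JSON passed by the runtime.
	return errors.New("not implemented")
}

// cmdCheck handles CNI_COMMAND=CHECK (spec version 0.4.0 and later).
func cmdCheck(args *skel.CmdArgs) error { return nil }

// cmdDel handles CNI_COMMAND=DEL; it must tolerate repeated calls.
func cmdDel(args *skel.CmdArgs) error { return nil }

func main() {
	// PluginMain dispatches on the CNI_COMMAND environment variable
	// set by the runtime.
	skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "example plugin")
}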

Why develop CNI?

Application containers on Linux are a rapidly evolving area, and within this area networking is not well addressed as it is highly environment-specific.
We believe that many container runtimes and orchestrators will seek to solve the same problem of making the network layer pluggable.

To avoid duplication, we think it is prudent to define a common interface between the network plugins and container execution: hence we put forward this specification, along with libraries for Go and a set of plugins.
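
Concretely, the interface amounts to executing a plugin binary with a set of CNI_* environment variables and the network configuration on stdin. A hand-rolled sketch of the runtime side follows; the plugin path, container ID, and netns path are illustrative.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	netconf := `{"cniVersion": "0.2.0", "name": "mynet", "type": "bridge"}`

	// Exec the plugin with CNI_* variables describing the operation
	// and the container; the netconf arrives on stdin.
	cmd := exec.Command("/opt/cni/bin/bridge")
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID=example",
		"CNI_NETNS=/var/run/netns/example",
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	cmd.Stdin = strings.NewReader(netconf)

	// The plugin replies with a JSON result on stdout.
	out, err := cmd.Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}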

Who is using CNI?

Container runtimes

3rd party plugins

The CNI team also maintains some core plugins in a separate repository.

Contributing to CNI

We welcome contributions, including bug reports, and code and documentation improvements.
If you intend to contribute to code or documentation, please read CONTRIBUTING.md. Also see the contact section in this README.

How do I use CNI?

Requirements

The CNI spec is language agnostic. To use the Go language libraries in this repository, you'll need a recent version of Go. You can find the Go versions covered by our automated tests in .travis.yaml.

Reference Plugins

The CNI project maintains a set of reference plugins that implement the CNI specification.
NOTE: the reference plugins used to live in this repository but have been split out into a separate repository as of May 2017.

Running the plugins

After building and installing the reference plugins, you can use the priv-net-run.sh and docker-run.sh scripts in the scripts/ directory to exercise the plugins.

note - priv-net-run.sh depends on jq

Start out by creating a netconf file to describe a network:

$ mkdir -p /etc/cni/net.d
$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
	"cniVersion": "0.2.0",
	"name": "mynet",
	"type": "bridge",
	"bridge": "cni0",
	"isGateway": true,
	"ipMasq": true,
	"ipam": {
		"type": "host-local",
		"subnet": "10.22.0.0/16",
		"routes": [
			{ "dst": "0.0.0.0/0" }
		]
	}
}
EOF
$ cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
	"cniVersion": "0.2.0",
	"name": "lo",
	"type": "loopback"
}
EOF

The directory /etc/cni/net.d is the default location in which the scripts will look for net configurations.
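
As an aside, a Go consumer could enumerate that directory with this repository's libcni helpers; a small sketch (helper names per a recent release):

package main

import (
	"fmt"
	"log"
	"sort"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Collect candidate netconf files from the default location.
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist"})
	if err != nil {
		log.Fatal(err)
	}

	// Configurations are conventionally applied in lexical order,
	// hence the 10- and 99- filename prefixes used above.
	sort.Strings(files)
	for _, f := range files {
		fmt.Println(f)
	}
}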

Next, build the plugins:

$ cd $GOPATH/src/github.com/containernetworking/plugins
$ ./build_linux.sh # or build_windows.sh

Finally, execute a command (ifconfig in this example) in a private network namespace that has joined the mynet network:

$ CNI_PATH=$GOPATH/src/github.com/containernetworking/plugins/bin
$ cd $GOPATH/src/github.com/containernetworking/cni/scripts
$ sudo CNI_PATH=$CNI_PATH ./priv-net-run.sh ifconfig
eth0      Link encap:Ethernet  HWaddr f2:c2:6f:54:b8:2b  
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f0c2:6fff:fe54:b82b/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

The environment variable CNI_PATH tells the scripts and library where to look for plugin executables.
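
The library's pkg/invoke package performs the same lookup; a brief sketch (the plugin name "bridge" is just an example):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"

	"github.com/containernetworking/cni/pkg/invoke"
)

func main() {
	// CNI_PATH is a list of plugin directories, separated like $PATH.
	paths := filepath.SplitList(os.Getenv("CNI_PATH"))

	// Resolve a plugin "type" name to an executable on those paths.
	pluginPath, err := invoke.FindInPath("bridge", paths)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(pluginPath)
}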

Running a Docker container with network namespace set up by CNI plugins

Use the instructions in the previous section to define a netconf and build the plugins.
Next, the docker-run.sh script wraps docker run to execute the plugins prior to entering the container:

$ CNI_PATH=$GOPATH/src/github.com/containernetworking/plugins/bin
$ cd $GOPATH/src/github.com/containernetworking/cni/scripts
$ sudo CNI_PATH=$CNI_PATH ./docker-run.sh --rm busybox:latest ifconfig
eth0      Link encap:Ethernet  HWaddr fa:60:70:aa:07:d1  
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f860:70ff:feaa:7d1/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

What might CNI do in the future?

CNI currently covers a wide range of needs for network configuration due to its simple model and API.
However, in the future CNI might want to branch out into other directions:

  • Dynamic updates to existing network configuration
  • Dynamic policies for network bandwidth and firewall rules

If these topics are of interest, please contact the team via the mailing list or IRC and find some like-minded people in the community to put a proposal together.

Where are the binaries?

The plugins moved to a separate repo:
https://github.com/containernetworking/plugins, and the releases there
include binaries and checksums.

Prior to release 0.7.0 the cni release also included a cnitool
binary; as this is a developer tool we suggest you build it yourself.

Contact

For any questions about CNI, please reach out via:

  • Email: the cni-dev mailing list
  • IRC: #containernetworking channel on freenode.net
  • Slack: #cni on the CNCF Slack

If you have a security issue to report, please do so privately to the email addresses listed in the MAINTAINERS file.
