calico

Environment

IP               Node    Software                 OS
192.168.15.133   node1   docker+etcd+calicoctl    CentOS 7
192.168.15.134   node2   docker+etcd+calicoctl    CentOS 7

Steps

1) Environment preparation

  1. Set the hostname on each node

node1

[root@localhost ~]# hostnamectl --static set-hostname node1
[root@localhost ~]# echo "node1" > /etc/hostname

node2

[root@localhost ~]# hostnamectl --static set-hostname node2
[root@localhost ~]# echo "node2" > /etc/hostname
  2. Disable the firewall on both hosts; if you keep an iptables firewall enabled, open port 2380 instead (see the sketch after the commands below)
[root@localhost ~]# systemctl disable firewalld.service
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# iptables -F
[root@localhost ~]# firewall-cmd --state
not running
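The commands above simply turn the firewall off. If you would rather keep firewalld running, a minimal sketch of opening the etcd ports on both hosts instead (2380 for peer traffic as noted above; opening 2379 as well is my assumption, since docker and calicoctl reach etcd over the client port):

[root@localhost ~]# firewall-cmd --permanent --add-port=2380/tcp
[root@localhost ~]# firewall-cmd --permanent --add-port=2379/tcp
[root@localhost ~]# firewall-cmd --reload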
  3. Add hosts entries on both machines; run the following on each
[root@localhost ~]# vim /etc/hosts

192.168.15.133 node1
192.168.15.134 node2
  4. Enable IP forwarding on both machines (a note on persistence follows the commands)
[root@localhost ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@localhost ~]# sysctl -p
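Note that writing to /proc only changes the running kernel, and `sysctl -p` reloads /etc/sysctl.conf, which does not yet contain the setting. To make IP forwarding survive a reboot, a small sketch:

[root@localhost ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1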

2) Install docker (on both machines)

[root@localhost ~]# yum install -y docker
[root@localhost ~]# systemctl start docker
[root@localhost ~]# systemctl enable docker

3) Install etcd (on both machines)

[root@localhost ~]# yum install etcd -y

4) Configure the etcd cluster

node1

[root@localhost ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
[root@localhost ~]# cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

ETCD_NAME="node1"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.15.133:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.15.133:2379"
ETCD_INITIAL_CLUSTER="node1=http://192.168.15.133:2380,node2=http://192.168.15.134:2380"
EOF

node2

[root@localhost ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
[root@localhost ~]# cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

ETCD_NAME="node2"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.15.134:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.15.134:2379"
ETCD_INITIAL_CLUSTER="node1=http://192.168.15.133:2380,node2=http://192.168.15.134:2380"
EOF

Start the etcd service on both nodes (node1 shown here)

[root@localhost ~]# systemctl enable etcd
[root@localhost ~]# systemctl start etcd

Check the cluster members

[root@localhost ~]# etcdctl member list
8e92f64982d9786c: name=node2 peerURLs=http://192.168.15.134:2380 clientURLs=http://192.168.15.134:2379 isLeader=false
d3e45f62ae9d5a52: name=node1 peerURLs=http://192.168.15.133:2380 clientURLs=http://192.168.15.133:2379 isLeader=true
[root@localhost ~]# etcdctl cluster-health
member 8e92f64982d9786c is healthy: got healthy result from http://192.168.15.134:2379
member d3e45f62ae9d5a52 is healthy: got healthy result from http://192.168.15.133:2379
cluster is healthy
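Optionally, a quick write/read smoke test through the etcd v2 API (the key name /calico-test is made up for this check) confirms that data written on one node is visible from the other:

[root@localhost ~]# etcdctl set /calico-test "hello"    # run on node1
hello
[root@localhost ~]# etcdctl get /calico-test            # run on node2
hello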

5) Modify the docker unit file to point at etcd

node1

Inside the ExecStart block, add --cluster-store=etcd://192.168.15.133:2379 \ on the line directly below the --seccomp-profile line

[root@localhost ~]# cp /usr/lib/systemd/system/docker.service /usr/lib/systemd/system/docker.service.bak
[root@localhost ~]# vim /usr/lib/systemd/system/docker.service
........
--seccomp-profile=/etc/docker/seccomp.json \
--cluster-store=etcd://192.168.15.133:2379 \

[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker

node2

Inside the ExecStart block, add --cluster-store=etcd://192.168.15.134:2379 \ on the line directly below the --seccomp-profile line

[root@localhost ~]# cp /usr/lib/systemd/system/docker.service /usr/lib/systemd/system/docker.service.bak
[root@localhost ~]# vim /usr/lib/systemd/system/docker.service
........
--seccomp-profile=/etc/docker/seccomp.json \
--cluster-store=etcd://192.168.15.134:2379 \

[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker

Result

docker now starts with etcd as its cluster store, and since the etcd service is already running, both processes are visible:

[root@localhost ~]# ps -ef|grep etcd
etcd 10293 1 0 19:47 ? 00:00:33 /usr/bin/etcd --name=node1 --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://0.0.0.0:2379
root 10426 1 0 19:54 ? 00:00:07 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json --cluster-store=etcd://192.168.15.133:2379 --selinux-enabled --log-driver=journald --signature-verification=false --storage-driver overlay2
root 13333 1279 0 21:34 pts/0 00:00:00 grep --color=auto etcd
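You can also confirm the setting from docker itself; on the docker version shipped with CentOS 7 at the time, `docker info` reports the configured store, so on node1 this should show the etcd URL passed above (a sketch, output abbreviated):

[root@localhost ~]# docker info 2>/dev/null | grep -i "cluster store"
Cluster Store: etcd://192.168.15.133:2379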

6) Install the calico networking components

First pull the calico/node container image on each node

[root@localhost ~]# docker pull quay.io/calico/node:v2.6.10

Install calicoctl (required on all nodes)

[root@localhost ~]# wget https://github.com/projectcalico/calicoctl/releases/download/v1.1.0/calicoctl
[root@localhost ~]# chmod 755 calicoctl
[root@localhost ~]# mv calicoctl /usr/local/bin/
[root@localhost ~]# calicoctl --version
calicoctl version v1.1.0, build 882dd008
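calicoctl v1.x finds etcd through the ETCD_ENDPOINTS environment variable (or /etc/calico/calicoctl.cfg) and defaults to http://127.0.0.1:2379, which works here because every node runs a local etcd member. If etcd lived elsewhere, a sketch of pointing calicoctl at it explicitly (endpoint shown is node1's, as an example):

[root@localhost ~]# export ETCD_ENDPOINTS=http://192.168.15.133:2379
[root@localhost ~]# calicoctl get ipPool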

Create the calico-node container on each of the two nodes

node1

[root@localhost ~]# docker run --net=host --privileged --name=calico-node -d --restart=always -e NODENAME=node1  -e CALICO_NETWORKING_BACKEND=bird -e CALICO_LIBNETWORK_ENABLED=true -e IP=192.168.15.133 -e ETCD_ENDPOINTS=http://127.0.0.1:2379 -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /run:/run -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock quay.io/calico/node:v2.6.10

node2

[root@localhost ~]# docker run --net=host --privileged --name=calico-node -d --restart=always -e NODENAME=node2 -e CALICO_NETWORKING_BACKEND=bird -e CALICO_LIBNETWORK_ENABLED=true -e IP=192.168.15.134 -e ETCD_ENDPOINTS=http://127.0.0.1:2379 -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /run:/run -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock quay.io/calico/node:v2.6.10

Check that the calico container is up (node1 shown here)

[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2d9f4e006b37 quay.io/calico/node:v2.6.10 "start_runit" About an hour ago Up About an hour calico-node

[root@localhost ~]# ps -ef|grep calico
root 11112 11109 0 20:24 ? 00:00:00 svlogd -tt /var/log/calico/bird6
root 11113 11109 0 20:24 ? 00:00:00 bird6 -R -s /var/run/calico/bird6.ctl -d -c /etc/calico/confd/config/bird6.cfg
root 11114 11110 0 20:24 ? 00:00:00 svlogd /var/log/calico/confd
root 11115 11110 0 20:24 ? 00:00:00 confd -confdir=/etc/calico/confd -interval=5 -watch --log-level=info -node=http://127.0.0.1:2379 -client-key= -client-cert= -client-ca-keys=
root 11116 11111 0 20:24 ? 00:00:00 svlogd /var/log/calico/libnetwork
root 11118 11107 0 20:24 ? 00:00:00 svlogd /var/log/calico/felix
root 11120 11108 0 20:24 ? 00:00:00 svlogd -tt /var/log/calico/bird
root 11121 11108 0 20:24 ? 00:00:00 bird -R -s /var/run/calico/bird.ctl -d -c /etc/calico/confd/config/bird.cfg
root 11455 11107 0 20:34 ? 00:00:28 calico-felix
root 13771 1279 0 21:49 pts/0 00:00:00 grep --color=auto calico

Check the calico status

node1

[root@localhost ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+----------------+-------------------+-------+----------+-------------+
| 192.168.15.134 | node-to-node mesh | up | 12:24:19 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

node2

[root@localhost ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+----------------+-------------------+-------+----------+-------------+
| 192.168.15.133 | node-to-node mesh | up | 12:24:21 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

7) Add calico networks

Note: this only needs to be configured on one node

node1

Create an IP pool

[root@localhost ~]# cat >ipPool.yaml <<EOF
- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: 10.20.0.0/24
  spec:
    ipip:
      enabled: true
    nat-outgoing: true
EOF

[root@localhost ~]# calicoctl create -f ipPool.yaml

Check the added IP pool from any node

[root@localhost ~]# calicoctl get ipPool
CIDR
10.20.0.0/24
192.168.0.0/16
fd80:24e2:f998:72d6::/64
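The plain `get` output only lists the CIDRs. To confirm that ipip and nat-outgoing were actually applied to the new pool, calicoctl can print the full resources (yaml output is supported):

# the 10.20.0.0/24 entry should show ipip enabled and nat-outgoing: true
[root@localhost ~]# calicoctl get ipPool -o yaml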

Create networks inside the IP pool (10.20.0.0/24) created above

[root@localhost ~]# docker network create --driver calico --ipam-driver calico-ipam  --subnet 10.20.0.0/24 net1
[root@localhost ~]# docker network create --driver calico --ipam-driver calico-ipam --subnet 10.20.0.0/24 net2

List the networks; net1 and net2 now exist

[root@localhost ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
dd3ae1400375 bridge bridge local
b87973f6e3da host host local
aaea1c24a3d6 net1 calico global
efebcca875b3 net2 calico global
0fd941dfe66b none null local
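To double-check the driver and IPAM driver behind these networks, `docker network inspect` with a Go template works too (a sketch; net1 shown, net2 is analogous):

[root@localhost ~]# docker network inspect --format '{{ .Driver }} / {{ .IPAM.Driver }}' net1
calico / calico-ipam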

Create containers on node1 and node2 to test container network connectivity

Pull the busybox image to make testing easier

[root@localhost ~]# docker pull busybox

node1

[root@localhost ~]# docker run --net net1 --name workload-A -tid busybox
[root@localhost ~]# docker run --net net2 --name workload-B -tid busybox

node2

[root@localhost ~]# docker run --net net1 --name workload-E -tid busybox
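Before pinging, you can check which addresses calico-ipam handed out from the 10.20.0.0/24 pool; the same inspect format string used for net2 at the end of this post works for net1 (workload-A on node1 shown):

[root@localhost ~]# docker inspect --format "{{ .NetworkSettings.Networks.net1.IPAddress }}" workload-A
10.20.0.129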

Containers on the same network can reach each other by container name, even when they run on different hosts

node1

[root@localhost ~]# docker exec workload-A ping -c 4 workload-E.net1
PING workload-E.net1 (10.20.0.1): 56 data bytes
64 bytes from 10.20.0.1: seq=0 ttl=62 time=0.811 ms
64 bytes from 10.20.0.1: seq=1 ttl=62 time=0.220 ms
64 bytes from 10.20.0.1: seq=2 ttl=62 time=0.157 ms
64 bytes from 10.20.0.1: seq=3 ttl=62 time=0.231 ms

--- workload-E.net1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.157/0.354/0.811 ms

node2

[root@localhost ~]# docker exec workload-E ping -c 4 workload-A.net1
PING workload-A.net1 (10.20.0.129): 56 data bytes
64 bytes from 10.20.0.129: seq=0 ttl=62 time=1.964 ms
64 bytes from 10.20.0.129: seq=1 ttl=62 time=1.152 ms
64 bytes from 10.20.0.129: seq=2 ttl=62 time=0.521 ms
64 bytes from 10.20.0.129: seq=3 ttl=62 time=0.503 ms

--- workload-A.net1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.503/1.035/1.964 ms

Containers on different networks have to be reached by container IP

Using the container name fails with: bad address

[root@localhost ~]# docker exec workload-A ping -c 4 workload-B.net2
ping: bad address 'workload-B.net2'

Using the container IP (I have not gotten this to work yet)

docker exec workload-A ping -c 2  `docker inspect --format "{{ .NetworkSettings.Networks.net2.IPAddress }}" workload-B`
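A likely explanation for the failure, not verified here: the calico libnetwork plugin creates one Calico profile per Docker network, and the default rules only allow traffic between endpoints carrying the same profile, so workload-A (net1) and workload-B (net2) stay isolated even when addressed by IP unless the profiles or policy are edited. The profiles can be listed with calicoctl:

# one profile per calico-backed network (profile names may show up as network IDs)
[root@localhost ~]# calicoctl get profile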

Last updated: 2019-05-14 16:18

Original link: https://silence-linhl.github.io/blog/2019/03/16/calico/
