Add a Docker multi-node setup.
brendandburns committed Apr 8, 2015
1 parent 42e4eaa commit b3c46b5
Showing 11 changed files with 414 additions and 94 deletions.
3 changes: 2 additions & 1 deletion cluster/images/hyperkube/Dockerfile
@@ -5,5 +5,6 @@ RUN apt-get -yy -q install iptables
COPY hyperkube /hyperkube
RUN chmod a+rx /hyperkube

-COPY master.json /etc/kubernetes/manifests/master.json

+COPY master-multi.json /etc/kubernetes/manifests-multi/master.json
+COPY master.json /etc/kubernetes/manifests/master.json
45 changes: 45 additions & 0 deletions cluster/images/hyperkube/master-multi.json
@@ -0,0 +1,45 @@
{
"apiVersion": "v1beta3",
"kind": "Pod",
"metadata": {"name":"k8s-master"},
"spec":{
"hostNetwork": true,
"containers":[
{
"name": "controller-manager",
"image": "gcr.io/google_containers/hyperkube:v0.14.1",
"command": [
"/hyperkube",
"controller-manager",
"--master=127.0.0.1:8080",
"--machines=127.0.0.1",
"--sync_nodes=true",
"--v=2"
]
},
{
"name": "apiserver",
"image": "gcr.io/google_containers/hyperkube:v0.14.1",
"command": [
"/hyperkube",
"apiserver",
"--portal_net=10.0.0.1/24",
"--address=0.0.0.0",
"--etcd_servers=http://127.0.0.1:4001",
"--cluster_name=kubernetes",
"--v=2"
]
},
{
"name": "scheduler",
"image": "gcr.io/google_containers/hyperkube:v0.14.1",
"command": [
"/hyperkube",
"scheduler",
"--master=127.0.0.1:8080",
"--v=2"
]
}
]
}
}
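A quick sanity check on a manifest like the one above is to confirm that it parses as JSON and declares the three expected master components. This is a sketch; the heredoc reproduces only the fields needed for the check, and the `/tmp` path is arbitrary:

```shell
# Write a minimal subset of the master-multi.json manifest (illustrative only).
cat > /tmp/master-multi.json <<'EOF'
{
  "apiVersion": "v1beta3",
  "kind": "Pod",
  "metadata": {"name": "k8s-master"},
  "spec": {
    "hostNetwork": true,
    "containers": [
      {"name": "controller-manager"},
      {"name": "apiserver"},
      {"name": "scheduler"}
    ]
  }
}
EOF

# Validate the JSON and list the container names it schedules.
NAMES=$(python3 -c "
import json
pod = json.load(open('/tmp/master-multi.json'))
assert pod['kind'] == 'Pod'
print(' '.join(c['name'] for c in pod['spec']['containers']))
")
echo "${NAMES}"
```

If any of the three component entries were malformed, the `json.load` or the assertion would fail instead of printing the names.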
2 changes: 1 addition & 1 deletion cluster/images/hyperkube/master.json
@@ -1,7 +1,7 @@
{
"apiVersion": "v1beta3",
"kind": "Pod",
-"metadata": {"name":"nginx"},
+"metadata": {"name":"k8s-master"},
"spec":{
"hostNetwork": true,
"containers":[
3 changes: 2 additions & 1 deletion docs/getting-started-guides/README.md
@@ -25,7 +25,8 @@ Vmware | CoreOS | CoreOS | flannel | [docs](../../docs/getting
Azure | Saltstack | Ubuntu | OpenVPN | [docs](../../docs/getting-started-guides/azure.md) | Community (@jeffmendoza) |
Bare-metal | custom | Ubuntu | _none_ | [docs](../../docs/getting-started-guides/ubuntu_single_node.md) | Community (@jainvipin) |
Bare-metal | custom | Ubuntu Cluster | flannel | [docs](../../docs/getting-started-guides/ubuntu_multinodes_cluster.md) | Community (@resouer @WIZARD-CXY) | use k8s version 0.12.0
-Docker | custom | N/A | local | [docs](docker.md) | Project (@brendandburns) | Tested @ 0.14.1 |
+Docker Single Node | custom | N/A | local | [docs](docker.md) | Project (@brendandburns) | Tested @ 0.14.1 |
+Docker Multi Node | Flannel | N/A | local | [docs](docker-multinode.md) | Project (@brendandburns) | Tested @ 0.14.1 |
Local | | | _none_ | [docs](../../docs/getting-started-guides/locally.md) | Community (@preillyme) |
Ovirt | | | | [docs](../../docs/getting-started-guides/ovirt.md) | Inactive |
Rackspace | CoreOS | CoreOS | Rackspace | [docs](../../docs/getting-started-guides/rackspace.md) | Inactive |
119 changes: 28 additions & 91 deletions docs/getting-started-guides/docker-multinode.md
@@ -1,106 +1,43 @@
### Running Multi-Node Kubernetes Using Docker

_Note_:
These instructions are significantly more advanced than the [single node](docker.md) instructions. If you are
interested in just starting to explore Kubernetes, we recommend that you start there.

## Table of Contents
* [Overview](#overview)
* [Installing the master node](#master-node)
* [Installing a worker node](#adding-a-worker-node)
* [Testing your cluster](#testing-your-cluster)

## Overview
This guide will set up a 2-node Kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work,
and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
times to create larger clusters.

Here's a diagram of what the final result will look like:
![Kubernetes Multi Node on Docker](k8s-docker.png)

### Bootstrap Docker
This guide also uses a pattern of running two instances of the Docker daemon:
1) A _bootstrap_ Docker instance which is used to start system daemons like ```flanneld``` and ```etcd```
2) A _main_ Docker instance which is used for the Kubernetes infrastructure and users' scheduled containers

This pattern is necessary because the ```flannel``` daemon is responsible for setting up and managing the network that interconnects
all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However,
it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this.

## Master Node
The first step in the process is to initialize the master node.

See [here](docker-multinode/master.md) for detailed instructions.

## Adding a worker node

Once your master is up and running you can add one or more workers on different machines.

See [here](docker-multinode/worker.md) for detailed instructions.

## Testing your cluster

Once your cluster has been created you can [test it out](docker-multinode/testing.md).
143 changes: 143 additions & 0 deletions docs/getting-started-guides/docker-multinode/master.md
@@ -0,0 +1,143 @@
## Installing a Kubernetes Master Node via Docker
We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine is ```${MASTER_IP}```.

There are two main phases to installing the master:
* [Setting up ```flanneld``` and ```etcd```](#setting-up-flanneld-and-etcd)
* [Starting the Kubernetes master components](#starting-the-kubernetes-master)


## Setting up flanneld and etcd

### Setup Docker-Bootstrap
We're going to use ```flannel``` to set up networking between Docker daemons. Flannel itself (and etcd on which it relies) will run inside of
Docker containers themselves. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with
```--iptables=false``` so that it can only run containers with ```--net=host```. That's sufficient to bootstrap our system.

Run:
```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```

_Important Note_:
If you are running this on a long-running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart, or systemd so that it is restarted
across reboots and failures.


### Start up etcd for flannel and the API server to use
Run:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d kubernetes/etcd:2.0.5.1 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
```

Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using:

```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host kubernetes/etcd:2.0.5.1 etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
```
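Before committing to a ```Network``` value, it is worth verifying that the chosen CIDR really does not overlap a network you already use. A minimal sketch of such a check, assuming ```python3``` is available and using ```192.168.1.0/24``` as a stand-in for your existing host network:

```shell
FLANNEL_NET="10.1.0.0/16"    # the CIDR you plan to give flannel
HOST_NET="192.168.1.0/24"    # an assumed existing network; substitute your own

# Exit status 0 means the two ranges do not overlap.
if python3 -c "
import ipaddress, sys
sys.exit(1 if ipaddress.ip_network('${FLANNEL_NET}').overlaps(ipaddress.ip_network('${HOST_NET}')) else 0)
"; then
  RESULT="no overlap"
else
  RESULT="overlap: choose a different CIDR"
fi
echo "${RESULT}"
```

If the two ranges did overlap, pods would receive addresses that collide with existing hosts, so pick a different ```Network``` value and rerun the ```etcdctl set``` above.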


### Set up Flannel on the master node
Flannel is a network abstraction layer built by CoreOS. We will use it to provide simplified networking between our pods of containers.

Flannel re-configures the bridge that Docker uses for networking. As a result we need to stop Docker, reconfigure its networking, and then restart Docker.

#### Bring down Docker
To re-configure Docker to use flannel, we need to take docker down, run flannel and then restart Docker.

Turning down Docker is system dependent, it may be:

```sh
sudo /etc/init.d/docker stop
```

or

```sh
sudo systemctl stop docker
```

or it may be something else.

#### Run flannel

Now run flanneld itself:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.3.0
```

The previous command should have printed a really long hash; copy it for the next step.

Now get the subnet settings from flannel:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```

#### Edit the docker configuration
You now need to edit the docker configuration to activate new flags. Again, this is system specific.

This may be in ```/etc/default/docker``` or ```/etc/systemd/system/docker.service```, or it may be elsewhere.

Regardless, you need to add the following to the Docker command line:
```sh
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
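The ```FLANNEL_SUBNET``` and ```FLANNEL_MTU``` variables come straight from the ```subnet.env``` file you read out of the flannel container above, so the flags can be built by sourcing that file. A sketch, using hypothetical values in a temporary copy of the file (your actual subnet and MTU will differ):

```shell
# Hypothetical contents of /run/flannel/subnet.env, as written by flanneld.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_SUBNET=10.1.15.1/24
FLANNEL_MTU=1450
EOF

# Source the file and assemble the flags for the main Docker daemon.
. /tmp/subnet.env
DOCKER_OPTS="--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
echo "${DOCKER_OPTS}"
```

With the values above this prints ```--bip=10.1.15.1/24 --mtu=1450```, which is the string to append to the daemon's command line in the configuration file.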

#### Remove the existing Docker bridge
Docker creates a bridge named ```docker0``` by default. You need to remove this:

```sh
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```

You may need to install the ```bridge-utils``` package for the ```brctl``` binary.

#### Restart Docker
Again this is system dependent, it may be:

```sh
sudo /etc/init.d/docker start
```

or it may be:
```sh
sudo systemctl start docker
```

## Starting the Kubernetes Master
Now that your networking is set up, you can start up Kubernetes. This is the same as the single-node case; we will use the _main_ instance of the Docker daemon for the Kubernetes components.

```sh
sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.1 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests-multi
```

### Also run the service proxy
```sh
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```

### Test it out
At this point, you should have a functioning 1-node cluster. Let's test it out!

Download the kubectl binary
([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.14.1/bin/darwin/amd64/kubectl))
([linux](http://storage.googleapis.com/kubernetes-release/release/v0.14.1/bin/linux/amd64/kubectl))

List the nodes

```sh
kubectl get nodes
```

This should print:
```
NAME LABELS STATUS
127.0.0.1 <none> Ready
```
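If you want to script this check rather than eyeball the table, you can require that every listed node reports ```Ready```. A sketch that runs the check against sample output in the format shown above (in practice you would pipe ```kubectl get nodes``` in directly):

```shell
# Sample `kubectl get nodes` output; the single-node values from above.
NODES="NAME        LABELS    STATUS
127.0.0.1   <none>    Ready"

# Skip the header row, then flag any node whose STATUS column is not Ready.
READY=$(echo "${NODES}" | awk 'NR > 1 && $3 != "Ready" { bad = 1 } END { print (bad ? "no" : "yes") }')
echo "all nodes ready: ${READY}"
```

This prints ```all nodes ready: yes``` for the sample table, and ```no``` if any row shows ```NotReady``` or ```Unknown```.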

If the status of the node is ```NotReady``` or ```Unknown``` please check that all of the containers you created are successfully running.
If all else fails, ask questions on IRC at #google-containers.


### Next steps
Move on to [adding one or more workers](worker.md)
