Make docs links go through docs.k8s.io
thockin committed Apr 23, 2015
1 parent e8b28c5 commit 12e4e8f
Showing 27 changed files with 127 additions and 127 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -2,7 +2,7 @@

## 0.15.0
* Enables v1beta3 API and sets it to the default API version (#6098)
-* See the [v1beta3 conversion guide](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api.md#v1beta3-conversion-tips)
+* See the [v1beta3 conversion guide](http://docs.k8s.io/api.md#v1beta3-conversion-tips)
* Added multi-port Services (#6182)
* New Getting Started Guides
* Multi-node local startup guide (#6505)
2 changes: 1 addition & 1 deletion docs/availability.md
Original file line number Diff line number Diff line change
@@ -120,7 +120,7 @@ then you need `R + U` clusters. If it is not (e.g. you want to ensure low latency in the event of
cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.
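
To make the arithmetic concrete, here is a quick sanity check with hypothetical values for `R` and `U`:

```
# Hypothetical example: R=3 regions to cover, U=2 clusters needed to carry peak load.
R=3; U=2
echo "failures absorbed by other regions: $((R + U)) clusters"   # 5
echo "each region holds its own capacity: $((R * U)) clusters"   # 6
```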

Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
-you may need even more clusters. Our [roadmap](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/roadmap.md)
+you may need even more clusters. Our [roadmap](http://docs.k8s.io/roadmap.md)
calls for clusters of at most 100 nodes at v1.0 and at most 1000 nodes by mid-2015.

## Working with multiple clusters
2 changes: 1 addition & 1 deletion docs/design/networking.md
@@ -83,7 +83,7 @@ We want to be able to assign IP addresses externally from Docker ([Docker issue

In addition to enabling self-registration with 3rd-party discovery mechanisms, we'd like to set up DDNS automatically ([Issue #146](https://github.com/GoogleCloudPlatform/kubernetes/issues/146)). `hostname`, `$HOSTNAME`, etc. should return a name for the pod ([Issue #298](https://github.com/GoogleCloudPlatform/kubernetes/issues/298)), and gethostbyname should be able to resolve names of other pods. Probably we need to set up a DNS resolver to do the latter ([Docker issue #2267](https://github.com/dotcloud/docker/issues/2267)), so that we don't need to keep /etc/hosts files up to date dynamically.

-[Service](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md) endpoints are currently found through environment variables. Both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) variables and kubernetes-specific variables ({NAME}_SERVICE_HOST and {NAME}_SERVICE_PORT) are supported, and resolve to ports opened by the service proxy. We don't actually use [the Docker ambassador pattern](https://docs.docker.com/articles/ambassador_pattern_linking/) to link containers because we don't require applications to identify all clients at configuration time, yet. While services today are managed by the service proxy, this is an implementation detail that applications should not rely on. Clients should instead use the [service portal IP](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md) (which the above environment variables will resolve to). However, a flat service namespace doesn't scale and environment variables don't permit dynamic updates, which complicates service deployment by imposing implicit ordering constraints. We intend to register each service portal IP in DNS, and for that to become the preferred resolution protocol.
+[Service](http://docs.k8s.io/services.md) endpoints are currently found through environment variables. Both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) variables and kubernetes-specific variables ({NAME}_SERVICE_HOST and {NAME}_SERVICE_PORT) are supported, and resolve to ports opened by the service proxy. We don't actually use [the Docker ambassador pattern](https://docs.docker.com/articles/ambassador_pattern_linking/) to link containers because we don't require applications to identify all clients at configuration time, yet. While services today are managed by the service proxy, this is an implementation detail that applications should not rely on. Clients should instead use the [service portal IP](http://docs.k8s.io/services.md) (which the above environment variables will resolve to). However, a flat service namespace doesn't scale and environment variables don't permit dynamic updates, which complicates service deployment by imposing implicit ordering constraints. We intend to register each service portal IP in DNS, and for that to become the preferred resolution protocol.
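
For a hypothetical service named `redis-master`, the kubernetes-specific variables described above would surface inside a container roughly like this (dashes become underscores, names upper-cased):

```
echo $REDIS_MASTER_SERVICE_HOST   # the service portal IP
echo $REDIS_MASTER_SERVICE_PORT   # the port opened by the service proxy
```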

We'd also like to accommodate other load-balancing solutions (e.g., HAProxy), non-load-balanced services ([Issue #260](https://github.com/GoogleCloudPlatform/kubernetes/issues/260)), and other types of groups (worker pools, etc.). Providing the ability to Watch a label selector applied to pod addresses would enable efficient monitoring of group membership, which could be directly consumed or synced with a discovery mechanism. Event hooks ([Issue #140](https://github.com/GoogleCloudPlatform/kubernetes/issues/140)) for join/leave events would probably make this even easier.

6 changes: 3 additions & 3 deletions docs/design/secrets.md
@@ -72,7 +72,7 @@ service would also consume the secrets associated with the MySQL service.

### Use-Case: Secrets associated with service accounts

-[Service Accounts](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/service_accounts.md) are proposed as a
+[Service Accounts](http://docs.k8s.io/design/service_accounts.md) are proposed as a
mechanism to decouple capabilities and security contexts from individual human users. A
`ServiceAccount` contains references to some number of secrets. A `Pod` can specify that it is
associated with a `ServiceAccount`. Secrets should have a `Type` field to allow the Kubelet and
@@ -236,7 +236,7 @@ memory overcommit on the node.

#### Secret data on the node: isolation

-Every pod will have a [security context](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/security_context.md).
+Every pod will have a [security context](http://docs.k8s.io/design/security_context.md).
Secret data on the node should be isolated according to the security context of the container. The
Kubelet volume plugin API will be changed so that a volume plugin receives the security context of
a volume along with the volume spec. This will allow volume plugins to implement setting the
@@ -248,7 +248,7 @@ Several proposals / upstream patches are notable as background for this proposal

1. [Docker vault proposal](https://github.com/docker/docker/issues/10310)
2. [Specification for image/container standardization based on volumes](https://github.com/docker/docker/issues/9277)
-3. [Kubernetes service account proposal](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/service_accounts.md)
+3. [Kubernetes service account proposal](http://docs.k8s.io/design/service_accounts.md)
4. [Secrets proposal for docker (1)](https://github.com/docker/docker/pull/6075)
5. [Secrets proposal for docker (2)](https://github.com/docker/docker/pull/6697)
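
The end state the proposal aims for is secret data appearing as ordinary files inside the container; consuming it would look roughly like this (the mount path and key name are illustrative assumptions, not part of the proposal):

```
ls /etc/secret-volume                  # one file per secret key
cat /etc/secret-volume/ssh-privatekey  # e.g. the git-cloning key from the use cases
```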

12 changes: 6 additions & 6 deletions docs/design/security.md
@@ -63,14 +63,14 @@ Automated process users fall into the following categories:
A pod runs in a *security context* under a *service account* that is defined by an administrator or project administrator, and the *secrets* a pod has access to are limited by that *service account*.


-1. The API should authenticate and authorize user actions [authn and authz](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/access.md)
+1. The API should authenticate and authorize user actions [authn and authz](http://docs.k8s.io/design/access.md)
2. All infrastructure components (kubelets, kube-proxies, controllers, scheduler) should have an infrastructure user that they can authenticate with and be authorized to perform only the functions they require against the API.
3. Most infrastructure components should use the API as a way of exchanging data and changing the system, and only the API should have access to the underlying data store (etcd)
-4. When containers run on the cluster and need to talk to other containers or the API server, they should be identified and authorized clearly as an autonomous process via a [service account](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/service_accounts.md)
+4. When containers run on the cluster and need to talk to other containers or the API server, they should be identified and authorized clearly as an autonomous process via a [service account](http://docs.k8s.io/design/service_accounts.md)
1. If the user who started a long-lived process is removed from access to the cluster, the process should be able to continue without interruption
2. If the users who started processes are removed from the cluster, administrators may wish to terminate their processes in bulk
3. When containers run with a service account, the user that created / triggered the service account behavior must be associated with the container's action
-5. When container processes run on the cluster, they should run in a [security context](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/security_context.md) that isolates those processes via Linux user security, user namespaces, and permissions.
+5. When container processes run on the cluster, they should run in a [security context](http://docs.k8s.io/design/security_context.md) that isolates those processes via Linux user security, user namespaces, and permissions.
1. Administrators should be able to configure the cluster to automatically confine all container processes as a non-root, randomly assigned UID
2. Administrators should be able to ensure that container processes within the same namespace are all assigned the same unix user UID
3. Administrators should be able to limit which developers and project administrators have access to higher privilege actions
@@ -79,7 +79,7 @@ A pod runs in a *security context* under a *service account* that is defined by
6. Developers may need to ensure their images work within higher security requirements specified by administrators
7. When available, Linux kernel user namespaces can be used to ensure 5.2 and 5.4 are met.
8. When application developers want to share filesystem data via distributed filesystems, the Unix user ids on those filesystems must be consistent across different container processes
-6. Developers should be able to define [secrets](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/secrets.md) that are automatically added to the containers when pods are run
+6. Developers should be able to define [secrets](http://docs.k8s.io/design/secrets.md) that are automatically added to the containers when pods are run
1. Secrets are files injected into the container whose values should not be displayed within a pod. Examples:
1. An SSH private key for git cloning remote data
2. A client certificate for accessing a remote system
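
Requirement 5.1 above (confining container processes to a non-root, randomly assigned UID) can be spot-checked from inside a running container; a minimal sketch:

```
id -u                                    # prints the assigned UID
[ "$(id -u)" -ne 0 ] && echo "running as non-root"
```
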
@@ -93,11 +93,11 @@ A pod runs in a *security context* under a *service account* that is defined by

### Related design discussion

-* Authorization and authentication https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/access.md
+* Authorization and authentication http://docs.k8s.io/design/access.md
* Secret distribution via files https://github.com/GoogleCloudPlatform/kubernetes/pull/2030
* Docker secrets https://github.com/docker/docker/pull/6697
* Docker vault https://github.com/docker/docker/issues/10310
-* Service Accounts: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/service_accounts.md
+* Service Accounts: http://docs.k8s.io/design/service_accounts.md
* Secret volumes https://github.com/GoogleCloudPlatform/kubernetes/pull/4126

## Specific Design Points
2 changes: 1 addition & 1 deletion docs/getting-started-guides/centos/centos_manual_config.md
@@ -3,7 +3,7 @@

This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE minion working. Multiple minions require a functional [networking configuration](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE minion working. Multiple minions require a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.

The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion, will be the minion and run kubelet, proxy, cadvisor and docker.
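
Because all of these services are systemd units, the master/minion split described above amounts to enabling a different set of units on each host; a sketch using the unit names given above:

```
# On centos-master: control plane services plus etcd.
for svc in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl enable $svc
done
# On centos-minion: node-side services.
for svc in kubelet kube-proxy docker; do
    systemctl enable $svc
done
```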

2 changes: 1 addition & 1 deletion docs/getting-started-guides/cloudstack.md
@@ -10,7 +10,7 @@ There are currently two deployment techniques.
This uses [libcloud](http://libcloud.apache.org) to launch CoreOS instances and pass the appropriate cloud-config setup using userdata. Several manual steps are required. This is obsoleted by the Ansible playbook detailed below.

* [Ansible playbook](https://github.com/runseb/ansible-kubernetes).
-This is completely automated: a single playbook deploys Kubernetes based on the CoreOS [instructions](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/coreos/coreos_multinode_cluster.md).
+This is completely automated: a single playbook deploys Kubernetes based on the CoreOS [instructions](http://docs.k8s.io/getting-started-guides/coreos/coreos_multinode_cluster.md).

# Ansible playbook

2 changes: 1 addition & 1 deletion docs/getting-started-guides/coreos/bare_metal_offline.md
@@ -195,7 +195,7 @@ Now for the good stuff!
## Cloud Configs
The following config files are tailored for the OFFLINE version of a Kubernetes deployment.

-These are based on the work found here: [master.yml](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/coreos/cloud-configs/master.yaml), [node.yml](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/coreos/cloud-configs/node.yaml)
+These are based on the work found here: [master.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/master.yaml), [node.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/node.yaml)


### master.yml
2 changes: 1 addition & 1 deletion docs/getting-started-guides/docker.md
@@ -15,7 +15,7 @@ docker run --net=host -d kubernetes/etcd:2.0.5.1 /usr/local/bin/etcd --addr=127.
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
```

-This actually runs the kubelet, which in turn runs a [pod](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md) that contains the other master components.
+This actually runs the kubelet, which in turn runs a [pod](http://docs.k8s.io/pods.md) that contains the other master components.

### Step Three: Run the service proxy
*Note, this could be combined with master above, but it requires --privileged for iptables manipulation*
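
The command for this step is elided from this excerpt; by analogy with the kubelet invocation above it would look something like the following (the exact flags are an assumption, not a verbatim quote of the guide):

```
# Sketch: run the service proxy; --privileged is needed for iptables manipulation.
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```
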
2 changes: 1 addition & 1 deletion docs/getting-started-guides/fedora/fedora_manual_config.md
@@ -2,7 +2,7 @@

This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.

The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host, but this guide assumes that _etcd_ and the kubernetes master run on the same host). The remaining host, fed-node, will be the node and run kubelet, proxy and docker.

4 changes: 2 additions & 2 deletions docs/getting-started-guides/locally.md
@@ -67,8 +67,8 @@ cluster/kubectl.sh get replicationControllers

### Running a user defined pod

-Note the difference between a [container](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/containers.md)
-and a [pod](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md). Since you only asked for the former, kubernetes will create a wrapper pod for you.
+Note the difference between a [container](http://docs.k8s.io/containers.md)
+and a [pod](http://docs.k8s.io/pods.md). Since you only asked for the former, kubernetes will create a wrapper pod for you.
However, you can't view the nginx start page on localhost. To verify that nginx is running, you need to run `curl` within the docker container (try `docker exec`).
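
Concretely, that check might look like this, with `<container-id>` standing in for whatever `docker ps` reports for the nginx container:

```
docker ps                                        # find the nginx container's ID
docker exec <container-id> curl -s http://localhost/
```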

You can control the specifications of a pod via a user-defined manifest, and reach nginx through your browser on the port specified therein:
2 changes: 1 addition & 1 deletion docs/getting-started-guides/ubuntu_multinodes_cluster.md
@@ -1,6 +1,6 @@
# Kubernetes deployed on multiple ubuntu nodes

-This document describes how to deploy kubernetes on multiple ubuntu nodes (1 master node and 3 minion nodes); anyone using this approach can scale to **any number of minion nodes** by changing a few settings. Although a saltstack-based ubuntu k8s installation exists, it may be tedious and hard for someone who knows little about saltstack but wants to build a truly distributed k8s cluster. This approach is inspired by [k8s deploy on a single node](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/ubuntu_single_node.md).
+This document describes how to deploy kubernetes on multiple ubuntu nodes (1 master node and 3 minion nodes); anyone using this approach can scale to **any number of minion nodes** by changing a few settings. Although a saltstack-based ubuntu k8s installation exists, it may be tedious and hard for someone who knows little about saltstack but wants to build a truly distributed k8s cluster. This approach is inspired by [k8s deploy on a single node](http://docs.k8s.io/getting-started-guides/ubuntu_single_node.md).

[Cloud team from ZJU](https://github.com/ZJU-SEL) will keep updating this work.

2 changes: 1 addition & 1 deletion docs/getting-started-guides/ubuntu_single_node.md
@@ -7,7 +7,7 @@ This document describes how to get started to run kubernetes services on a single ubuntu node
3. Customizing ubuntu launch

### 1. Make kubernetes and etcd binaries
-Either build or download the latest [kubernetes binaries](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/binary_release.md)
+Either build or download the latest [kubernetes binaries](http://docs.k8s.io/getting-started-guides/binary_release.md)

Copy the kube binaries into `/opt/bin` or a path of your choice
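
A minimal version of that copy step, assuming the binaries were built or unpacked into the current directory (paths are illustrative):

```
sudo mkdir -p /opt/bin
sudo cp kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubectl etcd /opt/bin/
```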
