116 changes: 38 additions & 78 deletions content/docs/latest/try/getting-started-k8s.md
This guide walks you through getting a SPIRE Server and SPIRE Agent running in a Kubernetes cluster.

In this introduction to SPIRE on Kubernetes you will learn how to:

* Deploy SPIRE and SPIFFE components with Helm
* Configure a registration entry for a workload
* Fetch an X509-SVID over the SPIFFE Workload API
* Find resources for more complex installations

The steps in this guide have been tested with these versions:

* Kubernetes: 1.26
* Helm chart: 0.10.1
* SPIRE (app version): 1.7.0

{{< info >}}
If you are using Minikube to run this tutorial you should specify some special flags as described [here](#considerations-when-using-minikube).
If you are using Kubeadm to run this tutorial, a default storage class and an associated volume provisioner are required.
{{< /info >}}

# Obtain the Required Files

This guide requires a number of **.yaml** files. To obtain them, clone **https://github.com/spiffe/spire-tutorials** and use the files in the **spire-tutorials/k8s/quickstart-helm** subdirectory. Remember to run all kubectl and helm commands in the directory in which those files reside.

Set up a Kubernetes environment on a provider of your choice or use Minikube. Link the Kubernetes environment to the kubectl command.

# Install with Helm

Add the SPIFFE Helm repository and install SPIRE into the **spire** namespace:

```bash
$ helm repo add spiffe https://spiffe.github.io/helm-charts/
$ helm repo update
$ helm -n spire install spire spiffe/spire -f values.yaml --create-namespace
```
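To confirm the release installed, you can run a quick scripted check. This is a sketch, not part of the chart itself: it assumes `helm` is on your PATH and relies on the standard `helm list` JSON output; it prints "release not found" if helm or the release is unavailable.

```bash
# List the release installed above and pull out its chart version.
helm -n spire list --filter '^spire$' -o json 2>/dev/null \
  | grep -o '"chart":"[^"]*"' \
  || echo "release not found"
```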

# Verify

## Verify Namespace

Run the following command and verify that *spire* is listed in the output:

```bash
$ kubectl get namespaces
```
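If you prefer a scripted version of the same check, the sketch below reports whether the **spire** namespace exists instead of requiring you to scan the list by eye. It assumes `kubectl` is configured against your cluster.

```bash
# Report whether the "spire" namespace exists in the cluster.
if kubectl get namespaces --no-headers 2>/dev/null | awk '{print $1}' | grep -qx spire; then
  echo "spire namespace present"
else
  echo "spire namespace not found"
fi
```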

## Verify Statefulset

The Helm install creates a statefulset called **spire-server** in the **spire** namespace and starts a **spire-server** pod, as shown in the output of the following commands:

```bash
$ kubectl get statefulset --namespace spire

$ kubectl get services --namespace spire

NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
spire-server   NodePort   10.107.205.29   <none>        8081:30337/TCP   88m
```
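In the `PORT(S)` column above, `8081:30337/TCP` means the server's service port 8081 is exposed on node port 30337. The sketch below parses that sample field apart; the value is copied from the example output, not read live from a cluster.

```bash
# Split the PORT(S) field of the sample row into service port and node port.
ports='8081:30337/TCP'
service_port=$(echo "$ports" | cut -d: -f1)
node_port=$(echo "$ports" | cut -d: -f2 | cut -d/ -f1)
echo "service port: ${service_port}"   # prints "service port: 8081"
echo "node port: ${node_port}"         # prints "node port: 30337"
```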

## Verify Agent

The Helm install also creates a daemonset called **spire-agent** in the **spire** namespace and starts a **spire-agent** pod alongside **spire-server**, as shown in the output of the following commands:

```bash
$ kubectl get daemonset --namespace spire

$ kubectl get pods --namespace spire

NAME                READY   STATUS    RESTARTS   AGE
spire-agent-88cpl   1/1     Running   0          6m45s
spire-server-0      1/1     Running   0          103m
```

Because the agent runs as a daemonset, you'll see as many **spire-agent** pods as you have nodes.
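That expectation can be checked mechanically. The sketch below assumes `kubectl` is configured for your cluster and that the chart labels agent pods with `app.kubernetes.io/name=agent`; the label selector is an assumption about the chart, so adjust it if the count comes back zero.

```bash
# Compare the number of agent pods to the number of nodes; for a healthy
# daemonset the two counts should match.
agents=$(kubectl get pods -n spire -l app.kubernetes.io/name=agent --no-headers 2>/dev/null | wc -l | tr -d ' ')
nodes=$(kubectl get nodes --no-headers 2>/dev/null | wc -l | tr -d ' ')
echo "agent pods: ${agents}, nodes: ${nodes}"
```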

# Register Workloads

In order for SPIRE to perform workload attestation, in which the agent identifies a workload to the server, you must register the workload with the server. The registration entry tells SPIRE how to identify the workload and which SPIFFE ID to give it.

1. Create a new registration entry for the node, specifying the SPIFFE ID to allocate to the node:
> **Note:** Change `-selector k8s_sat:cluster:demo-cluster` to match your cluster's name.

```shell
$ kubectl exec -n spire spire-server-0 -- \
    /opt/spire/bin/spire-server entry create \
    -spiffeID spiffe://example.org/ns/spire/sa/spire-agent \
    -selector k8s_sat:cluster:demo-cluster \
    -selector k8s_sat:agent_ns:spire \
    -selector k8s_sat:agent_sa:spire-agent \
    -node
```
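For orientation, a SPIFFE ID is just a URI built from a trust domain and a path. The sketch below composes the node entry's ID from its parts, using illustrative values that match the upstream quickstart (trust domain `example.org`, agent namespace `spire`, service account `spire-agent`).

```bash
# Compose the node's SPIFFE ID from trust domain, namespace, and service account.
TRUST_DOMAIN="example.org"
AGENT_NS="spire"
AGENT_SA="spire-agent"
SPIFFE_ID="spiffe://${TRUST_DOMAIN}/ns/${AGENT_NS}/sa/${AGENT_SA}"
echo "${SPIFFE_ID}"   # prints "spiffe://example.org/ns/spire/sa/spire-agent"
```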

In this section, you configure a workload container to access SPIRE. Specifically, you are configuring the workload container to access the Workload API UNIX domain socket.

The **client-deployment.yaml** file configures a no-op container using the **spire-k8s** docker image used for the server and agent. Examine the `volumeMounts` and `volumes` configuration stanzas to see how the UNIX domain socket `spire-agent.sock` is bound in.

You can test that the agent socket is accessible from an application container by issuing the following commands:


```bash
$ kubectl exec -it $(kubectl get pods -o=jsonpath='{.items[0].metadata.name}' \
-l app=client) -- /opt/spire/bin/spire-agent api fetch -socketPath /run/spire/agent-sockets/spire-agent.sock
```
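A slightly more defensive variant, sketched below, checks that the Workload API socket exists before calling the agent, so a missing or unhealthy daemonset fails with a clear message. It is meant to run inside the client container; the socket path matches the one used in the fetch command above.

```bash
# Check for the Workload API socket before invoking the agent.
SOCK=/run/spire/agent-sockets/spire-agent.sock
if [ -S "${SOCK}" ]; then
  echo "agent socket found at ${SOCK}"
else
  echo "agent socket missing; is the spire-agent daemonset running?"
fi
```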

If the agent is not running, you'll see an error message such as "no such file or directory" or "connection refused".
# Tear Down

Run the following commands to delete all deployments and configurations for the agent, server, and namespace:

```bash
$ helm -n spire uninstall spire
$ kubectl delete namespace spire
```
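To confirm the teardown completed, a final check can be scripted. This is a sketch that assumes `kubectl` is still configured for the cluster; note that namespace deletion is asynchronous, so it may report the namespace as present for a short while.

```bash
# After the uninstall, the spire namespace should no longer exist.
if kubectl get namespace spire --no-headers >/dev/null 2>&1; then
  echo "spire namespace still present (deletion may take a moment)"
else
  echo "spire namespace deleted"
fi
```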
# Additional Information

## SPIRE Server Information

For the server to function, it is necessary for it to provide agents with certificates that they can use to verify the identity of the server when establishing a connection.

In a deployment such as this, where the agent and server share the same cluster, SPIRE can be configured to automatically generate these certificates on a periodic basis and update a configmap with contents of the certificate. To do that, the server needs the ability to get and patch a configmap object in the `spire` namespace.

To allow the server to read and write to this configmap, a ClusterRole is created that confers the appropriate entitlements through Kubernetes RBAC, and a ClusterRoleBinding associates it with the service account created by the Helm chart.

The server is configured through a Kubernetes configmap that specifies a number of important directories, notably **/run/spire/data** and **/run/spire/config**. These volumes are bound in when the server container is deployed. The Helm chart deploys this configmap, together with the server statefulset and service, for you.

## SPIRE Agent Information

To allow the agent read access to the kubelet API for workload attestation, the Helm chart creates a service account and a ClusterRole that confers the appropriate Kubernetes RBAC entitlements, and a ClusterRoleBinding that associates the two.

The agent is deployed as a daemonset that runs one agent instance on each Kubernetes worker node.

# Considerations When Using Minikube
