15 changes: 15 additions & 0 deletions content/patterns/layered-zero-trust/_index.adoc
@@ -68,6 +68,11 @@ The solution integrates many Red{nbsp}Hat components to offer:
* Certificate management for secure communications.
* External secret management integration.

It also optionally integrates hosting of workloads using confidential containers:

* Per-container confidential workload management using OpenShift sandboxed containers, built on Kata Containers.
* Remote attestation enforced with a key broker service via Red Hat build of Trustee.

[id="architecture"]
=== Architecture

@@ -94,6 +99,15 @@ The pattern consists of the following key components:
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.14[{rh-rhacm-first}]
** Provides a management control plane in multi-cluster scenarios.

Optionally:

* link:https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.11/html/deploying_confidential_containers/index[Red{nbsp}Hat OpenShift sandboxed containers]
** Provides the ability to create confidential containers.

* link:https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.11/html/deploying_red_hat_build_of_trustee/index[Red{nbsp}Hat build of Trustee]
** Acts as a key broker service and measures the security of confidential containers.


[id="sidecar-pattern"]
==== Sidecar pattern

@@ -127,3 +141,4 @@ The following technologies are used in this solution:
* *Compliance Operator*: Provides the ability to scan and remediate cluster hardening based on profiles.
* *QTodo application*: Serves as a sample Quarkus-based application to show zero trust principles.
* *PostgreSQL database*: Provides the backend database for the demonstration application.

215 changes: 215 additions & 0 deletions content/patterns/layered-zero-trust/lzt-confidential-containers.adoc
@@ -0,0 +1,215 @@
---
title: Confidential containers variant
weight: 10
aliases: /layered-zero-trust/lzt-confidential-containers/
---


:toc:
:imagesdir: /images
:_mod-docs-content-type: ASSEMBLY
include::modules/comm-attributes.adoc[]

[id="lzt-about-coco"]

Confidential computing is a technology that protects data in use.
Red{nbsp}Hat's link:https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.11/html/deploying_confidential_containers/index[OpenShift sandboxed containers Confidential Containers feature] (CoCo) uses Trusted Execution Environments (TEEs): specialized CPU features from AMD, Intel, and others that create isolated, encrypted memory spaces (data in use) with cryptographic proof of integrity.
These hardware guarantees mean workloads can prove they have not been tampered with, and secrets can be protected even from infrastructure administrators.

Within the layered-zero-trust pattern, confidential containers integrate with zero trust workload identity management.
This provides defense in depth: cryptographic identity verification plus hardware-rooted trust.

For the layered-zero-trust pattern, confidential containers are an optional configuration because they impose specific hardware constraints.

[WARNING]
====
Using confidential containers restricts the platform for this pattern to Microsoft Azure.
It also requires access and quota for link:https://learn.microsoft.com/en-us/azure/confidential-computing/virtual-machine-options[specific Azure instance types] in the region where the cluster is deployed.

Check the availability of appropriate confidential virtual machines before testing.
====
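
One way to pre-check this, assuming the Azure CLI is installed and using `eastus` only as a placeholder region, is to list the confidential VM sizes that Azure reports as available there (confidential families commonly start with `DC` or `EC`, but confirm against the linked Azure documentation):

[source,terminal]
----
$ az vm list-skus --location eastus --size Standard_DC --all --output table
$ az vm list-skus --location eastus --size Standard_EC --all --output table
----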


[id="lzt-deploying-coco"]
=
Confidential containers is intentionally not the default deployment option.
This introduces a small number of extra steps to deploy confidential containers.


== Azure cluster setup
Confidential containers on Azure use link:https://www.redhat.com/ja/blog/red-hat-openshift-sandboxed-containers-peer-pods-solution-overview[peer-pods].
This does not impose requirements on the base cluster type beyond sufficient capacity.
This pattern has been tested with both Azure Red{nbsp}Hat OpenShift clusters and OpenShift clusters installed with the OpenShift installer (`openshift-install`).
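
As a quick sanity check before continuing, you can confirm that the cluster reports Azure as its platform. The `Infrastructure` resource used here is standard OpenShift and not specific to this pattern:

[source,terminal]
----
$ oc get infrastructure cluster -o jsonpath='{.status.platformStatus.type}'
Azure
----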

[NOTE]
====
In order to provision peer-pods, the OpenShift cluster must be able to communicate with the Azure APIs.
The pattern uses the same Azure service account that was used during cluster provisioning to create:

* VM Templates
* The peer-pod VMs
* A link:https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.10/html/deploying_confidential_containers/deploying-cc_azure-cc#configuring-outbound-connections_azure-cc[NAT gateway] to allow outbound traffic from the peer-pods
====
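
After peer-pods have been provisioned, you can verify these resources from the Azure side. The following is only a sketch; it assumes the Azure CLI is installed and that `<cluster_resource_group>` is the resource group containing your cluster resources:

[source,terminal]
----
$ az vm list --resource-group <cluster_resource_group> --output table
$ az network nat gateway list --resource-group <cluster_resource_group> --output table
----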


== Repository setup

Prerequisite: First read the repository setup link:../lzt-getting-started#lzt-repository-setup[here].

. Confirm that you have checked out your branch, for example `my-branch`:
+
[source,terminal]
----
$ git status
On branch my-branch
Your branch is up to date with 'origin/my-branch'.

nothing to commit, working tree clean
----

. Change the `clusterGroupName` to `coco-dev` in `values-global.yaml`, for example:
+
[source,yaml]
----
...
main:
clusterGroupName: coco-dev
...
----
+
. Commit and push the change to your branch:
+
[source,terminal]
----
$ git add values-global.yaml
$ git commit -m 'Change to CoCo cluster group'
$ git push origin my-branch
----
+
. Your repository is now ready.
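
As a minimal check that GitOps will pick up the change, confirm the new `clusterGroupName` is present on the pushed branch (assuming the branch is named `my-branch`):

[source,terminal]
----
$ git show origin/my-branch:values-global.yaml | grep clusterGroupName
  clusterGroupName: coco-dev
----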

== Secrets setup

In order to secure confidential containers, the key broker service, Red{nbsp}Hat build of Trustee, requires extra secrets to be configured.
Most credentials are automatically generated on the cluster where Trustee is deployed. However, the administrative credentials for Trustee must be generated off cluster.

[NOTE]
====
It's highly recommended to read through the full instructions on link:https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.11/html/deploying_red_hat_build_of_trustee/deploying-trustee_azure-trustee[configuring and deploying Red{nbsp}Hat build of Trustee].
Trustee's role is security sensitive.
====

. Create a local copy of the secret values file that can safely include
credentials. Run the following command:
+
[source,terminal]
----
$ cp values-secret.yaml.template ~/values-secret-layered-zero-trust.yaml
----
+
. Uncomment the required additional secrets for the `coco-dev` chart. Each required secret has the comment `# Required for coco-dev clusterGroup` above it; a quick way to locate these markers is shown after this procedure.
+
[source,terminal]
----
$ vim ~/values-secret-layered-zero-trust.yaml
----
+
. Generate the link:https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.11/html/deploying_red_hat_build_of_trustee/deploying-trustee_azure-trustee#creating-trustee-secret_azure-trustee[admin API authentication secret].
+
[source,terminal]
----
$ cd ~/
$ openssl genpkey -algorithm ed25519 > kbsPrivateKey
$ openssl pkey -in kbsPrivateKey -pubout -out kbsPublicKey
----
+
[NOTE]
====
The file name of the `kbsPublicKey` specified here is referenced in the `values-secret.yaml.template` file.
Using a different path requires corresponding changes to `~/values-secret-layered-zero-trust.yaml`.
====
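
Before deploying, a minimal way to double-check this setup is to locate the `coco-dev` markers in your secrets file and confirm that the generated key pair is valid Ed25519 material. The paths below assume the defaults used above:

[source,terminal]
----
$ grep -n -A 3 'Required for coco-dev clusterGroup' ~/values-secret-layered-zero-trust.yaml
$ openssl pkey -pubin -in ~/kbsPublicKey -noout -text
----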

[id="deploying-coco-patternsh-file"]
== Deploying the Confidential Containers variant of the layered-zero-trust pattern

The deployment of the confidential containers variant mirrors that of the default version:

. Log in to your {ocp} cluster:

.. By using the `oc` CLI:
* Get an API token by visiting `pass:[https://oauth-openshift.apps.<your_cluster>.<domain>/oauth/token/request]`.
* Log in with the retrieved token:
+
[source,terminal]
----
$ oc login --token=<retrieved_token> --server=https://api.<your_cluster>.<domain>:6443
----

.. By using KUBECONFIG:
+
[source,terminal]
----
$ export KUBECONFIG=~/<path_to_kubeconfig>
----

. Run the pattern deployment script:
+
[source,terminal]
----
$ ./pattern.sh make install
----

[NOTE]
====
The deployment of the OpenShift sandboxed containers Operator takes time and may result in time-outs in the installation script.
This is because the `./pattern.sh make install` script waits for the Argo CD applications to become healthy.
====
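
If the script does time out, you can typically re-run it after the operators finish installing. The following sketch shows one way to watch operator progress from the CLI; the namespace is the one documented for the OpenShift sandboxed containers Operator, so adjust it if your deployment differs:

[source,terminal]
----
$ oc get subscriptions,csv -n openshift-sandboxed-containers-operator
----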


[id="lzt-verify-deployment-coco"]
=== Verify the deployment

The Layered Zero-Trust pattern provisions every component and manages them through {ocp} GitOps. After you deploy the pattern, verify that all components are running correctly.

The Layered Zero-Trust pattern installs the following two {ocp} GitOps instances on your Hub cluster. You can view these instances in the {ocp} web console by using the **Application Selector** (the icon with nine small squares) in the top navigation bar.

. **Cluster Argo CD**: Deploys an *App-of-Apps* application named `layered-zero-trust-coco-dev`. This application installs the pattern's components.
. **Coco-dev Argo CD**: Manages the Cluster Argo CD instance and the individual components that belong to the pattern on the hub {ocp} instance.

If every Argo CD application reports a **Healthy** status, the pattern has been deployed successfully.
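
As an alternative to the web console, you can spot-check all applications from the CLI. The `Application` custom resource shown here is standard Argo CD rather than anything specific to the pattern:

[source,terminal]
----
$ oc get applications.argoproj.io -A
----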


=== Likely deployment failures for confidential container workloads

If you encounter an issue with the confidential containers variant of the layered-zero-trust pattern, it is highly recommended to first test deploying the default `hub` `clusterGroup`.
Below are some triage steps:


. Run a health check of the Argo CD applications:
+
[source,terminal]
----
$ ./pattern.sh make argo-healthcheck
----

. If all the applications except `hello-coco` are healthy, the operators have deployed but the peer-pods have not yet been created.

. Check whether the pod is visible in the namespace:
+
[source,terminal]
----
$ oc get pods -n zero-trust-workload-identity-manager spire-agent-cc -o yaml
----

. If the pod manifest is not visible, the sandboxed containers operator has not yet finished deploying.

. If the pod is visible, check for the existence of, and events for, the peer pods:
+
[source,terminal]
----
$ oc get peerpods -A -o yaml
----

. The most likely failure here is insufficient Azure quota; a quick way to check is sketched below.
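
The following is a sketch of how you might check that quota with the Azure CLI; the region is a placeholder, and the confidential VM families to look for depend on the link:https://learn.microsoft.com/en-us/azure/confidential-computing/virtual-machine-options[instance types] referenced earlier:

[source,terminal]
----
$ az vm list-usage --location eastus --output table | grep -i -E 'DC|EC'
----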