Demos
- 1: CCv0 Operator Demo
- 2: SSH Demo
1 - CCv0 Operator Demo
Warning
TODO: This was copied with only minor adaptations from https://github.com/confidential-containers/confidential-containers/tree/main/demos/operator-demo. It has not been verified that the instructions still work, and the content needs a rework.
Demo Video
Demo Environment setup
Kubernetes cluster
Set up a two-node Kubernetes cluster using Ubuntu 20.04. You can use your preferred Kubernetes setup tool. Here is an example using kcli.
Download the Ubuntu 20.04 image, if it is not already present, by running the following command:
kcli download image ubuntu2004
Install the cluster:
kcli create kube generic -P image=ubuntu2004 -P workers=1 testk8s
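Once the cluster is created, point kubectl at the generated kubeconfig. A minimal sketch, assuming kcli's default cluster directory layout:
# Assumption: kcli writes the kubeconfig under ~/.kcli/clusters/<cluster-name>/auth/.
export KUBECONFIG=$HOME/.kcli/clusters/testk8s/auth/kubeconfig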
Replace containerd
Replace containerd on the worker node by building a new containerd from the following branch: https://github.com/confidential-containers/containerd/tree/ali-CCv0.
Modify the systemd configuration to use the new binary, then restart containerd and kubelet, as sketched below.
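A minimal sketch of the build-and-replace step on the worker node; the install path and service names are assumptions based on a typical Ubuntu setup:
# Assumptions: Go toolchain and build dependencies are present; containerd lives at /usr/bin/containerd.
git clone -b ali-CCv0 https://github.com/confidential-containers/containerd.git
cd containerd
make                      # builds the binaries into ./bin
sudo systemctl stop containerd
sudo install -m 755 bin/containerd /usr/bin/containerd
sudo systemctl restart containerd kubelet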
Verify that the cluster nodes are all up:
kubectl get nodes
Sample output from the demo environment:
$ kubectl get nodes
NAME                  STATUS   ROLES                  AGE   VERSION
cck8s-demo-master-0   Ready    control-plane,master   25d   v1.22.3
cck8s-demo-worker-0   Ready    worker                 25d   v1.22.3
Operator Setup
kubectl apply -f https://raw.githubusercontent.com/confidential-containers/operator/ccv0-demo/deploy/deploy.yaml
The operator installs everything under the confidential-containers-system namespace.
Verify that the operator is running by running the following command:
kubectl get pods -n confidential-containers-system
Sample output from the demo environment:
$ kubectl get pods -n confidential-containers-system
NAME                                              READY   STATUS    RESTARTS   AGE
cc-operator-controller-manager-7f8d6dd988-t9zdm   2/2     Running   0          13s
Confidential Containers Runtime setup
Creating a CcRuntime object sets up the container runtime. The default payload image installs the CCv0 demo build of the kata-containers runtime.
cat << EOF | kubectl create -f -
apiVersion: confidentialcontainers.org/v1beta1
kind: CcRuntime
metadata:
  name: ccruntime-sample
  namespace: confidential-containers-system
spec:
  # Add fields here
  runtimeName: kata
  config:
    installType: bundle
    payloadImage: quay.io/confidential-containers/runtime-payload:ccv0-ssh-demo
EOF
This creates an installation daemonset targeting the worker nodes. You can verify its status under the confidential-containers-system namespace.
$ kubectl get pods -n confidential-containers-system
NAME                                              READY   STATUS    RESTARTS   AGE
cc-operator-controller-manager-7f8d6dd988-t9zdm   2/2     Running   0          82s
cc-operator-daemon-install-p9ntc                  1/1     Running   0          45s
On successful installation, you'll see the following runtimeClasses being set up:
$ kubectl get runtimeclasses.node.k8s.io
NAME        HANDLER     AGE
kata        kata        92s
kata-cc     kata-cc     92s
kata-qemu   kata-qemu   92s
The kata-cc runtime class uses CCv0-specific configurations.
Now you can deploy pods targeting these runtime classes. The SSH demo can be used as a compatible workload, as sketched below.
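A minimal sketch of such a pod spec, assuming the pre-provided SSH demo image from section 2 (decrypting it additionally requires the key-broker setup described there); the essential field is runtimeClassName:
cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: ccv0-demo-pod
spec:
  runtimeClassName: kata-cc   # run this pod with the CCv0 kata runtime
  containers:
  - name: demo
    image: docker.io/katadocker/ccv0-ssh
EOF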
2 - SSH Demo
Warning
TODO: This was copied with only minor adaptations from https://github.com/confidential-containers/confidential-containers/tree/main/demos/ssh-demo. It has not been verified that the instructions still work, and the content needs a rework.
To demonstrate confidential containers capabilities, we run a pod with SSH public-key authentication.
Unlike executing and logging in to a shell on a pod (e.g. via kubectl exec), an SSH connection is cryptographically secured and requires a private key; it cannot be established by unauthorized parties, such as someone who controls the node. The container image contains the SSH host key, which could be used to impersonate the host we will connect to. Because this container image is encrypted, because the key to decrypt it is only provided in measurable ways (e.g. attestation or an encrypted initrd), and because the pod/guest memory is protected, even someone who controls the node cannot steal this key.
Using a pre-provided container image
If you would rather build the image with your own keys, skip to Building the container image. The operator can be used to set up a compatible runtime.
A demo image is provided at docker.io/katadocker/ccv0-ssh. It is encrypted with the Attestation Agent's offline file system key broker, with aa-offline_fs_kbc-keys.json as its key file. The private key for establishing an SSH connection to this container is given in ccv0-ssh. To use it with SSH, its permissions should be adjusted: chmod 600 ccv0-ssh. The host key fingerprint is SHA256:wK7uOpqpYQczcgV00fGCh+X97sJL3f6G1Ku4rvlwtR0.
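A minimal sketch of preparing the key, assuming it can be fetched from the ssh-demo directory of the repository linked above (the raw URL is an assumption):
# Assumption: the private key is published in the demos/ssh-demo directory of the repository.
curl -LO https://raw.githubusercontent.com/confidential-containers/confidential-containers/main/demos/ssh-demo/ccv0-ssh
chmod 600 ccv0-ssh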
All keys shown here are for demonstration purposes only. To achieve actual confidentiality, use a hardware trusted execution environment and do not reuse these keys.
Continue at Connecting to the guest.
Building the container image
The image built should be encrypted. To receive a decryption key at run time, the Confidential Containers project utilizes the Attestation Agent.
Generating SSH keys
ssh-keygen -t ed25519 -f ccv0-ssh -P "" -C ""
generates an SSH private key ccv0-ssh and the corresponding public key ccv0-ssh.pub.
Building the image
The provided Dockerfile expects ccv0-ssh.pub to exist.
Using Docker, you can build with:
docker build -t ccv0-ssh .
Alternatively, Buildah can be used (buildah build, or formerly buildah bud).
The SSH host key fingerprint is displayed during the build.
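The built image must then be encrypted before being pushed. A minimal sketch using skopeo's ocicrypt key-provider support; the provider arguments, the keyprovider configuration, and the target registry are assumptions, and the actual key-broker setup is described in the Attestation Agent documentation:
# Assumptions: an ocicrypt keyprovider for the Attestation Agent is configured
# (e.g. via OCICRYPT_KEYPROVIDER_CONFIG), and <user> is your registry namespace.
skopeo copy --encryption-key provider:attestation-agent \
    docker-daemon:ccv0-ssh:latest docker://quay.io/<user>/ccv0-ssh:encrypted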
Connecting to the guest
A Kubernetes YAML file specifying the kata runtime is included. If you use a self-built image, you should replace the image specification with the image you built. The default tag points to an amd64 image; an s390x tag is also available.
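A minimal sketch of deploying it, assuming the file is named ccv0-ssh.yaml as in the demo repository (the file name is an assumption):
kubectl apply -f ccv0-ssh.yaml
kubectl get service ccv0-ssh   # wait until the service has a cluster IP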
With common CNI setups, and with the service running, you can connect via SSH from the same host with:
ssh -i ccv0-ssh root@$(kubectl get service ccv0-ssh -o jsonpath="{.spec.clusterIP}")
You will be asked to confirm the host key fingerprint. It should match the fingerprint specified above (for the pre-provided image) or the one displayed during the Docker build (for a self-built image).
crictl-compatible sandbox and container configurations are also included; they forward the pod's SSH port (22) to port 2222 on the host (use the -p flag with SSH).
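A minimal sketch of running them with crictl, assuming hypothetical file names sandbox.json and container.json for the included configurations:
# Assumption: sandbox.json and container.json are the included crictl configurations.
POD_ID=$(sudo crictl runp sandbox.json)
CONTAINER_ID=$(sudo crictl create "$POD_ID" container.json sandbox.json)
sudo crictl start "$CONTAINER_ID"
ssh -i ccv0-ssh -p 2222 root@localhost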