The documentation for each sub-project of the Confidential Containers project is available in the respective tabs; check out the links below.
These slides provide a high-level overview of all the subprojects in the Confidential Containers project.
Set up a two-node Kubernetes cluster using Ubuntu 20.04. You can use your preferred Kubernetes setup tool. Here is an example using kcli.
Download the Ubuntu 20.04 image, if not already present, by running the following command:
kcli download image ubuntu2004
Install the cluster:
kcli create kube generic -P image=ubuntu2004 -P workers=1 testk8s
Replace containerd on the worker node by building a new containerd from the following branch: https://github.com/confidential-containers/containerd/tree/ali-CCv0.
Modify the systemd configuration to use the new binary, then restart containerd and kubelet.
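A rough sketch of that step, assuming the new binary was built to bin/containerd and the existing containerd unit points at /usr/local/bin/containerd (adjust the paths to your distribution and unit file):

sudo systemctl stop containerd
sudo cp bin/containerd /usr/local/bin/containerd   # assumed install path
sudo systemctl daemon-reload
sudo systemctl restart containerd
sudo systemctl restart kubelet

After the services restart, you can confirm the nodes are still Ready: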
kubectl get nodes
Sample output from the demo environment:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cck8s-demo-master-0 Ready control-plane,master 25d v1.22.3
cck8s-demo-worker-0 Ready worker 25d v1.22.3
kubectl apply -f https://raw.githubusercontent.com/confidential-containers/operator/ccv0-demo/deploy/deploy.yaml
The operator installs everything under the confidential-containers-system namespace.
Verify that the operator is running by running the following command:
kubectl get pods -n confidential-containers-system
Sample output from the demo environment:
$ kubectl get pods -n confidential-containers-system
NAME READY STATUS RESTARTS AGE
cc-operator-controller-manager-7f8d6dd988-t9zdm 2/2 Running 0 13s
Creating a CcRuntime object sets up the container runtime. The default payload image sets up the CCv0 demo image of the kata-containers runtime.
cat << EOF | kubectl create -f -
apiVersion: confidentialcontainers.org/v1beta1
kind: CcRuntime
metadata:
  name: ccruntime-sample
  namespace: confidential-containers-system
spec:
  # Add fields here
  runtimeName: kata
  config:
    installType: bundle
    payloadImage: quay.io/confidential-containers/runtime-payload:ccv0-ssh-demo
EOF
This will create an install daemonset targeting the worker nodes. You can verify the status under the confidential-containers-system namespace.
$ kubectl get pods -n confidential-containers-system
NAME READY STATUS RESTARTS AGE
cc-operator-controller-manager-7f8d6dd988-t9zdm 2/2 Running 0 82s
cc-operator-daemon-install-p9ntc 1/1 Running 0 45s
On successful installation, you’ll see the following runtimeClasses being set up:
$ kubectl get runtimeclasses.node.k8s.io
NAME HANDLER AGE
kata kata 92s
kata-cc kata-cc 92s
kata-qemu kata-qemu 92s
The kata-cc runtime class uses CCv0-specific configurations.
Now you can deploy pods targeting the specific runtime classes. The SSH demo can be used as a compatible workload.
To demonstrate confidential containers capabilities, we run a pod with SSH public key authentication.
Compared to the execution of and login to a shell on a pod, an SSH connection is cryptographically secured and requires a private key. It cannot be established by unauthorized parties, such as someone who controls the node. The container image contains the SSH host key that can be used for impersonating the host we will connect to. Because this container image is encrypted, and the key to decrypting this image is only provided in measurable ways (e.g. attestation or encrypted initrd), and because the pod/guest memory is protected, even someone who controls the node cannot steal this key.
If you would rather build the image with your own keys, skip to Building the container image. The operator can be used to set up a compatible runtime.
A demo image is provided at docker.io/katadocker/ccv0-ssh. It is encrypted with the Attestation Agent’s offline file system key broker and aa-offline_fs_kbc-keys.json as its key file. The private key for establishing an SSH connection to this container is given in ccv0-ssh. To use it with SSH, its permissions should be adjusted: chmod 600 ccv0-ssh. The host key fingerprint is SHA256:wK7uOpqpYQczcgV00fGCh+X97sJL3f6G1Ku4rvlwtR0.
All keys shown here are for demonstration purposes. To achieve actually confidential containers, use a hardware trusted execution environment and do not reuse these keys.
Continue at Connecting to the guest.
The image built should be encrypted. To receive a decryption key at run time, the Confidential Containers project utilizes the Attestation Agent.
ssh-keygen -t ed25519 -f ccv0-ssh -P "" -C ""
generates an SSH key ccv0-ssh and the corresponding public key ccv0-ssh.pub. The provided Dockerfile expects ccv0-ssh.pub to exist.
Using Docker, you can build with
docker build -t ccv0-ssh .
Alternatively, Buildah can be used (buildah build, or formerly buildah bud).
The SSH host key fingerprint is displayed during the build.
A Kubernetes YAML file specifying the kata runtime is included. If you use a self-built image, you should replace the image specification with the image you built. The default tag points to an amd64 image; an s390x tag is also available.
With common CNI setups, on the same host, with the service running, you can connect via SSH with
ssh -i ccv0-ssh root@$(kubectl get service ccv0-ssh -o jsonpath="{.spec.clusterIP}")
You will be prompted about whether the host key fingerprint is correct. This fingerprint should match the one specified above/displayed in the Docker build.
crictl-compatible sandbox and container configurations are also included, which forward the pod SSH port (22) to 2222 on the host (use the -p flag in SSH).
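For the crictl case, a connection attempt might look like this (assuming you connect from the host that runs the sandbox):

ssh -i ccv0-ssh -p 2222 root@localhost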
Welcome to Kata Containers!
This repository is the home of the Kata Containers code for the 2.0 and newer releases.
If you want to learn about Kata Containers, visit the main Kata Containers website.
Kata Containers is an open source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs.
The code is licensed under the Apache 2.0 license. See the license file for further details.
Kata Containers currently runs on 64-bit systems supporting the following technologies:
Architecture | Virtualization technology
---|---
x86_64, amd64 | Intel VT-x, AMD SVM
aarch64 ("arm64") | ARM Hyp
ppc64le | IBM Power
s390x | IBM Z & LinuxONE SIE
The Kata Containers runtime provides a command to determine if your host system is capable of running and creating a Kata Container:
kata-runtime check
Notes:
- This command runs a number of checks, including connecting to the network to determine if a newer release of Kata Containers is available on GitHub. If you do not wish this check to run, add the --no-network-checks option.
- By default, only a brief success / failure message is printed. If more details are needed, the --verbose flag can be used to display the list of all the checks performed.
- If the command is run as the root user, additional checks are run (including checking if another incompatible hypervisor is running). When running as root, network checks are automatically disabled.
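For example, using the flags described above:

kata-runtime check --verbose
kata-runtime check --no-network-checks
sudo kata-runtime check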
See the installation documentation.
See the official documentation including:
Kata Containers uses a single configuration file which contains a number of sections for various parts of the Kata Containers system including the runtime, the agent and the hypervisor.
See the hypervisors document and the Hypervisor specific configuration details.
To learn more about the project, its community and governance, see the community repository. This is the first place to go if you wish to contribute to the project.
See the community section for ways to contact us.
Please raise an issue in this repository.
Note: If you are reporting a security issue, please follow the vulnerability reporting process.
See the developer guide.
The table below lists the core parts of the project:
Component | Type | Description
---|---|---
runtime | core | Main component run by a container manager and providing a containerd shimv2 runtime implementation.
runtime-rs | core | The Rust version of the runtime.
agent | core | Management process running inside the virtual machine / POD that sets up the container environment.
dragonball | core | An optional built-in VMM that brings an out-of-the-box Kata Containers experience with optimizations for container workloads.
documentation | documentation | Documentation common to all components (such as design and install documentation).
tests | tests | Excludes unit tests which live with the main code.
The table below lists the remaining parts of the project:
Component | Type | Description
---|---|---
packaging | infrastructure | Scripts and metadata for producing packaged binaries (components, hypervisors, kernel and rootfs).
kernel | kernel | Linux kernel used by the hypervisor to boot the guest image. Patches are stored here.
osbuilder | infrastructure | Tool to create “mini O/S” rootfs and initrd images and kernel for the hypervisor.
kata-debug | infrastructure | Utility tool to gather Kata Containers debug information from Kubernetes clusters.
agent-ctl | utility | Tool that provides low-level access for testing the agent.
kata-ctl | utility | Tool that provides advanced commands and debug facilities.
log-parser-rs | utility | Tool that aids in analyzing logs from the kata runtime.
trace-forwarder | utility | Agent tracing helper.
runk | utility | Standard OCI container runtime based on the agent.
ci | CI | Continuous Integration configuration files and scripts.
katacontainers.io | | Source for the katacontainers.io site.
Kata Containers is now available natively for most distributions.
See the metrics documentation.
See the glossary of terms related to Kata Containers.
This repository includes tools and components for confidential container images.
Attestation Agent: An agent for facilitating attestation protocols. It can be built as a library to run in a process-based enclave or as a process that runs inside a confidential VM.
image-rs: Rust implementation of the container image management library.
ocicrypt-rs: Rust implementation of the OCI image encryption library.
api-server-rest: CoCo RESTful API server.
CoCo guest components use lightweight ttRPC for internal communication to reduce the memory footprint and dependencies. However, containers also need many of these internal services, such as get_resource, get_evidence, and get_token, so they are exported through a RESTful API that CoCo containers can easily access with an HTTP client. Here are some examples; for detailed information, please refer to the REST API.
$ ./api-server-rest --features=all
Starting API server on 127.0.0.1:8006
API Server listening on http://127.0.0.1:8006
$ curl http://127.0.0.1:8006/cdh/resource/default/key/1
12345678901234567890123456xxxx
$ curl http://127.0.0.1:8006/aa/evidence\?runtime_data\=xxxx
{"svn":"1","report_data":"eHh4eA=="}
$ curl http://127.0.0.1:8006/aa/token\?token_type\=kbs
{"token":"eyJhbGciOiJFi...","tee_keypair":"-----BEGIN... "}
Attestation Agent (AA for short) is a set of service functions for the attestation procedure in Confidential Containers. It provides the service APIs that need to make requests to the Relying Party (Key Broker Service, KBS) in Confidential Containers, performs attestation, and establishes a connection between the Key Broker Client (KBC) and the corresponding KBS, so as to obtain trusted services or resources from the KBS.
Current consumers of AA include:
The main body of AA is a Rust library crate, which contains the KBC modules used to communicate with various KBS implementations. In addition, this project also provides a gRPC service application, which allows callers to call the services provided by AA through gRPC.
Import AA in the Cargo.toml of your project with the specific KBC(s):
attestation-agent = { git = "https://github.com/confidential-containers/guest-components", features = ["sample_kbc"] }
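Multiple KBCs can be selected the same way; for instance, pulling in the offline file system KBC instead might look like this (the feature name is assumed to mirror the module name listed in the table below):

attestation-agent = { git = "https://github.com/confidential-containers/guest-components", features = ["offline_fs_kbc"] }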
Note: When the version is stable, we will release AA on https://crates.io.
Here are the steps of building and running gRPC application of AA:
Build and install with default KBC modules:
git clone https://github.com/confidential-containers/guest-components
cd guest-components/attestation-agent
make && make install
or explicitly specify the KBC modules to include. Taking sample_kbc as an example:
make KBC=sample_kbc
To build and install with musl, just run:
make LIBC=musl && make install
To build and install with OpenSSL support (which is helpful on specific machines like s390x):
make OPENSSL=1 && make install
For help information, just run:
attestation-agent --help
Start AA and specify the endpoint of AA’s gRPC service:
attestation-agent --keyprovider_sock 127.0.0.1:50000 --getresource_sock 127.0.0.1:50001
Or start AA with default keyprovider address (127.0.0.1:50000) and default getresource address (127.0.0.1:50001):
attestation-agent
If you want to see the runtime log:
RUST_LOG=attestation_agent attestation-agent --keyprovider_sock 127.0.0.1:50000 --getresource_sock 127.0.0.1:50001
To build and install ttRPC Attestation Agent, just run:
make ttrpc=true && make install
ttRPC AA currently only supports Unix sockets, for example:
attestation-agent --keyprovider_sock unix:///tmp/keyprovider.sock --getresource_sock unix:///tmp/getresource.sock
AA provides a flexible KBC module mechanism to support the different KBS protocols required for communication between the KBC and KBS. If the KBC modules currently supported by AA cannot meet your requirements (e.g., you need to use a new KBS protocol), you can write a new KBC module complying with the KBC development GUIDE. Contributions of new KBC modules to this project are welcome!
List of supported KBC modules:
KBC module name | README | KBS protocol | Maintainer
---|---|---|---
sample_kbc | Null | Null | Attestation Agent Authors
offline_fs_kbc | Offline file system KBC | Null | IBM
eaa_kbc | EAA KBC | EAA protocol | Alibaba Cloud
offline_sev_kbc | Offline SEV KBC | Null | IBM
online_sev_kbc | Online SEV KBC | simple-kbs | IBM
cc_kbc | CC KBC | CoCo KBS protocol | CoCo Community
CC KBC supports different kinds of hardware TEE attesters; the currently supported attesters are:

Attester name | Info
---|---
tdx-attester | Intel TDX
sgx-attester | Intel SGX DCAP
snp-attester | AMD SEV-SNP
az-snp-vtpm-attester | Azure SEV-SNP CVM
To build CC KBC with all available attesters and install it, use:
make KBC=cc_kbc && make install
Confidential Data Hub is a service running inside the guest to provide resource-related APIs.
Build and install with default KBC modules:
git clone https://github.com/confidential-containers/guest-components
cd guest-components/confidential-data-hub
make
or explicitly specify the confidential resource provider and KMS plugin; please refer to Supported Features:
make RESOURCE_PROVIDER=kbs PROVIDER=aliyun
Confidential resource providers (flag RESOURCE_PROVIDER):

Feature name | Note
---|---
kbs | For TDX/SNP/Azure-SNP-vTPM based on KBS Attestation Protocol
sev | For SEV based on efi secret pre-attestation
Note: offline-fs is built in and does not need to be enabled manually. If no RESOURCE_PROVIDER is given, all features will be enabled.
KMS plugins (flag PROVIDER):

Feature name | Note
---|---
aliyun | Use aliyun KMS suites to unseal secrets, etc.
Note: If no PROVIDER is given, all features will be enabled.
Container Images Rust Crate
This repo contains the Rust version of the containers/ocicrypt library.
The Confidential Containers Key Broker Service (KBS) is a remote server which facilitates remote attestation. It is the reference implementation of Relying Party and Verifier in RATS role terminology.
This project relies on the Attestation-Service (AS) to verify TEE evidence.
The following TEE platforms are currently supported:
KBS has two deployment modes, which are consistent with RATS.

The name Background Check comes from the RATS architecture.
In this mode, the Client in TEE conveys Evidence to KBS, which treats it as opaque and simply forwards it to an integrated Attestation Service. The AS compares the Evidence against its appraisal policy and returns an Attestation Token (including parsed evidence claims) to KBS. The KBS then compares the Attestation Token against its own appraisal policy and returns the requested resource data to the client.
Here, the KBS corresponds to the Relying Party of RATS and the AS corresponds to the Verifier of RATS.
Build and install KBS with native integrated AS in background check mode:
make background-check-kbs
make install-kbs
The optional compile parameters that can be added are as follows:
make background-check-kbs [HTTPS_CRYPTO=?] [POLICY_ENGINE=?] [AS_TYPES=?] [COCO_AS_INTEGRATION_TYPE=?]
where:

- HTTPS_CRYPTO: can be rustls or openssl. Specifies the library KBS uses to support HTTPS. The default value is rustls.
- POLICY_ENGINE: can be opa. Specifies the resource policy engine type of KBS. If this parameter is not set, KBS will not integrate a resource policy engine.
- AS_TYPES: can be coco-as or amber-as. Specifies the Attestation Service type KBS relies on.
- COCO_AS_INTEGRATION_TYPE: can be grpc or builtin. This parameter only takes effect when AS_TYPES=coco-as. Specifies the integration mode of the CoCo Attestation Service.
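For example, an explicit invocation that selects one of each documented option might look like:

make background-check-kbs HTTPS_CRYPTO=openssl POLICY_ENGINE=opa AS_TYPES=coco-as COCO_AS_INTEGRATION_TYPE=grpc
make install-kbs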
The name Passport comes from the RATS architecture.
In this mode, the Client in TEE conveys Evidence to one KBS which is responsible for issuing the token; this KBS relies on an integrated AS to verify the Evidence against its appraisal policy. This KBS then gives back the Attestation Token, which the Client treats as opaque data. The Client can then present the Attestation Token (including parsed evidence claims) to the other KBS, which is responsible for distributing resources. This KBS then compares the Token’s payload against its appraisal policy and returns the requested resource data to the client.
Here, the KBS for issuing tokens corresponds to the Verifier of RATS and the KBS for distributing resources corresponds to the Relying Party of RATS.
Build and install the KBS for issuing tokens:
make passport-issuer-kbs [HTTPS_CRYPTO=?] [AS_TYPES=?] [COCO_AS_INTEGRATION_TYPE=?]
make install-issuer-kbs
The explanation of the optional compile parameters is the same as above.
Build and install KBS for distributing resources:
make passport-resource-kbs [HTTPS_CRYPTO=?] [POLICY_ENGINE=?]
make install-resource-kbs
The explanation of the optional compile parameters is the same as above.
We provide a quick start guide to deploy KBS locally and conduct configuration and testing on Ubuntu 22.04.
The KBS implements and supports a simple, vendor- and hardware-agnostic protocol to perform attestation.
KBS implements an HTTP-based, OpenAPI 3.1 compliant API. This API is formally described in its OpenAPI formatted specification.
The resource repository where KBS stores resource data.
A custom, JSON-formatted configuration file can be provided to configure KBS.
We provide a docker compose script for quickly deploying the KBS in Background Check mode with gRPC AS, the Reference Value Provider, and the Key Provider as local cluster services. Please refer to the Cluster Guide for a quick start.
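Assuming the compose file sits at the location described in the Cluster Guide, bringing the services up and down follows the usual docker compose workflow, for example:

docker compose up -d
docker compose down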
We provide a KBS client Rust SDK and a command-line tool.
Build the KBS container (background check mode with native AS) image:
DOCKER_BUILDKIT=1 docker build -t kbs:coco-as . -f docker/Dockerfile
Attestation Service
This repository contains the implementation of the Kata remote hypervisor, which enables creation of Kata VMs in any environment without requiring bare-metal servers or nested virtualization support.
The background and description of the components involved in ‘peer pods’ can be found in the architecture documentation.
cloud-api-adaptor implements the remote hypervisor support. Please refer to the instructions mentioned in the following doc.
Please refer to the instructions mentioned in the following doc.
This project uses the Apache 2.0 license. Contribution to this project requires the DCO 1.1 process to be followed.
Pods stuck in the ContainerCreating state

Let’s start by looking at the pods deployed in the confidential-containers-system namespace:
$ kubectl get pods -n confidential-containers-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cc-operator-controller-manager-76755f9c96-pjj92 2/2 Running 0 1h 10.244.0.14 aks-nodepool1-22620003-vmss000000 <none> <none>
cc-operator-daemon-install-79c2b 1/1 Running 0 1h 10.244.0.16 aks-nodepool1-22620003-vmss000000 <none> <none>
cc-operator-pre-install-daemon-gsggj 1/1 Running 0 1h 10.244.0.15 aks-nodepool1-22620003-vmss000000 <none> <none>
cloud-api-adaptor-daemonset-2pjbb 1/1 Running 0 1h 10.224.0.4 aks-nodepool1-22620003-vmss000000 <none> <none>
It is possible that the cloud-api-adaptor-daemonset is not deployed correctly. To see what is wrong with it, run the following command and look at the events to get insights:
$ kubectl -n confidential-containers-system describe ds cloud-api-adaptor-daemonset
Name: cloud-api-adaptor-daemonset
Selector: app=cloud-api-adaptor
Node-Selector: node-role.kubernetes.io/worker=
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 8m13s daemonset-controller Created pod: cloud-api-adaptor-daemonset-2pjbb
But if the cloud-api-adaptor-daemonset is up and in the Running state, as shown above, then look at the pod’s logs for more insights:
kubectl -n confidential-containers-system logs daemonset/cloud-api-adaptor-daemonset
Note: This is a single-node cluster, so there is only one pod named cloud-api-adaptor-daemonset-*. If you are running on a multi-node cluster, find the node where your workload fails to come up and look only at the logs of the corresponding CAA pod.
If the problem hints that something is wrong with the configuration, then look at the ConfigMaps or Secrets needed to run CAA:
$ kubectl -n confidential-containers-system get cm
NAME DATA AGE
cc-operator-manager-config 1 1h
kube-root-ca.crt 1 1h
peer-pods-cm 7 1h
$ kubectl -n confidential-containers-system get secret
NAME TYPE DATA AGE
peer-pods-secret Opaque 0 1h
ssh-key-secret Opaque 1 1h
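To inspect the actual configuration values, you can describe the ConfigMap (shown here for peer-pods-cm; Secrets can be described the same way, although their values are not printed):

kubectl -n confidential-containers-system describe cm peer-pods-cm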
Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS CLI access.
Install Packer by following the instructions in the following link.
Install Packer’s Amazon plugin: packer plugins install github.com/hashicorp/amazon
export AWS_REGION="us-east-1" # mandatory
export PODVM_DISTRO=rhel # mandatory
export INSTANCE_TYPE=t3.small # optional, default is t3.small
export IMAGE_NAME=peer-pod-ami # optional
export VPC_ID=vpc-01234567890abcdef # optional, otherwise, it creates and uses the default vpc in the specific region
export SUBNET_ID=subnet-01234567890abcdef # must be set if VPC_ID is set
If you want to change the volume size of the generated AMI, then set the VOLUME_SIZE environment variable. For example, if you want to set the volume size to 40 GiB, then do the following:
export VOLUME_SIZE=40
NOTE: For setting up authenticated registry support read this documentation.
cd image
make image
You can also build the custom AMI by running the packer build inside a container:
docker build -t aws \
--secret id=AWS_ACCESS_KEY_ID \
--secret id=AWS_SECRET_ACCESS_KEY \
--build-arg AWS_REGION=${AWS_REGION} \
-f Dockerfile .
If you want to use an existing VPC_ID
with public SUBNET_ID
then use the following command:
docker build -t aws \
--secret id=AWS_ACCESS_KEY_ID \
--secret id=AWS_SECRET_ACCESS_KEY \
--build-arg AWS_REGION=${AWS_REGION} \
--build-arg VPC_ID=${VPC_ID} \
--build-arg SUBNET_ID=${SUBNET_ID}\
-f Dockerfile .
If you want to build a CentOS-based custom AMI, then you’ll first need to accept the terms by visiting this link.
Once done, run the following command:
docker build -t aws \
--secret id=AWS_ACCESS_KEY_ID \
--secret id=AWS_SECRET_ACCESS_KEY \
--build-arg AWS_REGION=${AWS_REGION} \
--build-arg BINARIES_IMG=quay.io/confidential-containers/podvm-binaries-centos-amd64 \
--build-arg PODVM_DISTRO=centos \
-f Dockerfile .
Once the image creation is complete, you can use the following CLI command as well to get the AMI_ID. The command assumes that you are using the default AMI name: peer-pod-ami
aws ec2 describe-images --query "Images[*].[ImageId]" --filters "Name=name,Values=peer-pod-ami" --region ${AWS_REGION} --output text
mkdir -p qcow2-img && cd qcow2-img
curl -LO https://raw.githubusercontent.com/confidential-containers/cloud-api-adaptor/staging/podvm/hack/download-image.sh
bash download-image.sh quay.io/confidential-containers/podvm-generic-ubuntu-amd64:latest . -o podvm.qcow2
Use the qemu-img tool for conversion:

qemu-img convert -O raw podvm.qcow2 podvm.raw
curl -LO https://raw.githubusercontent.com/confidential-containers/cloud-api-adaptor/staging/aws/raw-to-ami.sh
bash raw-to-ami.sh podvm.raw <Some S3 Bucket Name> <AWS Region>
On success, the command will generate the AMI_ID, which needs to be used to set the value of PODVM_AMI_ID in the peer-pods-cm configmap.
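For convenience, you can capture the AMI ID directly into an environment variable by reusing the same query shown above (a small sketch; the variable name simply mirrors the configmap key mentioned here):

export PODVM_AMI_ID=$(aws ec2 describe-images --query "Images[*].[ImageId]" --filters "Name=name,Values=peer-pod-ami" --region ${AWS_REGION} --output text)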
Update kustomization.yaml with your AMI_ID
Deploy Cloud API Adaptor by following the install guide
This documentation will walk you through setting up CAA (a.k.a. Peer Pods) on Azure Kubernetes Service (AKS). It explains how to deploy:
Follow the instructions here to install Azure CLI.
Follow the instructions here to install kubectl.
There are several steps that require you to be logged into your Azure account:
az login
Retrieve your “Subscription ID” and set your preferred region:
export CAA_VERSION="0.8.0"
export AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv)
export AZURE_REGION="eastus"
Note: Skip this step if you already have a resource group you want to use. Please export the resource group name in the AZURE_RESOURCE_GROUP environment variable.
Create an Azure resource group by running the following command:
export AZURE_RESOURCE_GROUP="caa-rg-$(date '+%Y%m%b%d%H%M%S')"
az group create \
--name "${AZURE_RESOURCE_GROUP}" \
--location "${AZURE_REGION}"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/tags/v${CAA_VERSION}.tar.gz"
tar -xvzf "v${CAA_VERSION}.tar.gz"
cd "cloud-api-adaptor-${CAA_VERSION}"
Make changes to the following environment variables as you see fit:
export CLUSTER_NAME="caa-$(date '+%Y%m%b%d%H%M%S')"
export AKS_WORKER_USER_NAME="azuser"
export AKS_RG="${AZURE_RESOURCE_GROUP}-aks"
export SSH_KEY=~/.ssh/id_rsa.pub
Note: Optionally, deploy the worker nodes into an existing Azure Virtual Network (VNet) and Subnet by adding the following flag: --vnet-subnet-id $SUBNET_ID.
Deploy AKS with a single worker node to the same resource group you created earlier:
az aks create \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--node-resource-group "${AKS_RG}" \
--name "${CLUSTER_NAME}" \
--enable-oidc-issuer \
--enable-workload-identity \
--location "${AZURE_REGION}" \
--node-count 1 \
--node-vm-size Standard_F4s_v2 \
--nodepool-labels node.kubernetes.io/worker= \
--ssh-key-value "${SSH_KEY}" \
--admin-username "${AKS_WORKER_USER_NAME}" \
--os-sku Ubuntu
Download the kubeconfig locally to access the cluster using kubectl:
az aks get-credentials \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--name "${CLUSTER_NAME}"
Note: If you are using Calico Container Network Interface (CNI) on the Kubernetes cluster, then, configure Virtual Extensible LAN (VXLAN) encapsulation for all inter workload traffic.
CAA needs privileges to talk to the Azure API. This privilege is granted to CAA by associating a workload identity with the CAA service account. This workload identity (a.k.a. user-assigned identity) is given permissions to create VMs, fetch images, and join networks in the next step.
Note: If you use an existing AKS cluster, it might need to be configured to support workload identity and OpenID Connect (OIDC); please refer to the instructions in this guide.
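As a sketch of what that typically involves (consult the linked guide for the authoritative steps), enabling the features on an existing cluster looks roughly like:

az aks update \
    --resource-group "${AZURE_RESOURCE_GROUP}" \
    --name "${CLUSTER_NAME}" \
    --enable-oidc-issuer \
    --enable-workload-identity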
Start by creating an identity for CAA:
export AZURE_WORKLOAD_IDENTITY_NAME="caa-identity"
az identity create \
--name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--location "${AZURE_REGION}"
export USER_ASSIGNED_CLIENT_ID="$(az identity show \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
--query 'clientId' \
-otsv)"
Annotate the CAA Service Account with the workload identity’s CLIENT_ID and make the CAA daemonset use workload identity for authentication:
cat <<EOF > install/overlays/azure/workload-identity.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: cloud-api-adaptor-daemonset
namespace: confidential-containers-system
spec:
template:
metadata:
labels:
azure.workload.identity/use: "true"
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cloud-api-adaptor
namespace: confidential-containers-system
annotations:
azure.workload.identity/client-id: "$USER_ASSIGNED_CLIENT_ID"
EOF
For CAA to be able to manage VMs, assign the identity the VM and Network Contributor roles, which grant privileges to spawn VMs in $AZURE_RESOURCE_GROUP and attach to a VNet in $AKS_RG.
az role assignment create \
--role "Virtual Machine Contributor" \
--assignee "$USER_ASSIGNED_CLIENT_ID" \
--scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}"
az role assignment create \
--role "Reader" \
--assignee "$USER_ASSIGNED_CLIENT_ID" \
--scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}"
az role assignment create \
--role "Network Contributor" \
--assignee "$USER_ASSIGNED_CLIENT_ID" \
--scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AKS_RG}"
Create the federated credential for the CAA ServiceAccount using the OIDC endpoint from the AKS cluster:
export AKS_OIDC_ISSUER="$(az aks show \
--name "$CLUSTER_NAME" \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--query "oidcIssuerProfile.issuerUrl" \
-otsv)"
az identity federated-credential create \
--name caa-fedcred \
--identity-name caa-identity \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--issuer "${AKS_OIDC_ISSUER}" \
--subject system:serviceaccount:confidential-containers-system:cloud-api-adaptor \
--audience api://AzureADTokenExchange
Fetch the AKS created VNet name:
export AZURE_VNET_NAME=$(az network vnet list \
--resource-group "${AKS_RG}" \
--query "[0].name" \
--output tsv)
Export the Subnet ID to be used for CAA daemonset deployment:
export AZURE_SUBNET_ID=$(az network vnet subnet list \
--resource-group "${AKS_RG}" \
--vnet-name "${AZURE_VNET_NAME}" \
--query "[0].id" \
--output tsv)
Export this environment variable to use for the peer pod VM:
export AZURE_IMAGE_ID="/CommunityGalleries/cococommunity-42d8482d-92cd-415b-b332-7648bd978eff/Images/peerpod-podvm-ubuntu2204-cvm-snp/Versions/${CAA_VERSION}"
Note:
export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
export CAA_TAG="d4496d008b65c979a4d24767979a77ed1ba21e76"
Note: If you have made changes to the CAA code and you want to deploy those changes then follow these instructions to build the container image. Once the image is built export the environment variables accordingly.
kustomization.yaml file

Replace the values as needed for the following environment variables:
Note: For non-Confidential VMs use AZURE_INSTANCE_SIZE="Standard_D2as_v5".
Run the following command to update the kustomization.yaml file:
cat <<EOF > install/overlays/azure/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../yamls
images:
- name: cloud-api-adaptor
newName: "${CAA_IMAGE}"
newTag: "${CAA_TAG}"
generatorOptions:
disableNameSuffixHash: true
configMapGenerator:
- name: peer-pods-cm
namespace: confidential-containers-system
literals:
- CLOUD_PROVIDER="azure"
- AZURE_SUBSCRIPTION_ID="${AZURE_SUBSCRIPTION_ID}"
- AZURE_REGION="${AZURE_REGION}"
- AZURE_INSTANCE_SIZE="Standard_DC2as_v5"
- AZURE_RESOURCE_GROUP="${AZURE_RESOURCE_GROUP}"
- AZURE_SUBNET_ID="${AZURE_SUBNET_ID}"
- AZURE_IMAGE_ID="${AZURE_IMAGE_ID}"
secretGenerator:
- name: peer-pods-secret
namespace: confidential-containers-system
- name: ssh-key-secret
namespace: confidential-containers-system
files:
- id_rsa.pub
patchesStrategicMerge:
- workload-identity.yaml
EOF
The SSH public key should be accessible to the kustomization.yaml file:
cp $SSH_KEY install/overlays/azure/id_rsa.pub
Run the following command to deploy CAA:
CLOUD_PROVIDER=azure make deploy
Generic CAA deployment instructions are also described here.
Verify that the runtimeclass is created after deploying CAA:
kubectl get runtimeclass
Once you find a runtimeclass named kata-remote, you can be sure that the deployment was successful. A successful deployment will look like this:
$ kubectl get runtimeclass
NAME HANDLER AGE
kata-remote kata-remote 7m18s
Create an nginx deployment:
echo '
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      runtimeClassName: kata-remote
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        imagePullPolicy: Always
' | kubectl apply -f -
Ensure that the pod is up and running:
kubectl get pods -n default
You can verify that the peer pod VM was created by running the following command:
az vm list \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--output table
Here you should see the VM associated with the pod nginx.
Note: If you run into problems then check the troubleshooting guide here.
If you wish to clean up the whole set up, you can delete the resource group by running the following command:
az group delete \
--name "${AZURE_RESOURCE_GROUP}" \
--yes --no-wait
If you have made changes to the CAA code that affects the Pod-VM image and you want to deploy those changes then follow these instructions to build the pod-vm image.
An automated job builds the pod-vm image each night at 00:00 UTC. You can use that image by exporting the following environment variable:
SUCCESS_TIME=$(curl -s \
-H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/confidential-containers/cloud-api-adaptor/actions/workflows/azure-podvm-image-nightly-build.yml/runs?status=success" \
| jq -r '.workflow_runs[0].updated_at')
export AZURE_IMAGE_ID="/CommunityGalleries/cocopodvm-d0e4f35f-5530-4b9c-8596-112487cdea85/Images/podvm_image0/Versions/$(date -u -jf "%Y-%m-%dT%H:%M:%SZ" "$SUCCESS_TIME" "+%Y.%m.%d" 2>/dev/null || date -d "$SUCCESS_TIME" +%Y.%m.%d)"
The above image version is in the format YYYY.MM.DD, so to use the latest image, use yesterday’s date.
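For example, to point at yesterday’s build (assuming GNU date; the BSD/macOS form follows the pattern in the snippet above):

export AZURE_IMAGE_ID="/CommunityGalleries/cocopodvm-d0e4f35f-5530-4b9c-8596-112487cdea85/Images/podvm_image0/Versions/$(date -u -d yesterday +%Y.%m.%d)"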
This guide describes how to set up a demo environment on IBM Cloud for peer pod VMs using the operator deployment approach.
The high level flow involved is:
When building the peer pod VM image, it is simplest to use the container-based approach, which only requires either docker or podman, but it can also be built locally.
Note: the peer pod VM image build and upload is de-coupled from the cluster creation and operator deployment stage, so it can be built on a different machine.
There are a number of packages that you will need to install in order to create the Kubernetes cluster and enable peer pods on it:

- kubectl and the other tools required for the cluster creation are explained in the cluster pre-reqs guide.
- In addition to this you will need to install jq.

Tip: If you are using Ubuntu Linux, you can run the following command:

$ sudo apt-get install jq

You will also require go and make to be installed.
A peer pod VM image needs to be created as a VPC custom image in IBM Cloud from which to create the peer pod instances. The peer pod VM image contains components like the agent protocol forwarder and the Kata agent that communicate with the Kubernetes worker node and carry out the received instructions inside the peer pod.
You may skip this step and use one of the release images (skip to Import Release VM Image), but for the latest features you may wish to build your own.
You can do this by following the process document. If building within a container, ensure that --build-arg CLOUD_PROVIDER=ibmcloud is set, and --build-arg ARCH=s390x for an s390x architecture image.
Note: At the time of writing, issue #649 means that when creating an s390x image you also need to add two extra build args: --build-arg UBUNTU_IMAGE_URL="" and --build-arg UBUNTU_IMAGE_CHECKSUM="".
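Putting the build arguments together, a container-based s390x build might be invoked roughly as follows; the image tag and Dockerfile path are illustrative placeholders, so follow the process document for the exact command:

docker build -t podvm-ibmcloud-s390x \
    --build-arg CLOUD_PROVIDER=ibmcloud \
    --build-arg ARCH=s390x \
    --build-arg UBUNTU_IMAGE_URL="" \
    --build-arg UBUNTU_IMAGE_CHECKSUM="" \
    -f Dockerfile .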
Note: If building the peer pod qcow2 image within a VM, it may require a lot of resources, e.g., 8 vCPUs and 32 GB RAM, due to nested virtualization performance limitations. When running without enough resources, the failure seen is similar to:
Build 'qemu.ubuntu' errored after 5 minutes 57 seconds: Timeout waiting for SSH.
You can follow the process documented from the cloud-api-adaptor/ibmcloud/image to extract and upload the peer pod image you’ve just built to IBM Cloud as a custom image, noting to replace the quay.io/confidential-containers/podvm-ibmcloud-ubuntu-s390x reference with the local container image that you built above, e.g. localhost/podvm_ibmcloud_s390x:latest.
This script will end with the line: Image <image-name> with id <image-id> is available. The image-id field will be needed in the kustomize step later.
Alternatively, to use a pre-built peer pod VM image you can follow the process documented with the release images found at quay.io/confidential-containers/podvm-generic-ubuntu-<ARCH>. Running this command will require docker or podman, as per the tools listed above.
./import.sh quay.io/confidential-containers/podvm-generic-ubuntu-s390x eu-gb --bucket example-bucket --instance example-cos-instance
This script will end with the line: Image <image-name> with id <image-id> is available. The image-id field will be needed in later steps.
If you don’t have a Kubernetes cluster for testing, you can follow the open-source instructions to set up a basic cluster where the Kubernetes nodes run on IBM Cloud provided infrastructure.
Deploy cert-manager with:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.9.1/cert-manager.yaml
Wait for the pods to all be in running state with:
kubectl get pods -n cert-manager --watch
From within the root directory of the cloud-api-adaptor repository, deploy the webhook with:
kubectl apply -f ./webhook/hack/webhook-deploy.yaml
Wait for the pods to all be in running state with:
kubectl get pods -n peer-pods-webhook-system --watch
Advertise the extended resource kata.peerpods.io/vm by running the following commands:
pushd webhook/hack/extended-resources
./setup.sh
popd
The caa-provisioner-cli simplifies deploying the operator and the cloud-api-adaptor resources onto any cluster. See the test/tools/README.md for full instructions. To create an ibmcloud-ready version, follow these steps:
# Starting from the cloud-api-adaptor root directory
pushd test/tools
make BUILTIN_CLOUD_PROVIDERS="ibmcloud" all
popd
This will create caa-provisioner-cli in the test/tools directory. To use the tool with an existing self-managed cluster you will need to set up a .properties file containing the relevant ibmcloud information to enable your cluster to create and use peer pods. Use the following commands to generate the .properties file. If not using a self-managed cluster, please update the terraform commands with the appropriate values manually.
export IBMCLOUD_API_KEY= # your ibmcloud apikey
export PODVM_IMAGE_ID= # the image id of the peerpod vm uploaded in the previous step
export PODVM_INSTANCE_PROFILE= # instance profile name that runs the peerpod (bx2-2x8 or bz2-2x8 for example)
export CAA_IMAGE_TAG= # cloud-api-adaptor image tag that supports this arch, see quay.io/confidential-containers/cloud-api-adaptor
pushd ibmcloud/cluster
cat <<EOF > ../../selfmanaged_cluster.properties
IBMCLOUD_PROVIDER="ibmcloud"
APIKEY="$IBMCLOUD_API_KEY"
PODVM_IMAGE_ID="$PODVM_IMAGE_ID"
INSTANCE_PROFILE_NAME="$PODVM_INSTANCE_PROFILE"
CAA_IMAGE_TAG="$CAA_IMAGE_TAG"
SSH_KEY_ID="$(terraform output --raw ssh_key_id)"
EOF
popd
This will create a selfmanaged_cluster.properties file in the cloud-api-adaptor root directory.
The final step is to run the caa-provisioner-cli to install the operator.
export CLOUD_PROVIDER=ibmcloud
# must be run from the directory containing the properties file
export TEST_PROVISION_FILE="$(pwd)/selfmanaged_cluster.properties"
# prevent the test from removing the cloud-api-adaptor resources from the cluster
export TEST_TEARDOWN="no"
pushd test/tools
./caa-provisioner-cli -action=install
popd
To validate that a cluster has been set up properly, there is a suite of tests that validate peer pods across different providers; the implementation of these tests can be found in test/e2e/common_suite_test.go.
Assuming CLOUD_PROVIDER and TEST_PROVISION_FILE are still set in your current terminal, you can execute these tests from the cloud-api-adaptor root directory by running the following commands:
export KUBECONFIG=$(pwd)/ibmcloud/cluster/config
make test-e2e
There are two options for cleaning up the environment once testing has finished, or if you want to re-install from a clean state:
Use the caa-provisioner-cli to remove the resources.

export CLOUD_PROVIDER=ibmcloud
# must be run from the directory containing the properties file
export TEST_PROVISION_FILE="$(pwd)/selfmanaged_cluster.properties"
pushd test/tools
./caa-provisioner-cli -action=uninstall
popd
This document contains instructions for using, developing and testing the cloud-api-adaptor with libvirt.
In this section you will learn how to set up an environment on your local machine to run peer pods with the libvirt cloud API adaptor. Bear in mind that many different tools can be used to set up the environment, and here we just suggest tools that seem to be used by most of the peer pods developers.
You must have a Linux/KVM system with libvirt installed and the following tools:
It is assumed that you have ‘default’ network and storage pools created in the libvirtd system instance (qemu:///system). However, if you have a different pool name, the scripts should be able to handle it properly.
Use the kcli_cluster.sh script to create a simple two-VM cluster (one control plane and one worker) with the kcli tool, as:
./kcli_cluster.sh create
With kcli_cluster.sh you can configure, among other parameters, the libvirt network and storage pools in which the cluster VMs will be created. Run ./kcli_cluster.sh -h to see the help for further information.
If everything goes well you will be able to see the cluster running after setting your Kubernetes config with:
export KUBECONFIG=$HOME/.kcli/clusters/peer-pods/auth/kubeconfig
For example, as shown below:
$ kcli list kube
+-----------+---------+-----------+-----------------------------------------+
| Cluster | Type | Plan | Vms |
+-----------+---------+-----------+-----------------------------------------+
| peer-pods | generic | peer-pods | peer-pods-ctlplane-0,peer-pods-worker-0 |
+-----------+---------+-----------+-----------------------------------------+
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
peer-pods-ctlplane-0 Ready control-plane,master 6m8s v1.25.3
peer-pods-worker-0 Ready worker 2m47s v1.25.3
In order to build the Pod VM without installing the build tools, you can use the Dockerfiles hosted in the ../podvm directory to run the entire process inside a container. Refer to podvm/README.md for further details. Alternatively, you can consume pre-built podvm images as explained here.
Next you will need to create a volume on libvirt’s system storage and upload the image content. That volume is used by the cloud-api-adaptor program to instantiate a new Pod VM. Run the following commands:
export IMAGE=<full-path-to-qcow2>
virsh -c qemu:///system vol-create-as --pool default --name podvm-base.qcow2 --capacity 20G --allocation 2G --prealloc-metadata --format qcow2
virsh -c qemu:///system vol-upload --vol podvm-base.qcow2 $IMAGE --pool default --sparse
You should see that the podvm-base.qcow2 volume was properly created:
$ virsh -c qemu:///system vol-info --pool default podvm-base.qcow2
Name: podvm-base.qcow2
Type: file
Capacity: 6.00 GiB
Allocation: 631.52 MiB
The easiest way to install the cloud-api-adaptor along with Confidential Containers in the cluster is through the Kubernetes operator available in the install directory of this repository.
Start by creating a public/private RSA key pair that will be used by the cloud-api-adaptor program, running on the cluster workers, to connect with your local libvirtd instance without password authentication. Assuming you are in the libvirt directory, do:
cd ../install/overlays/libvirt
ssh-keygen -f ./id_rsa -N ""
cat id_rsa.pub >> ~/.ssh/authorized_keys
Note: ensure that ~/.ssh/authorized_keys has the right permissions (read/write for the user only), otherwise the authentication can silently fail. You can run chmod 600 ~/.ssh/authorized_keys to set the right permissions.
You will need to figure out the IP address of your local host (e.g. 192.168.122.1). Then try a remote libvirt connection to check that the key setup is fine, for example:
$ virsh -c "qemu+ssh://$USER@192.168.122.1/system?keyfile=$(pwd)/id_rsa" nodeinfo
CPU model: x86_64
CPU(s): 12
CPU frequency: 1084 MHz
CPU socket(s): 1
Core(s) per socket: 6
Thread(s) per core: 2
NUMA cell(s): 1
Memory size: 32600636 KiB
Now you should finally install the Kubernetes operator in the cluster with the help of the install_operator.sh script. Ensure that you have your IP address exported in the environment, as shown below, then run the install script:
cd ../../../libvirt/
export LIBVIRT_IP="192.168.122.1"
export SSH_KEY_FILE="id_rsa"
./install_operator.sh
If everything goes well you will be able to see the operator’s controller manager and cloud-api-adaptor Pods running:
$ kubectl get pods -n confidential-containers-system
NAME READY STATUS RESTARTS AGE
cc-operator-controller-manager-5df7584679-5dbmr 2/2 Running 0 3m58s
cloud-api-adaptor-daemonset-vgj2s 1/1 Running 0 3m57s
$ kubectl logs pod/cloud-api-adaptor-daemonset-vgj2s -n confidential-containers-system
+ exec cloud-api-adaptor libvirt -uri 'qemu+ssh://wmoschet@192.168.122.1/system?no_verify=1' -data-dir /opt/data-dir -pods-dir /run/peerpod/pods -network-name default -pool-name default -socket /run/peerpod/hypervisor.sock
2022/11/09 18:18:00 [helper/hypervisor] hypervisor config {/run/peerpod/hypervisor.sock registry.k8s.io/pause:3.7 /run/peerpod/pods libvirt}
2022/11/09 18:18:00 [helper/hypervisor] cloud config {qemu+ssh://wmoschet@192.168.122.1/system?no_verify=1 default default /opt/data-dir}
2022/11/09 18:18:00 [helper/hypervisor] service config &{qemu+ssh://wmoschet@192.168.122.1/system?no_verify=1 default default /opt/data-dir}
You will also notice that Kubernetes runtimeClass resources were created on the cluster, for example:
$ kubectl get runtimeclass
NAME HANDLER AGE
kata-remote kata-remote 7m18s
At this point everything should be in place to create a sample Pod. Let’s first list the running VMs so that we can later check that the Pod VM is really running. Notice below that only the cluster node VMs are up:
$ virsh -c qemu:///system list
Id Name State
------------------------------------
3 peer-pods-ctlplane-0 running
4 peer-pods-worker-0 running
Create the sample_busybox.yaml file with the following content:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: quay.io/prometheus/busybox
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  runtimeClassName: kata-remote
And create the Pod:
$ kubectl apply -f sample_busybox.yaml
pod/busybox created
$ kubectl wait --for=condition=Ready pod/busybox
pod/busybox condition met
Check that the Pod VM is up and running. See on the following listing that podvm-busybox-88a70031 was created:
$ virsh -c qemu:///system list
Id Name State
----------------------------------------
5 peer-pods-ctlplane-0 running
6 peer-pods-worker-0 running
7 podvm-busybox-88a70031 running
You should also check that the container is running fine. For example, compare the kernel versions, which differ as shown below:
$ uname -r
5.17.12-100.fc34.x86_64
$ kubectl exec pod/busybox -- uname -r
5.4.0-131-generic
The peer-pods pod can be deleted like any regular pod. In the listing below, the pod was removed and you can see that the Pod VM no longer exists in libvirt:
$ kubectl delete -f sample_busybox.yaml
pod "busybox" deleted
$ virsh -c qemu:///system list
Id Name State
------------------------------------
5 peer-pods-ctlplane-0 running
6 peer-pods-worker-0 running
You might want to reinstall the Confidential Containers and cloud-api-adaptor into your cluster. There are two options:

- Run ./kcli_cluster.sh delete to wipe out the cluster created with kcli
- Uninstall the operator resources, then install them again with the install_operator.sh script

Let’s show you how to delete the operator resources. In the listing below you can see the actual pods running in the confidential-containers-system namespace:
$ kubectl get pods -n confidential-containers-system
NAME READY STATUS RESTARTS AGE
cc-operator-controller-manager-fbb5dcf9d-h42nn 2/2 Running 0 20h
cc-operator-daemon-install-fkkzz 1/1 Running 0 20h
cloud-api-adaptor-daemonset-libvirt-lxj7v 1/1 Running 0 20h
In order to remove the *-daemon-install-* and *-cloud-api-adaptor-daemonset-* pods, run the following command from the root directory:
CLOUD_PROVIDER=libvirt make delete
It can take a few minutes for those pods to be deleted; afterwards you will notice that only the controller manager is still up. Below is shown how to delete that pod and its associated resources as well:
$ kubectl get pods -n confidential-containers-system
NAME READY STATUS RESTARTS AGE
cc-operator-controller-manager-fbb5dcf9d-h42nn 2/2 Running 0 20h
$ kubectl delete -f install/yamls/deploy.yaml
namespace "confidential-containers-system" deleted
serviceaccount "cc-operator-controller-manager" deleted
role.rbac.authorization.k8s.io "cc-operator-leader-election-role" deleted
clusterrole.rbac.authorization.k8s.io "cc-operator-manager-role" deleted
clusterrole.rbac.authorization.k8s.io "cc-operator-metrics-reader" deleted
clusterrole.rbac.authorization.k8s.io "cc-operator-proxy-role" deleted
rolebinding.rbac.authorization.k8s.io "cc-operator-leader-election-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "cc-operator-manager-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "cc-operator-proxy-rolebinding" deleted
configmap "cc-operator-manager-config" deleted
service "cc-operator-controller-manager-metrics-service" deleted
deployment.apps "cc-operator-controller-manager" deleted
customresourcedefinition.apiextensions.k8s.io "ccruntimes.confidentialcontainers.org" deleted
$ kubectl get pods -n confidential-containers-system
No resources found in confidential-containers-system namespace.