Version: 1.3

Elemental the command line way

Follow this guide to get an auto-deployed cluster, provisioned via RKE2/K3s and managed by Rancher, with nothing more than an Elemental Teal ISO.


Prerequisites

  • A Rancher server (v2.7.0 or later) configured (server-url set)
    • To configure the Rancher server-url please check the Rancher docs
  • A machine (bare metal or virtualized) with TPM 2.0
    • Hint 1: Libvirt allows setting virtual TPMs for virtual machines
    • Hint 2: You can enable TPM emulation on bare metal machines missing the TPM 2.0 module
    • Hint 3: Make sure you're using UEFI (not BIOS) on x86-64, or the ISO won't boot
    • Hint 4: A minimum volume size of 25 GB is recommended. See the Elemental Teal partition table for more details
    • Hint 5: CPU and RAM requirements depend on the Kubernetes version installed, for example K3s or RKE2
  • Helm Package Manager (v3.8.0 or later, see the note on OCI registries below)
  • For ARM (aarch64) - One SD-card (32 GB or more; it must be fast, a 40 MB/s write speed is acceptable) and a USB stick for installation
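To quickly verify Hints 2 and 3 on an existing Linux machine, a minimal sketch using standard sysfs paths (nothing Elemental-specific is assumed here):

```shell
# Pre-flight sketch: check for a TPM 2.0 device and UEFI boot via sysfs.
# These paths are standard on Linux; adjust if your TPM enumerates differently.

if [ -r /sys/class/tpm/tpm0/tpm_version_major ]; then
  tpm_major="$(cat /sys/class/tpm/tpm0/tpm_version_major)"
else
  tpm_major="none"
fi
echo "TPM major version: ${tpm_major}"   # "2" is what you want to see

# /sys/firmware/efi exists only when the system was booted via UEFI.
if [ -d /sys/firmware/efi ]; then
  boot_mode="UEFI"
else
  boot_mode="BIOS"
fi
echo "Boot mode: ${boot_mode}"           # must be UEFI on x86-64
```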

Install Elemental Operator

elemental-operator is the management endpoint, running on the management cluster and taking care of creating machine inventories, registrations and much more.

We will use the Helm package manager to install the elemental-operator chart into our cluster.

helm upgrade --create-namespace -n cattle-elemental-system --install elemental-operator-crds oci://
helm upgrade --create-namespace -n cattle-elemental-system --install elemental-operator oci://

Now, after a few seconds, you should see the operator pod appear in the cattle-elemental-system namespace:

kubectl get pods -n cattle-elemental-system
elemental-operator-64f88fc695-b8qhn 1/1 Running 0 16s
Helm v3.8.0+ required

The Elemental Operator chart is distributed via an OCI registry: Helm fully supports OCI-based registries starting with the v3.8.0 release.
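A quick way to check your Helm binary is new enough, a sketch that compares version strings with sort -V (the version below is a placeholder; in practice you would fill it in from `helm version --template '{{.Version}}'`):

```shell
# Sketch: check a Helm version string against the v3.8.0 minimum for OCI support.
helm_ver="v3.12.3"   # placeholder; obtain yours via: helm version --template '{{.Version}}'
min_ver="v3.8.0"

# sort -V orders version strings; if the minimum sorts first (or ties), we are new enough.
if [ "$(printf '%s\n%s\n' "$min_ver" "$helm_ver" | sort -V | head -n1)" = "$min_ver" ]; then
  oci_ok="yes"
else
  oci_ok="no"
fi
echo "Helm supports OCI charts: ${oci_ok}"
```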

Swap charts installation order when upgrading from elemental-operator release < 1.2.4

When upgrading from an elemental-operator release that embeds the Elemental CRDs (version < 1.2.4), the elemental-operator-crds chart installation will fail. Upgrade the elemental-operator chart first, and only then install the elemental-operator-crds chart.

Non-stable installations

Besides the Helm charts listed above, there are two other non-stable versions available.

  • Staging: refers to the latest tagged release from GitHub. This is documented in the Next pages.

  • Development: refers to the tip of HEAD from GitHub. This is the ongoing development version and changes constantly.

helm upgrade --create-namespace -n cattle-elemental-system --install elemental-operator-crds oci://
helm upgrade --create-namespace -n cattle-elemental-system --install elemental-operator oci://

Installation options

There are a few options that can be set on chart installation, but they are out of scope for this document. You can see all the available values in the chart's values.yaml.

Prepare your Kubernetes resources

Node deployment starts with a MachineRegistration, identifying a set of machines sharing the same configuration (disk drives, network, etc.).

The MachineRegistration is needed to perform the deployment of the Elemental OS on the target hosts. When booting up, each host registers to the Elemental Operator which tracks the new host with a MachineInventory resource.

Provisioning then continues with a Cluster resource that uses a MachineInventorySelectorTemplate to select which machines belong to that cluster.

This selector is a simple matcher based on labels set on the MachineInventory: if your selector matches on the label cluster-id with the value cluster-id-val, and your MachineInventory carries that same cluster-id: cluster-id-val label, it will match and be bootstrapped as part of the cluster.
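For illustration, a matching MachineInventory would carry the label in its metadata. This fragment is a hand-written sketch (the name is an example; in practice the operator creates these resources for you):

```yaml
apiVersion: elemental.cattle.io/v1beta1
kind: MachineInventory
metadata:
  name: example-machine          # example name; the operator generates real ones
  namespace: fleet-default
  labels:
    element: fire                # this is what the selector matches on
```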

In this quickstart we are going to deploy the resources to provision a cluster named volcano that will match on MachineInventorys with the label element:fire.

You will need to create the following files:

selector.yaml:

apiVersion: elemental.cattle.io/v1beta1
kind: MachineInventorySelectorTemplate
metadata:
  name: fire-machine-selector
  namespace: fleet-default
spec:
  template:
    spec:
      selector:
        matchExpressions:
          - key: element
            operator: In
            values: [ 'fire' ]

As you can see this is a very simple selector that looks for MachineInventorys having a label with the key element and the value fire.

cluster.yaml:

apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: volcano
  namespace: fleet-default
spec:
  rkeConfig:
    machineGlobalConfig:
      etcd-expose-metrics: false
      profile: null
    machinePools:
      - controlPlaneRole: true
        etcdRole: true
        machineConfigRef:
          apiVersion: elemental.cattle.io/v1beta1
          kind: MachineInventorySelectorTemplate
          name: fire-machine-selector
        name: fire-pool
        quantity: 1
        unhealthyNodeTimeout: 0s
        workerRole: true
    machineSelectorConfig:
      - config:
          protect-kernel-defaults: false
    registries: {}
  kubernetesVersion: v1.24.8+k3s1

As you can see the machineConfigRef is of kind MachineInventorySelectorTemplate with the name fire-machine-selector: it matches the selector we created.

You can get more information about cluster options like machineGlobalConfig or machineSelectorConfig directly in the Rancher Manager documentation.
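As an example of the latter, machineSelectorConfig can scope extra configuration to machines matching certain labels. A hedged sketch (the kubelet-arg value is hypothetical; verify the supported fields against the Rancher Manager documentation):

```yaml
machineSelectorConfig:
  - config:
      protect-kernel-defaults: false
  - config:
      kubelet-arg:
        - max-pods=250            # hypothetical example value
    machineLabelSelector:
      matchLabels:
        element: fire             # applies only to machines with this label
```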

registration.yaml:

apiVersion: elemental.cattle.io/v1beta1
kind: MachineRegistration
metadata:
  name: fire-nodes
  namespace: fleet-default
spec:
  config:
    cloud-config:
      users:
        - name: root
          passwd: root
    elemental:
      install:
        reboot: true
        device: /dev/sda
        debug: true
  machineInventoryLabels:
    element: fire
    manufacturer: "${System Information/Manufacturer}"
    productName: "${System Information/Product Name}"
    serialNumber: "${System Information/Serial Number}"
    machineUUID: "${System Information/UUID}"

The MachineRegistration defines the registration and installation configuration. Once created, the Elemental Operator exposes a unique URL to be used with the elemental-register binary to reach the management cluster and register the machine during installation. If registration succeeds, the operator creates a MachineInventory tracking the machine, which can then be used to provision it as a node of our cluster. We define the label matching our selector here, although it can also be added later to the created MachineInventorys.


Make sure to modify the registration.yaml above so the install device points to a valid device for your node configuration (e.g. /dev/sda, /dev/vda, /dev/nvme0, etc.).

The SD-card on a Raspberry Pi is usually /dev/mmcblk0.
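On a Raspberry Pi, the install section of the MachineRegistration would therefore look like this (only the device differs from the example above):

```yaml
elemental:
  install:
    reboot: true
    device: /dev/mmcblk0   # the SD-card on a Raspberry Pi
    debug: true
```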

seedimage.yaml:

apiVersion: elemental.cattle.io/v1beta1
kind: SeedImage
metadata:
  name: fire-img
  namespace: fleet-default
spec:
  registrationRef:
    kind: MachineRegistration
    name: fire-nodes
    namespace: fleet-default

The SeedImage resource is required to generate the seed image (such as a bootable ISO) that boots and starts the Elemental provisioning on the target machines.

Now that we have defined all the configuration files let's apply them to create the proper resources in Kubernetes:

kubectl apply -f selector.yaml 
kubectl apply -f cluster.yaml
kubectl apply -f registration.yaml
kubectl apply -f seedimage.yaml

Preparing the installation (seed) image

This is the last step: you need an Elemental Teal seed image that includes the initial registration config, so that booting machines are automatically registered, installed and fully deployed as part of your cluster.


The initial registration config file is generated when you create a MachineRegistration.

You can download it with:

wget --no-check-certificate `kubectl get machineregistration -n fleet-default fire-nodes -o jsonpath="{.status.registrationURL}"` -O initial-registration.yaml

The contents of the registration config file are nothing more than the registration URL the node uses to register, the proper server certificate, and a few options for the registration process.
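For orientation, the downloaded file has roughly this shape. This is an illustrative sketch only: the URL and certificate are placeholders, and the exact fields may vary between operator versions.

```yaml
elemental:
  registration:
    url: https://<rancher-host>/elemental/registration/<token>   # placeholder URL
    ca-cert: |
      -----BEGIN CERTIFICATE-----
      ...placeholder certificate...
      -----END CERTIFICATE-----
```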

Once generated, a seed image can be used to provision any number of machines.

The seed image created by the SeedImage resource above can be downloaded as an ISO via the following script:

kubectl wait --for=condition=ready pod -n fleet-default fire-img
wget --no-check-certificate `kubectl get seedimage -n fleet-default fire-img -o jsonpath="{.status.downloadURL}"` -O elemental-teal.x86_64.iso

The first command waits for the ISO to be built and ready; the second downloads it to the current directory as elemental-teal.x86_64.iso.

You can now boot your nodes with this image and they will:

  • Register with the registrationURL given and create a per-machine MachineInventory
  • Install Elemental Teal to the given device
  • Reboot

Selecting the right machines to join a cluster

The MachineInventorySelectorTemplate selects the machines needed to provision the cluster from the MachineInventorys carrying the element:fire label. We have added the element:fire label to the MachineRegistration machineInventoryLabels map, so all the MachineInventorys originated from it already have the label. You could instead omit the label from the MachineRegistration and add it later:

kubectl -n fleet-default label machineinventory $(kubectl get machineinventory -n fleet-default --no-headers -o custom-columns=":metadata.name") element=fire

As soon as MachineInventorys with the element:fire label are present, the corresponding machines auto-deploy the cluster via the chosen provider (K3s/RKE2).

After a few minutes your new cluster will be fully provisioned!

How can I choose the Kubernetes version and deployer for the cluster?

In your cluster.yaml file there is a key in the Spec called kubernetesVersion. It sets both the version and the deployer used for the cluster; for example, Kubernetes v1.24.8 would be v1.24.8+rke2r1 for RKE2 and v1.24.8+k3s1 for K3s.

To see all compatible versions check the Rancher Support Matrix PDF for rke/rke2/k3s versions and their components.

You can also check our Version doc to know how to obtain those versions.

Check our Cluster Spec page for more info about the Cluster resource.

How can I follow what is going on behind the scenes?

You should be able to follow what the machine is doing via:

  • During ISO boot:
    • ssh into the machine (user/pass: root/ros):
      • running journalctl -f -t elemental shows you the progress of the registration (elemental-register) and the installation of Elemental (elemental install).
  • Once the system is installed:
    • The Rancher UI -> Cluster Management page shows your new cluster and the Provisioning Log in the cluster details
    • ssh into the machine (user/pass: whatever you configured in registration.yaml under the cloud-config users section):
      • running journalctl -f -u elemental-system-agent shows the output of the initial elemental config and the installation of the rancher-system-agent
      • running journalctl -f -u rancher-system-agent shows the output of the bootstrap of cluster components like k3s
      • running journalctl -f -u k3s shows the logs of the k3s deployment