Network configuration with Elemental
The MachineRegistration supports Declarative Networking and integration with CAPI IPAM Providers.
Prerequisites

- A DHCP server is still required for the first boot registration and for the reset of machines. The lease time can therefore be kept minimal, since the IPAM driven IP addresses are used for the rest of the machine's lifecycle.
- An IPAM Provider of your choice is installed on the Rancher management cluster, for example the InCluster IPAM Provider.
- NetworkManager needs to be installed on OS images (it is already included in Elemental provided images) and can be directly configured using the nmconnections network configurator.
- (optional) nmc can be used with the nmc network configurator.
- (optional) nmstatectl can be used with the nmstate network configurator.
Installing nmc or nmstatectl on OS images

When using the nmc or nmstate configurators, the nmc or nmstatectl tool needs to be installed on the machine.
Currently this can be achieved by customizing an Elemental OS image with custom commands:
# Install nmc
RUN curl -LO https://github.com/suse-edge/nm-configurator/releases/download/v0.3.1/nmc-linux-x86_64 && \
install -o root -g root -m 0755 nmc-linux-x86_64 /usr/sbin/nmc
# Install nmstatectl
RUN curl -LO https://github.com/nmstate/nmstate/releases/download/v2.2.37/nmstatectl-linux-x64.zip && \
unzip nmstatectl-linux-x64.zip && \
chmod +x nmstatectl && \
mv ./nmstatectl /usr/sbin/nmstatectl && \
rm nmstatectl-linux-x64.zip
How to install the CAPI IPAM Provider

The recommended way to install any CAPI Provider into Rancher is to use Rancher Turtles.
Rancher Turtles allows the user to install and manage the lifecycle of any CAPI Provider.
To install it on your system, please follow the documentation.
Once Rancher Turtles is installed, installing a CAPI IPAM Provider, for example the InCluster IPAM Provider, can be accomplished by applying the following resource:
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: in-cluster
  namespace: default
spec:
  name: in-cluster
  type: ipam
  fetchConfig:
    url: "https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster/releases"
  version: v0.1.0
Without Rancher Turtles

An alternative option to install a CAPI IPAM Provider is to directly apply its manifest in the Rancher cluster.
Note that this solution may eventually lead to conflicts with the applied CRDs and resources, as they need to be applied and maintained manually.

- The ipaddresses.ipam.cluster.x-k8s.io and ipaddressclaims.ipam.cluster.x-k8s.io CRDs must be installed on the Rancher management cluster:

  kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api/main/config/crd/bases/ipam.cluster.x-k8s.io_ipaddressclaims.yaml
  kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api/main/config/crd/bases/ipam.cluster.x-k8s.io_ipaddresses.yaml

  info: These CRDs are expected to eventually be part of Rancher, not requiring manual installation.
  See: https://github.com/rancher/rancher/issues/46385

- Install the InCluster IPAM Provider from the released manifest:

  kubectl apply -f https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster/releases/download/v0.1.0/ipam-components.yaml
Configuring Network

The network section of the MachineRegistration allows users to define:

- A map of IPPool references.
- A network config template (in this example the nmc configurator is in use).

For example:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: elemental-inventory-pool
  namespace: fleet-default
spec:
  addresses:
    - 192.168.122.150-192.168.122.200
  prefix: 24
  gateway: 192.168.122.1
---
apiVersion: elemental.cattle.io/v1beta1
kind: MachineRegistration
metadata:
  name: fire-nodes
  namespace: fleet-default
spec:
  machineName: m-${System Information/UUID}
  config:
    network:
      configurator: nmc
      ipAddresses:
        inventory-ip:
          apiGroup: ipam.cluster.x-k8s.io
          kind: InClusterIPPool
          name: elemental-inventory-pool
      config:
        dns-resolver:
          config:
            server:
              - 192.168.122.1
            search: []
        routes:
          config:
            - destination: 0.0.0.0/0
              next-hop-interface: eth0
              next-hop-address: 192.168.122.1
              metric: 150
              table-id: 254
        interfaces:
          - name: eth0
            type: ethernet
            description: Main-NIC
            state: up
            ipv4:
              enabled: true
              dhcp: false
              address:
                - ip: "{inventory-ip}"
                  prefix-length: 24
            ipv6:
              enabled: false
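As a quick sanity check when sizing a pool, the inclusive address range declared above can be counted with a few lines of Python. This is purely illustrative and not part of Elemental:

```python
import ipaddress

def pool_size(start: str, end: str) -> int:
    # Inclusive address count of a range entry such as
    # "192.168.122.150-192.168.122.200" in the InClusterIPPool above.
    return int(ipaddress.ip_address(end)) - int(ipaddress.ip_address(start)) + 1

print(pool_size("192.168.122.150", "192.168.122.200"))  # 51
```

With 51 addresses available, this pool can serve up to 51 concurrently registered machines claiming one IP each.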
Here we can observe that one InClusterIPPool has been defined, since we are using the InCluster IPAM Provider for this example.
Next we reference this IPPool in the MachineRegistration. The key for this reference is inventory-ip, and we only need one IP per registered machine. If your machine has more than one NIC, you can define more references, and use different IPPools as well, for example:
ipAddresses:
  main-nic-ip:
    apiGroup: ipam.cluster.x-k8s.io
    kind: InClusterIPPool
    name: elemental-inventory-pool
  secondary-nic-ip:
    apiGroup: ipam.cluster.x-k8s.io
    kind: InClusterIPPool
    name: elemental-inventory-pool
  private-nic-ip:
    apiGroup: ipam.cluster.x-k8s.io
    kind: InClusterIPPool
    name: elemental-private-pool
Each defined IPPool reference key can then be used as a placeholder in the network config template:
config:
  dns-resolver:
    config:
      server:
        - 192.168.122.1
      search: []
  routes:
    config:
      - destination: 0.0.0.0/0
        next-hop-interface: eth0
        next-hop-address: 192.168.122.1
        metric: 150
        table-id: 254
  interfaces:
    - name: eth0
      type: ethernet
      description: Main-NIC
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
          - ip: "{inventory-ip}"
            prefix-length: 24
      ipv6:
        enabled: false
The snippet above is almost 1:1 nm-configurator syntax, the only exception being the {inventory-ip} placeholder.
During the installation or reset phases of Elemental machines, the elemental-operator will claim one IP address from the referenced IPPool and substitute the {inventory-ip} placeholder with a real IP address.
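The substitution performed by the elemental-operator can be pictured with a short Python sketch. The render function below is hypothetical and for illustration only; it simply maps each {key} placeholder to the address claimed from the IPPool referenced under the same key:

```python
import re

def render(template: str, claimed: dict) -> str:
    # Replace each "{key}" placeholder with the IP address claimed
    # for the IPPool reference of the same key (an assumption of how
    # the substitution behaves, based on the documented example).
    return re.sub(r"\{([A-Za-z0-9_-]+)\}", lambda m: claimed[m.group(1)], template)

print(render('- ip: "{inventory-ip}"', {"inventory-ip": "192.168.122.150"}))
# - ip: "192.168.122.150"
```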
Claimed IPAddresses

The IPAddressClaim follows the entire lifecycle of the MachineInventory, ensuring that each registered machine is assigned unique IPs.
Each claim is named after the MachineInventory that uses it, as $MachineInventoryName-$IPPoolRefKey, for example:
apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddressClaim
metadata:
  finalizers:
    - ipam.cluster.x-k8s.io/ReleaseAddress
  name: m-e5331e3b-1e1b-4ce7-b080-235ed9a6d07c-inventory-ip
  namespace: fleet-default
  ownerReferences:
    - apiVersion: elemental.cattle.io/v1beta1
      kind: MachineInventory
      name: m-e5331e3b-1e1b-4ce7-b080-235ed9a6d07c
spec:
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: InClusterIPPool
    name: elemental-inventory-pool
status:
  addressRef:
    name: m-e5331e3b-1e1b-4ce7-b080-235ed9a6d07c-inventory-ip
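The $MachineInventoryName-$IPPoolRefKey naming convention can be sketched as follows (a hypothetical helper, for illustration only):

```python
def claim_name(machine_inventory_name: str, pool_ref_key: str) -> str:
    # Claims are named $MachineInventoryName-$IPPoolRefKey,
    # matching the metadata.name of the IPAddressClaim example above.
    return f"{machine_inventory_name}-{pool_ref_key}"

print(claim_name("m-e5331e3b-1e1b-4ce7-b080-235ed9a6d07c", "inventory-ip"))
# m-e5331e3b-1e1b-4ce7-b080-235ed9a6d07c-inventory-ip
```

Since the MachineInventory name is unique per registered machine and the reference keys are unique within a MachineRegistration, every claim name is unique per namespace.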
Whenever a MachineInventory is deleted, the default (DHCP) network configuration is restored: any network profile on the machine is deleted and the network stack is restarted. Finally, the assigned IPs are released.
For more information and details on how to troubleshoot issues, please consult the documentation.
Configurators

On the Elemental machine, elemental-register can configure NetworkManager in different ways.
The configurator in use is defined in MachineRegistration.spec.network:

- nmc
- nmstate
- nmconnections

The nmc configurator uses the nm-configurator unified syntax to generate NetworkManager's connection files.
example MachineRegistration using nmc configurator
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: elemental-inventory-pool
  namespace: fleet-default
spec:
  addresses:
    - 192.168.122.150-192.168.122.200
  prefix: 24
  gateway: 192.168.122.1
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: elemental-secondary-pool
  namespace: fleet-default
spec:
  addresses:
    - 172.16.0.150-172.16.0.200
  prefix: 24
  gateway: 172.16.0.1
---
apiVersion: elemental.cattle.io/v1beta1
kind: MachineRegistration
metadata:
  name: fire-nodes
  namespace: fleet-default
spec:
  machineName: m-${System Information/UUID}
  config:
    network:
      configurator: nmc
      ipAddresses:
        inventory-ip:
          apiGroup: ipam.cluster.x-k8s.io
          kind: InClusterIPPool
          name: elemental-inventory-pool
        secondary-ip:
          apiGroup: ipam.cluster.x-k8s.io
          kind: InClusterIPPool
          name: elemental-secondary-pool
      config:
        dns-resolver:
          config:
            server:
              - 192.168.122.1
            search: []
        routes:
          config:
            - destination: 0.0.0.0/0
              next-hop-interface: eth0
              next-hop-address: 192.168.122.1
              metric: 150
              table-id: 254
            - destination: 172.16.0.0/24
              next-hop-interface: eth1
              next-hop-address: 172.16.0.1
              metric: 150
              table-id: 254
        interfaces:
          - name: eth0
            type: ethernet
            description: Main-NIC
            state: up
            ipv4:
              enabled: true
              dhcp: false
              address:
                - ip: "{inventory-ip}"
                  prefix-length: 24
            ipv6:
              enabled: false
          - name: eth1
            type: ethernet
            description: Secondary-NIC
            state: up
            ipv4:
              enabled: true
              dhcp: false
              address:
                - ip: "{secondary-ip}"
                  prefix-length: 24
            ipv6:
              enabled: false
The nmstate configurator uses nmstate syntax to generate NetworkManager's connection files.
Note that nmstatectl needs to be installed on the Elemental system to use this configurator. It is not included by default in Elemental images, but it can be installed when building a custom image.
example MachineRegistration using nmstate configurator
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: elemental-inventory-pool
  namespace: fleet-default
spec:
  addresses:
    - 192.168.122.150-192.168.122.200
  prefix: 24
  gateway: 192.168.122.1
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: elemental-secondary-pool
  namespace: fleet-default
spec:
  addresses:
    - 172.16.0.150-172.16.0.200
  prefix: 24
  gateway: 172.16.0.1
---
apiVersion: elemental.cattle.io/v1beta1
kind: MachineRegistration
metadata:
  name: fire-nodes
  namespace: fleet-default
spec:
  machineName: m-${System Information/UUID}
  config:
    network:
      configurator: nmstate
      ipAddresses:
        inventory-ip:
          apiGroup: ipam.cluster.x-k8s.io
          kind: InClusterIPPool
          name: elemental-inventory-pool
        secondary-ip:
          apiGroup: ipam.cluster.x-k8s.io
          kind: InClusterIPPool
          name: elemental-secondary-pool
      config:
        dns-resolver:
          config:
            server:
              - 192.168.122.1
            search: []
        routes:
          config:
            - destination: 0.0.0.0/0
              next-hop-interface: eth0
              next-hop-address: 192.168.122.1
              metric: 150
              table-id: 254
            - destination: 172.16.0.0/24
              next-hop-interface: eth1
              next-hop-address: 172.16.0.1
              metric: 150
              table-id: 254
        interfaces:
          - name: eth0
            type: ethernet
            description: Main-NIC
            state: up
            ipv4:
              enabled: true
              dhcp: false
              address:
                - ip: "{inventory-ip}"
                  prefix-length: 24
            ipv6:
              enabled: false
          - name: eth1
            type: ethernet
            description: Secondary-NIC
            state: up
            ipv4:
              enabled: true
              dhcp: false
              address:
                - ip: "{secondary-ip}"
                  prefix-length: 24
            ipv6:
              enabled: false
The nmconnections configurator is the simplest option available and allows the user to directly write nmconnection files.
Defining these files for complex network setups may be challenging, but it's always possible to use nmcli, nmstate, or nm-configurator and use the generated nmconnection files as a template.
This configurator only needs NetworkManager, without any extra dependency.
example MachineRegistration using nmconnections configurator
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: elemental-inventory-pool
  namespace: fleet-default
spec:
  addresses:
    - 192.168.122.150-192.168.122.200
  prefix: 24
  gateway: 192.168.122.1
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: elemental-secondary-pool
  namespace: fleet-default
spec:
  addresses:
    - 172.16.0.150-172.16.0.200
  prefix: 24
  gateway: 172.16.0.1
---
apiVersion: elemental.cattle.io/v1beta1
kind: MachineRegistration
metadata:
  name: fire-nodes
  namespace: fleet-default
spec:
  machineName: test-${System Information/UUID}
  config:
    network:
      configurator: "nmconnections"
      ipAddresses:
        inventory-ip:
          apiGroup: ipam.cluster.x-k8s.io
          kind: InClusterIPPool
          name: elemental-inventory-pool
        secondary-ip:
          apiGroup: ipam.cluster.x-k8s.io
          kind: InClusterIPPool
          name: elemental-secondary-pool
      config:
        eth0: |
          [connection]
          id=Wired connection 1
          type=ethernet
          interface-name=eth0
          [ipv4]
          address1={inventory-ip}/24,192.168.122.1
          dns=192.168.122.1;
          method=manual
          route1=0.0.0.0/0,192.168.122.1
          [ipv6]
          method=disabled
        eth1: |
          [connection]
          id=Wired connection 2
          type=ethernet
          interface-name=eth1
          [ipv4]
          address1={secondary-ip}/24,172.16.0.1
          method=manual
          route1=172.16.0.0/24,172.16.0.1,150
          [ipv6]
          method=disabled
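Since nmconnection files are plain INI keyfiles, a rendered result (with the placeholders already substituted) can be inspected with Python's standard configparser. This sketch is illustrative and not part of elemental-register:

```python
import configparser

# An eth0 nmconnection as it might look after placeholder substitution
# (the concrete address is an assumption for illustration).
rendered = """\
[connection]
id=Wired connection 1
type=ethernet
interface-name=eth0

[ipv4]
address1=192.168.122.150/24,192.168.122.1
dns=192.168.122.1;
method=manual
route1=0.0.0.0/0,192.168.122.1

[ipv6]
method=disabled
"""

cfg = configparser.ConfigParser()
cfg.read_string(rendered)
print(cfg["connection"]["interface-name"])  # eth0
print(cfg["ipv4"]["method"])                # manual
```

Inspecting a generated file this way can help catch malformed sections or keys before baking the template into a MachineRegistration.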