kind currently raises errors when using nerdctl as the provider:
```console
> ./kind --version
kind version 0.26.0-alpha
> nerdctl --version
nerdctl version 2.0.1
> ./kind get clusters
ERROR: failed to list clusters: command "nerdctl ps -a --filter label=io.x-k8s.kind.cluster --format '{{index .Labels "io.x-k8s.kind.cluster"}}'" failed with error: exit status 1
Command Output: time="2024-12-09T18:01:03+09:00" level=fatal msg="template: :1:2: executing \"\" at <index .Labels \"io.x-k8s.kind.cluster\">: error calling index: cannot index slice/array with type string"
```
nerdctl fixed the `.Labels` behavior in v1.7.0:
2af4cef9e7
However, the `index .Labels` syntax is still not supported, at least as of v2.0.1.
(The same syntax is also used by the podman provider, and it works there.)
This commit follows up on https://github.com/kubernetes-sigs/kind/pull/3429.
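For illustration, one way to avoid the unsupported template function is to print the whole `{{.Labels}}` value and extract the cluster label on the Go side. This is a minimal sketch of that approach, not necessarily the exact change in this commit, and it assumes `{{.Labels}}` prints comma-separated key=value pairs:
```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const clusterLabelKey = "io.x-k8s.kind.cluster"

// listClusters asks nerdctl for the raw Labels string of every kind
// node container and extracts the cluster name in Go, instead of using
// `index .Labels ...` inside the template.
func listClusters() ([]string, error) {
	out, err := exec.Command(
		"nerdctl", "ps", "-a",
		"--filter", "label="+clusterLabelKey,
		"--format", "{{.Labels}}",
	).Output()
	if err != nil {
		return nil, err
	}
	var clusters []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		for _, kv := range strings.Split(line, ",") {
			if name, ok := strings.CutPrefix(kv, clusterLabelKey+"="); ok {
				clusters = append(clusters, name)
			}
		}
	}
	return clusters, nil
}

func main() {
	clusters, err := listClusters()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(clusters)
}
```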
Signed-off-by: Kenichi Kamiya <kachick1@gmail.com>
It was used to work around a kubelet crash issue with rootless
providers.
The kubelet now seems to work fine with localStorageCapacityIsolation
enabled in a user namespace, so drop the special handling. After this
change, ephemeral storage can be used in a rootless cluster.
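To illustrate the user-visible effect: with localStorageCapacityIsolation left enabled, pods in a rootless cluster can declare ephemeral-storage requests and limits. A minimal sketch using the standard k8s.io/api types (the container spec itself is illustrative):
```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// With localStorageCapacityIsolation enabled, the kubelet enforces
	// ephemeral-storage limits, now also inside a rootless cluster.
	container := corev1.Container{
		Name:  "app",
		Image: "busybox",
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceEphemeralStorage: resource.MustParse("100Mi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceEphemeralStorage: resource.MustParse("500Mi"),
			},
		},
	}
	fmt.Printf("%+v\n", container.Resources.Limits)
}
```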
Closes: https://github.com/kubernetes-sigs/kind/issues/3359
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When called via the library path, the nerdctl provider is
instantiated without a binary name. We still need to do
a lookup to determine whether finch or nerdctl is the installed
binary in order to provide the local runtime command line.
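A sketch of that lookup (the helper name is hypothetical and the probe order illustrative), using exec.LookPath to see which binary is on PATH:
```go
package main

import (
	"fmt"
	"os/exec"
)

// detectBinaryName probes PATH for the known CLI names when the caller
// did not supply one.
func detectBinaryName() string {
	for _, name := range []string{"nerdctl", "finch"} {
		if _, err := exec.LookPath(name); err == nil {
			return name
		}
	}
	// Fall back to the default name; later commands will surface a
	// useful "executable not found" error.
	return "nerdctl"
}

func main() {
	fmt.Println("using:", detectBinaryName())
}
```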
Signed-off-by: Phil Estes <estesp@gmail.com>
Adds an implementation of a provider based on nerdctl. Several TODOs
remain in the code, but the core functionality of creating/deleting
clusters works, and a simple deployed application runs properly.
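For orientation, the provider follows the same shell-out pattern as the docker and podman providers. A heavily abridged sketch (the method shown is a trimmed-down stand-in, not the full provider surface):
```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// provider shells out to the configured CLI, just as the docker and
// podman providers do.
type provider struct {
	binaryName string // "nerdctl" or "finch"
}

// ListNodes returns the container names of a cluster's nodes, selected
// by the kind cluster label.
func (p *provider) ListNodes(cluster string) ([]string, error) {
	out, err := exec.Command(p.binaryName, "ps", "-a",
		"--filter", "label=io.x-k8s.kind.cluster="+cluster,
		"--format", "{{.Names}}",
	).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	p := &provider{binaryName: "nerdctl"}
	nodes, err := p.ListNodes("kind")
	fmt.Println(nodes, err)
}
```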
Signed-off-by: Phil Estes <estesp@gmail.com>
When the default value, 0, is used for the HostPort in ExtraPortMappings,
Kind will determine a random HostPort to use for that mapping. The
validation previously allowed only a single instance of such a mapping,
but now allows multiple.
Also, delay closing each randomly determined port until all random
ports have been determined, to ensure the operating system cannot
return the same port multiple times.
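The delayed close is the subtle part. A sketch of the technique (not the exact kind helper): keep every listener open until all ports have been chosen, so the OS cannot hand out the same ephemeral port twice:
```go
package main

import (
	"fmt"
	"net"
)

// randomPorts reserves n distinct ephemeral ports by holding every
// listener open until all ports have been picked.
func randomPorts(n int) ([]int, error) {
	listeners := make([]net.Listener, 0, n)
	// Close everything only after all ports are chosen.
	defer func() {
		for _, l := range listeners {
			l.Close()
		}
	}()
	ports := make([]int, 0, n)
	for i := 0; i < n; i++ {
		l, err := net.Listen("tcp", "127.0.0.1:0") // port 0 = OS picks
		if err != nil {
			return nil, err
		}
		listeners = append(listeners, l)
		ports = append(ports, l.Addr().(*net.TCPAddr).Port)
	}
	return ports, nil
}

func main() {
	ports, err := randomPorts(3)
	fmt.Println(ports, err)
}
```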
If the limit is not configured, HAProxy derives it from the file
descriptor limit. The higher the limit, the more memory HAProxy
allocates. That limit can be so high on modern Linux distros that
HAProxy allocates all available memory.
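Setting an explicit maxconn caps that allocation. A trimmed-down sketch of a generated config (the value and the sections shown are illustrative, not exactly what kind ships):
```go
package main

import "fmt"

// haproxyConfig mirrors the shape of kind's generated loadbalancer
// config, heavily abridged. The explicit maxconn is the point: without
// it, HAProxy derives the connection limit from the file descriptor
// limit and sizes its memory allocation accordingly.
const haproxyConfig = `global
  log /dev/log local0
  maxconn %d
defaults
  mode tcp
  timeout connect 5s
`

func main() {
	fmt.Printf(haproxyConfig, 100000)
}
```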
* Extending no_proxy to automatically include the control plane (issue #2884); see the sketch after this list
* Refactoring the code to follow the existing conventions imposed by NodeNamer
* Adding the equivalent treatment to the docker provisioner, and adding the external load balancer and etcd roles (as they may be created implicitly) to complete the picture
* Post-review: a bit of refactoring to bring the podman provisioner closer to what's done for docker
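A sketch of the idea behind the first item (the helper and node names are illustrative): compute the names of every node the cluster may contain, including implicitly created roles, and append them to the existing no_proxy value:
```go
package main

import (
	"fmt"
	"strings"
)

// appendNodesToNoProxy adds every node name to an existing no_proxy
// value so in-cluster traffic is never sent through the proxy.
func appendNodesToNoProxy(noProxy string, nodeNames []string) string {
	parts := []string{}
	if noProxy != "" {
		parts = append(parts, noProxy)
	}
	parts = append(parts, nodeNames...)
	return strings.Join(parts, ",")
}

func main() {
	nodes := []string{
		"kind-control-plane",
		"kind-worker",
		"kind-external-load-balancer", // may be created implicitly
	}
	fmt.Println(appendNodesToNoProxy("localhost,127.0.0.1", nodes))
}
```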