Running minikube fails on SUSE Leap 15.6 with rootless Podman #19140

Closed

Jabuk opened this issue Jun 25, 2024 · 10 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


Jabuk commented Jun 25, 2024

What Happened?

I tried to play with minikube, but immediately hit the following error.

Troubleshooting:

  • I ran minikube delete & minikube start again, but to no avail.
  • Tried running as root, but got: "❌ Exiting due to DRV_AS_ROOT: The "podman" driver should not be used with root privileges."
  • Tried both the CRI-O and containerd container runtimes, as suggested by https://minikube.sigs.k8s.io/docs/drivers/podman/, but I hit the same error.

alpacacorp@Meliodas:~> minikube start
😄 minikube v1.33.1 on Opensuse-Leap 15.6
▪ MINIKUBE_ROOTLESS=true
✨ Automatically selected the podman driver
📌 Using rootless Podman driver
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.44 ...
E0625 21:20:31.493666 18875 cache.go:189] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=2800MB) ...
🤦 StartHost failed, but will try again: creating host: create: creating: create kic node: create container: podman run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec --memory=2800mb -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.44: exit status 126
stdout:

stderr:
Error: container create failed (no logs from conmon): conmon bytes "": readObjectStart: expect { or n, but found , error found in #0 byte of ...||..., bigger context ...||...

🔄 Restarting existing podman container for "minikube" ...
😿 Failed to start podman container. Running "minikube delete" may fix it: driver start: start: podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "bfd476cb478a23296e5533a904d11bfe4cd15fc36211a32c883ecd4416d7b614": runc: runc create failed: unable to start container process: error during container init: error setting cgroup config for procHooks process: openat2 /sys/fs/cgroup/user.slice/user-1001.slice/[email protected]/user.slice/libpod-bfd476cb478a23296e5533a904d11bfe4cd15fc36211a32c883ecd4416d7b614.scope/memory.swap.max: no such file or directory: OCI runtime attempted to invoke a command that was not found

❌ Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: start: podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "bfd476cb478a23296e5533a904d11bfe4cd15fc36211a32c883ecd4416d7b614": runc: runc create failed: unable to start container process: error during container init: error setting cgroup config for procHooks process: openat2 /sys/fs/cgroup/user.slice/user-1001.slice/[email protected]/user.slice/libpod-bfd476cb478a23296e5533a904d11bfe4cd15fc36211a32c883ecd4416d7b614.scope/memory.swap.max: no such file or directory: OCI runtime attempted to invoke a command that was not found

Attach the log file

logs.txt

Operating System

Other

Driver

Podman

@robedpixel

I am also facing this issue with the exact same use case (running rootless Podman on openSUSE Leap 15.6).

Jabuk changed the title from "Running minikube fails on SUSE Leap 16 with rootless Podman" to "Running minikube fails on SUSE Leap 15.6 with rootless Podman" on Jun 28, 2024

Jabuk commented Jun 28, 2024

So apparently the fix is to delegate cgroup controllers to your user slice. This answer describes exactly what needs to be done (https://unix.stackexchange.com/a/625079); it worked for me, I just had to change the user.

It's kind of annoying that in order to run minikube with rootless podman you need to learn to manage and configure cgroups. I'm not sure whether minikube could configure delegation for the user at start, but at the bare minimum it would be great to improve the minikube error message and the documentation for rootless podman.
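
For anyone hitting the same wall, the delegation from that answer boils down to a systemd drop-in along these lines (a sketch only; the controller list and drop-in path may vary by distro):

# delegate controllers to all user sessions
sudo mkdir -p /etc/systemd/system/[email protected]
printf '[Service]\nDelegate=cpu cpuset io memory pids\n' | sudo tee /etc/systemd/system/[email protected]/delegate.conf
sudo systemctl daemon-reload

# log out and back in, then verify the controllers are delegated
cat /sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers
# should list at least: cpu memory pids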


robedpixel commented Jun 28, 2024

There should at least be documentation on the official minikube site for this, so that users don't have to dig so deeply to fix this issue. Ideally, specifying the podman driver would trigger a script to check and, if needed, set up the required cgroups.

@AkihiroSuda
Member

There should at least be documentation on the official minikube site for this, so that users don't have to dig so deeply to fix this issue.

This is briefly mentioned on https://minikube.sigs.k8s.io/docs/drivers/podman/ as "See the Rootless Docker section for the requirements and the restrictions."
Probably hard to find, though.

@AkihiroSuda
Member

For validation this code should be ported from kind to minikube:
https://github.com/kubernetes-sigs/kind/blob/v0.23.0/pkg/cluster/internal/create/create.go#L248-L255
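
Roughly, the idea is to verify up front that the required cgroup controllers are delegated to the user, instead of failing deep inside "podman run". A shell sketch of that kind of pre-flight check (an illustration only, assuming cgroup v2 and systemd; minikube would do this in Go) might look like:

# hypothetical pre-flight check for rootless podman
controllers=/sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers
for c in cpu memory pids; do
  grep -qw "$c" "$controllers" 2>/dev/null || \
    echo "warning: cgroup controller '$c' is not delegated to your user; see the Rootless Docker requirements" >&2
done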

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Oct 29, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Nov 28, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned on Dec 28, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
