I was trying to open the dashboard, which previously worked fine... Now, when I run minikube dashboard, I get:
λ minikube dashboard
X Exiting due to GUEST_STATUS: state: unknown state "minikube": docker container inspect minikube --format=: exit status 1
stdout:
stderr:
Error: No such container: minikube
*
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please attach the following file to the GitHub issue: │
│ * - C:\Users\JOSELU~1\AppData\Local\Temp\minikube_dashboard_dc37e18dac9641f7847258501d0e823fdfb0604c_0.log │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
With minikube status
λ minikube status
E0604 13:13:20.260421 27600 status.go:258] status error: host: state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: minikube
E0604 13:13:20.261425 27600 status.go:261] The "minikube" host does not exist!
minikube
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent
With the command minikube profile list
λ minikube profile list
|----------|-----------|---------|--------------|------|---------|---------|-------|
| Profile | VM Driver | Runtime | IP | Port | Version | Status | Nodes |
|----------|-----------|---------|--------------|------|---------|---------|-------|
| minikube | docker | docker | 192.168.49.2 | 8443 | v1.20.2 | Unknown | 1 |
|----------|-----------|---------|--------------|------|---------|---------|-------|
Now...
What could be happening?
What would be the best solution?
Thanks...
Remove unused data:
docker system prune
Clear minikube's local state:
minikube delete
Start the cluster:
minikube start --driver=<driver_name>
(In your case the driver name is docker, per the minikube profile list output you shared.)
Check the cluster status:
minikube status
Use the following documentation for more information:
https://docs.docker.com/engine/reference/commandline/system_prune/#examples
https://v1-18.docs.kubernetes.io/docs/tasks/tools/install-minikube/
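Put together, a minimal recovery sequence for this case might look like the following sketch (assuming the docker driver, as shown in your profile list; the -f flag skips the prune confirmation prompt):
docker system prune -f        # remove stopped containers, dangling images, unused networks
minikube delete               # clear minikube's local state for the broken profile
minikube start --driver=docker
minikube status               # host/kubelet/apiserver should now report Running/Configured
minikube dashboard            # the dashboard should open again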
Context
I have a minikube cluster that runs in a WSL2 context with the docker driver (from Docker Desktop).
~ » minikube profile list
|----------|-----------|---------|--------------|------|---------|---------|-------|--------|
| Profile | VM Driver | Runtime | IP | Port | Version | Status | Nodes | Active |
|----------|-----------|---------|--------------|------|---------|---------|-------|--------|
| minikube | docker | docker | 192.168.49.2 | 8443 | v1.24.3 | Running | 1 | * |
|----------|-----------|---------|--------------|------|---------|---------|-------|--------|
I've set up a LoadBalancer service, then I run the minikube tunnel command.
~ » kubectl get svc my-svc-loadbalancer
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-svc-loadbalancer LoadBalancer 10.105.11.182 127.0.0.1 8080:31684/TCP 18h
My problem
When trying to access 127.0.0.1:8080 from my browser, I'm getting a ERR_EMPTY_RESPONSE error code.
What I have noticed:
Without the minikube tunnel command, accessing 127.0.0.1:8080 results in ERR_CONNECTION_REFUSED, which makes sense.
The same service configuration, but as a NodePort accessed via the minikube service command, works and I can reach my deployment; however, I would like to access it as a LoadBalancer.
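For reference, a service shaped like the one above could be created with something along these lines (the deployment name and target port are assumptions, not taken from the question; one common cause of ERR_EMPTY_RESPONSE is a target port that does not match the port the container actually listens on):
kubectl expose deployment my-deployment --name=my-svc-loadbalancer --type=LoadBalancer --port=8080 --target-port=8080
minikube tunnel                        # keep running in a separate terminal
kubectl get svc my-svc-loadbalancer    # EXTERNAL-IP shows 127.0.0.1 with the docker driver on WSL2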
I am trying to get the status of a systemd service that runs on the host, from inside a Docker container.
I tried volume-mounting the necessary unit files and running the container in privileged mode, but I still get an error saying the service is inactive while it is active on the host.
docker run --privileged -d -v /etc/systemd/system/:/etc/systemd/system/ -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /usr/lib/systemd/system/:/usr/lib/systemd/system/ test-docker:latest
Is there a way to achieve this, or an equivalent way of doing it?
The way systemctl communicates with systemd to get service status is through a D-Bus socket, which is hosted in /run/systemd. So we can try this:
docker run --privileged -v /run/systemd:/run/systemd -v /sys/fs/cgroup:/sys/fs/cgroup test-docker:latest
But when we run systemctl status inside the container, we get:
[root@1ed9d836a142 /]# systemctl status
Failed to connect to bus: No data available
It turns out that for this to work, systemctl expects systemd to be PID 1, but inside the container PID 1 is something else. We can resolve this by running the container in the host PID namespace:
docker run --privileged -v /run/systemd:/run/systemd --pid=host test-docker:latest
And now we're able to successfully communicate with systemd:
[root@7dccc711a471 /]# systemctl status | head
● 7dccc711a471
State: degraded
Jobs: 0 queued
Failed: 2 units
Since: Sun 2022-06-26 03:11:44 UTC; 5 days ago
CGroup: /
├─kubepods
│ ├─burstable
│ │ ├─pod23889a3e-0bc3-4862-b07c-d5fc9ea1626c
│ │ │ ├─1b9508f0ab454e2d39cdc32ef3e35d25feb201923d78ba5f9bc2a2176ddd448a
...
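As a usage sketch, the same invocation can query an individual host unit from inside the container (the sshd unit here is just an example; this assumes the image contains systemctl and its entrypoint allows a command override):
docker run --privileged -v /run/systemd:/run/systemd --pid=host test-docker:latest systemctl is-active sshd
# prints "active" if the unit is running on the host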
It has been stated that:
The shim allows for daemonless containers. It basically sits as the parent of the container's process to facilitate a few things.
It keeps the STDIO and other fds open for the container in case containerd and/or docker both die. If the shim was not running, then the parent side of the pipes or the TTY master would be closed and the container would exit.
However from a process level, it appears that containerd spawns containerd-shim, so if containerd is down I would expect containerd-shim to go down too.
Can someone explain how containerd-shim can remain up if containerd/docker are down?
$ ps fxa | grep dockerd -A 3
PID TTY STAT TIME COMMAND
43449 pts/2 S+ 0:00 \_ grep dockerd -A 3
117536 ? Ssl 163:36 /usr/bin/containerd
93633 ? Sl 1:01 \_ containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/8f75a1b32bb09611430ea55958b11a482b6c83ba2a75f7ca727301eb49a2770f -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
$ pstree -lpTs
systemd(1)─┬─VGAuthService(45146)
├─accounts-daemon(1053)
├─agetty(104696)
├─agetty(104707)
├─agetty(104716)
├─atd(993)
├─containerd(117536)─┬─containerd-shim(8394)─┬─bash(8969)
│ │ └─sh(8420)─┬─sshd(8512)
│ │ └─tail(8514)
│ ├─containerd-shim(13170)───bash(13198)
│ ├─containerd-shim(13545)───portainer(13577)
│ ├─containerd-shim(14156)───mysqld(14184)
...
├─dockerd(42320)─┬─docker-proxy(42700)
│ ├─docker-proxy(42713)
│ ├─docker-proxy(42725)
│ ├─docker-proxy(42736)
│ └─docker-proxy(42749)
UPDATE: Based on the explanation provided in the accepted answer:
$ pstree -lpTs
systemd(1)─┬─VGAuthService(45146)
├─accounts-daemon(1053)
├─agetty(104696)
├─agetty(104707)
├─agetty(104716)
├─atd(993)
├─containerd(117536)─┬─containerd-shim(8394)─┬─bash(8969)
│ │ └─sh(8420)─┬─sshd(8512)
│ │ └─tail(8514)
│ ├─containerd-shim(13170)───bash(13198)
│ ├─containerd-shim(13545)───portainer(13577)
│ ├─containerd-shim(14156)───mysqld(14184)
$ sudo kill -9 117536
$ pstree -lpTs
systemd(1)─┬─VGAuthService(45146)
├─accounts-daemon(1053)
├─agetty(104696)
├─agetty(104707)
├─agetty(104716)
├─atd(993)
├─containerd-shim(8394)─┬─bash(8969)
│ └─sh(8420)─┬─sshd(8512)
│ └─tail(8514)
├─containerd-shim(13170)───bash(13198)
├─containerd-shim(13545)───portainer(13577)
├─containerd-shim(14156)───mysqld(14184)
However from a process level, it appears that containerd spawns containerd-shim, so if containerd is down I would expect containerd-shim to go down too.
Child processes don't automatically die when their parent dies; they are simply re-parented to PID 1. systemd takes over as parent, and containerd-shim continues running.
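You can see the same re-parenting with nothing but a shell (the PIDs below are illustrative; on desktop systems the orphan may be adopted by a subreaper such as systemd --user rather than PID 1):
$ bash -c 'sleep 300 & echo "child: $!"'
child: 4242
$ ps -o pid,ppid,comm -p 4242
    PID    PPID COMMAND
   4242       1 sleep
The outer bash exits immediately, its backgrounded sleep is orphaned, and PID 1 adopts it, exactly as systemd adopted the shims after containerd was killed.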
Started by user sabari k
Building in workspace /var/lib/jenkins/workspace/actualdairy
[actualdairy] $ /bin/sh -xe /tmp/jenkins4465259595371700187.sh
+ echo hello
+ cd
+ ./actualDairy/deploy.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-36-generic x86_64)
Documentation: https://help.ubuntu.com
Management: https://landscape.canonical.com
Support: https://ubuntu.com/advantage
System information as of Thu Oct 25 20:05:25 UTC 2018
System load: 0.09 Processes: 90
Usage of /: 8.7% of 24.06GB Users logged in: 1
Memory usage: 38% IP address for eth0:
Swap usage: 0% IP address for eth1:
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
43 packages can be updated.
6 updates are security updates.
Welcome to DigitalOcean's One-Click Node.js Droplet.
To keep this Droplet secure, the UFW firewall is enabled.
All ports are BLOCKED except 22 (SSH), 80 (HTTP), and 443 (HTTPS).
To get started, visit http://do.co/node1804
To delete this message of the day: rm -rf /etc/update-motd.d/99-one-click
mesg: ttyname failed: Inappropriate ioctl for device
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
npm WARN optional Skipping failed optional dependency /chokidar/fsevents:
npm WARN notsup Not compatible with your operating system or architecture: fsevents@1.2.4
Use --update-env to update environment variables
[PM2] Applying action restartProcessId on app [all](ids: 0)
[PM2] www ✓
┌──────────┬────┬─────────┬──────┬───────┬────────┬─────────┬────────┬─────┬──────────┬──────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├──────────┼────┼─────────┼──────┼───────┼────────┼─────────┼────────┼─────┼──────────┼──────┼──────────┤
│ www │ 0 │ 0.0.2 │ fork │ 23688 │ online │ 65 │ 0s │ 0% │ 5.4 MB │ root │ disabled │
└──────────┴────┴─────────┴──────┴───────┴────────┴─────────┴────────┴─────┴──────────┴──────┴──────────┘
Use pm2 show <id|name> to get more details about an app
Finished: SUCCESS
I don't know why the git pull is not working. My deploy script is:
#!/bin/bash
ssh root@ipaddress <<EOF
cd actualdairy
git pull
npm install
pm2 restart all
exit
EOF
I added the remote server's public key in Bitbucket, but it's not pulling from the repo, saying Permission denied (publickey).
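A quick way to check what the server is actually offering Bitbucket is to log in as the same user the script uses (root here) and test the SSH handshake directly; these are standard OpenSSH/Git commands, not taken from the question:
ssh -T git@bitbucket.org    # Bitbucket greets you by username if the key is accepted
ssh-add -l                  # keys an agent is offering, if one is running
ls ~/.ssh                   # confirm a key pair actually exists for this user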
I am trying to set up Minikube and have run into a challenge. My minikube is set up, and I started the Nginx pod. I can see that the pod is up, but the service doesn't appear as active. On the dashboard too, although the pod appears, the deployment doesn't show up. Here are my PowerShell command outputs.
I am learning this technology and may have missed something. My understanding is that when using Docker tools, no explicit configuration is necessary at the Docker level, other than setting it up. Am I wrong here? If so, where?
relevant PS output
Let's deploy the hello-nginx deployment:
C:\> kubectl.exe run hello-nginx --image=nginx --port=80
deployment "hello-nginx" created
View List of pods
c:\> kubectl.exe get pods
NAME READY STATUS RESTARTS AGE
hello-nginx-6d66656896-hqz9j 1/1 Running 0 6m
Expose as a Service
c:\> kubectl.exe expose deployment hello-nginx --type=NodePort
service "hello-nginx" exposed
List exposed services using minikube
c:\> minikube.exe service list
|-------------|----------------------|-----------------------------|
| NAMESPACE | NAME | URL |
|-------------|----------------------|-----------------------------|
| default | hello-nginx | http://192.168.99.100:31313 |
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| kube-system | kubernetes-dashboard | http://192.168.99.100:30000 |
|-------------|----------------------|-----------------------------|
Access Nginx from the browser at http://192.168.99.100:31313
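A quick sanity check from the same shell, using the URL from the service list above (this assumes curl.exe is available; PowerShell's Invoke-WebRequest works too):
c:\> curl.exe http://192.168.99.100:31313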
This method can be used; it worked for me on CentOS 7:
$ systemctl enable nginx
$ systemctl restart nginx
or
$ systemctl start nginx
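To confirm nginx actually came up (standard systemd and curl checks, not part of the original answer):
$ systemctl status nginx
$ curl -I http://localhost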