Why doesn't minikube have access to the k8s registry? - docker

Running the minikube start command, I am getting this message:
This container is having trouble accessing https://registry.k8s.io
After this, the "Booting up control plane" step takes a long time and then fails with the following error:
Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
I have the right minikube, kubectl, and docker versions.
Running $ echo $(minikube docker-env) outputs the following error:
Exiting due to GUEST_STATUS: state: unknown state "minikube": docker container inspect minikube --format=: exit status 1
stderr:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/minikube/json": dial unix /var/run/docker.sock: connect: permission denied
What I don't understand is that if I run docker run hello-world, it works (I have superuser permission).

Try running the commands below; a combined sketch follows the list.
Remove unused data:
docker system prune
Clear minikube's local state:
minikube delete
Start the cluster:
minikube start --driver=<driver_name>
(In your case the driver name is docker, as per the minikube profile list output you shared.)
Check the cluster status:
minikube status
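Put together, the reset sequence looks roughly like this (a sketch assuming the docker driver; adjust the driver name to your setup):
docker system prune             # remove unused data
minikube delete                 # clear minikube's local state
minikube start --driver=docker  # recreate the cluster
minikube status                 # verify the cluster is healthy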
Also refer to this GitHub link.

Related

Docker containers cannot start in Minikube docker environment for a rails app

I am trying to build and run a Rails app (repo) in the Minikube docker environment, but it fails to start. However, I can successfully start it in my local environment.
The steps I did to cause the issue:
clone the repo: git clone https://github.com/ryanwi/rails7-on-docker.git
go to the app: cd rails7-on-docker
enable minikube docker env: eval $(minikube docker-env)
if you don't have minikube:
install via brew: brew install minikube
start minikube: minikube start
build docker: docker compose build
start docker: docker compose up
The application should start and Rails should be running on port 3000. However, it throws this error:
Error response from daemon: OCI runtime create failed:
container_linux.go:380: starting container process caused: exec:
"./entrypoint.sh": stat ./entrypoint.sh: no such file or directory:
unknown
The reason for running it in the minikube docker env:
My original plan was to build the image and deploy it to minikube. Minikube can only access images in its own docker server. I got the same error when I tried to create a deployment with the image. That's why I tried to get the container running under that docker daemon first and solve the problem there.
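For context, this is roughly how images end up inside minikube's own docker daemon (a sketch; the exact images built depend on the compose file):
eval $(minikube docker-env)      # point the local docker CLI at minikube's daemon
docker compose build             # the built images now live inside minikube
docker images                    # verify minikube's daemon lists them
eval $(minikube docker-env --unset)  # switch back to the local daemon when done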
The screenshots (minikube vs local) show the results of the same commands run in the two environments.
Does anyone know how to fix it?

Does Botium Box require the docker container to be killed on every system start?

I've read about the docker kill command. Now how exactly do I stop
all containers or kill a container?
Should I navigate to the Docker folder in Program Files in cmd, or to the botium folder which I created for Botium Box? Currently I have the Docker Desktop version.
I'm getting the error below. Steps I took:
restarted the Docker Desktop app
in cmd, navigated to the botium folder which I created for Botium Box
entered: docker-compose -f docker-compose-all.yml up
The error was thrown:
C:\Users\Ram\Documents\Botium>docker-compose -f docker-compose-all.yml up
Starting botium_redis_1  ...
botium_mysql_1 is up-to-date
Starting botium_prisma_1 ... error
Starting botium_redis_1  ... error
ERROR: for botium_prisma_1  Cannot start service prisma: driver failed programming external connectivity on endpoint botium_prisma_1 (1ad423ca349cd5d987a082407c64c8300e2822a0e4c3bf6a63c4369705f1413a): Bind for 0.0.0.0:4466 failed: port is already allocated
ERROR: for botium_redis_1  Cannot start service redis: driver failed programming external connectivity on endpoint botium_redis_1 (023c3f7d0101a509a677a2f5434b00f25a8e4d3e238166eae6e0c1678b81035b): Bind for 0.0.0.0:6379 failed: port is already allocated
ERROR: for prisma  Cannot start service prisma: driver failed programming external connectivity on endpoint botium_prisma_1 (1ad423ca349cd5d987a082407c64c8300e2822a0e4c3bf6a63c4369705f1413a): Bind for 0.0.0.0:4466 failed: port is already allocated
ERROR: for redis  Cannot start service redis: driver failed programming external connectivity on endpoint botium_redis_1 (023c3f7d0101a509a677a2f5434b00f25a8e4d3e238166eae6e0c1678b81035b): Bind for 0.0.0.0:6379 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
However, when I retried http://127.0.0.1:4000/quickstart a couple of times, Botium Box opened. But initially it was not opening.
You don't have to navigate.
If you run using docker-compose, you can go to the directory where your docker-compose.yml file is located and run docker-compose down.
Without docker-compose, you have to run docker ps to list all currently running containers and find the name of the container to kill. You can use the CONTAINER ID or the NAMES. Then run docker kill <container name>.
Example:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
myId myimage:2.5 "/opt/command/ba…" 24 hours ago Up About an hour 0.0.0.0:9000->9000/tcp very_cool_name_1
$ docker kill very_cool_name_1
very_cool_name_1
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$
Just type the commands below when you open PowerShell or Bash.
To stop all running containers:
docker stop $(docker ps -q)
To remove all containers:
docker rm $(docker ps -qa)
Please note that rm removes only your container, not the Docker image. If you want to delete an image, you can use: docker rmi -f image_id
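For example, a minimal cleanup sequence might look like this (the image id below is a placeholder):
docker stop $(docker ps -q)     # stop everything that is running
docker rm $(docker ps -qa)      # remove the stopped containers
docker images                   # find the id of the image to delete
docker rmi -f myImageId         # remove the image itself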

permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock

We are getting this error while trying to run docker commands, e.g.:
$ docker image ls
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.38/images/json: dial unix /var/run/docker.sock: connect: permission denied
So we followed the steps here but the problem remains. Then we saw this question where it is advised
You have to restart the docker daemon, otherwise it won't let members
of the docker group control the docker daemon
but are having trouble restarting the service
$ sudo service docker restart
Failed to restart docker.service: Unit docker.service not found.
we are using
$ docker -v
Docker version 18.06.1-ce, build e68fc7a
on
$ uname -a
Linux jnj 4.15.0-1036-azure #38-Ubuntu SMP Fri Dec 7 02:47:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
The docker group has been successfully created and we are a member of it:
$ grep docker /etc/group
docker:x:1001:siddjain
Also, we did log out and log back in. We are able to run docker commands with sudo. Also:
$ sudo snap services
Service Startup Current Notes
docker.dockerd enabled active -
Can anyone help us?
The solution was to restart the docker daemon using snap (since that is how we installed docker):
siddjain@jnj:~$ sudo snap stop docker
Stopped.
siddjain@jnj:~$ snap start docker
error: access denied (try with sudo)
siddjain@jnj:~$ sudo snap start docker
Started.
After that we are able to run docker commands without sudo:
siddjain@jnj:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
siddjain@jnj:~$
Our joy was short-lived, as we immediately ran into another error when we tried to run another container:
mkdir /var/lib/docker: read-only file system
To fix it we had to uninstall and reinstall docker, this time from the official documentation as described here.
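A minimal sketch of that reinstall, assuming the convenience-script route from the official docs (general approach, not necessarily the exact commands we ran):
sudo snap remove docker                              # remove the snap-packaged docker
curl -fsSL https://get.docker.com -o get-docker.sh   # official install script
sudo sh get-docker.sh                                # install from Docker's repositories
sudo usermod -aG docker $USER                        # let the current user talk to the daemon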

docker: not found when using the docker command in a Docker Jenkins container

Jenkins is running in a Docker container.
Docker is running on macOS, so I commented out these lines in jenkins.yml:
# mount docker sock and binary for docker in docker (only works on linux)
#- /var/run/docker.sock:/var/run/docker.sock
#- /usr/bin/docker:/usr/bin/docker
In the Jenkinsfile, which is generated by JHipster and includes two tasks in the pipeline:
Perform the build in a Docker container
Analyze code with Sonar
node {
    stage('checkout') {
        checkout scm
    }
    docker.image('openjdk:8').inside('-u root -e MAVEN_OPTS="-Duser.home=./"') {
        stage('check java') {
            sh "java -version"
        }
    }
}
The checkout from Bitbucket was successful, but the pipeline stopped with an error at docker pull openjdk:8. The console output is:
[AAAAApp] Running shell script
+ docker inspect -f . openjdk:8
/var/jenkins_home/workspace/GeneticsDB#tmp/durable-21459aca/script.sh:
2: /var/jenkins_home/workspace/GeneticsDB#tmp/durable-21459aca/script.sh: docker: not found
[Pipeline] sh
[AAAAApp] Running shell script
+ docker pull openjdk:8
/var/jenkins_home/workspace/GeneticsDB#tmp/durable-d5590370/script.sh:
2: /var/jenkins_home/workspace/GeneticsDB#tmp/durable-d5590370/script.sh: docker: not found
but this command runs successfully from the command line, as below:
docker pull openjdk:8
8: Pulling from library/openjdk
Digest: sha256:18c9622a8dc67b608a2dd0178b4c5aebc0e2da9a656072c6e799cfc46cb96422
Status: Image is up to date for openjdk:8
I know there is a similar question: Docker not found when building docker image using Docker Jenkins container pipeline
But my docker is running on macOS.
The problem is actually how to run Docker inside a container on Docker for Mac. It is fixed by:
brew install docker
and update jenkins.yml to
# mount docker sock and binary for docker in docker
- /var/run/docker.sock:/var/run/docker.sock
- /usr/local/bin/docker:/usr/local/bin/docker
Then I got an error:
Warning: failed to get default registry endpoint from daemon (Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.35/info: dial unix /var/run/docker.sock: connect: permission denied). Using system default: https://index.docker.io/v1/
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.35/images/create?fromImage=openjdk&tag=8: dial unix /var/run/docker.sock: connect: permission denied
Solution: update the access permission of /var/run/docker.sock inside the docker container (a combined sketch follows these steps):
find the container running Jenkins: docker container ps -a
log in to the container: docker exec -it -u root ec379335d599 /bin/bash
update the permission: chmod 777 /var/run/docker.sock
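Put together (the container id ec379335d599 is from the listing above; yours will differ):
docker container ps -a                     # find the Jenkins container id
docker exec -it -u root ec379335d599 /bin/bash
chmod 777 /var/run/docker.sock             # note: this opens the socket to every user in the container
exit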
If your jenkins is running inside of a docker container, then I'd recommend:
installing docker inside that container
mounting the docker socket so it can run docker commands from inside the container
dynamically adjusting group permissions of the jenkins user in an entrypoint.sh of the jenkins container, so you don't need to change permissions of the docker socket or try to match the host group to the container group
The last part I do with an entrypoint that runs as root, runs a groupmod to adjust the gid of the user's group, and then drops permissions to that user with an exec + gosu which replaces pid 1 with the jenkins server running as the jenkins user. All the code needed to do this is up in the following git repo: https://github.com/sudo-bmitch/jenkins-docker
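A minimal sketch of that entrypoint idea (not the exact code from the repo above; the docker group, jenkins user, and gosu binary are assumed to exist in the image):
#!/bin/sh
# entrypoint.sh - runs as root, aligns gids with the mounted socket, then drops to the jenkins user
set -e
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
  HOST_GID=$(stat -c '%g' "$SOCK")   # gid that owns the mounted docker socket
  groupmod -g "$HOST_GID" docker     # make the container's docker group match it
  usermod -aG docker jenkins         # ensure jenkins belongs to that group
fi
exec gosu jenkins "$@"               # replace pid 1 with the jenkins process running as jenkins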

Failed to enter a docker container created with a kubernetes deployment

With minikube I created a simple deployment (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment) in kubernetes. I'm sure the container must be running, because the kubernetes pod started successfully and I can see the container running in Portainer. But I just can't enter the container!
(I could always do it with a simple pod; maybe something is wrong with the deployment.)
$ docker exec -it 01a7c90b4267 /bin/bash
rpc error: code = 2 desc = oci runtime error: exec failed: dial unix /tmp/pty870274210/pty.sock: connect: connection refused
Also I found "Error syncing pod" in the container logs, but the container status is running
bash isn't available in your container. Have you tried with sh?
$ docker exec -ti 01a7c90b4267 sh
Also, if you're attaching to a running container within Kubernetes, you probably want kubectl exec instead of docker exec:
$ kubectl exec -ti <pod_id> sh
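For example, using the deployment from the linked tutorial (the pod name below is illustrative; list your pods first to get the real one):
$ kubectl get pods                                # find the generated pod name
$ kubectl exec -it nginx-deployment-66b6c48dd5-8xkq2 -- sh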
It seems that the problem was caused by mounting into minikube's tmp folder with minikube mount $TMP:/tmp. Without the mount I can exec /bin/bash in the containers with no problems.
