Failed to enter a docker container created with a kubernetes deployment - docker

With minikube I created a simple deployment (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment) in Kubernetes. I'm sure the container must be running, because the Kubernetes pod started successfully and I can see the container running in Portainer. But I just can't enter the container!
(I could always do this with a simple pod; maybe something is wrong with the deployment.)
$ docker exec -it 01a7c90b4267 /bin/bash
rpc error: code = 2 desc = oci runtime error: exec failed: dial unix /tmp/pty870274210/pty.sock: connect: connection refused
Also, I found "Error syncing pod" in the container logs, but the container status is running.

bash isn't available in your container. Have you tried with sh?
$ docker exec -ti 01a7c90b4267 sh
Also, if you're attaching to a running container within Kubernetes, you probably want to kubectl exec instead of docker exec:
$ kubectl exec -ti <pod_id> sh
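If you don't know the pod name, you can look it up first; a small sketch using the same sh shell as above (the pod name is a placeholder):
$ kubectl get pods                  # find the pod created by the deployment
$ kubectl exec -ti <pod_name> sh    # open a shell in that pod's container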

It seems that the problem was caused by mounting over minikube's /tmp folder with minikube mount $TMP:/tmp. Without the mount I can exec /bin/bash in the containers with no problems.
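If the host directory still needs to be available inside minikube, a minimal sketch (the target path /mnt/host-tmp is only an assumed example) is to mount it somewhere other than /tmp, so the runtime's pty sockets under /tmp are not shadowed:
$ minikube mount "$TMP":/mnt/host-tmp   # target path is an assumed example, not /tmp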

Related

Why doesn't minikube have access to the k8s registry?

Running the minikube start command, I am getting this message:
This container is having trouble accessing https://registry.k8s.io
and after this the "Booting up control plane" step takes a long time and then gives the following error:
Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
I have the right minikube, kubectl, docker ... versions.
The command echo $(minikube docker-env) outputs the following error:
Exiting due to GUEST_STATUS: state: unknown state "minikube": docker container inspect minikube --format=: exit status 1
stderr:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/minikube/json": dial unix /var/run/docker.sock: connect: permission denied
But what I don't understand is: if I run docker run hello-world, it works (I have superuser permission).
Try running the below commands:
Remove unused data:
docker system prune
Clear minikube's local state:
minikube delete
Start the cluster:
minikube start --driver=<driver_name>
(In your case the driver name is docker, as per the minikube profile list info you shared.)
Check the cluster status:
minikube status
Also refer to this GitHub link.
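Putting the list above together, the full sequence would look roughly like this (the docker driver is taken from your profile output):
$ docker system prune              # remove unused data
$ minikube delete                  # clear minikube's local state
$ minikube start --driver=docker   # start the cluster with the docker driver
$ minikube status                  # check the cluster status
If the permission denied error on /var/run/docker.sock keeps coming back, a common fix on Linux (an assumption about your setup) is to add your user to the docker group and start a new login session:
$ sudo usermod -aG docker $USER    # assumes your user isn't already in the docker group
$ newgrp docker                    # or log out and back in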

Docker containers cannot start in the Minikube docker environment for a Rails app

I am trying to build and run a Rails app (repo) in the Minikube docker environment, but it cannot be started. However, I can start it successfully in my local environment.
The steps I did to cause the issue:
clone the repo: git clone https://github.com/ryanwi/rails7-on-docker.git
go to the app: cd rails7-on-docker
enable minikube docker env: eval $(minikube docker-env)
if you don't have minikube:
install via brew: brew install minikube
start minikube: minikube start
build docker: docker compose build
start docker: docker compose up
The application should start and Rails should be running on port 3000. However, it throws this error:
Error response from daemon: OCI runtime create failed:
container_linux.go:380: starting container process caused: exec:
"./entrypoint.sh": stat ./entrypoint.sh: no such file or directory:
unknown
The reason to run it in the minikube docker env:
My original plan was to build the image and deploy it to minikube. Minikube can only access images in its own docker daemon. I got the same error when I tried to make a deployment with the image. That's why I tried to start the container with docker compose first and solve the problem there.
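For reference, a minimal sketch of that original plan, assuming the image is tagged rails7-on-docker:dev (the tag, pod name, and port are placeholders):
$ eval $(minikube docker-env)                 # build against minikube's docker daemon
$ docker build -t rails7-on-docker:dev .      # image tag is an assumed example
$ kubectl run rails7 --image=rails7-on-docker:dev --image-pull-policy=Never --port=3000   # Never = use the locally built image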
The screenshots show the results of the commands run in the two different environments (minikube vs local).
Does anyone know how to fix it?

Unable to SSH into a Docker instance using `docker attach <name>`

I am following this blog on how to connect to a docker instance: https://phoenixnap.com/kb/how-to-ssh-into-docker-container. It mentions using docker attach <name>.
Trying this on my EC2 instance gives us:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
849844c1e3a5 6501862...us-east-618356524 "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:32788->8401/tcp ecs-prod-clia-lab-5-Applicationprodclia-lab-8c88d2e0bc83cfb1230
Now let's try `docker attach <container-name>`:
$ docker attach ecs-prod-clia-lab-5-Applicationprodclia-lab-8c88d2e0bc83cfb1230
Error: No such container: ecs-prod-clia-lab-5-Applicationprodclia-lab-8c88d2e0bc83cfb1230
So that actually does not work? What is the correct way to do this?
To get a shell in a running container, do this:
$ docker exec -it <container-id> /bin/sh
The attach sub-command gives you access to a running container's stdout. That's not what you want here.
However, if your container is meant to provide SSH as a service, you'll need to run it in such a way that it's exposed on the host, on some available port (like 2222).
Then you'd simply "SSH in" like this:
$ ssh 127.0.0.1 -p 2222
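For example, assuming a hypothetical image that runs sshd on port 22, publishing it on host port 2222 would look like this (the image name and user are placeholders):
$ docker run -d -p 2222:22 --name ssh-test my-ssh-enabled-image   # image name is a placeholder
$ ssh user@127.0.0.1 -p 2222                                      # user is a placeholder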

Does Botium Box require the docker container to be killed on every system start?

I've read about the docker kill command. Now, how exactly do I stop all containers or kill a container?
Should I navigate to the Docker folder in Program Files in cmd, or to the Botium folder I created for Botium Box? I currently have the Docker Desktop version.
I'm getting the below error:
I restarted the Docker Desktop app.
In cmd, I navigated to the Botium folder I created for Botium Box.
I entered docker-compose -f docker-compose-all.yml up.
The following error was thrown:
C:\Users\Ram\Documents\Botium>docker-compose -f docker-compose-all.yml up
Starting botium_redis_1  ... error
botium_mysql_1 is up-to-date
Starting botium_prisma_1 ... error
ERROR: for botium_prisma_1  Cannot start service prisma: driver failed programming external connectivity on endpoint botium_prisma_1 (1ad423ca349cd5d987a082407c64c8300e2822a0e4c3bf6a63c4369705f1413a): Bind for 0.0.0.0:4466 failed: port is already allocated
ERROR: for botium_redis_1  Cannot start service redis: driver failed programming external connectivity on endpoint botium_redis_1 (023c3f7d0101a509a677a2f5434b00f25a8e4d3e238166eae6e0c1678b81035b): Bind for 0.0.0.0:6379 failed: port is already allocated
ERROR: for prisma  Cannot start service prisma: driver failed programming external connectivity on endpoint botium_prisma_1 (1ad423ca349cd5d987a082407c64c8300e2822a0e4c3bf6a63c4369705f1413a): Bind for 0.0.0.0:4466 failed: port is already allocated
ERROR: for redis  Cannot start service redis: driver failed programming external connectivity on endpoint botium_redis_1 (023c3f7d0101a509a677a2f5434b00f25a8e4d3e238166eae6e0c1678b81035b): Bind for 0.0.0.0:6379 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
However, when I retried http://127.0.0.1:4000/quickstart a couple of times, the Botium Box opened. But initially it was not opening.
You don't have to navigate.
If you run using docker-compose, you can go to the directory where your docker-compose.yml file is located and run docker-compose down.
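With the compose file from the question, that would be (assuming it is run from the same Botium folder):
$ docker-compose -f docker-compose-all.yml down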
Without docker-compose, you have to run docker ps to list all currently running containers and find the name of the container to kill. You can use the CONTAINER ID or the NAMES. Then run docker kill <container name>.
Example:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
myId myimage:2.5 "/opt/command/ba…" 24 hours ago Up About an hour 0.0.0.0:9000->9000/tcp very_cool_name_1
$ docker kill very_cool_name_1
very_cool_name_1
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$
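Since the errors above say that ports 6379 and 4466 are already allocated, it can also help (an assumption about the cause) to check which container currently publishes a given port before killing it:
$ docker ps --filter "publish=6379"   # container holding Redis's port
$ docker ps --filter "publish=4466"   # container holding Prisma's port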
Just type the below commands when you open PowerShell or Bash.
To stop all running containers:
docker stop $(docker ps -q)
To remove all containers:
docker rm $(docker ps -qa)
Please note that rm will just remove your container, not the Docker image. If you want to delete an image, you can use docker rmi -f <image_id>.
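For example (a sketch; the image name is reused from the example above purely for illustration):
$ docker images              # list images and their IDs
$ docker rmi -f myimage:2.5  # remove an image by name:tag or by ID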

Local files transferred to a Kubernetes Persistent Volume?

I'm extremely new to Kubernetes (besides, it's not my field), but I'm required to be able to carry out this exercise.
The thing is that I need a HandBrake converter in a containerized pod with a Persistent Volume mounted, on a GKE cluster with:
3 nodes.
node version 1.8.1-gke.1
node image Ubuntu
Everything is fine up to this point, but now I'm not able to upload a folder to that PV from my local machine.
What I have tried is an ssh connection to the node and then sudo docker exec -ti containerId bash, but I just got rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"bash\\\": executable file not found in $PATH\"\n".
Thanks in advance.
To transfer local files to a kubernetes pod, use kubectl cp:
kubectl cp /root/my-local-file my-pod:/root/remote-filename
or
kubectl cp /root/my-local-file my-namepace/my-pod:/root/remote-filename -c my-container
The namespace can be omitted (and you'll get the default), and the container can be omitted (you'll get the first in the pod).
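kubectl cp can also copy a directory recursively, which covers the "upload a folder" case here (the paths below are placeholders):
kubectl cp /root/my-local-folder my-pod:/root/my-local-folder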
For SSH'ing you need to go through kubectl as well:
kubectl exec -it <podname> -- /bin/sh
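Once inside, or directly with a one-off command, you can confirm the files arrived (pod name and path are placeholders):
kubectl exec -it <podname> -- ls /root/my-local-folder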
