Can't copy files from minikube - docker

I have set up a Kubernetes cluster locally using minikube and want to copy a file from the minikube node to my local machine.
I am able to ssh into minikube successfully and run commands, but the scp command is timing out.
Command used:
scp -i $(minikube ssh-key) docker@$(minikube ip):/home/docker/.docker/config.json ~/.docker/newconfig.json
and I am getting the following error message:
ssh: connect to host 192.168.49.2 port 22: Operation timed out
Has anyone encountered this issue before or knows how to fix it?

Use kubectl cp to copy files and directories from a Kubernetes container (pod) to the local host and vice versa.
Copy /tmp/foo from a remote pod to /tmp/bar locally
kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
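For the opposite direction, kubectl cp can also copy a local file into a pod. And since the original question is about a file on the minikube node itself rather than inside a pod, a hedged alternative is to stream it over the working minikube ssh session instead of scp (paths taken from the question):
kubectl cp /tmp/bar <some-namespace>/<some-pod>:/tmp/foo
minikube ssh "cat /home/docker/.docker/config.json" > ~/.docker/newconfig.json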

Connection refused error on worker node in kubernetes

I'm setting up a 2-node cluster in Kubernetes: 1 master node and 1 worker node.
After setting up the master node I did the installation steps for docker, kubeadm, kubelet, and kubectl on the worker node and then ran the join command. On the master node I see 2 nodes in Ready state (master and worker), but when I try to run any kubectl command on the worker node, I get the connection refused error below. I do not see any admin.conf and nothing set in .kube/config. Are these files also needed on the worker node? If so, how do I get them? How do I resolve the error below? Appreciate your help.
root@kubework:/etc/kubernetes# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kubework:/etc/kubernetes# kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kubework:/etc/kubernetes#
kubectl is configured and working on the master by default. It needs a running kube-apiserver and a ~/.kube/config file.
Worker nodes don't run their own kube-apiserver; what we want is for kubectl on the worker to use the master's configuration to talk to the master's API server.
To achieve that, copy the ~/.kube/config file from the master to ~/.kube/config on the worker. Here ~ refers to the home directory of the user executing kubectl on the worker and on the master (which may of course be different users). See the sketch below.
Once that is done, you can use kubectl from the worker node exactly as you do from the master node.
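As a sketch (user and hostname are placeholders), run on the worker node:
mkdir -p ~/.kube
scp <user>@<master-host>:~/.kube/config ~/.kube/config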
Yes, these files are needed. Copy them into the respective ~/.kube/config location on the worker nodes.
This is expected behavior even when using kubectl on the master node as a non-root account; by default this config file is stored for the root account in /etc/kubernetes/admin.conf:
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively on the master, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
Optionally, to control your cluster from machines other than the control-plane node:
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
Note:
The KUBECONFIG environment variable holds a list of kubeconfig files. For Linux and Mac, the list is colon-delimited. For Windows, the list is semicolon-delimited. The KUBECONFIG environment variable is not required. If the KUBECONFIG environment variable doesn't exist, kubectl uses the default kubeconfig file, $HOME/.kube/config.
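For example, on Linux or macOS a colon-delimited list could look like this (the second file name is only an illustration):
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/admin.conf
kubectl config view
The second command prints the merged configuration that kubectl actually uses.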
I tried many of the solutions that just copy /etc/kubernetes/admin.conf to ~/.kube/config, but none of them worked for me.
My OS is Ubuntu, and the issue was resolved by removing, purging, and re-installing the following:
sudo dpkg -r kubeadm kubectl
sudo dpkg -P kubeadm kubectl
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
(downloading kubectl again; this actually worked)
kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
mymaster   Ready    control-plane,master   31h   v1.20.4
myworker   Ready    <none>                 31h   v1.20.4
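Note that the curl step above only downloads the kubectl binary; it still has to be placed on the PATH, e.g. with the install step from the official kubectl docs (presumably this was done as well):
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl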
Davidxxx's solution worked for me.
In my case, I found that the following file exists on the worker nodes:
/etc/kubernetes/kubelet.conf
You can copy this to ~/.kube/config and it works as well. I tested it myself.
If I try the following commands,
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
I get this error:
[vikram#node2 ~]$ kubectl version
Error in configuration:
* unable to read client-cert /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: permission denied
* unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: permission denied
Then this works, but it is really a workaround and not a fix:
sudo kubectl --kubeconfig /etc/kubernetes/kubelet.conf version
I was able to fix it by copying kubelet-client-current.pem from /var/lib/kubelet/pki/ to a location inside $HOME and modifying $HOME/.kube/config to point to the new path of the certs. Is this normal?
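A sketch of that workaround (the target path is only an example, and the copied certificate will not be rotated the way the kubelet-managed one is):
sudo cp /var/lib/kubelet/pki/kubelet-client-current.pem $HOME/.kube/kubelet-client.pem
sudo chown $(id -u):$(id -g) $HOME/.kube/kubelet-client.pem
Then edit $HOME/.kube/config so that client-certificate and client-key point to $HOME/.kube/kubelet-client.pem.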
I came across the same issue. I found that my 3 VMs were sharing the same IP (since I was using a NAT network in VirtualBox), so I switched to a bridged network to give the 3 VMs 3 different IPs, and then followed the installation guide to successfully install the k8s cluster.

How to scp files from local machine directly to a docker container on a remote machine (without having to repeatedly copy)?

I'm new to Docker and I want to copy files to/from my local machine directly to a docker container that's on a remote machine, without having to scp the files from my local machine to the remote machine and then use docker cp to copy them into the container. My container does not have an SSH server installed, nor do I want to rebuild my image to include one.
I tried following the solution given by the second answer here: How to SSH into Docker?. I ran the following command on my remote machine that hosts Docker:
docker run -d -p 2222:22 -v /var/run/docker.sock:/var/run/docker.sock -e CONTAINER=kind_tu -e AUTH_MECHANISM=noAuth jeroenpeeters/docker-ssh
Where kind_tu is the name of my running container.
On my local machine I then used ssh -L 2222:localhost:2222 remote_account_name@remote_ip and then scp -P 2222 test_file remote_account_name@remote_ip:/destination/path (I'm also not familiar with port forwarding, so I'm not sure if my notation is correct). When doing this, I get the following:
ssh: connect to host remote_ip port 2222: Connection refused
lost connection
Could this be an issue with the firewall since the remote machine is on my school's campus?
In all, I'm not sure if what I'm doing is even remotely correct.
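As an aside on the forwarding notation: with a local forward like ssh -L 2222:localhost:2222 remote_account_name@remote_ip, the local end of the tunnel is localhost:2222, so an scp that is meant to go through the tunnel would target localhost rather than remote_ip, roughly as follows (the user name depends on how the container's SSH endpoint is configured):
scp -P 2222 test_file remote_account_name@localhost:/destination/path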
Following up on your comment in reply to David's answer, here is how to bind-mount the directory for your visualization files into your container:
On the host system create a directory, e.g. mkdir /home/sarah/viz/. Then mount it into your docker container, using e.g.
docker run -v /home/sarah/viz:/data/viz … kind_tu …
Your viz software inside the kind_tu container should place its files in the directory /data/viz, which then lands in /home/sarah/viz/ on the host system, from where you can download the files to your local computer with scp or rsync or however else you connect to the remote machine.
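For example, pulled from the local machine (host and paths follow the question and the example above):
scp -r remote_account_name@remote_ip:/home/sarah/viz/ ./viz/
rsync -av remote_account_name@remote_ip:/home/sarah/viz/ ./viz/
The rsync variant only transfers files that have changed since the last run.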
You can also use docker-compose to have a more persistent environment. Write a file docker-compose.yml with the bind-mount and all the other configuration of the kind_tu container:
version: '3'
services:
  kind_tu:
    image: your_viz_software_image:latest
    volumes:
      - /home/sarah/viz:/data/viz:rw
    …
Then, instead of docker run … you can just do docker-compose up -d and everything acts according to the config in the compose-file.

Failed to enter a docker container created with a kubernetes deployment

With minikube I created a simple deployment (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment) in Kubernetes. I'm sure the container must be running, because the Kubernetes pod started successfully and I can see the container running in Portainer. But I just can't enter the container!
(I could always do this with a simple pod; maybe something is wrong with the deployment.)
$ docker exec -it 01a7c90b4267 /bin/bash
rpc error: code = 2 desc = oci runtime error: exec failed: dial unix /tmp/pty870274210/pty.sock: connect: connection refused
Also, I found "Error syncing pod" in the container logs, but the container status is running.
bash isn't available in your container. Have you tried with sh?
$ docker exec -ti 01a7c90b4267 sh
Also, if you're attaching to a running container within Kubernetes, you probably want kubectl exec instead of docker exec:
$ kubectl exec -ti <pod_id> sh
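With current kubectl versions the command goes after a -- separator, and the namespace can be given explicitly, e.g.:
$ kubectl exec -ti -n <namespace> <pod_id> -- sh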
It seems that the problem was caused by mounting into minikube's tmp folder with minikube mount $TMP:/tmp. Without the mount, I can exec /bin/bash in the containers with no problems.

Local files transferred to a Kubernetes Persistent Volume?

I'm extremely new to Kubernetes (besides, it's not my field), but I'm required to carry out this exercise.
The issue is that I need a HandBrake converter in a containerized pod with a Persistent Volume mounted, on a GKE cluster with:
3 nodes.
node version 1.8.1-gke.1
node image Ubuntu
Everything is fine until this point but now I'm not able to upload a folder to that PV from my local machine.
What I have tried is an ssh connection to the node and then sudo docker exec -ti containerId bash, but I just got rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"bash\\\": executable file not found in $PATH\"\n".
Thanks in advance.
To transfer local files to a kubernetes pod, use kubectl cp:
kubectl cp /root/my-local-file my-pod:/root/remote-filename
or
kubectl cp /root/my-local-file my-namespace/my-pod:/root/remote-filename -c my-container
The namespace can be omitted (and you'll get the default), and the container can be omitted (you'll get the first in the pod).
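kubectl cp also handles directories, so for the HandBrake case a whole local folder can be copied to wherever the Persistent Volume is mounted inside the pod (pod name and mount path below are hypothetical):
kubectl cp ./videos <handbrake-pod>:/mnt/pv-data/videos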
For SSH'ing you need to go through kubectl as well:
kubectl exec -it <podname> -- /bin/sh

mount openwrt filesystem using sshfs

I'm trying to mount the /tmp directory of my router running OpenWrt "Chaos Calmer" on my local Ubuntu machine using sshfs.
I followed
https://wiki.openwrt.org/doc/howto/sshfs.server and http://bredsaal.dk/openwrt-and-sshfs
But I keep getting the following error:
sudo sshfs root@192.168.252.93:/mnt/ /home/hscuser/mount/ -o sshfs_debug,ssh_command="ssh -p 222"
SSHFS version 2.5
read: Connection reset by peer
I do not see any error messages in dmesg or logread.
Also, I can ssh & scp to my router without any issues.
I'm able to mount directories from another Ubuntu system.
Any ideas on how to debug this?
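One way to gather more detail would be to re-run the same command with ssh's own verbose output added (only the -vvv flag is new compared to the command above):
sudo sshfs root@192.168.252.93:/mnt/ /home/hscuser/mount/ -o sshfs_debug,ssh_command="ssh -vvv -p 222"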
