Mount OpenWrt filesystem using sshfs

I'm trying to mount the /tmp directory of my router, which runs OpenWrt "Chaos Calmer", on my local Ubuntu machine using sshfs.
I followed
https://wiki.openwrt.org/doc/howto/sshfs.server and http://bredsaal.dk/openwrt-and-sshfs
But I keep getting the following error:
sudo sshfs root@192.168.252.93:/mnt/ /home/hscuser/mount/ -o sshfs_debug,ssh_command="ssh -p 222"
SSHFS version 2.5
read: Connection reset by peer
I do not see any error messages in dmesg or logread.
Also, I can ssh & scp to my router without any issues.
I'm able to mount directories from another Ubuntu system.
Any ideas on how to debug this?
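For debugging, a sketch using standard sshfs/ssh options: add sshfs's debug flag and make the underlying ssh verbose, which usually shows exactly where the connection drops:
sudo sshfs root@192.168.252.93:/mnt/ /home/hscuser/mount/ -o sshfs_debug,debug,ssh_command="ssh -vv -p 222"
If the verbose log shows the session dying right after the SFTP subsystem request, a common cause on OpenWrt is that no SFTP server is installed on the router; on the router:
opkg update && opkg install openssh-sftp-server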

Related

Can't copy files from minikube

I have set up a Kubernetes cluster locally using minikube and want to copy a file from the minikube VM to my local machine.
I am able to ssh into minikube successfully and run commands, but the scp command is timing out.
Command used:
scp -i $(minikube ssh-key) docker@$(minikube ip):/home/docker/.docker/config.json ~/.docker/newconfig.json
and I am getting the following error message:
ssh: connect to host 192.168.49.2 port 22: Operation timed out
Has anyone encountered this issue before or knows how to fix it?
Use kubectl cp to copy files and directories from a Kubernetes container (pod) to the local host and vice versa.
Copy /tmp/foo from a remote pod to /tmp/bar locally
kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
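The same syntax works in the other direction; to copy the local file back into the pod:
kubectl cp /tmp/bar <some-namespace>/<some-pod>:/tmp/foo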

How to run Docker on an external USB hard-drive in Ubuntu 21.10?

I am trying to run the following command in Ubuntu 21.10 with Docker so that it will run on my external USB hard-drive:
sudo docker run -v /media/alexanderjsingleton/SEAGATE/blockchain -P registry.gitlab.com/pulsechaincom/go-pulse --datadir=/media/alexanderjsingleton/SEAGATE/blockchain --pulsechain-testnet
Evidently, I need to configure some Docker settings in order for this to occur, in addition to mounting the drive, because whenever I run the above command the files are downloaded and saved within the Docker directory on my computer instead of on the external USB hard-drive. Can anyone please advise? Thanks!
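A likely cause: when -v is given a single path, Docker creates an anonymous Docker-managed volume at that path inside the container, so nothing is written to the host drive. To store the data on the external disk, bind-mount it with a host:container pair and point --datadir at the container-side path. A sketch, assuming --datadir works as in the original command and using /blockchain as an arbitrary container-side path:
sudo docker run -v /media/alexanderjsingleton/SEAGATE/blockchain:/blockchain -P registry.gitlab.com/pulsechaincom/go-pulse --datadir=/blockchain --pulsechain-testnet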

How to scp files from local machine directly to a docker container on a remote machine (without having to repeatedly copy)?

I'm new to Docker and I want to copy files to/from my local machine directly to a Docker container that's on a remote machine, without having to scp files from my local to my remote and then use docker cp to copy those files to the container. My container does not have an SSH server installed on it, nor do I want to rebuild my image to include one.
I tried following the solution given by the second answer here: How to SSH into Docker?. I ran the following command on my remote machine that hosts Docker:
docker run -d -p 2222:22 -v /var/run/docker.sock:/var/run/docker.sock -e CONTAINER=kind_tu -e AUTH_MECHANISM=noAuth jeroenpeeters/docker-ssh
Where kind_tu is the name of my running container.
On my local machine I then used: ssh -L 2222:localhost:2222 remote_account_name@remote_ip and then scp -P 2222 test_file remote_account_name@remote_ip:/destination/path (I'm also not familiar with port forwarding, so I'm not sure if my notation is correct). When doing this, I get the following:
ssh: connect to host remote_ip port 2222: Connection refused
lost connection
Could this be an issue with the firewall since the remote machine is on my school's campus?
In all, I'm not sure if what I'm doing is even remotely correct.
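On the port-forwarding notation: with ssh -L 2222:localhost:2222 the tunnel listens on your local machine, so the scp should target localhost rather than remote_ip, roughly:
scp -P 2222 test_file remote_account_name@localhost:/destination/path
Connecting to remote_ip:2222 directly only works if that port is reachable from outside, which a campus firewall may well block.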
According to your comment replying to David's, here is how to bind-mount the directory for your visualization files into your container:
On the host system, create a directory, e.g. mkdir /home/sarah/viz/. Then mount it into your Docker container, using e.g.
docker run -v /home/sarah/viz:/data/viz … kind_tu …
Your viz software inside the kind_tu container should place the files in the directory /data/viz; they then land in /home/sarah/viz/ on the host system, where you can download them to your local computer with scp, rsync, or however you connect to the remote machine.
You can also use docker-compose to have a more persistent environment. Write a file docker-compose.yml with the bind-mount and all the other configuration of the kind_tu container:
version: '3'
services:
  kind_tu:
    image: your_viz_software_image:latest
    volumes:
      - /home/sarah/viz:/data/viz:rw
    …
Then, instead of docker run … you can just run docker-compose up -d and everything acts according to the config in the compose file.

Error in Docker: bad address to executables

I'm trying to do something with Docker.
Steps I'm doing:
- Launch Docker Quickstart Terminal
- run docker run hello-world
Then I get error like:
bash: /c/Program Files/Docker Toolbox/docker: Bad address
I have to say that I was able to run the hello-world image before, but now I'm not. I don't know what happened.
I don't know if it matters, but I had some problems at the installation step, since I have Git installed in a non-standard location. However, git bash.exe seems to be working correctly for Docker.
My environment:
Windows 10
Git 2.5.0 (installed before Docker)
Docker Toolbox 1.9.1a
I had the same issue with bash: /c/Program Files/Docker Toolbox/docker: Bad address.
I thought the problem was that bash doesn't support docker.exe, so I fixed it by using PowerShell instead of bash.
If you use PowerShell, you may face this:
An error occurred trying to connect: Get http://localhost:2375/v1.21/containers/json: dial tcp 127.0.0.1:2375: ConnectEx tcp: No connection could be made because the target machine actively refused it.
You can export the variables from bash using export, and set them in PowerShell like this:
$env:DOCKER_HOST="tcp://192.168.99.100:2376"
$env:DOCKER_MACHINE_NAME="default"
$env:DOCKER_TLS_VERIFY="1"
$env:DOCKER_TOOLBOX_INSTALL_PATH="C:\\Program Files\\Docker Toolbox"
$env:DOCKER_CERT_PATH="C:\\Users\\kk580\\.docker\\machine\\machines\\default"
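Rather than typing these by hand, docker-machine can emit them in PowerShell syntax; assuming your machine is named default:
docker-machine env --shell powershell default | Invoke-Expression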
That's all.
PS: I found this problem was fixed by updating Git from 2.5.0 to 2.6.3.
Not entirely sure what the issue is; report it to the project on GitHub. I find the Docker Mac and Windows tools a bit flaky from time to time as they are still maturing. If you don't mind seeing what's underneath, you can try running docker-machine directly, or set up your own host pretty quickly with Vagrant.
Docker Machine
Open a command or bash prompt and see what machines you have:
docker-machine ls
Create a machine if you don't have one listed
docker-machine create -d "virtualbox" default-docker
Then connect to the listed machine (or default-docker)
docker-machine ssh default-docker
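To run docker commands from your own shell instead of inside the machine, load its environment variables first (standard docker-machine usage):
eval $(docker-machine env default-docker)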
Vagrant
If that doesn't work, you can always use Vagrant to manage VMs.
Install VirtualBox (which you probably have already if you installed the toolbox)
Reinstall Git, make sure you select the option for adding ALL the tools to your system PATH (for vagrant ssh)
Install Vagrant
Open a command or bash prompt:
mkdir docker
cd docker
vagrant init debian/jessie64
vagrant up --provider virtualbox
Then, to connect to your Docker host, run (from the same docker directory you created above):
vagrant ssh
Now you're on the Docker host. Install the latest Docker the first time:
curl https://get.docker.com/ | sudo sh
Docker
Now that you have either a Vagrant or docker-machine host up, you can Docker away.
sudo docker run -ti busybox sh
You could also use PuTTY to connect to Vagrant machines instead of installing git/ssh and running vagrant ssh. It provides a nicer shell experience, but it requires some manual setup of the SSH connections.

Docker External File Access Not in /Users/ on OSX

So, despite Docker 1.3 now allowing easy access to external storage on OSX through boot2docker for files in /Users/, I still need to access files not in /Users/. I have a settings file in /etc/settings/ that I'd like my container to have access to. Also, the CMD in my container writes logs to /var/log in the container, which I'd rather have it write to /var/log on the host. I've been playing around with VOLUME and passing stuff in with -v at run, but I'm not getting anywhere. Googling hasn't been much help. Can someone who has this working provide help?
As boot2docker now includes the VirtualBox Guest Additions, you can share folders on the host computer (OSX) with guest operating systems (the boot2docker-vm). /Users/ is automatically mounted, but you can mount/share custom folders. In your host console (OSX):
$ vboxmanage sharedfolder add "boot2docker-vm" --name settings-share --hostpath /etc/settings --automount
Start boot2docker and ssh into it ($ boot2docker up / $ boot2docker ssh).
Choose where you want to mount the "settings-share" (/etc/settings) in the boot2docker VM:
$ sudo mkdir /settings-share-on-guest
$ sudo mount -t vboxsf settings-share /settings-share-on-guest
Given that /settings is the volume declared in the Docker container, add -v /settings-share-on-guest:/settings to the docker run command to mount the guest directory /settings-share-on-guest as a data volume.
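Putting it together, a sketch with a hypothetical image name (my-app):
$ docker run -v /settings-share-on-guest:/settings my-app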
Works on Windows, not tested on OSX but should work.
