Docker on Cloud hosting - docker

Recently I created a Linode account and installed Docker on Ubuntu 14.04 LTS.
I pulled an image and ran a container, and everything is working properly at the moment.
I wanted to scp a file from my local machine to a directory on the Linode, and I did so successfully like this:
scp file.txt root@ip:/path/to/directory
My problem started when I realized the Docker container has its own root@hostname:/path/to/directory inside the Linode's root@ip:/, and I did not know how to scp from my local machine directly to the container path, simply because I don't know the syntax and I'm not very experienced with the process.
I looked around and asked Linode for support, but there was very little they could assist me with.
I decided to test some of my theories, such as: instead of scp'ing directly to the Docker container, I scp to the Linode (scp file.txt root@ip:/home) and from there run docker cp file.txt <container-name>:/path/to/directory. After I hit Enter, I get no response, neither error nor success.
I'm a beginner at all of this, so what am I missing? What am I not understanding?

Your docker cp approach is right, and in fact it doesn't return any response. You can check whether the file was indeed copied using docker exec <container-id> bash.
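For example, a minimal round trip might look like this (the container name and paths are placeholders):
scp file.txt root@ip:/home
ssh root@ip 'docker cp /home/file.txt <container-name>:/path/to/directory'
# no output from docker cp is normal; verify the copy explicitly
ssh root@ip 'docker exec <container-name> ls -l /path/to/directory'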
There is another way, more complicated and not recommended: if you install an OpenSSH server in your container and publish another port, let's say -p 2222:22, you can scp directly to the container.
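A rough sketch of that approach, assuming an image that runs sshd (the image name is a placeholder):
docker run -d -p 2222:22 <image-with-sshd>
# from the local machine, scp straight into the container over port 2222
scp -P 2222 file.txt root@ip:/path/to/directory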
Of course, you can also do it the Docker way: declare a volume linking your host directory to your container directory, -v /path/to/directory:/path/to/directory. Then your scp to the host will work, and the container will see the files.
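For instance (image name and paths are placeholders):
docker run -d -v /path/to/directory:/path/to/directory --name mycontainer <image>
scp file.txt root@ip:/path/to/directory
# the file now appears inside the container at the same path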
Regards

Related

Docker/Ansible: ERROR! The playbook could not be found

I'm quite new to software development and am having some issues setting up a Docker container.
I've pulled the Docker image and run it. Now I want to apply some configuration to my container with
docker run --rm --network="ansible_default" -v C:\folder\folder1\ansible\playbooks:/ansible/playbooks docker.<address>/ansible ansible-playbook -i host localhost.playbook.yml
But when I run the above code, it just gives an error:
ERROR the playbook localhost.playbook.yml does not appear to be a file
I am running an administrator PowerShell and have cd'd into the folder that contains the YAML files (so inside C:\folder\folder1\ansible\playbooks).
Do I need Ansible installed? Any pointers would be greatly appreciated!
EDIT: The Docker container exits with code 2. I'm supposed to be able to access it via localhost:8080, but it's just a blank screen. I'm not too sure what Exited (2) means and haven't found much success online.
Turns out the solution is to reinstall Docker.
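Before reinstalling, it may also be worth confirming that the bind mount actually contains the playbook, since a mis-mounted Windows path can cause exactly this "does not appear to be a file" error. A hypothetical check, reusing the same image and mount as above and assuming the image lets you override the command:
docker run --rm -v C:\folder\folder1\ansible\playbooks:/ansible/playbooks docker.<address>/ansible ls /ansible/playbooks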

Connecting Spyder to Remote Jupyter Notebook in a Docker Container

I have been trying to connect Spyder to a Docker container running on a remote server and failing time and again. Here, briefly, is what I am trying to achieve: Spyder on my local machine -> SSH to the remote server -> Jupyter kernel inside a Docker container.
Currently I am launching the Docker container on the remote machine over SSH with
docker run --runtime=nvidia -it --rm --shm-size=2g -v /home/timo/storage:/storage -v /etc/passwd:/etc/passwd -v /etc/group:/etc/group --ulimit memlock=-1 -p 8888:8888 --ipc=host ufoym/deepo:all-jupyter
so I am forwarding port 8888. Then inside the Docker container I am running
jupyter notebook --no-browser --ip=0.0.0.0 --port=8888 --allow-root --notebook-dir='/storage'
OK, now for the Spyder part - As per the instructions here, I go to ~/.local/share/jupyter/runtime, where I find the following files:
kernel-ada17ae4-e8c3-4e17-9f8f-1c029c56b4f0.json
kernel-e81bc397-05b5-4710-89b6-2aa2adab5f9c.json
nbserver-11-open.html
nbserver-11.json
nbserver-21-open.html
nbserver-21.json
notebook_cookie_secret
Not knowing which one to take, I copy them all to my local machine.
I now go to Consoles -> Connect to an Existing Kernel, which gives me the "Connect to an Existing Kernel" window, which I fill out like so (of course using my actual remote IP address):
(Here I have chosen the first of the JSON files for the Connection info field.) I hit Enter and Spyder goes dark and crashes.
This happens regardless of which connection info file I choose. So, my questions are:
1: Am I doing all of this correctly? I have found lots of instructions for how to connect to remote servers, but so far none specifically for connecting to a Jupyter notebook in a Docker container on a remote server.
2: If yes, then what else can I do to troubleshoot the issues I am encountering?
I should also note that I have no problems connecting to the Jupyter Notebook through the browser on my local machine. It's just that I would prefer to be working with Spyder as my IDE.
Many thanks in advance!
This isn't a solution so much as a workaround, but sshfs might be of help.
Use sshfs to mount the remote machine's home directory onto a local directory; then your local copy of Spyder can edit the files as if they were local.
sshfs remotehost.com:/home/user/ ./remote-host/
It typically takes about half a second to upload the changes to an AWS host when I hit save in Spyder, which is an acceptable delay for me. When it's time to run the code, ssh into the remote machine and run the code from an IPython shell. It's not elegant, but it does work.
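For concreteness, the workflow above might look like this (host, user, and script name are placeholders):
# mount the remote home directory locally and edit files with the local Spyder
sshfs remotehost.com:/home/user/ ./remote-host/
# when it's time to run, do it on the remote side
ssh remotehost.com
ipython
%run /home/user/script.py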
I'm not expecting this to be the best answer, but maybe you can use it as a stopgap solution.
I had the same problem as you. I got it working, maybe a bit clumsily, as I am totally new to Docker. Here are my steps and notes on where we differ; hope this helps:
Launch the Docker container on the remote machine:
docker run --gpus all --rm -ti --net=host -v /my_storage/data:/home/data -v /my_storage/JSON:/root/.local/share/jupyter/runtime repo/tensorflow:20.03-tf2-py3
I use a second volume mount in order to get the kernel.json file to my local computer. I couldn't manage to access it directly from Docker via SSH, as it is in the /root/ folder of the Docker container, with root-only access. If you know how to read it from there directly, I'll be happy to learn. My workaround is:
On the remote machine, create a JSON/ directory and map it to the "jupyter --runtime-dir" in the container. Once the kernel is created, access the kernel-xxx.json file through this volume mount, copy it to the local machine, and chmod it.
Launch an IPython kernel in the container:
ipython kernel
You are launching a Jupyter notebook server; I suspect this is the reason for your problem. I am not sure whether Spyder works with notebook servers, but it does work with IPython kernels, and probably works even better with spyder-kernels.
Copy the kernel.json file from /remote_machine/JSON to the local machine and chmod it so it is readable.
Launch Spyder, use the local kernel.json and your SSH settings. This part is the same as yours.
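The copy step might look roughly like this (user, host, and kernel file name are placeholders; /my_storage/JSON is the host side of the runtime-dir mount above):
scp user@remote_machine:/my_storage/JSON/kernel-xxx.json .
chmod 600 kernel-xxx.json
# then point Spyder's "Connect to an existing kernel" dialog at the local kernel-xxx.json and fill in the SSH settings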
Not enough reputation to add a comment, but to chime in on @asim's solution: I was able to get my locally installed Spyder to connect to a kernel running in a container on a remote machine. There is a bit of manual work, but I am okay with this since I can get much more done with Spyder than with other IDEs.
docker run --rm -it --net=host -v /project_directory_remote_machine:/container_project_directory image_id bash
From inside the container:
python -m spyder_kernels.console --matplotlib='inline' --ip=127.0.0.1 -f=/container_project_directory/connection_file.json
From the remote machine, chmod connection_file.json so it can be opened, then open it and copy/paste its contents into a file on the local machine :) Use that JSON file to connect to the remote kernel, following the steps in the sources below.
https://medium.com/@halmubarak/connecting-spyder-ide-to-a-remote-ipython-kernel-25a322f2b2be
https://mazzine.medium.com/how-to-connect-your-spyder-ide-to-an-external-ipython-kernel-with-ssh-putty-tunnel-e1c679e44154
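One way that last step might look, assuming your user on the remote machine can change the file's permissions (user, host, and paths are placeholders):
ssh user@remote_machine 'chmod 644 /project_directory_remote_machine/connection_file.json'
ssh user@remote_machine 'cat /project_directory_remote_machine/connection_file.json' > connection_file.json
# use this local connection_file.json in Spyder's "Connect to an existing kernel" dialog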

How to run shell script on Host from jenkins docker container?

I know my issue is already discussed in "How to run shell script on host from docker container?", but I think my issue is a little bit more complicated.
First, let me explain my situation. I'm using Jenkins 2.x from a Docker container in a CentOS VM (the host). In Jenkins I created a job which checks out 3 files from SVN (2 shell scripts and 1 .jar file). These files are downloaded into the Jenkins workspace in the Jenkins Docker container and also onto the host in a mounted directory, like this:
volumes:
- ${DATA_HOME}/jenkins/data:/var/jenkins_home
One of these scripts is executed by the Jenkins job, and it in turn executes the other script. The second script checks out an SVN directory and does much more.
So I want a new mounted volume so that all results of the executed second script end up on the host. Connecting to the host over SSH and executing the script there seems fine to me, but how can I do that?
I hope I have explained my issue understandably.
I will answer the part about "connecting to the host over SSH and executing the script there".
Pass the host machine's IP to your run command:
docker run --name redis --env pass=pass_my --add-host="hostmachine:192.168.1.23" -dit redis
Now,
docker exec -it redis ash
and run this command. This will SSH from the container to the host:
ssh user_name@hostmachine 'ls; bash /home/user_name/Desktop/test.sh; docker run --name db -dit db; docker ps'
If you want something without a password, set up an SSH key in the container (see the sketch after the note below), or you can also try
sshpass -p $pass ssh user_name@hostmachine 'ls; /home/user_name/Desktop/test.sh; docker run --name db -dit db; docker ps'
Or, if you want to run a script that is inside the container, you can do that too; just pass the script to ssh:
sshpass -p $pass ssh user_name@hostmachine < ./ab.sh
Note: $pass is the host's password taken from the environment, and hostmachine is the host entry we set with --add-host during the run command.
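A minimal sketch of the key-based setup mentioned above, assuming the OpenSSH client tools are available inside the container (user_name and hostmachine as defined earlier):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id user_name@hostmachine
# afterwards ssh user_name@hostmachine works without a password prompt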
Based on the comments on this answer:
We can simply install an SSH plugin (SSH or Publish over SSH) and it will work after providing a username/password.
The only thing to watch out for is that hostname resolution does not work, so we will need to provide an IP address.
As pointed out, this is not the best approach, but sometimes when migrating from older systems we need to move one step at a time, and this is the easiest step to take.

Docker Bind Mounts in WSL do not show files

I'm trying to use the Docker client from inside WSL, connecting to the Docker engine on Windows. I've exposed the Docker engine on Windows on port 2375, and after setting the DOCKER_HOST environment variable in WSL, I can verify this works by running docker ps.
The problem comes when I attempt to mount directories into Docker containers from WSL. For example:
I create a directory and file inside my home folder on WSL (mkdir ~/dockertest && touch ~/dockertest/example.txt)
ls ~/dockertest shows my file has been created
I now start a docker container, mounting my docker test folder (docker run -it --rm -v ~/dockertest:/data alpine ls /data)
I would expect to see 'example.txt' in the docker container, but this does not seem to be happening.
Any ideas what I might be missing?
There are great instructions for Docker setup in WSL at https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly#ensure-volume-mounts-work which solved most of my problems. The biggest trick for me was with bind-mounted directories: you have to use a path in WSL that the Docker daemon will be able to translate, for example /c/Users/rfay/myproject.
I don't think you need to change the mount point as the link suggests. Rather, if you use pwd and sed in combination, you should get the effect you need.
docker run -it -v $(pwd | sed 's/^\/mnt//'):/var/folder -w "/var/folder" alpine
pwd returns the working folder in the format '/mnt/c/code/folder'. Piping this to sed and replacing '/mnt' with an empty string leaves you with a path such as '/c/code/folder', which is what Docker for Windows expects.
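To see the substitution on its own (the path is just an example):
echo /mnt/c/code/folder | sed 's/^\/mnt//'
# prints /c/code/folder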
Anyone stumbling here over this issue, follow this: Docker Desktop WSL 2 Backend, and make sure you are running version 2 of WSL. In PowerShell:
> wsl -l -v
  NAME                   STATE      VERSION
* docker-desktop         Running    2
  Ubuntu                 Running    2
  docker-desktop-data    Running    2
If your Ubuntu VERSION doesn't say 2, you need to update it according to the guide above.
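If it doesn't, the conversion itself should be a single PowerShell command (the distribution name is whatever wsl -l -v shows for your Ubuntu install):
wsl --set-version Ubuntu 2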
After that, you will be able to mount your Linux directory into the Docker container directly, like:
docker run -v ~/my-project:/sources <my-image>
Specific to WSL 1
I ran into the same issue. One thing to remember is that the docker run command does not execute the container then and there in your shell; it sends the run arguments to the Docker daemon, which does not interpret WSL paths correctly. Hence, you need to pass a Windows-formatted path, in quotes and with backslashes escaped.
Your Windows path is
\\wsl$\Ubuntu\home\username\dockertest
The Docker command after escaping will probably look like:
docker run -it --rm -v "\\\\wsl\$\\Ubuntu\\home\\username\\dockertest":/data alpine ls /data
Try:
docker run -v /:{path} exe
Hope this helps.

Docker cannot access host files using -v option

Not 100% sure this is the right place but let's try.
On my Windows laptop I'm using the Docker Quickstart Terminal (Docker Toolbox) to get access to a Linux env with Google App Engine, Python, MySQL...
That seems to work, and when I type docker run -i -t appengine /bin/bash I get access to this env.
Now I'd like to have access to some of my local (host) files so I can edit them with my Windows editors but run them inside the Docker instance.
I've seen the -v option but cannot make it work.
What I do
docker run -v /d/workspace:/home/root/workspace:rw -i -t appengine /bin/bash
But workspace stays empty in the Docker instance...
Any help appreciated
(I've read this before to post: https://github.com/rocker-org/rocker/wiki/Sharing-files-with-host-machine#windows)
You have to enable Shared Drives; you can follow this blog.
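One caveat, as an assumption on my part about Docker Toolbox specifically: the VirtualBox VM it uses only shares C:\Users by default, so a folder on D:\ won't be visible inside containers unless you add it as a VirtualBox shared folder. A mount from under C:\Users would look like this (the username is a placeholder):
docker run -v /c/Users/yourname/workspace:/home/root/workspace:rw -i -t appengine /bin/bash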
