I am trying to connect to docker containers using the default SSHManager.
These containers only have a running sshd, with public key authentication, and julia installed.
Here is my dockerfile:
FROM rastasheep/ubuntu-sshd
RUN apt-get update && apt-get install -y julia
RUN mkdir -p /root/.ssh
ADD id_rsa.pub /root/.ssh/authorized_keys
I am running the container using:
sudo docker run -d -p 3333:22 -it --name julia-sshd julia-sshd
And then in the host machine, using the julia repl, I get the following error:
julia> import Base:SSHManager
julia> addprocs(["root@localhost:3333"])
stdin: is not a tty
Worker 2 terminated.
ERROR (unhandled task failure): EOFError: read end of file
Master process (id 1) could not connect within 60.0 seconds.
exiting.
I have tested that I can connect to the container via ssh without password.
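For reference, this is the kind of manual check I mean (the private key path is assumed to match the id_rsa.pub baked into the image):
ssh -p 3333 -i ~/.ssh/id_rsa root@localhost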
I have also tested that in julia repl I can add a regular machine with julia installed to the cluster and it works fine.
But I cannot get these two things working together. Any help or suggestions will be appreciated.
I recommend also deploying the Master in a Docker container. It makes your environment easily and fully reproducible.
I'm working on a way of deploying Workers in Docker containers on demand, i.e., the Master, itself deployed in a container, can deploy further DockerizedJuliaWorkers. It is similar to https://github.com/gsd-ufal/Infra.jl but assumes that the Master and Workers run on the same host, which makes things easier.
It is ongoing work and I plan to finish it in the coming weeks. In a nutshell:
1) You'll need a simple DockerBackend and a wrapper to transparently run containers, set up SSH, and call addprocs with all the low-level parameters (i.e., the DockerizedJuliaWorker.jl file; a rough sketch of the idea follows after these links):
https://github.com/NaelsonDouglas/DistributedMachineLearningThesis/tree/master/src/docker
2) Read here how to build the Docker image (Dockerfile is included):
https://github.com/NaelsonDouglas/DistributedMachineLearningThesis
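To give the idea, here is a rough sketch of what the wrapper boils down to (the image name, port, and container name are placeholders, not the actual code):
# start a worker container that exposes sshd, then add it from Julia
docker run -d -p 2222:22 --name julia-worker-1 my-julia-sshd-image
julia -e 'addprocs(["root@localhost:2222"]; tunnel=true)'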
Please tell me if you have any suggestion on how to improve it.
Best,
André Lage.
I'm creating an application that will allow users to upload video files that will then be put through some processing.
I have two containers.
Nginx container that serves the website where users can upload their video files.
Video processing container that has FFmpeg and some other processing stuff installed.
What I want to achieve: I need container 1 to be able to run a bash script on container 2.
One possibility as far as I can see is to make them communicate over HTTP via an API. But then I would need to install a web server in container 2 and write an API which seems a bit overkill.
I just want to execute a bash script.
Any suggestions?
You have a few options, but the first two that come to mind are:
In container 1, install the Docker CLI and bind mount /var/run/docker.sock (you need to specify the bind mount from the host when you start the container). Then, inside the container, you should be able to use docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to do this).
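A minimal sketch of option 1 (the image, container, and script names are made up for illustration; see the warning below before using this anywhere real):
# on the host: start container 1 with the Docker socket bind mounted
docker run -d --name web -v /var/run/docker.sock:/var/run/docker.sock my-nginx-image
# inside container 1: run the script directly in container 2
docker exec video-processor bash /scripts/process-video.sh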
You could install SSHD on container 2, and then ssh in from container 1 and run your script. The advantage here is that you don't need to make any changes inside the containers to account for the fact that they are running in Docker and not bare metal. The down side is that you will need to add the SSHD setup to your Dockerfile or the startup scripts.
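Sketched out, option 2 boils down to a single call once sshd is set up (the host name and script path are placeholders):
ssh root@video-processor 'bash /scripts/process-video.sh /uploads/input.mp4'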
Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.
Also be aware that Docker networking is a little strange (at least on Mac hosts), so you need to make sure that the containers are on the same Docker network and are able to communicate over it.
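For example, a user-defined network that both containers join (names are illustrative):
docker network create app-net
docker run -d --network app-net --name web my-nginx-image
docker run -d --network app-net --name video-processor my-ffmpeg-image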
Warning:
To be completely clear, do not use option 1 outside of a lab or very controlled dev environment. It is taking a secure socket that has full authority over the Docker runtime on the host, and granting unchecked access to it from a container. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full-stack integration test setup that will only be run ad hoc by a developer. It's a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.
I wrote a python package especially for this use-case.
Flask-Shell2HTTP is a Flask extension that converts a command-line tool into a RESTful API with just five lines of code.
Example Code:
from flask import Flask
from flask_executor import Executor
from flask_shell2http import Shell2HTTP
app = Flask(__name__)
executor = Executor(app)
shell2http = Shell2HTTP(app=app, executor=executor, base_url_prefix="/commands/")
shell2http.register_command(endpoint="saythis", command_name="echo")
shell2http.register_command(endpoint="run", command_name="./myscript")
These endpoints can then be called easily, like:
$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["Hello", "World!"]}' http://localhost:4000/commands/saythis
You can use this to create RESTful micro-services that execute pre-defined shell commands/scripts with dynamic arguments asynchronously and fetch the result.
It supports file uploads, callback functions, reactive programming, and more. I recommend checking out the examples.
Running a docker command from a container is not straightforward and not really a good idea (in my opinion), because:
You'll need to install Docker in the container (and do docker-in-docker stuff).
You'll need to share the Unix socket, which is not a good thing if you don't know exactly what you're doing.
So, this leaves us with two solutions:
Install SSH in your container and execute the command through ssh.
Share a volume and have a process that watches for a trigger to run your batch (see the sketch below).
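A bare-bones sketch of that second approach (paths and script names are placeholders): container 2 polls the shared volume and runs the batch whenever a trigger file appears.
# container 2: poll the shared volume for a trigger file
while true; do
  if [ -f /shared/run-job ]; then
    rm /shared/run-job
    bash /scripts/process.sh
  fi
  sleep 5
done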
It was mentioned here before, but a reasonable, semi-hacky option is to install SSH in both containers and then use ssh to execute commands on the other container:
# install SSH, if you don't have it already
sudo apt install openssh-server
# start the ssh service
sudo service ssh start
# start the daemon
sudo /usr/sbin/sshd -D &
Assuming you don't want to always be root, you can add a default user (in this case, 'foobob'):
useradd -m --no-log-init --system --uid 1000 foobob -s /bin/bash -g sudo -G root
# change the password
echo 'foobob:foobob' | chpasswd
Do this on both the source and target containers. Now you can execute a command from container_1 to container_2.
# obtain container-id of target container using 'docker ps'
ssh foobob@<container-id> << "EOL"
echo 'hello bob from container 1' > message.txt
EOL
You can automate the password with ssh-agent, or you can use something a bit more hacky like sshpass (install it first using sudo apt install sshpass):
sshpass -p 'foobob' ssh foobob@<container-id>
I believe
docker exec -it <container_name> <command>
should work, even inside the container.
You could also try mounting docker.sock into the container you're trying to execute the command from:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
I'm pretty green regarding docker and find myself facing the following problem:
I'm trying to create a Dockerfile to generate an image with my company's software on it. During the installation of that software, the install process checks whether sshd is running with the following command:
if [ $(pgrep sshd | wc -l) -eq 0 ]; then
I should probably mention that I'm installing and starting OpenSSH during that same process.
Is there any way to check that a service is running during image creation?
I cannot skip that step, as it is executed as part of a self-extracting mechanism.
Any clue toward the right direction would be appreciated.
An image cannot run services. You are just creating all the necessary things needed for your container to run, like installing databases or servers, or copying config files, in the Dockerfile. The last step in the Dockerfile is where you give instructions on what to do when you issue a docker run command. A script or command can be specified using CMD or ENTRYPOINT in the Dockerfile.
To answer your question, during the image creation process, you cannot check whether a service is running or not. When the container is started, docker will execute the script or command that you can specify in the CMD or ENTRYPOINT. You can use that script to check if your services are running or not and take necessary action after that.
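For example, a startup script along these lines (the service and app names are placeholders) starts sshd when the container starts and verifies it before launching the application:
# entrypoint sketch: start sshd, verify it is up, then run the app
service ssh start
if [ $(pgrep sshd | wc -l) -eq 0 ]; then exit 1; fi
exec ./myapp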
It is possible to run services during image creation. All processes are killed once a RUN command completes. A service will not keep running between RUN commands. However, each RUN command can start services and use them.
If an image creation command needs a service, start the service and then run the command that depends on the service, all in one RUN command.
RUN sudo service ssh start \
&& ssh localhost echo ok \
&& ./install
The first line starts the ssh server and succeeds with the server running.
The second line tests if the ssh server is up.
The third line is a placeholder: the 'install' command can use the localhost ssh server.
In case the service fails to start, the docker build command will fail.
I am new to docker and I tried to run the linuxconfig/lemp-php7 image. Everything worked fine and I could access the nginx web server installed on the container. To run this image I used this command:
sudo docker run linuxconfig/lemp-php7
When I tried to run the image with the following command, to gain shell access to the container, I couldn't connect to nginx and got the connection refused error message:
sudo docker run -ti linuxconfig/lemp-php7 bash
I tried this several times so I'm pretty sure it's not any kind of coincidence.
Why does this happen? Is this a problem specific to this particular image or is this a general problem. And how can I gain access to the shell of the container and access the web server at the same time?
I'd really like to understand this behavior to improve my general understanding of docker.
docker run runs the specified command instead of what that container would normally run. In your case, it appears to be supervisord, which presumably in turn runs the web server. So you're preventing any of that from happening.
My preferred method (except in cases where I'm trying to debug cases where the container won't even start properly) is to do the following after running the container normally:
docker exec -i -t $CONTAINER_ID /bin/bash
I'd like to package Selenium Grid Extras into a Docker image.
When this service runs without a Docker container, it can reboot the OS it's running on. I wonder if I can set up the container so that it can be restarted by the Selenium Grid Extras service running inside it.
I am not familiar with Selenium Grid, but as a general idea: you could mount a folder from the host as a data volume, then let Selenium write information there, like a flag file.
On the host, you would have a scheduled task / cron job that checks for this flag in the shared folder and, if it has a certain status, invokes a docker restart from there.
Not sure if there are other, more elegant solutions for this, but this is what came to my mind ad hoc.
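A rough sketch of such a host-side check (the folder, flag, and container names are made up):
# on the host, run from cron every minute
if [ -f /srv/selenium-shared/restart.flag ]; then
  rm /srv/selenium-shared/restart.flag
  docker restart selenium-grid
fi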
Update:
I just found this on the Docker forum:
https://forums.docker.com/t/how-can-i-run-docker-command-inside-a-docker-container/337
I'm not sure about CoreOS but normally you can manage your host containers from within a container by mounting the Docker socket. Such as:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu:latest sh -c "apt-get update ; apt-get install docker.io -y ; bash"
or
https://registry.hub.docker.com/u/abh1nav/dockerui/
I have a docker image with all the necessary tools and environment properly set up. However, I am having a hard time running it in the background.
Seems like there are two approaches:
(1) I can run the box as a daemon and attach to it whenever I want to use it. However, the container exits with code zero right after I run it as a daemon.
$:~/docker/docker_scrapy$ sudo docker run -ti -v ~/docker/docker_scrapy/myvolume:/var/myvolume 3fb9894af1d9 /bin/bash
root@3fc39116a586:/# python -c 'from bs4 import BeautifulSoup'
root@3fc39116a586:/# cd /var/myvolume/
root@3fc39116a586:/var/myvolume#
$:~/docker/docker_scrapy$ sudo docker run -d -v ~/docker/docker_scrapy/myvolume:/var/myvolume 3fb9894af1d9
c5fab6e6ac02a579e3371aa641b18ca67feb93a9f4f4934b6d083157182fe4e1
$:~/docker/docker_scrapy$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Clearly, I can start the box in interactive mode, but when I try to run it as a daemon, it exits with code 0 right after it starts. And I cannot attach to it, because it is no longer running. Does that mean you cannot run an image in daemon mode if it is idle?
(2) Or set it up as an SSH server, and I can ssh in and do the work whenever I want, like vagrant up / vagrant ssh.
In summary:
(1) What did I do wrong with the detach/attach?
(2) Which is the proper way to run Docker in the background: daemon or ssh?
If you give it another command to run after starting the service, one that waits for input, then the container will keep running until you attach to and exit that command. I usually leave a shell running after the service starts so I can debug things. Here's a simple example:
First let's create a service that runs in the background
arthur@a:~$ docker run -ti ubuntu bash
root@5dc7f330b947:/# cat <<'EOF' >start-service.sh
> while true
> do
> echo service is running >> service.log
> sleep 10
> done
> EOF
root@5dc7f330b947:/# chmod +x start-service.sh
root@5dc7f330b947:/# exit
arthur@a:~$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5dc7f330b947 ubuntu:12.04 bash 50 seconds ago Exited (0) 3 seconds ago jolly_nobel
arthur@a:~$ docker commit 5dc7f330b947 service/example
4c37b69b129287d79a6fe3916e4293f935194966b1de49d125f1cf8d6ab14f6f
Then we can start it (I background it with a & here; in your example the & would not be required). Note it's fine to use both the interactive and detach options:
arthur@a:~$ docker run -ti -d service/example bash -c "./start-service.sh & bash"
b35a5397ea2d29b4085d93ef32270379b09e49118380b0376309bca74fd719d0
arthur@a:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b35a5397ea2d service/example:latest bash -c './start-ser 7 seconds ago Up 7 seconds cranky_wright
Later we can attach and check on the service by looking in its log file:
arthur@a:~$ docker attach b35a5397ea2d
root@b35a5397ea2d:/# cat service.log
service is running
service is running
service is running
root@b35a5397ea2d:/#
I don't recommend running sshd inside the container because it leaves an option for attackers that isn't strictly useful for me.
A lot of questions in there. I would first suggest you go through the Docker tutorial to grasp some of the underlying concepts. That said...
That Dockerfile will never run in the background; that's not how Docker works. There is no CMD, no ENTRYPOINT, nothing to run.
Docker by default runs one task; when that task returns, the container stops. So if all you wanted was an sshd, you would run that as your CMD in non-daemon mode (sshd -D).
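For illustration, a minimal image along those lines might look like this (an untested sketch, not a hardened setup):
FROM ubuntu
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]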
There are ways to run daemonized apps though:
Using supervisord, as documented on the Docker site (a minimal config is sketched after this list).
Another alternative is phusion/baseimage.
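To give a flavor of the supervisord route, a minimal config might look like this (the program entries are illustrative):
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:app]
command=/usr/local/bin/myapp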
Phusion/baseimage provides ssh access, but honestly to do what I need in containers I find nsenter easier to use. Especially when paired with the phusion docker-bash tool.
Notice: this answer promotes a tool I've written.
First of all, conceptually, running multiple processes in one container is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/). A more favorable solution is one that involves multiple containers, each running its own process/service. Linking them together results in a coherent application.
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container, without that container even knowing about ssh. The only requirement is that the container has bash.
The following example would start an SSH server attached to a container with name 'sshd-web-server1'.
docker run -ti --name sshd-web-server1 -e CONTAINER=web-server1 -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker \
jeroenpeeters/docker-ssh
You connect to the SSH server with your ssh client of choice, just as you normally would.
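For example, with the 2222:22 mapping above, that would be something like:
ssh -p 2222 localhost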
Be advised: Docker-SSH is currently still under development, but it does work! Please let me know what you think.
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh