I have to analyze tons of files, which requires different tools. Right now I have several steps, each in its own separate Docker container. Each container takes a folder as input and produces a folder of output files, and everything is working fine. Now I want to automate this as a chain of these containers. How can one container start the next one, though? Do I have to use Docker in Docker, since a container cannot start a new one on the host system?
Bonus:
What if the run command differs, since it depends on the output of the previous container?
Thank you so much; I couldn't find a working solution yet.
If you simply want to start the containers one after the other, you can do it with a shell script:
#!/bin/bash
echo "Running first container"
docker run -it --name sleep1 --rm sleep
echo "Running second container"
docker run -it --name sleep2 --rm sleep
Example Dockerfile (docker build -t sleep .):
FROM alpine:3.3
ENTRYPOINT ["/bin/ash", "-c", "sleep 5"]
Maybe this helps you get what you want. If not, please describe in a bit more detail what you want to achieve, maybe with code examples.
Edit:
Based on the new information from the asker, I see only two possibilities for chaining the start of multiple containers from inside a container:
1. Run Docker in Docker (nested containers), but this is not really recommended.
2. Run the script above on the host machine, triggered via SSH from the container.
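For the bonus question: the run command for the next step can be derived on the host (or over SSH) from the files the previous container wrote. A minimal sketch, assuming the first image writes a params.txt into its output folder; the image names, paths and that convention are made up here, not part of the original question:

#!/bin/bash
set -e

echo "Running step 1"
docker run --rm -v "$PWD/step1-out:/output" step1-image

# Read extra arguments that step 1 wrote into its output folder
EXTRA_ARGS=$(cat "$PWD/step1-out/params.txt")

echo "Running step 2 with: $EXTRA_ARGS"
docker run --rm -v "$PWD/step1-out:/input" -v "$PWD/step2-out:/output" step2-image $EXTRA_ARGS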
I'm using Docker for Windows (Education Edition with Hyper-V) and am fairly new to Docker. My workflow feels a little bit complicated and I think there are better ways. Here's what I do:
When I develop with Docker containers, I add a Dockerfile to my project first.
Then I am going to build the container by running a command like docker build -t containername .
When Docker is done building, I am going to run the container with a command like docker run -p 8080:8080 containername (sometimes I add a volume at this point)
This runs the container and leaves my Powershell in a state where I can read debug messages and so on from the container.
Then I'm testing and developing the application.
Once I'm done developing and testing, I need to CTRL + C in order to exit the running container.
Now comes the tricky part: say I forgot something and want to test it right away. I would again run docker build -t containername . and then docker run, but Docker would now tell me that the port is already taken. So I continue like this:
I search for my container with this command: docker ps
Once I find the name (e.g. silly_walrusbeard), I type docker stop silly_walrusbeard. Now I can build and run again, and the port is free.
How could I simplify this workflow? Is there an alternative to CTRL+C that also stops the container? Thanks for your suggestions!
List all current containers with docker ps -a. Kill them with docker kill <ID> and, if needed, remove them with docker rm <ID>.
And when you run new containers, use the --rm flag to free the port (among other things) automatically when the container stops:
docker run --rm -it containername
(I usually need -it when running shells, but I'm not sure about PowerShell; maybe you don't need it.)
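A sketch of how the whole loop could look if you also give the container a fixed name (containername is just a placeholder, as in your question):

docker build -t containername .
docker run --rm -it -p 8080:8080 --name containername containername
# CTRL+C stops the container; --rm removes it, so the port and the name are free again

# If something is still holding the port, force-remove the container by name in one step:
docker rm -f containername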
Here is my simple Dockerfile:
FROM java:8
EXPOSE 4000
Now, when I run it using the following command
sudo docker run --name hello dockerfile
and do docker ps -a, it shows the status as exited. I just want to keep this container up and running so I can SSH into it and perhaps transfer files and so on. It looks like containers are mainly used to run servers, am I correct?
You can at least keep your container up with something like docker run -d hello sleep infinity, but as René M said, you should give your Dockerfile something to do in its CMD or ENTRYPOINT; see the docs:
https://docs.docker.com/engine/reference/builder/#cmd
and
https://docs.docker.com/engine/reference/builder/#entrypoint
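Once the container is kept alive like that, you usually don't need SSH at all; docker exec and docker cp cover the "log in and transfer files" use case. A small example, reusing the hello container name and the dockerfile image name from your question:

docker run -d --name hello dockerfile sleep infinity   # keep the container running
docker exec -it hello bash                             # open a shell inside it
docker cp ./somefile.txt hello:/tmp/somefile.txt       # copy a file into it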
That is really simple: your container is not running anything that lasts long. What happens is that the container starts, has nothing to do, and stops.
What you can do is:
Run the container in interactive mode with an attached TTY. This way your console enters the container after it starts, and the TTY gives it something to do, which prevents the container from stopping. You can then work inside the container, for example installing an application. Your changes will be lost after stopping the container, but you can run docker commit on that container, which makes them persistent.
docker run -i -t --name hello dockerfile
Enhance your Dockerfile with something useful, like copying an application into the container and providing a CMD command to run when the container starts.
After this the container will last as long as your CMD command runs. If the command is a server or daemon application, the container will run forever and will only stop when you stop it.
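For example, a Dockerfile along these lines (the app.jar name is just a placeholder) keeps the container alive for as long as the Java process runs:

FROM java:8
EXPOSE 4000
# Copy the application into the image and run it as the container's main process
COPY app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]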
I am trying to run a simple Docker container with my web application installed (not using a Dockerfile).
During testing I would always run a container using the -t -i options and then start the Tomcat service inside it by running a shell script.
Now that I am moving to production, I don't want to use the -t -i options any more; I just need my Tomcat service to start and be the only primary service.
I tried pointing the entrypoint to the startup script for starting Tomcat, but the container terminates after that script finishes.
How do I run a container, start a service and keep that service as the single primary service of the container?
Note: I read some posts about supervisor, but I'm not sure if I would need to start building my image from scratch if I go that route. I would prefer not to do that.
Any suggestions?
If you have a Dockerfile that uses an entrypoint pattern, it will look something like this:
(Dockerfile)
FROM ubuntu
...Some configuration steps...
ADD start.sh /start.sh
ENTRYPOINT ["/start.sh"]
All you need to do is make sure your start.sh script 'hangs' in some way. Some people like to tail the syslogs, but tailing any file that exists will work.
(start.sh)
#!/bin/bash
service Your_Service_Or_Whatever start
tail -f /var/log/dmesg
A shorter version:
FROM ubuntu
...Some configuration steps...
ENTRYPOINT ["/bin/sh", "-c", "while true; do sleep 1; done"]
Tested with Docker version 1.12.1, build 23cf638.
Use docker --version to find out your version.
By default, Docker containers run according to the configuration in the image's Dockerfile. If you usually run a container with the -i flag, you leave STDIN open, allowing you access to the container's entrypoint (or it could be a bash shell). To achieve what you want, run the container in a detached state, passing your command to docker run directly.
docker run -d myapp /opt/catalina/bin/startup.sh
This runs the myapp container in a detached state and executes the command passed as the third argument. If the command results in a long-lived service, the container will stay active as long as the service is.
This is explained in detail in the docs.
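One caveat with the command above: Tomcat's startup.sh normally forks Tomcat into the background and exits, which would stop the container again. Running Catalina in the foreground keeps it as the single primary process. A sketch, reusing the /opt/catalina path from the command above (your actual path and image name may differ):

docker run -d --name myapp-prod myapp /opt/catalina/bin/catalina.sh run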
In practice to start a container I do:
docker run a8asd8f9asdf0
If that's the case, what does:
docker start
do?
In the manual it says
Start one or more stopped containers
This is a very important question and the answer is very simple, but fundamental:
Run: creates a new container from an image and executes it. You can create N clones of the same image. The command is:
docker run IMAGE_ID and not docker run CONTAINER_ID
Start: launches a previously stopped container. For example, if you stopped a database with docker stop CONTAINER_ID, you can relaunch the same container with docker start CONTAINER_ID, and the data and settings will be the same.
run runs an image
start starts a container.
The docker run doc does mention:
The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command.
That is, docker run is equivalent to the API /containers/create then /containers/(id)/start.
You do not run an existing container; you docker exec into it (since Docker 1.3).
You can restart an exited container.
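A short hypothetical session illustrating the difference (the image and container names are just examples):

docker run -d --name web nginx    # creates a new container from the image and starts it
docker stop web                   # stops it; the container still exists (see docker ps -a)
docker start web                  # starts the same container again, data and settings intact
docker exec -it web sh            # runs an additional process inside the running container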
Explanation with an example:
Consider you have a game (iso) image in your computer.
When you run it (mount the image as a virtual drive), a virtual drive is created with all the game contents, and the game installation file is automatically launched. [Running your Docker image - creating a container and then starting it.]
But when you stop it (similar to docker stop), the virtual drive still exists but all its processes are stopped. [The container exists until it is deleted.]
And when you start it (similar to docker start), the game files on the virtual drive start executing again. [Starting the existing container.]
In this example, the game image is your Docker image and the virtual drive is your container.
The run command creates a container from the image and then starts the root process in this container. Running it with the --rm flag saves you the trouble of removing the useless dead container afterward and lets you ignore the existence of docker start and docker rm altogether.
The run command does a few different things:
docker run --name dname image_name bash -c "whoami"
Creates a container from the image. At this point the container has an ID, might have a name if one is given, and will show up in docker ps.
Starts/executes the root process of the container. In the command above that would execute bash -c "whoami". If one runs docker run --name dname image_name without a command, and the image's default command exits right away (e.g. a shell with no TTY attached), the container goes into the stopped state immediately.
Once the root process finishes, the container is stopped. At this point it is pretty much useless for this workflow: one cannot execute anything in it anymore (although it can still be restarted with docker start). There are basically two ways out of the stopped state: remove the container, or create a checkpoint (i.e. an image) out of the stopped container to run something else. One has to run docker rm before launching a container under the same name.
How do you remove the container automatically once it is stopped? Add the --rm flag to the run command:
docker run --rm --name dname image_name bash -c "whoami"
How do you execute multiple commands in a single container? By preventing the root process from dying. This can be done by running some harmless long-running command at start with the --detach (-d) flag and then using docker exec to run the actual commands:
docker run --rm -d --name dname image_name tail -f /dev/null
docker exec dname bash -c "whoami"
docker exec dname bash -c "echo 'Nnice'"
Why do we need docker stop then? To stop this lingering container that we launched in the previous snippet with the endless command tail -f /dev/null.
daniele3004's answer is already pretty good.
Just a quick and dirty formula for people like me who mix up run and start from time to time:
docker run [...] = docker pull [...] (if the image is missing locally) + docker create [...] + docker start [...]
It would have been wiser to name the command "new" instead of "run".
Run creates a container instance of an existing (or downloadable) image and starts it.
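In other words, the single docker run below is roughly what the create/start pair underneath does (the image and container name are just examples):

# one step
docker run -d --name db redis

# two steps: create the container first, then start it
docker create --name db redis
docker start db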
I'm trying to run Docker inside a Jenkins container that is also running in Docker (i.e. Docker in Docker). What I want to know is how to properly start the Docker service when booting Jenkins. The only solution I've found today is to build my own Jenkins image based on the official Jenkins image but change the jenkins script loaded by the entry point to also start up Docker:
# I've added this line just before Jenkins is started from the script:
sudo service docker start
# I've also removed "exec" from the original file which used "exec java $JAVA_OPTS ..." but that didn't work
java $JAVA_OPTS -jar /usr/share/jenkins/jenkins.war $JENKINS_OPTS "$@"
This works when I run a new container (using docker run), but the problem is that if I do docker start on a stopped container, the Docker service is not started.
I strongly suspect that this is not the right way to start my Docker service. My plan is to perhaps use supervisord to start Jenkins and Docker separately (I suppose container linking is out of the question since Docker should be executed as a service on the same container that Jenkins is running on?). My concern with this approach is that I'm going to lose the EntryPoint specified in the Jenkins Dockerfile which allows me to pass arguments to the Jenkins container when starting the container, for example:
docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins -- <jenkins_arguments>
Does anyone have any recommendations on a good way to solve this preferably by not forking the official Jenkins image?
I'm pretty sure you cannot do that.
Docker in Docker doesn't mean you have to run Docker inside Docker with three levels: host > first-level container > second-level container.
In fact, you just need to share the host's Docker with the container; it is the host that will run the other containers.
To do that, you have to mount a volume with the -v parameter:
-v /var/run/docker.sock:/var/run/docker.sock
With this mount, when you run docker run inside your Jenkins container, the Docker client will communicate with the Docker daemon of your host in order to run the new container.
To do that, you should also run your Jenkins container in privileged mode:
--privileged
To sum up, here is the full command line:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock --privileged myimage
And you don't need to create a new Jenkins image for that.
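Combined with the port and volume from your original command, that could look like this (note that you still need a Docker client binary available inside the Jenkins container for docker commands to work):

docker run -d -p 8080:8080 \
  -v /your/home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --privileged \
  jenkins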
Hope this helps.
http://container-solutions.com/running-docker-in-jenkins-in-docker/