Currently, I'm using a bash script to execute docker commands.
For example, docker-compose up -d is exposed as a simple start command in the terminal, and so on.
I have a lot of commands like this to start/stop/restart containers and to execute commands inside a container.
Everything works fine, but the problem is that the bash file has accumulated too many if/else branches to dispatch all these commands. I'm trying to find another scripting language for this, but I'm not sure which would be cleaner (easier to write). Does anyone else use something similar?
I was thinking of using Python, but it needs to be something that requires minimal work on the user's side. The idea is that people just download the docker repo and start using the commands, which is why I'm using bash for now.
I would say that Python is an extremely good choice. The syntax is very easy to pick up, and there are many modules/libraries for infrastructure work.
Python is one of the most popular languages in the Ops/Devops world, so there will also be plenty of help online.
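For instance, here is a minimal sketch of what such a wrapper could look like in Python, using only the standard library. The command names and the docker-compose invocations in the table are assumptions; adjust them to your repo:

#!/usr/bin/env python3
"""Sketch of a docker-compose wrapper; the command table is illustrative."""
import argparse
import subprocess
import sys

# One dict entry per user-facing command -- no if/else chains needed.
# These mappings are assumptions; swap in your real commands.
COMMANDS = {
    "start":   ["docker-compose", "up", "-d"],
    "stop":    ["docker-compose", "down"],
    "restart": ["docker-compose", "restart"],
    "logs":    ["docker-compose", "logs", "-f"],
}

def main():
    parser = argparse.ArgumentParser(description="Project docker helper")
    parser.add_argument("command", choices=sorted(COMMANDS))
    # Anything after the command is passed straight through to docker-compose.
    parser.add_argument("extra", nargs=argparse.REMAINDER)
    args = parser.parse_args()
    return subprocess.call(COMMANDS[args.command] + args.extra)

if __name__ == "__main__":
    sys.exit(main())

Adding a command is then a one-line change to the dict, and users only need a stock Python 3 installation, which keeps your "download the repo and start using the commands" workflow intact.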
I would like to run docker-compose projects from Golang using the docker package by providing the docker-compose.yml file.
Following the example from https://docs.docker.com/engine/api/sdk/examples/
I know how to create and run individual containers using Golang, but is there a way to run docker-compose projects from the Golang docker library?
I know that I can do something like this
import "os/exec"
exec.Command("docker-compose","up")
but I would like this to happen from the docker package instead.
I'm a complete Go noob, so take this with a grain of salt, but I was looking through the Docker Compose implementation of their up command: they use Cobra for their CLI tooling (which seems quite common) and Viper for argument parsing. The link provided points to the Cobra command implementation, which is separate from the up internals that the command calls into.
If you want to invoke the docker compose "up" command as part of your own golang command (which is what I think you're going for; it's what I was trying to do), I think you'd have to accept that you'd basically have to implement your own versions of the Cobra commands they have there, replicating their existing logic. That was my take.
I am creating an automation framework using Selenium. My entry point for a run is to create containers for different db types, load them with database dumps, and then start the tests.
I have one simple, possibly foolish, question.
Suppose I create a docker-compose file that creates the containers mentioned above; normally we would run docker-compose up to bring them up.
But can I control the docker-compose file/Dockerfile while the execution is going on? Something like:
Test starts from TestNG -> "before" scripts execute to run the docker-compose file and create the containers.
How can I control that?
Thanks in advance
I can think of the following options:
1- Use Ansible to do the deployment for you; you can write a playbook with the steps.
Advantages: it scales, it will manage everything for you, and you can add notifications; the downside is that it requires learning Ansible and managing it yourself.
2- Use a shell script that starts things in order: first the containers (or however you want the order to be), then TestNG. A cheap and dirty solution; a sketch follows below.
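To illustrate option 2, here is a rough sketch (written in Python for readability, though a plain shell script works just as well). The compose file name, the fixed sleep, and the mvn test launcher are all assumptions to replace with your real setup:

#!/usr/bin/env python3
"""Bring up the db containers, run the TestNG suite, then tear down."""
import subprocess
import time

COMPOSE = ["docker-compose", "-f", "docker-compose.yml"]  # assumed file name

def main():
    subprocess.check_call(COMPOSE + ["up", "-d"])
    try:
        # Crude wait for the databases to finish loading their dumps;
        # polling the db ports or a health check would be more robust.
        time.sleep(30)
        # Assumption: the TestNG suite is launched through Maven.
        subprocess.check_call(["mvn", "test"])
    finally:
        # Tear the containers down even if the tests fail.
        subprocess.check_call(COMPOSE + ["down"])

if __name__ == "__main__":
    main()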
I'm trying to create a Docker container based on CentOS 7 that will host R, shiny-server, and rstudio-server, but I need to have systemd in order for the services to start. I can use the systemd-enabled CentOS image as a basis, but then I need to run the container in privileged mode and allow access to /sys/fs/cgroup on the host. I might be able to tolerate the less secure situation, but then I'm not able to share the container with users running Docker on Windows or Mac.
I found this question but it is 2 years old and doesn't seem to have any resolution.
Any tips or alternatives are appreciated.
UPDATE: SUCCESS!
Here's what I found: for shiny-server, I only needed to execute shiny-server with the appropriate parameters from the command line. I captured the appropriate call in a script file and call that script from the final CMD line in my Dockerfile.
rstudio-server was trickier. First, I needed to install initscripts to get the dependencies in place so that some of the rstudio scripts would work. After this, executing rstudio-server start would essentially do nothing and report no error. I traced the call through the various links and found myself in /usr/lib/rstudio-server/bin/rstudio-server. The daemonCmd() function tests cat /proc/1/comm to determine how to start the server. For some reason that test was failing, but looking at the script, it seems clear that it needs to execute /etc/init.d/rstudio-server start. If I do that manually, or in a Docker CMD line, it works.
I've taken those two CMD line requirements and put them into an sh script that gets called from a CMD line in the Dockerfile.
A bit of a hack, but not bad. I'm happy to hear any other suggestions.
You don't necessarily need to use an init system like systemd.
Essentially, you need to start multiple services, and there are existing patterns for this. Check out this page about how to use supervisord to achieve the same thing: https://docs.docker.com/engine/admin/using_supervisord/
I often find myself needing to re-create a container with minor modifications to the arguments originally passed to docker run (things like changing published ports, the network, or the amount of memory).
Right now I make new images and run them in place of the old containers.
This works fine, but I don't always have the original docker run parameters saved, and sometimes (especially when there are a lot of things to define) it becomes a pain to recover them.
Is there any way to recover docker run arguments from existing container?
Sorry for being a couple of years late, but I had a similar question and found no satisfying answer, so I still needed to find my own way out.
I've found two sources addressing the issue:
A gist
To run, save this to a file, e.g. run.tpl and do docker inspect --format "$(<run.tpl)" name_or_id_of_running_container
A docker image
Quick run:
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock nexdrew/rekcod <container>
Both solutions are quite simple to use, but the second one failed to generate the command for an Nginx container because it did not quote the command properly, like this: "nginx" "-g" "daemon off;"
So I focused on the first solution, which is a golang template intended to feed the --format parameter of docker inspect. I liked it because it is simple and elegant, and no other tool is needed.
I've made some improvements in my forked gist and notified the original author about it.
A couple of answers to this. First, run your containers using docker-compose; then you can just run the compose files and retain all your configuration. Compose is obviously designed for multi-container applications, but it is massively underrated for single-container use cases with complex run arguments.
The second is to put your run command into a LABEL on the image. Take a look at Label Schema's docker.cmd etc. Then you can easily retrieve it from the image (or from your Dockerfile).
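As a hedged sketch of the retrieval side (the label key follows the Label Schema convention mentioned above, so verify it against the spec; the image name is a placeholder):

import json
import subprocess

def run_cmd_label(image):
    """Read the suggested run command back off an image's labels."""
    out = subprocess.check_output(["docker", "inspect", image])
    labels = json.loads(out)[0]["Config"]["Labels"] or {}
    return labels.get("org.label-schema.docker.cmd", "")

print(run_cmd_label("your/image"))  # placeholder image name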
The best way to do this is not to type the commands manually. Put them into a shell script: a .sh file on Linux/Mac, or a .cmd file on Windows. Then you just run the script to create your container, and you never have to worry about re-typing the commands and options, getting them wrong, etc.
Personally, I write my scripts as "npm scripts" in my package.json file, but the same thing can be done with any tool that can run a command-line program with arguments.
I do this along with a few other tricks to make sure I never fail to build my images or run my containers. It makes life with Docker so much easier. :)
You can use docker inspect to get the container's configuration. Reconstructing the docker run command from that can be somewhat tedious though.
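For example, a small sketch of pulling the most commonly needed pieces back out (the container name is a placeholder):

import json
import subprocess

def inspect(container):
    """Return the parsed docker inspect output for one container."""
    out = subprocess.check_output(["docker", "inspect", container])
    return json.loads(out)[0]

info = inspect("my-container")             # placeholder name
print(info["Config"]["Image"])             # image it was created from
print(info["Config"]["Env"])               # -e / --env values
print(info["HostConfig"]["PortBindings"])  # -p mappings

From there you can piece the original docker run flags back together by hand.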
Another option is to search your shell history using either history | grep "docker run" or ctrl+r (if you use bash). That way, you don't need to go out of your way to save the commands but can still recover them quickly.
I'm trying to automate the following loop with Docker: spawn a container, do some work inside it (more than a single command), and get some data out of the container.
Something along the lines of:
for ( i = 0; i < 10; i++ )
spawn a container
wget revision-i
do something with it and store results in results.txt
According to the documentation I should go with:
for ( ... )
docker run <image> <long; list; of; instructions; separated; by; semicolon>
Unfortunately, this approach is neither attractive nor maintainable as the list of instructions grows in complexity.
Wrapping the instructions in a script as in docker run <image> /bin/bash script.sh doesn't work either since I want to spawn a new container for every iteration of the loop.
To sum up:
1. Is there any sensible way to run a complex series of commands, as described above, inside the same container?
2. Once some data is saved inside a container in, say, /home/results.txt, and the container returns, how do I get results.txt out? The only way I can think of is to commit the container and tar the file out of the new image. Is there a more efficient way to do it?
Bonus: should I use vanilla LXC instead? I don't have any experience with it, though, so I'm not sure.
Thanks.
I eventually came up with a solution that works for me and greatly improved my Docker experience.
Long story short: I used a combination of Fabric and a container running sshd.
Details:
The idea is to spawn container(s) with sshd running using Fabric's local, and then run commands inside the containers using Fabric's run.
To give a (Python) example, you might have a Container class with the following (assembled into a fuller sketch below):
1) a method to locally spawn a new container with sshd up and running, e.g.
local('docker run -d -p 22 your/image /usr/sbin/sshd -D')
2) code that sets the env parameters Fabric needs to connect to the running container (check Fabric's tutorial for more on this)
3) methods that run whatever you need inside the container via Fabric's run, e.g.
run('uname -on')
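Putting the three steps together, here is a rough sketch using the (legacy) Fabric 1 API; the image name, root login, and port parsing are assumptions, not my exact code:

from fabric.api import env, local, run

class Container(object):
    def __init__(self, image="your/image"):  # placeholder image
        # 1) spawn the container with sshd listening on a published port
        self.cid = local(
            "docker run -d -p 22 %s /usr/sbin/sshd -D" % image,
            capture=True).strip()
        host_port = local(
            "docker port %s 22" % self.cid, capture=True).split(":")[-1]
        # 2) point Fabric at the container; assumes root access (password
        #    or key) is baked into the image
        env.host_string = "root@localhost:%s" % host_port
    def uname(self):
        # 3) run whatever you need inside the container
        return run("uname -on")

After that, Container().uname() runs the command over SSH inside a fresh container, and you can spawn one instance per loop iteration.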
Oh, and if you like Ruby better, you can achieve the same using Capistrano.
Thanks to @qkrijger (+1'd) for putting me on the right track :)
On question 2.
I don't know if this is the best way, but you could install SSH in your image and use that. For more information on this, you can check out this page from the documentation.
You're asking two questions in one; maybe question 2 should go in a separate post. I will consider question 1 here.
It is unclear to me whether you want to spawn a new container for every iteration (as you say at first) or to "run a complex series of commands as described above inside the same container", as you say later.
If you want to spawn multiple containers I would expect you to have a script on your machine handling that.
If you need to pass an argument to your container (like i): there is work currently being done on passing arguments. See https://github.com/dotcloud/docker/pull/1015 (and https://github.com/dotcloud/docker/pull/1015/files for the documentation change, which is not online yet).