Services in CentOS 7 Docker image without systemd

I'm trying to create a Docker container based on CentOS 7 that will host R, shiny-server, and rstudio-server, but I need systemd in order for the services to start. I can use the systemd-enabled CentOS image as a basis, but then I need to run the container in privileged mode and allow access to /sys/fs/cgroup on the host. I might be able to tolerate the less secure situation, but then I'm not able to share the container with users running Docker on Windows or Mac.
I found this question but it is 2 years old and doesn't seem to have any resolution.
Any tips or alternatives are appreciated.
UPDATE: SUCCESS!
Here's what I found: for shiny-server, I only needed to execute shiny-server with the appropriate parameters from the command line. I captured that call in a script file and call it from the final CMD line in my Dockerfile.
rstudio-server was trickier. First, I needed to install initscripts to get the dependencies in place so that some of the rstudio scripts would work. After this, executing rstudio-server start would essentially do nothing and provide no error. I traced the call through the various links and found myself in /usr/lib/rstudio-server/bin/rstudio-server. The daemonCmd() function tests cat /proc/1/comm to determine how to start the server. For some reason it was failing, but looking at the script, it seems clear that it needs to execute /etc/init.d/rstudio-server start. If I do that manually or in a Docker CMD line, it seems to work.
I've taken those two CMD line requirements and put them into an sh script that gets called from a CMD line in the Dockerfile.
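For reference, here is a minimal sketch of that script and the CMD line. The script name, shiny-server flags, and log location are assumptions, not my literal setup:

#!/bin/sh
# start rstudio-server via its init script, as described above
/etc/init.d/rstudio-server start
# run shiny-server in the foreground so the container stays alive
exec /usr/bin/shiny-server >> /var/log/shiny-server.log 2>&1

and at the end of the Dockerfile:

CMD ["/usr/local/bin/start-services.sh"]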
A bit of a hack, but not bad. I'm happy to hear any other suggestions.

You don't necessarily need to use an init system like systemd.
Essentially, you need to start multiple services, and there are existing patterns for this. Check out this page about how to use supervisord to achieve the same thing: https://docs.docker.com/engine/admin/using_supervisord/
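As a rough sketch of what that pattern could look like for shiny-server and rstudio-server (the program commands, paths, and supervisord location below are assumptions; see the linked page for the canonical setup):

[supervisord]
nodaemon=true

[program:shiny-server]
command=/usr/bin/shiny-server

[program:rstudio-server]
command=/usr/lib/rstudio-server/bin/rserver --server-daemonize=0

and in the Dockerfile:

CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]

supervisord then runs in the foreground as PID 1 and keeps both services up, with no systemd or privileged mode required.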

Related

Create new docker image vs run shell commands

We are working with the fabric-ca Docker image. It does not come with scp installed, so we have two options:
Option 1: create a new image as described here
Option 2: install scp from the shell when the container is started
We'd like to understand what the pros and cons of each are.
Option 1: lets you build on it further, creates a stable state, and lets you verify/test an image before releasing it.
Option 2: takes longer to start up, requires being online during container start, and is harder to trace/understand; the software stack ends up managed in, e.g., the bash scripts that start the containers rather than in the Dockerfile and whatever technology you end up using for container orchestration.
Ultimately, I use option 2 only for discovery, proof of concept, or trying something out. Once I know I need a certain container on an ongoing basis, I build a proper image via a Dockerfile.
You should consider your option 2 a non-starter. Either build a custom image or use a host directory bind-mount (docker run -v /host/path:/container/path option) to inject the data you need; I would probably prefer the bind-mount option.
It’s extremely routine to docker rm a container, and when you do, any changes you’ve made locally in a container are lost. For example, if there is a new software release or a critical security update, you have to recreate the container with a new image. You should pretty much never install software in an interactive shell in a container, especially if you’re going to use it to copy in data your application needs: you’ll have to repeat this step every single time you delete and recreate the container.
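As a sketch of the bind-mount approach (the host path and container path here are placeholders, not anything from the question):

docker run -d -v /home/me/ca-files:/data hyperledger/fabric-ca

Anything you drop into /home/me/ca-files on the host shows up under /data inside the container, and it survives docker rm, so there is nothing to copy in with scp at all.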
Option 1:
The BUILD of the image is longer, but you execute it only the first time
The RUN is faster
You don't need an internet connection at RUN
Includes verification of the different steps
Allows traceability
Option 2:
The RUN is longer
You need an internet connection at RUN
Harder to trace
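For option 1, a minimal Dockerfile sketch could look like the following. The tag is a placeholder, and the package manager depends on which base the fabric-ca tag you use is built on, so treat this as an assumption to adapt:

FROM hyperledger/fabric-ca:<tag>
# openssh-client provides scp; Alpine-based tags use apk,
# Debian/Ubuntu-based tags would need apt-get instead
RUN apk add --no-cache openssh-client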

Accessing Files on a Windows Docker Container Easily

Summary
So I'm trying to figure out a way to use Docker to spin up testing environments for customers rather easily. Basically, I've got a customized piece of software that I want to install to a Windows Docker container (microsoft/windowsservercore), and I need to be able to access the program folder for that software (C:\Program Files\SOFTWARE_NAME), as it has some logs, imports/exports, and other miscellaneous configuration files. The installation part was easy, and I figured it out after a few hours of messing around with Docker and learning how it works, but transferring files in a simple manner is proving far more difficult than I would expect. I'm well aware of the docker cp command, but I'd like something that allows the files to be viewed in a file browser so that testers can quickly/easily view log/configuration files from the container.
Background (what I've tried):
I've spent 20+ hours monkeying around with running an SSH server on the Docker container, so I could just ssh in and move files back and forth, but I've had no luck. I've spent most of my time trying to configure OpenSSH, and I can get it installed, but there appears to be something wrong with the default configuration file provided with my installation, as I can't get it up and running unless I start it manually via the command line by running sshd -d. Strangely, this runs just fine, but it isn't really a viable solution as it is running in debug mode and shuts down as soon as the connection is closed. I can provide more detail on what I've tested with this, but it seems like it might be a dead end (even though I feel like this should be extremely simple). I've followed every guide I can find (though half are specific to Linux containers) and haven't gotten any of them to work, and half the posts I've found just say "why would you want to use ssh when you can just use the built-in docker commands". I want to use ssh because it's simpler from an end user's perspective, and I'd rather tell a tester to ssh to a particular IP than make them interact with Docker via the command line.
EDIT: Using OpenSSH
I start the server using net start sshd, which reports that it started successfully; however, the service stops immediately unless I have generated at least an RSA or DSA key using:
ssh-keygen.exe -f "C:\\Program Files\\OpenSSH-Win64/./ssh_host_rsa_key" -t rsa
and modified the permissions using:
icacls "C:\Program Files\OpenSSH-Win64/" /grant sshd:(OI)(CI)F /T
and
icacls "C:\Program Files\OpenSSH-Win64/" /grant ContainerAdministrator:(OI)(CI)F /T
Again, I'm using the default supplied sshd_config file, but I've tried just about every adjustment of those settings I can find and none of them help.
I also attempted to set up volumes to do this, but because our software is installed at image build time, the folder that I want to map as a volume is already populated with files, which seems to make Docker fail when I try to start the container with the volume attached. This section of the documentation seems to say this should be possible, but I can't get it to work; I keep getting errors saying "the directory is not empty" when I try to start the container.
EDIT: Command used:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination=C:/temp my_container
I'm running this on a Proxmox VM.
At this point, I'm running out of ideas, and something that I feel should be incredibly simple is taking me far too many hours to figure out. It particularly frustrates me that I see so many blog posts saying "Just use the built-in docker cp command!" when that is honestly a pretty bad solution if you're going to be browsing lots of files and viewing/editing them. I really need a method that allows the files to be viewed in a file browser/Notepad++.
Is there something obvious here that I'm missing? How is this so difficult? Any help is appreciated.
So after a fair bit more troubleshooting, I was unable to get the docker volume to initialize on an already populated folder, even though the documentation suggests it should be possible.
So, I instead decided to try to start the container with the volume linked to an empty folder, and then start the installation script for the program after the container is running, so the folder populates after the volume is already linked. This worked perfectly! There's a bit of weirdness if you leave the files in the volume and then try to restart the container, as it will overwrite most of the files, but things like logs and files not created by the installer will remain, so we'll have to figure out some process for managing that, but it works just like I need it to, and then I can use windows sharing to access that volume folder from anywhere on the network.
Here's how I got it working, it's actually very simple.
So in my Dockerfile, I added a batch script that unzips the installation DVD that is copied into the container and runs the installer after extracting. I then used the CMD option to run this on container start:
Dockerfile
FROM microsoft/windowsservercore
ADD DVD.zip C:\\resources\\DVD.zip
ADD config.bat C:\\resources\\config.bat
CMD "C:\resources\config.bat" && cmd
Then I build the container without anything special:
docker build -t my_container:latest .
And run it with the attachment to the volume:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination="C:/Program Files (x86)/{PROGRAM NAME}" my_container
And that's it. Unfortunately, the container takes a little longer to start (it does build faster though, for what that's worth, as it isn't running the installer in the build), and the program isn't installed/running for another 5 minutes or so after the container does start, but it works!
I can provide more details if anyone needs them, but most of the rest is implementation specific and fairly straightforward.
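One command that may help with the sharing step is asking Docker where the named volume lives on the host (the volume name matches the --mount source above):

docker volume inspect --format "{{ .Mountpoint }}" my_volume

That path can then be shared on the network for testers to browse.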
Try this with Docker Compose. Unfortunately, I cannot test it, as I'm on a Mac and Windows containers aren't a supported platform there (way to go, Windows). See if this works; if not, try a volume line like this instead: ./my_volume:C:/tmp/
Dockerfile
FROM microsoft/windowsservercore
# need to escape \
WORKDIR C:\\tmp\\
# Add the program from host machine to container
ADD ["<source>", "C:\tmp"]
# Normally used with web servers
# EXPOSE 80
# Running the program
CMD ["C:\tmp\program.exe", "any-parameter"]
Docker Compose
The compose file should ideally be in the parent folder.
version: "3"
services:
windows:
build: ./folder-of-Dockerfile
volume:
- type: bind
source: ./my_volume
target: C:/tmp/
ports:
- 9999:9092
Folder structure
|---docker-compose.yml
|---folder-of-Dockerfile
|   |---Dockerfile
Just run docker-compose up to build and start the container. Use -d for detached mode, but only once you know it's working properly.
Useful link: Manage Windows Dockerfile

Docker: Run command while another command is running

I need to configure a program running in a Docker container. To achieve that, the program must be running (and providing an open port) so that the administration program can connect to the running process. Unfortunately, there is no simple editable config file, so this is the only way. The RUN command is obviously not the right one, because it does not provide a running instance after Docker moves on to the next command. The best way would be to do this while building the Docker image, but doing it during container start would be OK as well. As far as I know, though, there is also no easy way to run multiple commands on startup. Does anyone have an idea how to do that?
To make it a bit more clear, here is a simple example from my Dockerfile:
# this command should start the application which has to be configured
RUN /usr/local/server/server.sh
# I tried this command alternatively because the shell script is blocking
RUN nohup /usr/local/server/server.sh &
# this is the command which starts an administration program which connects to the running instance started above
RUN /usr/local/administration/adm [some configuration parameters...]
# afterwards the server process can be stopped
Downloading the complete program directory containing the correct state could be a solution, too, but then the configuration could not be changed easily in the Dockerfile, which would be the ideal place for it.
A Dockerfile is supposed to be a sequential list of instructions to produce an image. The image should contain your application's code, and all of its installable dependencies.
Each RUN instruction gets executed in its own container. Once the command that you run completes, any changed files get committed as a new image layer.
Trying to run a process in the background will cause the command you are running to return immediately. Once that happens, the container is considered stopped, and the Dockerfile's next instruction will be executed in a new, separate container.
If you really need two processes running, you will need to produce a command that you can pass to a single RUN instruction.
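As a hedged sketch of what that single RUN instruction could look like for the example above (the sleep, the bracketed parameters, and the assumption that killing the wrapper script stops the server are placeholders to adapt):

# configure the server in one layer: start it, run the admin tool, stop it
RUN /usr/local/server/server.sh & \
    SERVER_PID=$! && \
    sleep 10 && \
    /usr/local/administration/adm [some configuration parameters...] && \
    kill $SERVER_PID

Everything happens inside one RUN, so the server is running when the administration program connects, and the resulting configuration is committed into the image layer once the instruction finishes.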

Recover docker container's run arguments

I often find myself needing to re-create a container with minor modifications to the arguments originally passed to docker run (things like changing published ports, network, or memory amount).
Now I am making images and running them in place of old containers.
This works fine, but I don't always have the original docker run parameters saved, and sometimes (especially when there are a lot of things to define) it becomes a pain to recover them.
Is there any way to recover docker run arguments from existing container?
Sorry for being a couple of years late, but I had a similar question and found no satisfying answer, so I still needed to find my way out.
I've found two sources addressing the issue:
A gist
To run it, save the template to a file, e.g. run.tpl, and do docker inspect --format "$(<run.tpl)" name_or_id_of_running_container
A docker image
Quick run:
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock nexdrew/rekcod <container>
Both solutions are quite simple to use, but the second one failed to generate the command for an Nginx container because it did not manage to quote the command correctly, like this: "nginx" "-g" "daemon off;"
So I focused on the first solution, which is a Go template intended to feed the --format parameter of docker inspect. I liked it because it is simple, elegant, and needs no other tool.
I've made some improvements in my forked gist and notified the original author about it.
A couple of answers to this. Run your containers using docker-compose; then you can just keep the compose files and retain all your configuration. Obviously Compose is designed for multi-container applications, but it is massively underrated for single-container use cases with complex run arguments.
The second is to put your run command into a LABEL on the image. Take a look at Label Schema's docker.cmd etc. Then you can easily retrieve it from the image (or from your Dockerfile), as sketched below.
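A hedged sketch of that approach (the image name, ports, and run command are placeholders):

LABEL org.label-schema.docker.cmd="docker run -d -p 9999:9092 my_image"

Then, to read it back later:

docker inspect --format '{{ index .Config.Labels "org.label-schema.docker.cmd" }}' my_image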
The best way to do this is not to type the commands manually. Put them into a shell script: a .sh file on Linux/Mac, or a .cmd file on Windows. Then you just run the shell script to create your container, and you never have to worry about re-typing the commands and options, you'll never get them wrong, etc.
Personally, I write my scripts as "npm scripts" in my package.json file, but the same thing can be done with any tool that can run a command-line program with arguments.
I do this along with a few other tricks to make sure I never fail to build my images or run my containers. Makes life with Docker soooo much easier. :)
You can use docker inspect to get the container's configuration. Reconstructing the docker run command from that can be somewhat tedious though.
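For example, something along these lines pulls back a few of the settings mentioned in the question; the fields shown are an illustration rather than a full reconstruction of the run command:

docker inspect --format 'ports: {{ .HostConfig.PortBindings }} network: {{ .HostConfig.NetworkMode }} memory: {{ .HostConfig.Memory }}' <container>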
Another option is to search your shell history using either history | grep "docker run" or ctrl+r (if you use bash). That way, you don't need to go out of your way to save the commands but can still recover them quickly.

Run a complex series of commands in the same Docker container

I'm trying to automate the following loop with Docker: spawn a container, do some work inside of it (more than one single command), get some data out of the container.
Something along the lines of:
for ( i = 0; i < 10; i++ )
    spawn a container
    wget revision-i
    do something with it and store results in results.txt
According to the documentation I should go with:
for ( ... )
    docker run <image> <long; list; of; instructions; separated; by; semicolon>
Unfortunately, this approach is neither attractive nor maintainable as the list of instructions grows in complexity.
Wrapping the instructions in a script as in docker run <image> /bin/bash script.sh doesn't work either since I want to spawn a new container for every iteration of the loop.
To sum up:
1. Is there any sensible way to run a complex series of commands as described above inside the same container?
2. Once some data are saved inside a container in, say, /home/results.txt, and the container returns, how do I get results.txt? The only way I can think of is to commit the container and tar the file out of the new image. Is there a more efficient way to do it?
Bonus: should I use vanilla LXC instead? I don't have any experience with it though so I'm not sure.
Thanks.
I eventually came up with a solution that works for me and greatly improved my Docker experience.
Long story short: I used a combination of Fabric and a container running sshd.
Details:
The idea is to spawn container(s) with sshd running using Fabric's local, and run commands on the containers using Fabric's run.
To give a (Python) example, you might have a Container class with:
1) a method to locally spawn a new container with sshd up and running, e.g.
local('docker run -d -p 22 your/image /usr/sbin/sshd -D')
2) set the env parameters needed by Fabric to connect to the running container - check Fabric's tutorial for more on this
3) write your methods to run everything you want in the container exploiting Fabric's run, e.g.
run('uname -on')
Oh, and if you like Ruby better you can achieve the same using Capistrano.
Thanks to @qkrijger (+1'd) for putting me on the right track :)
On question 2.
I don't know if this is the best way, but you could install SSH on your image and use that. For more information on this, you can check out this page from the documentation.
You posted 2 questions in one. Maybe you should put 2. in a different post. I will consider 1. here.
It is unclear to me whether you want to spawn a new container for every iteration (as you say first) or if you want to "run a complex series of commands as described above inside the same container?" as you say later.
If you want to spawn multiple containers I would expect you to have a script on your machine handling that.
If you need to pass an argument to your container (like i): work is currently being done on passing arguments. See https://github.com/dotcloud/docker/pull/1015 (and https://github.com/dotcloud/docker/pull/1015/files for the documentation change, which is not online yet).
