We have a Docker application that has to do a warm-up when it is deployed; otherwise the first request is really slow.
It is a shell script that just caches the routes and classes.
We are using the same dockerfile for development and would like to keep doing that.
How can we do that?
You would override the entrypoint with a custom script that runs your original entrypoint command and then the warm-up shell script.
You would have to make sure the last command in that script is long-running so the container keeps running. You could use Supervisor for this purpose.
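For example, a minimal sketch of such a wrapper, assuming your image currently has a single long-running entrypoint command (the command and script names below are placeholders, not from your project):

#!/bin/sh
# docker-entrypoint.sh -- hypothetical wrapper; replace the command and
# script names with the ones from your project
set -e

# start the original entrypoint command in the background
/usr/local/bin/original-entrypoint &
APP_PID=$!

# run the existing warm-up script (it caches the routes and classes)
/usr/local/bin/warmup.sh

# keep the container alive for as long as the application process runs
wait "$APP_PID"

The Dockerfile's ENTRYPOINT would then point at this wrapper instead of the original command; Supervisor is only needed if you want something more robust than the plain wait.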
I have an ftp.sh script that downloads files from an external FTP server to my host, and another (Java) application that imports the downloaded content into a database. Currently, both run on the host, triggered by a cron job, as follows:
importer.sh:
#!/bin/bash
# download the latest files from the external FTP server
source ftp.sh
# import the downloaded content into the database
java -jar app.jar
Now I'd like to move my project to Docker. From a design point of view: should the .sh script and the application each reside in its own container, or should both be bundled into one container?
I can think of the following approaches:
Run the ftp script on the host, but the Java app in a Docker container.
Run the ftp script in its own Docker container, and the Java app in another Docker container.
Bundle both the script and the Java app in one Docker container, then call a wrapper script with: ENTRYPOINT ["wrapper.sh"]
So the underlying question is: should each Docker container serve only one purpose (either download files or import them)?
Sharing files between containers is tricky and I would try to avoid doing that. It sounds like you are trying to set up a one-off container that does the download, then does the import, then exits, so "orchestrating" this with a shell script in a single container will be much easier than trying to set up multiple containers with shared storage. If the ftp.sh script sets some environment variables then it will be basically impossible to export them to a second container.
From your workflow it doesn't sound like building an image with the file and the import tool baked in is the right approach. If it were, I could envision a workflow where you ran ftp.sh on the host, or as the first part of a multi-stage build, and then COPY the file into the image. For a workflow that's "download a file, then import it into a database, where the file changes routinely", I don't think that's what you're after.
The setup you have now should work fine if you just package it into a container. There is some generic advice I'd give in a code-review context, but your last option of bundling everything into one image and running the wrapper script as the main container process makes sense.
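A rough sketch of that image, assuming a stock Java base image and the file names from your question (treat the base image tag and paths as placeholders):

# Hypothetical Dockerfile bundling the download script and the importer
FROM openjdk:8-jre
WORKDIR /app
COPY ftp.sh app.jar wrapper.sh ./
RUN chmod +x ftp.sh wrapper.sh
# wrapper.sh sources ftp.sh and then runs java -jar app.jar,
# exactly like your existing importer.sh
ENTRYPOINT ["./wrapper.sh"]

The container then does one download-and-import run and exits, which fits the cron-triggered workflow: the host cron job just becomes a docker run of this image.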
I'd like to pull down a standard docker container and then issue it a command that will read and execute a .jmx test file from the current folder (or specified path) and drop the results into the same folder (or another specified path/filename). Bonus points if the stdout from jmeter's console app comes through from the docker run command.
I've been looking into this for quite some time and the solutions I've found are way more complex than I'd like. Some require that I create my own dockerfile and build my own image. Others require that I set up a Docker volume first on my machine and then use that as part of the command. Still others want to run fairly lengthy bash shell scripts. I'm running on Windows and would prefer something that just works with the standard docker CLI running in any Windows prompt (it should work from cmd or PowerShell or bash, not just one of these).
My end goal is that I want to test some APIs using jmeter tests that already exist. The APIs are running in another locally running container that I can execute with a path and port. I want to be able to run these tests from any machine without first having to install Java and jmeter.
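Roughly, what I'm hoping to be able to type is something like this (the image name is a placeholder for whatever public JMeter image fits; -n, -t, and -l are the standard JMeter non-GUI, test-plan, and results-file options):

# run the test plan from the current folder and write the results next to it;
# use %cd% instead of ${PWD} when running from cmd
docker run --rm -v "${PWD}:/tests" -w /tests some-jmeter-image -n -t my_test.jmx -l results.jtl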
I have a Node.JS based application consisting of three services. One is a web application, and two are internal APIs. The web application needs to talk to the APIs to do its work, but I do not want to hard-code the IP address and ports of the other services into the codebase.
In my local environment I am using the nifty envify Node.JS module to fix this. Basically, I can pretend that I have access to environment variables while I'm writing the code, and then use the envify CLI tool to convert those variables to hard-coded strings in the final browserified file.
I would like to containerize this solution and deploy it to Kubernetes. This is where I run into issues...
I've defined a couple of ARG variables in my Docker image template. These get turned into environment variables via RUN export FOO=${FOO}, and after running npm run-script build I have the container I need. OK, so I can run:
docker build . -t residentmario/my_foo_app:latest --build-arg FOO=localhost:9000 --build-arg BAR=localhost:3000
And then push that up to the registry with docker push.
My qualm with this approach is that I've only succeeded in moving the hard-coded values out of the codebase and into the container image. What I really want is to define the paths at pod initialization time. Is this possible?
Edit: Here are two solutions.
PostStart
Kubernetes comes with a lifecycle hook called PostStart. This is described briefly in "Container Lifecycle Hooks".
This hook fires as soon as the container reaches ContainerCreated status, i.e. the image is done being pulled and the container is fully initialized. You can then use the hook to jump into the container and run arbitrary commands.
In our case, I can create a PostStart event that, when triggered, rebuilds the application with the correct paths.
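Under the container entry in the pod spec, the hook looks roughly like this (the script path is an assumption, mirroring the build script used in the second solution below):

lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "./scripts/build.sh"]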
Unless you created a Docker image that doesn't actually run anything (which seems wrong to me, but let me know if this is considered an OK practice), this does require some duplicate work: stopping the application, rerunning the build process, and starting the application up again.
Command
Per the comment below, this event doesn't necessarily fire at the right time. Here's another way to do it that's guaranteed to work (and hence, superior).
A useful Docker container ends with some variant on a CMD serving the application. You can overwrite this run command in Kubernetes, as explained in the "Define a Command and Arguments for a Container" section of the documentation.
So I added a command to the pod definition that ran a shell script that (1) rebuilt the application using the correct paths, provided as environment variables to the pod, and (2) started serving the application:
command: ["/bin/sh"]
args: ["./scripts/build.sh"]
Worked like a charm.
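For completeness, the build script it points at only needs a few lines. This is a sketch, and the exact npm commands depend on your package.json (npm start here is an assumption):

#!/bin/sh
# scripts/build.sh -- sketch only; FOO and BAR come from the pod's env section
set -e
# re-run the envify/browserify build so the values present at pod start
# time get baked into the bundle
npm run-script build
# then serve the application in the foreground
exec npm start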
I need to configure a program running in a Docker container. To achieve that, the program must be running (and provide an open port) so that the administration program can connect to the running process. Unfortunately there is no simple editable config file, so this is the only way. The RUN command is obviously not the right one, because it does not provide a running instance after Docker has moved on to the next instruction. The best way would be to do this while building the Docker image, but if it has to be done during container start that would be OK as well. But there is (as far as I know) also no easy way to run multiple commands on startup. Does anyone have an idea how to do that?
To make it a bit more clear, here is a simple example from my Dockerfile:
# this command should start the application which has to be configured
RUN /usr/local/server/server.sh
# I tried this command alternatively because the shell script is blocking
RUN nohup /usr/local/server/server.sh &
# this is the command which starts an administration program which connects to the running instance started above
RUN /usr/local/administration/adm [some configuration parameters...]
# afterwards the server process can be stopped
Downloading the complete program directory containing the correct state could be a solution, too. But then the configuration could not be changed easily in the Dockerfile, which is what I would really like.
A Dockerfile is supposed to be a sequential list of instructions to produce an image. The image should contain your application's code, and all of its installable dependencies.
Each RUN instruction gets executed in its own container. Once the command that you run completes, any changed files get committed as a new image layer.
Trying to run a process in the background will cause the command you are running to return immediately. Once that happens, the container is considered stopped, and the Dockerfile's next instruction will be executed in a new, separate container.
If you really need two processes running, you will need to produce a command that you can pass to a single RUN instruction.
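With the paths from your example, a single RUN along these lines is the usual workaround; the sleep and the kill are crude placeholders for "wait until the server is ready" and "shut it down cleanly", so adjust them to whatever your server actually supports:

# start the server in the background, configure it, then stop it --
# all inside one RUN so everything happens within a single layer
RUN /usr/local/server/server.sh & \
    SERVER_PID=$! && \
    sleep 10 && \
    /usr/local/administration/adm [some configuration parameters...] && \
    kill "$SERVER_PID"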
I'm trying to create a Docker container based on CentOS 7 that will host R, shiny-server, and rstudio-server, but I need to have systemd in order for the services to start. I can use the systemd-enabled CentOS image as a basis, but then I need to run the container in privileged mode and allow access to /sys/fs/cgroup on the host. I might be able to tolerate the less secure situation, but then I'm not able to share the container with users running Docker on Windows or Mac.
I found this question but it is 2 years old and doesn't seem to have any resolution.
Any tips or alternatives are appreciated.
UPDATE: SUCCESS!
Here's what I found: For shiny-server, I only needed to execute shiny-server with the appropriate parameters from the command line. I captured the appropriate call into a script file and call that using the final CMD line in my Dockerfile.
rstudio-server was trickier. First, I needed to install initscripts to get the dependencies in place so that some of the rstudio scripts would work. After this, executing rstudio-server start would essentially do nothing and give no error. I traced the call through the various links and found myself in /usr/lib/rstudio-server/bin/rstudio-server. The daemonCmd() function tests cat /proc/1/comm to determine how to start the server. For some reason it was failing, but looking at the script, it seems clear that it needs to execute /etc/init.d/rstudio-server start. If I do that manually or in a Docker CMD line, it seems to work.
I've taken those two CMD line requirements and put them into an sh script that gets called from a CMD line in the Dockerfile.
A bit of a hack, but not bad. I'm happy to hear any other suggestions.
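For reference, the script boils down to roughly this (the shiny-server parameters I mentioned are specific to my setup, so I've left them out here):

#!/bin/sh
# start.sh -- called from the Dockerfile's CMD line
# start rstudio-server through its init script, as traced above
/etc/init.d/rstudio-server start
# run shiny-server in the foreground so the container stays up
exec shiny-server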
You don't necessarily need to use an init system like systemd.
Essentially, you need to start multiple services, and there are existing patterns for this. Check out this page about how to use supervisord to achieve the same thing: https://docs.docker.com/engine/admin/using_supervisord/
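As a sketch, assuming supervisord is installed in the image and that the two services live at their usual locations (both program paths and the rserver flag are assumptions for a CentOS install, so verify them against your setup):

# /etc/supervisord.conf -- minimal example
[supervisord]
nodaemon=true

[program:shiny-server]
command=/usr/bin/shiny-server

[program:rstudio-server]
command=/usr/lib/rstudio-server/bin/rserver --server-daemonize=0

The Dockerfile's CMD then just runs supervisord with that config file, and supervisord keeps both services in the foreground and restarts them if they die.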