Unable to resolve docker container during another container's build process - docker

I have two Dockerfiles, one for a database and one for a web server. The web server's Dockerfile has a RUN statement which requires a connection to the database container. The web server is unable to resolve the database's IP and then errors out. But if I comment out the RUN line and instead run the command manually inside the running container, it successfully resolves the database. Should the web server be able to resolve the database during its build process?
# Web server
FROM tomcat:9.0.26-jdk13-openjdk-oracle
# The database container cannot be resolved when myscript runs. "Unable to connect to the database." is thrown.
RUN myscript
CMD catalina.sh run
# But if I comment out the RUN line then connect to web server container and run myscript, the database container is resolved
docker exec ... bash
# This works
./myscript

I ran into the same problem with database migrations and NuGet pushes. You may want to run something similar on your DB, like migrations, initial/test data and so on. It can be solved in two ways:
Move your DB operations to the ENTRYPOINT so that they're executed at runtime (where the DB container is up and reachable).
Build your image using docker build instead of something like docker-compose up --build, because docker build has a --network switch. You could create a network in your compose file, bring the DB up with docker-compose up -d db-container, and then access it during the build with docker build --network db-container-network -t your-image .
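A sketch of option #2 (service, network, and image names are illustrative; note that Compose normally prefixes network names with the project name unless the network is given an explicit name):

```
# 1. Start only the database from your compose file
docker-compose up -d db-container
# 2. Build the web server image attached to the already-existing compose
#    network, so the DB hostname resolves during RUN steps
docker build --network db-container-network -t your-image .
```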
I'd prefer #1 over #2 if possible, because
it's simpler: the network is only present in the docker-compose file, not in multiple places
you can specify relations using depends_on and make sure they're respected, without having to take care of it manually
But depending on the action you want to execute, you need to make sure it isn't executed multiple times, because it runs on every start, not just during build (when the cache was purged by file changes).
However, I'd consider it best practice anyway, when running such automated DB operations, to expect that they may be executed more than once and should still produce the expected result (e.g. by checking whether the migration version or change is already present).
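A minimal sketch of option #1, assuming a hypothetical /migrate.sh and a marker file as the idempotency guard (a real guard should check state in the database itself, as described above):

```
#!/bin/sh
# entrypoint.sh (sketch) - run DB setup at container start, not at build time.
# /migrate.sh and the marker path are hypothetical placeholders.
if [ ! -f /var/lib/app/.migrated ]; then
    /migrate.sh && touch /var/lib/app/.migrated
fi
# hand over to the main process
exec catalina.sh run
```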


Docker build can't find docker to run tests

I have a NodeJS application that is using ioredis to connect to redis and publish data and other redisy things.
I am trying to write a component test against redis and was able to create a setup/teardown script via jest that runs redis via docker on a random port and tears it down when the tests are done via docker run -d -p 6379 --rm redis and docker stop {containerId}.
This works great locally, but we have the tests running in a multi-stage build in our Dockerfile:
RUN yarn test
When I build this via docker build ., it goes great until it gets to the tests and then fails with the following error: /bin/sh: docker: not found
So Docker is unavailable to the docker build process to run the tests?
Is there a way to give docker build the ability to spin up sibling containers during the build?
This smells to me like a "docker-in-docker" situation.
You can't spin up siblings, but you can spawn a container within a container by doing some tricks (you might need to do some googling to get it right):
install the docker binaries in the "host container"
mount the docker socket from the actual host inside the "host" container, like so docker run -v /var/run/docker.sock:/var/run/docker.sock ...
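A sketch of the socket-mount trick (the image name is illustrative, and the docker CLI must already be installed in the image): at run time, the mounted socket lets docker commands inside the container talk to the host daemon:

```
docker run \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my-build-image \
    yarn test   # the tests' `docker run ... redis` now goes to the host daemon
```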
But you won't be able to do it in the build step, so it won't be easy for your case.
I suggest you prepare a dedicated build container capable of running nested containers, which would basically emulate your local env, and use that in your CI. Still, you might need to refactor your process a bit to make it work.
Good luck :)
In my practice, tests shouldn't be concerned with initializing the database, they should only be concerned about how to connect to the database, so you just pass your db connection data via environment variables.
The way you are doing it won't scale: imagine that you need many more services for your application; it would be difficult and impractical to start them from the tests.
When you are developing locally, it's your responsibility to have the services running before doing the tests.
You can have docker compose scripts in your repository that create and start all the services you need when you start developing.
And when you are using CI in the cloud, you would still use Docker containers and run the tests in them (a Node container with your tests, a Redis container, a MySQL container, etc.) and again just pass the appropriate connection data via environment variables.
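A minimal compose sketch of this approach (service and variable names are illustrative): the tests read the Redis endpoint from environment variables, and the Compose network makes the redis hostname resolvable:

```
# docker-compose.test.yml (sketch)
version: "3.7"
services:
  redis:
    image: redis:6
  tests:
    build: .
    command: yarn test
    depends_on:
      - redis
    environment:
      REDIS_HOST: redis     # the service name doubles as the hostname
      REDIS_PORT: "6379"
```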

Docker - run command on host during build

My query is similar to this Execute Command on host during docker build but I need my container to be running when I execute the command.
Background - I'm trying to create a base image for the database part of an application, using the mysql:8.0 image. The installation instructions for the product require me to run a DDL script to create the database (done, by copying the .sql file to the entrypoint directory), but the second step involves running a Java-based application which reads various config files to insert the required data into the running database. I would like this second step to be captured in the Dockerfile somehow, so I can then build a new base image containing the tables and the initial data.
Things I've thought of:
Install Java and copy the quite large config tool into the container and EXEC the appropriate command, but I want to avoid installing Java into the database container, and certainly into the subsequent image, if I can.
Run the config tool on the host manually and connect to the running container, but my understanding is that this would only apply to the running container; I couldn't get this into a new image. It needs to be done from the Dockerfile for docker build to work.
I suspect docker just isn't designed for this.

Docker Compose: Running a command and then retrieving files

I have a Docker compose setup where I have 20 different services that depend on each other. I'm writing a script that runs tests on a container by using docker-compose run my_service ....
I've got a couple of issues with that though:
After the tests finish, they should output both an XML file with the test results and an XML file with the coverage results. I want my script, which calls docker-compose, to have access to both of these files. This is a challenge because, as far as I know, after running docker-compose run these containers are shut down. The only solution I can think of is running it with --entrypoint "tail -f /dev/null", then executing the test command and retrieving the files. But that's a little cumbersome. Is there a better way?
After the tests finish, I'd like to stop and delete not only the container I was running tests on but all containers that were started because it was dependent on them. How can I do that automatically?
After the tests finish, they should output both an XML file...
If the main function of your task is to read or produce files to the local filesystem, it's often better to run it outside of Docker. In the case of integration tests, this is even pretty straightforward: instead of running the tests inside a Docker container and pointing at the other containers' endpoints, run the tests on the host and point at their published ports. If your test environment can run docker-compose commands then you can launch the container stack as a test fixture.
If for some reason they have to run in Docker, then you can bind-mount a host directory into the container to receive the result files. docker-compose run does support additional -v volume mounts, so you should be able to run something like
docker-compose run -v $PWD/my_service_tests:/output my_service ...
I'd like to stop and delete not only the container I was running tests on but all containers that were started because it was dependent on them.
I don't think Docker Compose has that option; it's not that clever. Consider the case of two different tests running at the same time, each running a separate test container but sharing a database container. The first test can't stop the database container because the second test is using it, but Compose isn't really aware of this.
If you don't mind running a complete isolated stack for each test run, then you can use the docker-compose -p option to do that. Then you can use docker-compose rm to clean everything up, for that specific test run.
docker-compose -p test1 run -v $PWD/test1:/output my_service ...
docker-compose -p test1 stop
docker-compose -p test1 rm
After the tests finish, they should output both an XML file with the test results and an XML file with the coverage results. I want my script, which calls docker-compose, to have access to both of these files.
You can write test reports to some folder inside the container. This folder may be mapped to folder on the Docker host using volumes. So script running docker-compose commands would be able to use them.
This is a challenge because, as far as I know, after running docker-compose run these containers are shut down.
They are stopped. But the next time you run docker-compose up, they are restarted, preserving mounted volumes.
Note:
Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.
It means you can copy report files generated by the test service using docker cp, even after the containers have exited.
docker cp works regardless of volumes. For example, suppose the tests wrote reports.xml to the /test_reports folder inside the container. You can copy the file to the host using docker cp after the test container has stopped.
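For example (container and path names are illustrative):

```
# copy the report out of the (stopped) test container onto the host
docker cp test-container:/test_reports/reports.xml ./reports.xml
```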
After the tests finish, I'd like to stop and delete not only the container I was running tests on but all containers that were started because it was dependent on them. How can I do that automatically?
Use docker-compose down.
According to the documentation, the command
Stops containers and removes containers, networks, volumes, and images created by up.
The command will work if you define the service under test, all dependent services, and the test service itself in the same compose file.
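A sketch of that flow (the service name is illustrative):

```
docker-compose run my_service ...   # dependencies from the compose file start automatically
docker-compose down                 # stops and removes the containers and networks afterwards
```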

Docker - docker-compose - postgres image

I'm using the postgres:latest image, and creating backups using the following command
pg_dump postgres -U postgres > /docker-entrypoint-initdb.d/backups/redmine-$(date +%Y-%m-%d-%H-%M).sql
and it's running periodically using crontab
*/30 * * * * /docker-entrypoint-initdb.d/backup.sh
However, on occasion I might need to run
docker-compose down/up
for whatever reason
The problem
I always need to manually run /etc/init.d/cron start whenever I restart the container. This is a bit of a problem because it's difficult to remember, and if I (or anyone else) forget it, backups won't be made.
According to the documentation, scripts ending with *.sql and *.sh inside the /docker-entrypoint-initdb.d/ are run on container startup (and they do)
However, if I put /etc/init.d/cron start inside an executable .sh file, the other commands inside that file are executed (I've verified that), but the cron service does not start, probably because /etc/init.d/cron start inside the executable file does not execute successfully.
I would appreciate any suggestion for a solution
You will want to keep your Docker containers as independent of other services as possible. I would recommend that, instead of running the cron job in the container, you run it on the host; that way it will run even if the container is restarted (whether automatically or manually).
If you really feel the need for it, I would build a new image with the postgres image as a base and add the cron job right there, so that it is in the container from the start, without any extra scripts needed. Or even create another image just to invoke the cron job and connect via the Docker network.
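A host-side crontab sketch (the container name and backup path are illustrative); note that % must be escaped as \% inside a crontab entry:

```
# /etc/cron.d/pg-backup on the host - dump every 30 minutes via docker exec;
# the redirection runs on the host, so the dump lands outside the container
*/30 * * * * root docker exec postgres-container pg_dump postgres -U postgres > /backups/redmine-$(date +\%Y-\%m-\%d-\%H-\%M).sql
```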
Expanding on @Jite's answer, you could run pg_dump remotely from a different container using the --host option.
This image, for example, provides a minimal environment with the psql client and dump/restore utilities.
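For example, from another container (or the host) on the same Docker network (host and database names are illustrative):

```
# dump over the network instead of from inside the postgres container
pg_dump --host db-container --port 5432 -U postgres postgres > backup.sql
```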

How to execute docker commands after a process has started

I wrote a Dockerfile for a service (I have a CMD pointing to a script that starts the process), but I cannot run any other commands after the process has started. I tried using & to run the process in the background so that the other commands would run after the process has started, but that's not working. Any idea how to achieve this?
For example, consider I started a database server and wanted to run some scripts only after the database process has started, how do I do that?
Edit 1:
My specific use case is that I am running a RabbitMQ server as a service, and I want to create a new user, make them an administrator, and delete the default guest user once the service starts in a container. I can do it manually by logging into the Docker container, but I wanted to automate it by appending these commands to the shell script that starts the RabbitMQ service; that's not working.
Any help is appreciated!
Regards
Specifically around your problem with RabbitMQ: you can create a rabbitmq.config file and copy it over when creating the Docker image.
In that file you can specify both a default_user and a default_pass that will be created when the database is set up from scratch; see https://www.rabbitmq.com/configure.html
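A minimal sketch in the classic Erlang-term rabbitmq.config format (the credentials are placeholders; newer RabbitMQ versions also accept an ini-style rabbitmq.conf with default_user / default_pass keys):

```
%% rabbitmq.config (sketch) - copied into the image, e.g. to /etc/rabbitmq/
[
  {rabbit, [
    {default_user, <<"admin">>},
    {default_pass, <<"s3cret">>}
  ]}
].
```

As noted above, these defaults only take effect when the broker starts with a fresh (empty) database.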
As for the general problem: you can change the entrypoint to a script that runs whatever you need and then starts the service you want, instead of the service's own run script.
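A sketch of such an entrypoint for the RabbitMQ case (user name and password are placeholders; rabbitmqctl await_startup requires a reasonably recent RabbitMQ):

```
#!/bin/sh
# entrypoint sketch: start the broker, wait for it, do one-time user setup
rabbitmq-server &
rabbitmqctl await_startup
rabbitmqctl add_user admin s3cret
rabbitmqctl set_user_tags admin administrator
rabbitmqctl delete_user guest
wait    # keep the container alive on the broker process
```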
I partially understood your question. Based on what I perceived, I would recommend using a COPY command in the Dockerfile to copy the script you want to run into the image. Once you build the image and run the container, start the DB service, then exec into the container and run the script manually.
If you have a CMD command in the Dockerfile, it will be overwritten by any command you specify at execution time. So I don't think you have another option for running the script automatically unless you drop the CMD from the Dockerfile.
