Conflicting / Multiple start commands between `Dockerfile` and `docker-compose.yml` - docker

I was following this tutorial to get a Rails 6 application up and running on Docker (although this question isn't specific to Rails).
In the Dockerfile it has the following command:
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
Great, so it's giving it a startup command to start the rails server locally.
Later in the same article it shows the following in the docker-compose.yml file:
services:
  ...
  web:
    build: .
    command: bash -c "foreman start -f Procfile.dev-server"
  ...
Now it's providing a different command to start the app (using the foreman gem, which likely starts the rails server in a similar fashion to the first command).
Which "command" is the one that actually executes and starts everything up? Does the docker-compose command override the Dockerfile CMD when I run docker-compose up?

The command: in docker-compose.yml, or the command given at the end of a docker run command, takes precedence. No matter what else you specify, a container runs exactly one command; when that command finishes, the container exits.
In a typical image that packages a single application, best practice is to COPY the application code (or compiled binary) in and set an appropriate CMD that runs it, even if in development you'll run it with modified settings.
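Sketching the question's two pieces side by side (service and file names taken from the question), the override looks like this; with both present, docker-compose up runs foreman and the Dockerfile's CMD is ignored:

```yaml
# Dockerfile sets:  CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
# docker-compose.yml:
services:
  web:
    build: .
    # This wins over the Dockerfile CMD for this service:
    command: bash -c "foreman start -f Procfile.dev-server"
```

An argument given on the command line, e.g. docker-compose run web bash, would in turn override this command: line.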

Related

Docker with Ruby on Rails on a development environment

I'm learning Docker and I'm trying to configure a Ruby on Rails project to run on it (on development environment). But I'm having some trouble.
I managed to configure docker-compose to start a container with the terminal open, so I can do bundle install, start a server or use rails generators. However, every time I run the command to start, it starts a new container, where I have to do bundle install again (it takes a while).
So I'd like to know if there is a way to reuse components already created.
Here is my Dockerfile.dev
FROM ruby:2.7.4-bullseye
WORKDIR '/apps/gaia_api'
EXPOSE 3000
RUN gem install rails bundler
CMD ["/bin/bash"]
And here is my docker-compose file:
version: "3.8"
services:
  gaia_api:
    build:
      dockerfile: Dockerfile.dev
      context: "."
    volumes:
      - .:/apps/gaia_api
    environment:
      - USER_DB_RAILS
      - PASSWORD_DB_RAILS
    ports:
      - "3000:3000"
The command I'm using to run is: docker-compose run --service-ports gaia_api.
I tried to use the docker commands build, create and start, however the volume mapping doesn't work. On the terminal of the container, the files of the volume are not there.
The commands I tried:
docker build -t gaia -f Dockerfile.dev .
docker create -v ${pwd}:/apps/gaia_api -it -p 3000:3000 gaia
docker start -i f36d4d9044b08e42b2b9ec1b02b03b86b3ae7da243f5268db2180f3194823e48
There is probably something I still don't understand. So I ask: what's the best way to configure Docker for Ruby on Rails development? And will it be possible to add new services later? (I plan, once I get the first part working, to add postgres and a vue project.)
EDIT: Forgot to say that I'm on Mac OS Big Sur
EDIT 2: I found what was wrong with the volumes: I was typing -v ${pwd}:/apps instead of -v $(pwd):/apps.
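For the gem-caching part of the question, a common pattern (a sketch, not from this thread) is to keep installed gems in a named volume so they survive container recreation; /usr/local/bundle is the default GEM_HOME in the official ruby images:

```yaml
version: "3.8"
services:
  gaia_api:
    build:
      dockerfile: Dockerfile.dev
      context: "."
    volumes:
      - .:/apps/gaia_api
      - bundle_cache:/usr/local/bundle   # gems persist across containers
    ports:
      - "3000:3000"
volumes:
  bundle_cache:
```

With this in place, bundle install only has to do real work when the Gemfile changes, even though each docker-compose run still creates a fresh container.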

Docker container staying up and running

I'm failing to understand how a docker container stays up and running. From what I know, if the container doesn't have any active processes it will shut down automatically, independent of the commands given.
That's the reason I have instructed my docker-compose.yml to do this command which keeps it alive:
gateway:
  build: .
  image: me/gateway
  container_name: gateway
  command: tail -F /dev/null   # <-- keeps the container alive
  # restart: always
  ports:
    - "10091:10091"
  volumes:
    - ./logs:/root/logs
    - vendor:/root/vendor
    - .:/root
  env_file:
    - .env
While my Dockerfile does the following:
FROM php:7-fpm-alpine
EXPOSE 10091
WORKDIR /root
COPY . .
COPY src/scripts/generateConfig.sh /usr/local/bin/generateConfig
RUN ["chmod", "+x", "/usr/local/bin/generateConfig"]
In this scenario the container stays up, and it's all fine. However I would like to run a script once the container starts, so I have added the additional line to the end of my Dockerfile:
ENTRYPOINT ["generateConfig"]
After the command is run, the container automatically stops. There are no errors when I inspect the log, and the script does its job as it's supposed to. The script is responsible for running a Ratchet web socket process indefinitely.
How can I make the script run and simultaneously keep my container running?
When you start a container it runs the command you defined in ENTRYPOINT or in CMD.
This command usually starts a process and as long as this process runs in the foreground the container will still run.
If your process is run in the background the container will stop.
Thus, if you are running a script you need to make sure it never ends.
You can achieve that simply by adding a line like this to the end of your script:
tail -f <some log file> # keep listening on your application log file.
Alternatively, you can add an infinite loop:
while true; do sleep 2; done
Hope it'll help
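Rather than appending tail -f to the script, a more idiomatic fix is to exec the foreground process as the script's last line, so it replaces the shell and becomes the container's main process. A minimal illustration of how exec keeps the same process (runnable with any POSIX sh):

```shell
# Both lines print the same PID: `exec` replaces the current shell
# in place instead of forking a child. That is why ending an
# entrypoint script with `exec <server>` keeps the container alive
# exactly as long as the server runs.
sh -c 'echo $$; exec sh -c "echo \$\$"'
```

In the gateway's case this would mean ending generateConfig with something like exec php "${serviceProcess}" (serviceProcess being the variable from the question's script) instead of backgrounding it.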
I have found the issue. The script used in command to start the web socket process had this line:
nohup php ${serviceProcess} >> /dev/null 2>&1 &
The solution was to stop backgrounding the process and suppressing its output, using only:
nohup php ${serviceProcess}
It works well now

How to call the entry-point of redmine docker image?

I'm trying to get redmine running in docker. I'm new to both.
I'm using the "default" redmine image (version 3.3, because version 4.X is not yet supported by redmine mobile apps).
I have the issue that the redmine container starts before the db is ready, and fails. So I want to build a "sleep" into the container using "command", but I need to work out how to start redmine via "command". From what I found, I need to call "/docker-entrypoint.sh", but this doesn't work:
command: >
  /docker-entrypoint.sh
I think this is the actual start script (from the current version):
docker-entrypoint.sh
You can use the entrypoint keyword instead of command in your docker-compose.yml, but for the database issue you don't actually need to call the entrypoint itself. You can add wait-for-it or wait-for to the redmine image by extending it with a Dockerfile, then use this as the command in your docker-compose.yml:
command: ["./wait-for-it.sh", "db:3306", "--", "rails", "server", "-b", "0.0.0.0"]
Use port 3306 for MySQL or 5432 for PostgreSQL, and change the db word to match the database service name inside your docker-compose.yml.
The part of the command after -- is based on the CMD line in redmine's Dockerfile as shown here.
More explanation can be found in the following answer.
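Put together, the compose service could look like this (a sketch assuming a MySQL service named db and wait-for-it.sh already copied into the image by your extending Dockerfile):

```yaml
redmine:
  build: .   # extends the redmine:3.3 image and adds wait-for-it.sh
  command: ["./wait-for-it.sh", "db:3306", "--", "rails", "server", "-b", "0.0.0.0"]
  ports:
    - "3000:3000"
  depends_on:
    - db
```

depends_on only controls start order; wait-for-it.sh is what actually blocks until the database port accepts connections.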

Linode/lamp + docker-compose

I want to install linode/lamp container to work on some wordpress project locally without messing up my machine with all the lamp dependencies.
I followed this tutorial which worked great (it's actually super simple).
Now I'd like to use docker-compose because I find it more convenient to simply having to type docker-compose up and being good to go.
Here what I have done:
Dockerfile:
FROM linode/lamp
RUN service apache2 start
RUN service mysql start
docker-compose.yml:
web:
  build: .
  ports:
    - "80:80"
  volumes:
    - .:/var/www/example.com/public_html/
When I do docker-compose up, I get:
▶ docker-compose up
Recreating gitewordpress_web_1...
Attaching to gitewordpress_web_1
gitewordpress_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
I'm guessing I need a command argument in my docker-compose.yml but I have no idea what I should set.
Any idea what I am doing wrong?
You cannot start those two processes from the Dockerfile.
RUN instructions execute while the image is being built; any service they start is gone by the time a container runs from that image.
In fact many base images like the Debian ones are specifically designed to not allow starting any services during build.
What you can do is create a file called run.sh in the same folder that contains your Dockerfile.
Put this inside:
#!/usr/bin/env bash
service apache2 start
service mysql start
tail -f /dev/null
This script just starts both services and forces the console to stay open.
You need to put it inside your container though, which you do via two lines in the Dockerfile. Overall I'd use this Dockerfile then:
FROM linode/lamp
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/bash", "-lc", "/run.sh"]
This ensures that the file is properly run when the container fires up, so that the container stays running and those services actually get started.
You should also check that port 80 is actually available on your host machine. If anything is already bound to it, this compose file will not work.
Should this be the case for you (or you're not sure), try changing the port line to something like 81:80 and try again.
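For example, remapping only the host side of the binding (81 here is just an arbitrary free port):

```yaml
web:
  build: .
  ports:
    - "81:80"   # host port 81 -> container port 80
  volumes:
    - .:/var/www/example.com/public_html/
```

The site would then be reachable at http://localhost:81 while Apache inside the container still listens on 80.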
I would like to point you to another resource where LAMP server is already configured for you and you might find it handy for your local development environment.
You can find it mentioned below:
https://github.com/sprintcube/docker-compose-lamp

Docker Compose and execute command on starting container

I am trying to get my head around the command option in Docker Compose. In my current docker-compose.yml I start the prosody docker image (https://github.com/prosody/prosody-docker) and I want to create a list of users when the container is actually started.
The documentation of the container states that a user can be made using the environment options LOCAL, DOMAIN, and PASSWORD, but this is a single user. I need a list of users.
From what I've read, it seemed that using the command option I should be able to execute commands in a starting or running container.
xmpp:
  image: prosody/prosody
  command: prosodyctl register testuser localhost testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
But this doesn't seem to work. I checked the running container using docker exec -it <containerid> bash, but the user is not created.
Is it possible to execute a command on a started container using docker-compose or are there other options?
The command option is exactly the same as what is passed at the end of a docker run command, for example echo "hello world" in:
docker run debian echo "hello world"
The command replaces the image's default CMD; if the image defines an ENTRYPOINT, the command is passed to it as arguments. In the case of your image, it gets passed to this script. Looking at that script, your command will just get passed to the shell. I would expect any command you pass to run successfully, but the container will exit once your command completes. Note that the default command is set in the Dockerfile to CMD ["prosodyctl", "start"], which is presumably a long-running process that starts the server.
I'm not sure how Prosody works (or even what it is), but I think you probably want to either map in a config file which holds your users, or set up a data container to persist your configuration. The first solution would mean adding something like:
volumes:
  - my_prosody_config:/etc/prosody
To the docker-compose file, where my_prosody_config is a directory holding the config files.
The second solution could involve first creating a data container like:
docker run -v /etc/prosody -v /var/log/prosody --name prosody-data prosody-docker echo "Prosody Data Container"
(The echo should complete, leaving you with a stopped container which has volumes set up for the config and logs. Just make sure you don't docker rm this container by accident!)
Then in the docker-compose file add:
volumes_from:
  - prosody-data
Hopefully you can then add users by running docker exec as you did before, then running prosodyctl register at the command line. But this is dependent on how prosody and the image behave.
CMD is directly related to ENTRYPOINT in Docker (see this question for an explanation). So when changing one of them, you also have to check how this affects the other. If you look at the Dockerfile, you will see that the default command is to start prosody through CMD ["prosodyctl", "start"]. entrypoint.sh just passes this command through, as Adrian mentioned. However, your command overrides the default command, so your prosody daemon is never started. Maybe you want to try something like
xmpp:
  image: prosody/prosody
  command: sh -c "prosodyctl register testuser localhost testpassword && prosodyctl start"
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
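Note that the whole compound command must be a single quoted argument to sh -c; without quoting, sh -c takes only the first word as its script and the remaining words become positional parameters, so the && chain never runs. A quick runnable illustration (using echo in place of prosodyctl):

```shell
# Unquoted: `sh -c` receives only `echo` as its script;
# "hello" and "world" become $0 and $1, so nothing is printed.
sh -c echo hello world

# Quoted: the whole string is the script, so both words print.
sh -c 'echo hello world'
```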
instead. More elegant and somehow what the creator seems to have intended (judging from the entrypoint.sh script) would be something like
xmpp:
  image: prosody/prosody
  environment:
    - LOCAL=testuser
    - DOMAIN=localhost
    - PASSWORD=testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
To answer your final question: no, it is not possible (as of now) to execute commands on a running container via docker-compose. However, you can easily do this with docker:
docker exec -i prosody_container_name prosodyctl register testuser localhost testpassword
where prosody_container_name is the name of your running container (use docker ps to list running containers).
