How to call the entry-point of redmine docker image? - docker

I'm trying to get redmine running in docker. I'm new to both.
I'm using the "default" redmine image (version 3.3, because version 4.X is not yet supported by redmine mobile apps).
The issue is that the redmine container starts before the db is ready and then fails. So I want to try building a "sleep" into the container using "command", but for that I need to work out how to start redmine via "command". From what I found, I need to call "/docker-entrypoint.sh", but this doesn't work:
command: >
  /docker-entrypoint.sh
I think this is the actual start script (from the current version):
docker-entrypoint.sh

You can use the entrypoint keyword instead of command in your docker-compose.yml; however, for the issue you have with the database you don't actually need to call the entrypoint itself. You can add wait-for-it or wait-for to the redmine image by extending it with a Dockerfile, then in your docker-compose.yml you can use this as a command:
Use port 3306 in case of MySQL or port 5432 in case of PostgreSQL, and change the word db according to the database service name inside your docker-compose.yml.
The rest of the command, which comes after --, is based on the CMD line in redmine's Dockerfile as shown here:
command: ["./wait-for-it.sh", "db:3306", "--", "rails", "server", "-b", "0.0.0.0"]
More explanation can be found in the following answer
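As an illustration, a minimal sketch of that setup (the wait-for-it.sh location, the service layout and the db service name are assumptions, not something from the official image):

# Dockerfile -- extends the official image and adds the wait script
FROM redmine:3.3
# wait-for-it.sh is assumed to sit next to this Dockerfile; it is copied into the image's working directory
COPY wait-for-it.sh ./wait-for-it.sh
RUN chmod +x ./wait-for-it.sh

# docker-compose.yml -- MySQL example with the database service named "db"
services:
  redmine:
    build: .
    command: ["./wait-for-it.sh", "db:3306", "--", "rails", "server", "-b", "0.0.0.0"]
    depends_on:
      - db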

Related

Docker with Ruby on Rails on a development environment

I'm learning Docker and I'm trying to configure a Ruby on Rails project to run on it (on development environment). But I'm having some trouble.
I managed to configure docker-compose to start a container with the terminal open, so I can do bundle install, start a server or use rails generators. However, every time I run the command to start, it starts a new container, where I have to do bundle install again (it takes a while).
So I'd like to know if there is a way to reuse components already created.
Here is my Dockerfile.dev
FROM ruby:2.7.4-bullseye
WORKDIR '/apps/gaia_api'
EXPOSE 3000
RUN gem install rails bundler
CMD ["/bin/bash"]
And here is my docker-compose file:
version: "3.8"
services:
gaia_api:
build:
dockerfile: Dockerfile.dev
context: "."
volumes:
- .:/apps/gaia_api
environment:
- USER_DB_RAILS
- PASSWORD_DB_RAILS
ports:
- "3000:3000"
The command I'm using to run is: docker-compose run --service-ports gaia_api.
I tried to use the docker commands build, create and start; however, the volume mapping doesn't work. In the container's terminal, the files of the volume are not there.
The commands I tried.
docker build -t gaia -f Dockerfile.dev .
docker create -v ${pwd}:/apps/gaia_api -it -p 3000:3000 gaia
docker start -i f36d4d9044b08e42b2b9ec1b02b03b86b3ae7da243f5268db2180f3194823e48
There is probably something I still don't understand. So I ask: what's the best way to configure Docker for Ruby on Rails development? And will it be possible to add new services later? (I plan, once I get the first part working, to add Postgres and a Vue project.)
EDIT: Forgot to say that I'm on Mac OS Big Sur
EDIT 2: I found what was wrong with the volumes: I was typing -v ${pwd}:/apps instead of -v $(pwd):/apps.
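For reference, the corrected create command (the same flags as above, with $(pwd) command substitution instead of ${pwd}):

docker create -v $(pwd):/apps/gaia_api -it -p 3000:3000 gaia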

Cloud Run Deploy fails: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable

I have a project which I had previously successfully deployed to Google Cloud Run, and set up with a trigger such that upon pushing to the repo's main branch on Github, it would automatically deploy. It worked great.
Then I tried to rename the github repo, which meant deleting and creating a new trigger, and now I cannot get it working again.
Every time, the build succeeds but deployment fails with this error in Cloud Build:
Step #2 - "Deploy": ERROR: (gcloud.run.services.update) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I have not changed anything other than the repo name, leading me to believe the fix is not with my code, but I tried some changes there anyway.
I have looked into the solutions set forth in this post. However, I believe I am listening on the correct port.
My app is using Python and Flask, and contains this:
if __name__ == "__main__":
    app.run(debug=False, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
This should use the PORT environment variable (8080) and otherwise default to 8080. I also tried just using port=8080.
I tried explicitly exposing the port in the Dockerfile, which also did not work:
FROM python:3.7
#Copy files into docker image dir, and make that the current working dir
COPY . /docker-image
WORKDIR /docker-image
RUN pip install -r requirements.txt
CMD ["flask", "run", "--host", "0.0.0.0"]
EXPOSE 8080
Cloud Run does seem to be using port 8080 - if I dig into the response, I see this nested under Response.spec.container.0:
ports: [
  0: {
    containerPort: 8080
    name: "http1"
  }
]
All that said, if I look at the logs, it shows "Now running on Port 5000".
I have no idea where that Port 5000 is coming from or being set, but trying to change the ports in Python/Flask and the Dockerfile to 5000 leads to the same errors.
How do I get it to run on Port 8080? It's very strange to me that this was working FINE prior to renaming the repo and creating a new trigger. How is this setup different? The Trigger does not give an option to set the port so I'm not sure how that caused this error.
You have mixed things up. The default port of the flask run command is indeed 5000. If you want to change it, you need to add the --port parameter to your flask run command:
CMD ["flask", "run", "--host", "0.0.0.0","--port","8080"]
In addition, the flask run command is the Flask runtime and totally ignores the standard Python entry point if __name__ == "__main__":. If you want to use that entry point, use the Python runtime:
CMD ["python", "<main file>.py"]

Conflicting / Multiple start commands between `Dockerfile` and `docker-compose.yml`

I was following this tutorial to get a Rails 6 Application up and running on Docker (although this question isn't specific to Rails)
In the Dockerfile it has the following command
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
Great, so it's giving it a startup command to start the rails server locally.
Later in the same article it shows the following in the docker-compose.yml file:
services:
  ...
  web:
    build: .
    command: bash -c "foreman start -f Procfile.dev-server"
  ...
Now it's providing a different command to start the app (using the foreman gem, which likely starts the rails server in a similar fashion to the first command).
Which "command" is the one that actually executes and starts everything up? Does the docker-compose command override the Dockerfile CMD when I run docker-compose up ?
The command: in docker-compose.yml, or the command given at the end of a docker run command, takes precedence. No matter what else you specify, a container only runs one command, and then exits.
In a typical image that does package some single application, best practice is to COPY the application code (or compiled binary) in and set an appropriate CMD that runs it, even if in development you'll be running it with modified settings.
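A quick way to see the precedence, assuming the image from the tutorial is tagged rails-app (a hypothetical name):

docker-compose up                      # runs the compose "command:": foreman start -f Procfile.dev-server
docker run rails-app                   # runs the Dockerfile CMD: bundle exec rails server -b 0.0.0.0
docker run rails-app rails console     # arguments after the image name override the CMD entirely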

node and react running with docker-compose.yml file

I have a sample application using Node.js and React, so my project folder consists of a client folder and a server folder. The client folder was created using create-react-app.
I have created a Dockerfile for each of the two folders, and I am using a docker-compose.yml at the root of the project.
Everything is working fine. Now I just want to host this application, and I am trying to use Jenkins.
Since I have little knowledge on the DevOps side, I have some doubts:
1) If I use two Dockerfiles, one for the client and one for the server, and they are started by docker-compose.yml, will they run in two different containers or in a single container? From what I have read I think there will be two containers, and that's the point of the docker-compose.yml file. I'm a little bit confused about this.
2) Also, when I do sudo docker-compose up it runs perfectly, but it shows "to create production build use npm run build". How can I change this based on the environment? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run npm start or npm run build depending on the environment?
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile at the root of the project? I have seen most projects having a single Dockerfile. Does that mean I cannot use docker-compose.yml for hosting the application?
4) The reason I use NODE_COMMAND for the server in the command property of the docker-compose.yml file is that when I run the application locally I need auto reloading, so if I set NODE_COMMAND=nodemon in the terminal it uses that instead of running node index.js, while in production it runs only node index.js because I don't set any NODE_COMMAND.
5) Do I need the CMD in the Dockerfile of both client and server, given that when I run docker-compose up it takes the command from docker-compose.yml? So I think the precedence goes to the docker-compose.yml file, is that right?
6) What is the use of volumes? Are they required in the docker-compose.yml file?
7) In the .env file I am using API_HOST and APP_SERVER_PORT. How does this work internally with package.json? Is it doing the proxy thing? When we need to hit Node.js we usually give "proxy": "http://localhost:4000", but here how does it become http://server:4000? How does this work?
8) When we are creating containers we have ports like 3000, 3001, ... So how do the container port and our application port get matched? Is that taken care of by the exposed environment variables and the ports in the docker-compose.yml file?
Please see the folder structure below:
movielisting
  client
    Dockerfile
    package.json
    package-lock.json
    ... other create-react-app folders like src ...
  server
    Dockerfile
    index.js
  docker-compose.yml
  .env
Dockerfile -- client
FROM node:10.15.1-alpine
#Create app directory and use it as the working directory
RUN mkdir -p /srv/app/client
WORKDIR /srv/app/client
COPY package.json /srv/app/client
COPY package-lock.json /srv/app/client
RUN npm install
COPY . /srv/app/client
CMD ["npm", "start"]
Dockerfile -- server
FROM node:10.15.1-alpine
#Create app directory
RUN mkdir -p /srv/app/server
WORKDIR /srv/app/server
COPY package.json /srv/app/server
COPY package-lock.json /srv/app/server
RUN npm install
COPY . /srv/app/server
CMD ["node", "index.js"]
docker-compose.yml -- root of project
version: "3"
services:
#########################
# Setup node container
#########################
server:
build: ./server
expose:
- ${APP_SERVER_PORT}
environment:
API_HOST: ${API_HOST}
APP_SERVER_PORT: ${APP_SERVER_PORT}
ports:
- ${APP_SERVER_PORT}:${APP_SERVER_PORT}
volumes:
- ./server:/srv/app/server
command: ${NODE_COMMAND:-node} index.js
##########################
# Setup client container
##########################
client:
build: ./client
environment:
- REACT_APP_PORT=${REACT_APP_PORT}
expose:
- ${REACT_APP_PORT}
ports:
- ${REACT_APP_PORT}:${REACT_APP_PORT}
volumes:
- ./client/src:/srv/app/client/src
- ./client/public:/srv/app/client/public
links:
- server
command: npm run start
.env
API_HOST="http://localhost:4000"
APP_SERVER_PORT=4000
REACT_APP_PORT=3000
package.json -- client
"proxy": "http://server:4000"
What can I refactor here?
Any help appreciated.
1) If I use two Dockerfiles, one for the client and one for the server, and they are started by docker-compose.yml, will they run in two different containers or in a single container? From what I have read I think there will be two containers, and that's the point of the docker-compose.yml file. I'm a little bit confused about this.
Each Dockerfile will build a Docker image, so in the end you will have two images: one for the React application and one for the Node.js backend. When docker-compose starts them, each image runs in its own container.
2) Also, when I do sudo docker-compose up it runs perfectly, but it shows "to create production build use npm run build". How can I change this based on the environment? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run npm start or npm run build depending on the environment?
You need to build the React application within the steps in its Dockerfile in order to use it as a normal application. You can also use environment variables to customize the image during the build using build args, for example to pass a custom path or anything else.
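As an illustration, a minimal sketch of a production-oriented client Dockerfile (the multi-stage build and the nginx image are assumptions, not part of your current setup):

# build stage: produce the static files with create-react-app
FROM node:10.15.1-alpine AS build
WORKDIR /srv/app/client
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

# serve stage: ship only the build output with nginx instead of the dev server
FROM nginx:alpine
COPY --from=build /srv/app/client/build /usr/share/nginx/html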
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile at the root of the project? I have seen most projects having a single Dockerfile. Does that mean I cannot use docker-compose.yml for hosting the application?
It would be better to use the Dockerfile(s) with Jenkins in order to build your images, and keep the docker-compose.yml file(s) for deploying the application itself without using the build keyword.
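For example, a deployment-only compose file could reference images that Jenkins has already built and pushed (the image names below are hypothetical):

version: "3"
services:
  server:
    image: myregistry/movielisting-server:latest
    ports:
      - "4000:4000"
  client:
    image: myregistry/movielisting-client:latest
    ports:
      - "3000:3000"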
4) The reason I use NODE_COMMAND for the server in the command property of the docker-compose.yml file is that when I run the application locally I need auto reloading, so if I set NODE_COMMAND=nodemon in the terminal it uses that instead of running node index.js, while in production it runs only node index.js because I don't set any NODE_COMMAND.
Using command inside the docker-compose.yml file overrides the CMD of the Dockerfile that was set during the build step.
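With the command line you already have (command: ${NODE_COMMAND:-node} index.js), switching behaviour could look like this, assuming nodemon is installed in the server image:

# local development with auto reloading
NODE_COMMAND=nodemon docker-compose up
# production-style run; NODE_COMMAND is unset, so the default "node" is used
docker-compose up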
5) Do I need the CMD in the Dockerfile of both client and server, given that when I run docker-compose up it takes the command from docker-compose.yml? So I think the precedence goes to the docker-compose.yml file, is that right?
Generally speaking, yes, you need it. However, as long as you intend to override it from the docker-compose file, you could set it to something minimal such as CMD ["node", "--help"].
6) What is the use of volumes? Are they required in the docker-compose.yml file?
Volumes are needed in case you have files shared between containers or you need to keep data persistent on the host.
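In your setup the bind mounts give you live code updates from the host; a named volume (the db-data example below is hypothetical, since your stack has no database yet) would instead keep data across container recreation:

services:
  server:
    volumes:
      - ./server:/srv/app/server      # bind mount: host code is visible inside the container
  db:
    image: mysql:5.7                  # hypothetical service, only to illustrate persistence
    volumes:
      - db-data:/var/lib/mysql        # named volume: data survives container recreation
volumes:
  db-data: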
7) In the .env file I am using API_HOST and APP_SERVER_PORT. How does this work internally with package.json? Is it doing the proxy thing? When we need to hit Node.js we usually give "proxy": "http://localhost:4000", but here how does it become http://server:4000? How does this work?
server is the hostname (an alias) of the Node.js container inside the Docker network once you start your application. Why is it named server? Because that is the service name inside your docker-compose.yml file, in this part:
services:
  server:
But of course you can change it by adding an alias to it under the networks keyword inside the docker-compose.yml file.
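A minimal sketch of such an alias (the network name app-net and the alias api are assumptions):

services:
  server:
    networks:
      app-net:
        aliases:
          - api
networks:
  app-net: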
Note: React itself is client side, which means it runs in the browser, so it won't be able to contact the Node.js application through the Docker network. You may use the host IP itself, or use localhost and make the Node.js app accessible through localhost.
8) When we are creating containers we have ports like 3000, 3001, ... So how do the container port and our application port get matched? Is that taken care of by the exposed environment variables and the ports in the docker-compose.yml file?
Docker itself does not know which port your application is using, so you have to make both of them use the same port. In Node.js this is achievable by using an environment variable.
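For example, a minimal sketch of server/index.js using Node's built-in http module (your real index.js is not shown; this only illustrates reading the port from the environment):

const http = require('http');
// Use the same value that docker-compose exposes and publishes (APP_SERVER_PORT=4000 in .env)
const port = process.env.APP_SERVER_PORT || 4000;
http.createServer((req, res) => {
  res.end('ok');
}).listen(port, () => console.log(`Server listening on port ${port}`));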
For more details:
https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg
https://docs.docker.com/compose/networking/
https://docs.docker.com/compose/compose-file/#aliases
https://docs.docker.com/compose/compose-file/#command
https://facebook.github.io/create-react-app/docs/deployment
If anyone is facing issues connecting React and Express, make sure there is NO localhost attached to the server API address in the client code
(e.g. http://localhost:5000/api should be changed to /api),
since the proxy entry is in the package.json file.
PS: if no entry is there, add
{
  "proxy": "http://server:5000"
}
to package.json ('server' is the name of your Express app's container/service in the docker-compose file).
Finally made it work. Thought of sharing this in case it helps anyone else.

Linode/lamp + docker-compose

I want to install the linode/lamp container to work on a WordPress project locally without messing up my machine with all the LAMP dependencies.
I followed this tutorial which worked great (it's actually super simple).
Now I'd like to use docker-compose because I find it more convenient to simply having to type docker-compose up and being good to go.
Here is what I have done:
Dockerfile:
FROM linode/lamp
RUN service apache2 start
RUN service mysql start
docker-compose.yml:
web:
  build: .
  ports:
    - "80:80"
  volumes:
    - .:/var/www/example.com/public_html/
When I do docker-compose up, I get:
▶ docker-compose up
Recreating gitewordpress_web_1...
Attaching to gitewordpress_web_1
gitewordpress_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
I'm guessing I need a command argument in my docker-compose.yml but I have no idea what I should set.
Any idea what I am doing wrong?
You cannot start those two processes in the Dockerfile.
The Dockerfile determines what commands are to be run when building the image.
In fact many base images like the Debian ones are specifically designed to not allow starting any services during build.
What you can do is create a file called run.sh in the same folder that contains your Dockerfile.
Put this inside:
#!/usr/bin/env bash
service apache2 start
service mysql start
tail -f /dev/null
This script just starts both services and forces the console to stay open.
You need to put it inside your container though; you do this via two lines in the Dockerfile. Overall I'd use this Dockerfile:
FROM linode/lamp
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/bash", "-lc", "/run.sh"]
This ensures that the file is properly run when firing up the container, so that the container stays running and those services actually get started.
What you should also look out for is that port 80 is actually available on your host machine. If anything is already bound to it, this compose file will not work.
Should this be the case for you (or you're not sure), try changing the port line to something like 81:80 and try again.
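For example, with the remapped port the compose file would look like this (everything else unchanged):

web:
  build: .
  ports:
    - "81:80"
  volumes:
    - .:/var/www/example.com/public_html/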
I would like to point you to another resource where a LAMP server is already configured for you; you might find it handy for your local development environment.
You can find it mentioned below:
https://github.com/sprintcube/docker-compose-lamp
