I am trying to spin up a webapp using docker build. For generating certs, I want to use certbot. However, if I just put
RUN certbot --nginx,
I get
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel): Plugins selected: Authenticator nginx, Installer nginx
An unexpected error occurred:
EOFError.
Is there a way to provide this information in the Dockerfile or ignore it?
You can pass that information non-interactively with flags:
RUN certbot -n -m ${EMAIL} -d ${DOMAINS} --nginx
My one suggestion is not to do this during docker build, but instead generate the cert when the container starts up. This is because letsencrypt will attempt to connect to your server at the domains you're specifying, which probably is not where you're building the image.
To decrease startup time though, you'll want to skip bootstrapping dependencies at container start (they do still need to be installed). For this purpose, I would run a harmless certbot command in your Dockerfile (this ensures the dependencies are properly installed in the image at build time) and then alter the CMD (assuming you're using the nginx image).
Dockerfile:
ARG EMAIL_ARG=defaultemail@example.com
ARG DOMAINS_ARG=example.com
ENV EMAIL=${EMAIL_ARG}
ENV DOMAINS=${DOMAINS_ARG}
RUN certbot --help
...
CMD ["sh", "-c", "certbot --no-bootstrap -n -m ${EMAIL} -d ${DOMAINS} --nginx", "&&", "nginx", "-g", "daemon off;"]
The -n is for non-interactive mode
The --no-bootstrap is to skip the bootstrapping of dependencies (installing python and such)
The -m is to specify the email used for important notifications
The -d is to specify a comma separated list of domains
Using "sh", "-c" will invoke a shell when the command is executed, so you'll get the shell like behavior of replacing your environment variables with their values. Passing the values in to the build as build args doesn't expose them at startup time of your container, which is why they are then being placed into environment variables. The added benefit of them being used from environment variables is you can override these values in different environments (dev, test, stage, prod, etc...).
I have a project which I had previously successfully deployed to Google Cloud Run, and set up with a trigger such that upon pushing to the repo's main branch on Github, it would automatically deploy. It worked great.
Then I tried to rename the github repo, which meant deleting and creating a new trigger, and now I cannot get it working again.
Every time, the build succeeds but deployment fails with this error in Cloud Build:
Step #2 - "Deploy": ERROR: (gcloud.run.services.update) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I have not changed anything other than the repo name, leading me to believe the fix is not with my code, but I tried some changes there anyway.
I have looked into the solutions set forth in this post. However, I believe I am listening on the correct port.
My app is using Python and Flask, and contains this:
if __name__ == "__main__":
    app.run(debug=False, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
This should use the PORT env var and otherwise default to 8080. I also tried just using port=8080.
I tried explicitly exposing the port in the Dockerfile, which also did not work:
FROM python:3.7
#Copy files into docker image dir, and make that the current working dir
COPY . /docker-image
WORKDIR /docker-image
RUN pip install -r requirements.txt
CMD ["flask", "run", "--host", "0.0.0.0"]
EXPOSE 8080
Cloud Run does seem to be using port 8080 - if I dig into the response, I see this nested under Response.spec.container.0 :
ports: [
0: {
containerPort: 8080
name: "http1"
}
]
All that said, if I look at the logs, it shows "Now running on Port 5000".
I have no idea where that Port 5000 is coming from or being set, but trying to change the ports in Python/Flask and the Dockerfile to 5000 leads to the same errors.
How do I get it to run on Port 8080? It's very strange to me that this was working FINE prior to renaming the repo and creating a new trigger. How is this setup different? The Trigger does not give an option to set the port so I'm not sure how that caused this error.
You have mixed things up. The default port of the flask run command is indeed 5000. If you want to change it, you need to add the --port parameter to your flask run command:
CMD ["flask", "run", "--host", "0.0.0.0","--port","8080"]
In addition, your flask run command uses the Flask runtime and completely ignores the standard Python entrypoint if __name__ == "__main__":. If you want to use this entrypoint, use the Python runtime instead:
CMD ["python", "<main file>.py"]
I need to start two services/commands in Docker. From Google I gathered that I can use ENTRYPOINT and CMD to pass different commands, but when I start the container only the ENTRYPOINT script runs and the CMD does not seem to run. Since I am new to Docker, can you help me with how to run two commands?
Dockerfile :
FROM registry.suse.com/suse/sle15
ADD repolist/*.repo /etc/zypp/repos.d/
RUN zypper refs && zypper refresh
RUN zypper in -y bind
COPY docker-entrypoint.d/* /docker-entrypoint.d/
COPY --chown=named:named named /var/lib/named
COPY --chown=named:named named.conf /etc/named.conf
COPY --chown=named:named forwarders.conf /etc/named.d/forwarders.conf
ENTRYPOINT [ "./docker-entrypoint.d/startbind.sh" ]
CMD ["/usr/sbin/named","-g","-t","/var/lib/named","-u","named"]
startbind.sh:
#! /bin/bash
/usr/sbin/named.init start
Thanks & Regards,
Mohamed Naveen
You can use the supervisor tool to manage multiple services inside a single docker container.
Check out the example below (running Redis and a Django server from a single CMD):
Dockerfile:
# Base Image
FROM alpine
# Installing required tools
RUN apk --update add nano supervisor python3 redis
# Adding Django Source code to container
ADD /django_app /src/django_app
# Adding supervisor configuration file to container
ADD /supervisor /src/supervisor
# Installing required python modules for app
RUN pip3 install -r /src/django_app/requirements.txt
# Exposing container port for binding with host
EXPOSE 8000
# Using Django app directory as home
WORKDIR /src/django_app
# Initializing Redis server and Gunicorn server from supervisors
CMD ["supervisord","-c","/src/supervisor/service_script.conf"]
service_script.conf:
## service_script.conf
[supervisord] ## This is the main process for the Supervisor
nodaemon=true ## This setting is to specify that we are not running in daemon mode
[program:redis_script] ## This is the part where we give the name and add config for our 1st service
command=redis-server ## This is the main command to run our 1st service
autorestart=true ## This setting specifies that the supervisor will restart the service in case of failure
stderr_logfile=/dev/stdout ## This setting specifies that the supervisor will log the errors in the standard output
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout ## This setting specifies that the supervisor will log the output in the standard output
stdout_logfile_maxbytes = 0
## same setting for 2nd service
[program:django_service]
command=gunicorn --bind 0.0.0.0:8000 django_app.wsgi
autostart=true
autorestart=true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes = 0
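A quick usage sketch (the image tag is illustrative, not from the article): build the image and run it, publishing the port exposed above:
docker build -t django-redis-app .
docker run -p 8000:8000 django-redis-app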
Final output:
Redis and Gunicorn service in same docker container
Options for running more than one service within a container are described really well in this official Docker article: multi-service_container.
I'd recommend reviewing why you need two services in one container (shared data volume, init, etc.), because by properly separating the services you'll have an architecture that is ready to scale, more useful logs, easier lifecycle/resource management, and easier testing.
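As an illustration of that separation (a sketch only, reusing the Redis/Gunicorn pair from the supervisor example above rather than the bind setup from the question), docker-compose can run each process in its own container:
# docker-compose.yml (illustrative)
services:
  redis:
    image: redis
  web:
    build: .
    command: gunicorn --bind 0.0.0.0:8000 django_app.wsgi
    ports:
      - "8000:8000"
    depends_on:
      - redis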
Within startbind.sh
you can do:
#! /bin/bash
#start the second service here, and push it to the background:
/usr/sbin/secondservice.init start &
#then run the last command in the foreground:
/usr/sbin/named.init start
Your /usr/sbin/named.init start command (the last command in the entrypoint) must NOT go into the background; you need to keep it in the foreground.
If this last command is not kept in the foreground, the container will exit.
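A small refinement worth considering (my sketch, not part of the answer; the second service name is the same hypothetical one used above): exec the final command so it replaces the shell as PID 1, stays in the foreground, and receives stop signals directly:
#!/bin/bash
# start the secondary service in the background
/usr/sbin/secondservice.init start &
# exec the main service in the foreground as PID 1 (-g keeps named in the foreground)
exec /usr/sbin/named -g -t /var/lib/named -u named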
You can add both service starts to startbind.sh. Note that RUN is not suitable here: RUN executes commands at image build time, not when the container starts. If it doesn't work, feel free to ask and I'll keep helping you.
I am new to Docker, so bear with me. I have written a Dockerfile that creates an image with a Java language Spring Boot service within. I am now trying to set up an entry point to start the service in the container. I chose to write an external shell script to be called as the entry point.
This is how the service is set up.
When the project is built, a zip file is produced containing the service jar, all dependency jars, config resources and a bash script used to launch the service. The script takes a number of arguments, validates them and uses them to construct and then execute a "java" command to run the service.
If you were to run this the "non-container-way", you would just unpack the zip file in a directory somewhere and invoke the script, passing the necessary arguments. The script displays "usage" information if required arguments are not present.
In the Docker container case, I'm trying to figure out how to do the same from the ENTRYPOINT in the Dockerfile. I specified my launcher script in the ENTRYPOINT, and it did invoke it, although, of course no arguments were passed, so the script exits with the usage information.
I can't figure out how to pass the arguments that the launcher script expects.
I get the impression that I'm missing an important detail in the usage of ENTRYPOINT.
Below are snippets of the relevant files, to try to illustrate my situation.
Dockerfile:
...
# Copy Route Assessor service archive and unpack.
COPY target/route-assessor.zip .
RUN unzip route-assessor.zip
WORKDIR route-assessor
ENTRYPOINT ["run-route-assessor.sh"]
run-route-assessor.sh:
Rather than include a fairly lengthy script, I'll show the usage statement to give an idea of what this script expects for arguments.
show_usage() {
echo "Usage: `basename "$0"` <args>"
echo " --port=<service port>"
echo " --instance=<service instance> [optional, default: 1]"
echo " --uuid=<service UUID>"
echo " --ssl [optional]"
echo " --keystore=<key store path> [required when --ssl specified]"
echo " --key-alias=<key alias> [required when --ssl specified]"
echo " --apm-host=<Elastic APM server host> [optional]"
echo " --apm-port=<Elastic APM server port> [optional]"
}
A container instance was created from the image:
[jo24447@489337-mitll route-assessor]$ docker create --name route-assessor-1 route-assessor
Examples of container start attempts:
[jo24447@489337-mitll route-assessor]$ docker start -ai route-assessor-1
Service port is required.
Usage: run-route-assessor.sh <args>
--port=<service port>
--instance=<service instance> [optional, default: 1]
--uuid=<service UUID>
--ssl [optional]
--keystore=<key store path> [required when --ssl specified]
--key-alias=<key alias> [required when --ssl specified]
--apm-host=<Elastic APM server host> [optional]
--apm-port=<Elastic APM server port> [optional]
[jo24447@489337-mitll route-assessor]$ docker start -ai route-assessor-1 --port=9100
unknown flag: --port
See 'docker start --help'.
If you want to run a container with some default parameters, you need to define these parameters in CMD:
ENTRYPOINT [ "./run-route-assessor.sh" ]
CMD ["-cmd1 value1 -cmd2 value2"]
So you can run the container with just:
docker run image-name
If you want to override that default parameter list with no parameters, you can do it with:
docker run image-name "''"
However, the better way would be to pass a flag to ./run-route-assessor.sh and let the script handle how to run the Spring Boot service with no parameters:
docker run image-name -no-params
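For completeness (my note, not part of the answer above): with an exec-form ENTRYPOINT, anything placed after the image name on docker run is appended as arguments to the script, and with docker create the arguments have to be given at create time rather than to docker start. So the flags from the usage message could be supplied like this (values are placeholders):
docker run --name route-assessor-1 route-assessor --port=9100 --instance=1 --uuid=<service UUID>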
I was following this tutorial to get a Rails 6 Application up and running on Docker (although this question isn't specific to Rails)
In the Dockerfile it has the following command
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
Great, so it's giving it a startup command to start the rails server locally.
Later in the same article it shows the following in the docker-compose.yml file:
services:
...
web:
build: .
command: bash -c "foreman start -f Procfile.dev-server"
...
Now it's providing a different command to start the app (using the foreman gem, which likely starts the rails server in a similar fashion to the first command).
Which "command" is the one that actually executes and starts everything up? Does the docker-compose command override the Dockerfile CMD when I run docker-compose up ?
The command: in docker-compose.yml, or the command given at the end of a docker run command, takes precedence. No matter what else you specify, a container only runs one command, and then exits.
In a typical image that does package some single application, best practice is to COPY the application code (or compiled binary) in and set an appropriate CMD that runs it, even if in development you'll be running it with modified settings.
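To make the precedence concrete (a sketch; the image name is a placeholder), the command: in the compose file above is equivalent to overriding the image's CMD on the command line:
# arguments after the image name replace the Dockerfile CMD for this one run
docker run myapp-web bash -c "foreman start -f Procfile.dev-server"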
I have a recently-Dockerized web app that I would like to get running on AWS ECS, and a few fundamental concepts (which I don't see explained in the AWS docs) are throwing me off.
First, when you Edit/configure a new container, it asks you to specify the image to use, but then also has an Environment section:
The Entry point, Command and Working directory fields look suspiciously similar to the commands I already specified when creating my Docker image (here's my Dockerfile):
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
So if ECS is asking me for an image (that's already been built using this Dockerfile), why in tarnation do I need to re-specify the exact same values for WORKDIR, EXPOSE, ENTRYPOINT, CMD, etc.?!?
Also outside of ECS I run my container like so:
docker run -it -p 9200:9200 -d --net="host" --env-file ~/myapp-local.env --name myapp myapp
Notice how I specify the env file? Does ECS support env files, or do I really have to enter each and every env var from my env file into this UI here?
Also I see there is a Docker Labels section near the bottom:
Are these different than env vars, or are they interchangeable?
Yes, you need to add the environment variables either through the UI or through the CLI.
For the CLI you pass them as part of the task definition's JSON template.
Also, if you have already specified these values in the Dockerfile then you don't need to pass them again.
Any values passed externally will overwrite the internal/default values from the Dockerfile.
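As a sketch of that JSON (the family, container name, image, and variable are placeholders, not from the question), environment variables go in the containerDefinitions section of the ECS task definition:
{
  "family": "myapp-task",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "myapp:latest",
      "environment": [
        { "name": "EXAMPLE_VAR", "value": "example-value" }
      ]
    }
  ]
}
It can then be registered with aws ecs register-task-definition --cli-input-json file://taskdef.json.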