I have an application (deployed in a Docker container) managed via supervisord.
My supervisord.conf looks like:
[supervisord]
nodaemon=true
logfile=/var/log/supervisor/supervisord.log
loglevel=INFO
[program:anjay]
priority=1
#USE SOME_CLI for different run configurations
command=/some/binary %(ENV_SOME_CLI)s
stdout_logfile=/dev/fd/1
stderr_logfile=/dev/fd/2
stdout_logfile_maxbytes=0
stderr_logfile_maxbytes=0
autostart=true
autorestart=false
stopsignal=INT
user=root
I want to be able to restart /some/binary with different arguments (driven by the SOME_CLI environment variable).
Starting the application for the first time works perfectly; the arguments are expanded. E.g.:
export SOME_CLI=A
/some/binary A
Then I export a new value, SOME_CLI=B, and after a restart I expect:
export SOME_CLI=B
/some/binary B
Unfortunately, it is still run as
/some/binary A
Is it possible to restart the configured application with different arguments this way?
If not, how can I achieve such functionality?
Remark: I know that my application is deployed in a container and I could just restart the container with different arguments. It just doesn't seem like the right thing to do (restarting the whole container just to change some arguments).
Correct me if I'm wrong.
Add the environment variable directly in your conf file, in the [program] section, such as:
[program:anjay]
environment=ENV_SOME_CLI=your_value
Apply changes by telling supervisord there's a change in that file with supervisorctl update.
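For example, a minimal sketch of that workflow (the conf path and new value are assumptions; adjust to your setup):
# change the value in the program's conf file (path assumed)
sed -i 's/^environment=.*/environment=ENV_SOME_CLI=B/' /etc/supervisor/conf.d/anjay.conf
# re-read the config files and restart programs whose config changed
supervisorctl reread
supervisorctl update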
I recently created an app using Flask and put the .py file in a Docker container. However, I am confused by online examples where people assign the port in different ways.
First of all, at the bottom of my .py file I wrote:
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8000, debug=True)
In some cases I saw people specify the port in CMD when writing the Dockerfile:
CMD ["python3", "app.py", "--host=0.0.0.0", "--port=8000"]
In my own experience, the port assigned in CMD didn't work in my case at all. I'd like to learn the differences between the two approaches and when to use each.
Regarding this approach:
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8000, debug=True)
__name__ is equal to "__main__" when the app is launched directly with the Python interpreter (executed with the command python app.py) - which is a Python technicality and has nothing to do with Flask. In that case the app.run function is called, and it accepts the various arguments as stated. app.run causes the Werkzeug development server to run.
This block will not run if you're executing the program with a production WSGI server like gunicorn, as __name__ will not be equal to "__main__" in that case, so the app.run call is bypassed.
In practice, putting the app.run call in this if block means you can run the dev server with python app.py and avoid running the dev server when the same code is imported by gunicorn or similar in production.
There are lots of older tutorials and posts which reference the above approach. Modern versions of Flask ship with the flask command, which is intended to replace this. So essentially, without that if block, you can launch the development server, which imports your app object in a similar manner to gunicorn:
flask run -h 0.0.0.0 -p 8000
This automatically looks for an object called app in app.py, and accepts the host and port options, as you can see from flask run --help:
Options:
-h, --host TEXT The interface to bind to.
-p, --port INTEGER The port to bind to.
One advantage of this method is that the development server won't crash if you're using the auto reloader and introduce syntax errors. And of course the same code will be compatible with a production server like gunicorn.
With the above in mind, regarding the command you pass:
python app.py --host=0.0.0.0 --port=8000
I'm not sure if you've been confused by references to the flask command's supported options, but for this one to work you'd need to manually write some code to do something with those options. This could be done with a Python module like argparse, but that would probably be redundant given that the flask command supports this out of the box.
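For illustration, a minimal sketch of what that manual handling might look like (hypothetical; the flask command makes this unnecessary):
import argparse
from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    # parse --host/--port ourselves, mimicking what `flask run` provides for free
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", default="127.0.0.1")
    parser.add_argument("--port", type=int, default=5000)
    args = parser.parse_args()
    app.run(host=args.host, port=args.port)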
To conclude: you should probably remove the if block, and your Dockerfile should contain:
CMD ["flask", "run", "--host=0.0.0.0", "--port=8000"]
You may also wish to check the FLASK_ENV environment variable is set to development to use the auto reloader, and be aware that the CMD line would need to be changed within this Dockerfile to run with gunicorn or similar in production, but that's probably outwith the scope of this question.
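For reference, a sketch of what that production CMD might look like (this assumes gunicorn is installed in the image and app.py defines an object called app):
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]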
CMD ["python3", "app.py", "--host=0.0.0.0", "--port=8000"] means: Python run application app.py and pass the --host and the --port parameters to that application. It is up to your app.py to do something with those parameters. If your app does not process those flags, then you do not need to add them to the CMD.
If in your code you have app.run(host='0.0.0.0',port=8000), then your app will always be listening to port 8000 inside the container. In this case you can just use CMD ["python3", "app.py"]
If you wanted to have the ability to change the port and host that your app is listening to, the you could add some code to read the values from the command line. Once you setup your app to look at values from the command line, then it would make sense to run CMD ["python3", "app.py", "--host=0.0.0.0", "--port=8000"]
I need to set some environment variables in a Docker container after it starts. When the container starts, the env var X gets a value; I then want to set the env var Y to the first part of X's value, using this command:
Y=$(echo $X | cut -d'#' -f 1)
Is there any way to do this?
I tried ENTRYPOINT and CMD in the Dockerfile, but it doesn't work.
The container will be deployed on a Kubernetes cluster; I also tried to set them in the config.yaml file, but that doesn't work either.
You are on the right track that you would have to handle this with either CMD or ENTRYPOINT, because you want it to be dynamic and derived from existing data. The specifics depend on your container and use case, though.
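As a minimal sketch (the script name and the final command are assumptions), a wrapper entrypoint can derive Y before handing off to the real process:
#!/bin/sh
# entrypoint.sh: derive Y from the first '#'-separated field of X,
# then replace this shell with the container's actual command
export Y="$(echo "$X" | cut -d'#' -f1)"
exec "$@"
and in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["your-app"]
This works on Kubernetes too, since the derived variable is set at container start rather than at image build time.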
You can use the ENV instruction in your Dockerfile, like below:
ENV PORT 8080
Source and more info - https://vsupalov.com/docker-build-time-env-values/
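Note that ENV only bakes a static default into the image; it can still be overridden at run time (the image name below is an assumption):
docker run -e PORT=9090 myimage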
I have a recently-Dockerized web app that I would like to get running on AWS ECS, and a few fundamental concepts (which I don't see explained in the AWS docs) are throwing me off.
First, when you Edit/configure a new container, it asks you to specify the image to use, but then also has an Environment section.
The Entry point, Command and Working directory fields look suspiciously similar to the commands I already specified when creating my Docker image (here's my Dockerfile):
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
So if ECS is asking me for an image (that's already been built using this Dockerfile), why in tarnation do I need to re-specify the exact same values for WORKDIR, EXPOSE, ENTRYPOINT, CMD, etc.?!?
Also outside of ECS I run my container like so:
docker run -it -p 9200:9200 -d --net="host" --env-file ~/myapp-local.env --name myapp myapp
Notice how I specify the env file? Does ECS support env files, or do I really have to enter each and every env var from my env file into this UI here?
Also, I see there is a Docker Labels section near the bottom.
Are these different than env vars, or are they interchangeable?
Yes, you need to add the environment variables either through the UI or through the CLI.
For the CLI you need to pass them as a JSON template.
Also, if you have already specified these values in the Dockerfile, then you don't need to pass them again.
Any values passed externally will overwrite the internal/default values from the Dockerfile.
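For example, a hedged sketch of the environment block inside a task definition's container definition (the name, image, and variable below are assumptions):
{
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "myapp:latest",
      "environment": [
        { "name": "SPRING_PROFILES_ACTIVE", "value": "local" }
      ]
    }
  ]
}
A task definition like this can be registered with aws ecs register-task-definition --cli-input-json file://taskdef.json instead of entering each variable in the UI.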
I want to install the linode/lamp container to work on some WordPress project locally without messing up my machine with all the LAMP dependencies.
I followed this tutorial which worked great (it's actually super simple).
Now I'd like to use docker-compose because I find it more convenient to simply type docker-compose up and be good to go.
Here is what I have done:
Dockerfile:
FROM linode/lamp
RUN service apache2 start
RUN service mysql start
docker-compose.yml:
web:
  build: .
  ports:
    - "80:80"
  volumes:
    - .:/var/www/example.com/public_html/
When I do docker-compose up, I get:
▶ docker-compose up
Recreating gitewordpress_web_1...
Attaching to gitewordpress_web_1
gitewordpress_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
I'm guessing I need a command argument in my docker-compose.yml, but I have no idea what I should set.
Any idea what I am doing wrong?
You cannot start those two processes from the Dockerfile.
The Dockerfile determines what commands are run when building the image.
In fact, many base images, like the Debian ones, are specifically designed to prevent services from being started during the build.
What you can do is create a file called run.sh in the same folder that contains your Dockerfile.
Put this inside:
#!/usr/bin/env bash
service apache2 start
service mysql start
tail -f /dev/null
This script just starts both services and then keeps a foreground process alive so the container stays running.
You need to put it inside your container, though, which you do via two lines in the Dockerfile. Overall I'd use this Dockerfile then:
FROM linode/lamp
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/bash", "-lc", "/run.sh"]
This ensures that the script is properly run when the container fires up, so that those services actually get started and the container stays running.
You should also make sure that port 80 is actually available on your host machine. If anything is already bound to it, this compose file will not work.
Should this be the case for you (or if you're not sure), try changing the port line to something like 81:80 and try again.
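For example, the ports section of the docker-compose.yml above would become:
ports:
  - "81:80"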
I would like to point you to another resource where a LAMP server is already configured for you and which you might find handy for your local development environment:
https://github.com/sprintcube/docker-compose-lamp
I really don't know how to get supervisor to work with environment variables.
Below is a configuration snippet.
[program:htNotificationService]
priority=2
#autostart=true
#autorestart=true
directory=/home/ubuntu/workspace/htFrontEnd/heythat/htsite
command = /usr/bin/python htNotificationService.py -service
stdout_logfile=/var/log/heythat/htNotificationService.log
redirect_stderr=true
environment=PATH=/home/ubuntu/workspace/htFrontEnd/heythat
stopsignal=QUIT
I have tried the following:
environment=PATH=/home/ubuntu/workspace/htFrontEnd/heythat
environment=PYTHONPATH=$PYTHONPATH:/home/ubuntu/workspace/htFrontEnd/heythat
environment=PATH=/home/ubuntu/workspace/htFrontEnd/heythat,PYTHONPATH=$PYTHONPATH:/home/ubuntu/workspace/htFrontEnd/heythat
When I start supervisor I get
htNotificationService: ERROR (abnormal termination)
I can start it from the shell by setting the Python path, but not from supervisor. In the logs I get an error saying that an import can't be found. Well, that would be solved if supervisor worked. I even have the path in /etc/environment.
Why will supervisor not work?
Referencing existing env vars is done with %(ENV_VARNAME)s.
See: https://github.com/Supervisor/supervisor/blob/master/supervisor/skel/sample.conf
Setting multiple environment variables is done by separating them with commas.
See: http://supervisord.org/subprocess.html#subprocess-environment
Try:
environment=PYTHONPATH=/opt/mypypath:%(ENV_PYTHONPATH)s,PATH=/opt/mypath:%(ENV_PATH)s
In your .conf file, under the [supervisord] block, you can add all the environment key=value pairs like so:
[supervisord]
environment=CELERY_BROKER_URL="amqp://guest:guest@127.0.0.1:5672//",FLASK_CONFIG="TESTING"
[program:celeryd]
command=celery worker -A celery --loglevel=info -P gevent -c 1000
If you don't want to hardcode the variables but want to pull them in from the OS environment, first, in your bash shell:
Export the env var:
>> export CELERY_BROKER_URL="amqp://guest:guest@127.0.0.1:5672//"
Reload bash:
>> . ~/.bashrc
Check that the env vars are set properly:
>> env
Now modify the conf file to read as follows. Note: prefix your env variable names with ENV_:
[supervisord]
environment=CELERY_BROKER_URL="%(ENV_CELERY_BROKER_URL)s",FLASK_CONFIG="%(ENV_FLASK_CONFIG)s"
[program:celeryd]
command=celery worker -A celery --loglevel=info -P gevent -c 1000
This works for me; note that each continuation line after environment= must be indented:
environment=
    CLOUD_INSTANCE_NAME=media-server-xx-xx-xx-xx,
    CLOUD_APPLICATION=media-server,
    CLOUD_APP_COMPONENT=none,
    CLOUD_ZONE=a,
    CLOUD_REGION=b,
    CLOUD_PRIVATE_IP=none,
    CLOUD_PUBLIC_IP=xx.xx.xx.xx,
    CLOUD_PUBLIC_IPV6=xx.xx.xx.xx.xx.xx,
    CLOUD_PROVIDER=c
I know this is old, but I just struggled with this for hours and wanted to maybe help out the next person.
Don't forget to reload your config files after making updates:
supervisorctl reread
supervisorctl update
If you installed supervisor from a package installer, check which Supervisor version you are using.
As of August 2016 you will get 3.0b2. If this is the case, you will need a newer version of supervisor. You can get it by installing supervisor manually or by using Python's pip. Make sure all the dependencies are met, along with the upstart setup, so that supervisord works as a service and starts on system boot.
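For example, a hedged sketch of the pip route (the %(ENV_X)s expansion shown in the answers above needs a release newer than the 3.0b2 beta):
sudo pip install --upgrade supervisor
supervisord --version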