I really don't know how to get supervisor to work with environment variables.
Below is a configuration snippet.
[program:htNotificationService]
priority=2
#autostart=true
#autorestart=true
directory=/home/ubuntu/workspace/htFrontEnd/heythat/htsite
command = /usr/bin/python htNotificationService.py -service
stdout_logfile=/var/log/heythat/htNotificationService.log
redirect_stderr=true
environment=PATH=/home/ubuntu/workspace/htFrontEnd/heythat
stopsignal=QUIT
I have tried the following:
environment=PATH=/home/ubuntu/workspace/htFrontEnd/heythat
environment=PYTHONPATH=$PYTHONPATH:/home/ubuntu/workspace/htFrontEnd/heythat
environment=PATH=/home/ubuntu/workspace/htFrontEnd/heythat,PYTHONPATH=$PYTHONPATH:/home/ubuntu/workspace/htFrontEnd/heythat
When I start supervisor I get
htNotificationService: ERROR (abnormal termination)
I can start it from the shell by setting the python path, but not from supervisor. In the logs I get an error that says an import can't be found. Well, that would be solved if supervisor worked. I even have the path in /etc/environment.
Why will supervisor not work?
Referencing existing env vars is done with %(ENV_VARNAME)s
See: https://github.com/Supervisor/supervisor/blob/master/supervisor/skel/sample.conf
Setting multiple environment variables is done by separating them with commas
See: http://supervisord.org/subprocess.html#subprocess-environment
Try:
environment=PYTHONPATH=/opt/mypypath:%(ENV_PYTHONPATH)s,PATH=/opt/mypath:%(ENV_PATH)s
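Applied to the program from the question, a sketch might look like this (it assumes PYTHONPATH is already set in the environment supervisord itself was started from; if it is not, %(ENV_PYTHONPATH)s will fail to expand, so drop that part):
[program:htNotificationService]
directory=/home/ubuntu/workspace/htFrontEnd/heythat/htsite
command=/usr/bin/python htNotificationService.py -service
environment=PYTHONPATH="/home/ubuntu/workspace/htFrontEnd/heythat:%(ENV_PYTHONPATH)s"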
In your .conf file, under the supervisord block, you can add all the environment key=value pairs like this:
[supervisord]
environment=CELERY_BROKER_URL="amqp://guest:guest@127.0.0.1:5672//",FLASK_CONFIG="TESTING"
[program:celeryd]
command=celery worker -A celery --loglevel=info -P gevent -c 1000
If you don't want to hardcode the variables but want to pull them in from the OS environment, first set them in your shell (add the export to ~/.bashrc if you want it to persist):
Export the env var
>> export CELERY_BROKER_URL="amqp://guest:guest@127.0.0.1:5672//"
Reload Bash
>> . ~/.bashrc
Check if env vars are set properly
>> env
Now modify the conf file to read as below. Note: prefix the variable names with ENV_ when referencing them:
[supervisord]
environment=CELERY_BROKER_URL="%(ENV_CELERY_BROKER_URL)s",FLASK_CONFIG="%(ENV_FLASK_CONFIG)s"
[program:celeryd]
command=celery worker -A celery --loglevel=info -P gevent -c 1000
This works for me. Note the leading whitespace (tab or spaces) before each continuation line:
environment=
    CLOUD_INSTANCE_NAME=media-server-xx-xx-xx-xx,
    CLOUD_APPLICATION=media-server,
    CLOUD_APP_COMPONENT=none,
    CLOUD_ZONE=a,
    CLOUD_REGION=b,
    CLOUD_PRIVATE_IP=none,
    CLOUD_PUBLIC_IP=xx.xx.xx.xx,
    CLOUD_PUBLIC_IPV6=xx.xx.xx.xx.xx.xx,
    CLOUD_PROVIDER=c
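If a value itself contains a comma or other special characters, quoting it should keep the parser happy; for example (CLOUD_TAGS is a made-up variable for illustration):
environment=
    CLOUD_TAGS="a,b,c",
    CLOUD_ZONE=a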
I know this is old but I just struggled with this for hours and wanted to maybe help out the next guy.
Don't forget to reload your config files after making updates
supervisorctl reread
supervisorctl update
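If in doubt whether the new environment was picked up, an explicit restart of the program does no harm (name taken from the question above):
supervisorctl restart htNotificationService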
If you install supervisor from a package installer, check which Supervisor version you are using.
As of August 2016 you will get 3.0b2. If this is the case you will need a newer version of supervisor. You can get it by installing supervisor manually or by using Python's pip. Make sure all the dependencies are met, along with the upstart setup so that supervisord works as a service and starts on system boot.
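A sketch of the pip route (make sure it doesn't clash with a version installed by your package manager):
sudo pip install --upgrade supervisor
supervisord --version   # should now report something newer than 3.0b2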
I generally do not post here, so forgive me if anything is not up to code, but I have built a micro-service to run database migrations using flask-migrate/alembic. This has seemed like a very good option for the group I am working with. Up until very recently, the micro-service could be deployed very easily by pointing it at different databases and running upgrades, but recently the flask db upgrade command has stopped working inside the docker container. As can be seen, I am using alembic-utils here to handle some aspects of db migrations less commonly handled by flask-migrate, like views/materialized views etc.
Dockerfile:
FROM continuumio/miniconda3
COPY ./ ./
WORKDIR /dbapp
RUN conda update -n base -c defaults conda -y
RUN conda env create -f environment_py38db.yml
RUN chmod +x run.sh
ENV PATH /opt/conda/envs/py38db/bin:$PATH
RUN echo "source activate py38db" > ~/.bashrc
RUN /bin/bash -c "source activate py38db"
ENTRYPOINT [ "./run.sh" ]
run.sh:
#!/bin/bash
python check_create_db.py
flask db upgrade
environment_py38db.yml:
name: py38db
channels:
- defaults
- conda-forge
dependencies:
- Flask==2.2.0
- Flask-Migrate==3.1.0
- Flask-SQLAlchemy==3.0.2
- GeoAlchemy2==0.12.5
- psycopg2
- boto3==1.24.96
- botocore==1.27.96
- pip
- pip:
- retrie==0.1.2
- alembic-utils==0.7.8
EDITED TO INCLUDE OUTPUT:
from inside the container:
(base) david@<ip>:~/workspace/dbmigrations$ docker run --rm -it --entrypoint bash -e PGUSER="user" -e PGDATABASE="trial_db" -e PGHOST="localhost" -e PGPORT="5432" -e PGPASSWORD="pw" --net=host migrations:latest
(py38db) root@<ip>:/dbapp# python check_create_db.py
successfully created database : trial_db
(py38db) root@<ip>:/dbapp# flask db upgrade
from local environment
(py38db) david@<ip>:~/workspace/dbmigrations/dbapp$ python check_create_db.py
database: trial_db already exists: skipping...
(py38db) david@<ip>:~/workspace/dbmigrations/dbapp$ flask db upgrade
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 41f5be29ae44, initital migration to generate tables
INFO [alembic.runtime.migration] Running upgrade 41f5be29ae44 -> 34c067400f6b, add materialized views <. . .>
INFO [alembic.runtime.migration] Running upgrade 34c067400f6b -> 34c067400f6b_views, add <. . .>
INFO [alembic.runtime.migration] Running upgrade 34c067400f6b_views -> b51d57354e6c, add <. . .>
INFO [alembic.runtime.migration] Running upgrade b51d57354e6c -> 97d41cc70cb2, add-functions
(py38db) david@<ip>:~/workspace/dbmigrations/dbapp$
As the output shows, flask db upgrade hangs inside the docker container but runs fine locally. Both environments read the db parameters from environment variables, and these are being read correctly (the fact that check_create_db.py runs confirms this). I can share more of the code if that helps you figure this out.
For good measure, here is the python script:
check_create_db.py
import psycopg2
import os
def recreate_db():
    """ checks to see if the database set by env variables already exists and
    creates the appropriate db if it does not exist.
    """
    connection = None
    try:
        # print statements would be replaced by the python logging module
        connection = psycopg2.connect(
            user=os.environ["PGUSER"],
            password=os.environ["PGPASSWORD"],
            host=os.environ["PGHOST"],
            port=os.environ["PGPORT"],
            dbname='postgres'
        )
        connection.set_session(autocommit=True)
        with connection.cursor() as cursor:
            cursor.execute(f"SELECT 1 FROM pg_catalog.pg_database WHERE datname = '{os.environ['PGDATABASE']}'")
            exists = cursor.fetchone()
            if not exists:
                cursor.execute(f"CREATE DATABASE {os.environ['PGDATABASE']}")
                print(f"successfully created database : {os.environ['PGDATABASE']}")
            else:
                print(f"database: {os.environ['PGDATABASE']} already exists: skipping...")
    except Exception as e:
        print(e)
    finally:
        # connection is initialized to None above, so this check is safe even if connect() failed
        if connection:
            connection.close()


if __name__ == "__main__":
    recreate_db()
Ok, so I was able to find the bug easily enough by going through the commits to isolate when the program stopped working, and it was an easy fix. It has, however, left me with more questions.
The cause of the problem was that in the root directory of the project (so dbmigrations, if you are following above) I had added an __init__.py. This was unnecessary, but I thought it might help me access database objects defined outside of the env.py in my migrations directory after adding the path to my sys.path in env.py. This was not required, and I probably should've known not to add an __init__.py to a folder I did not intend to use as a python module.
What I continue to find strange is that the project still ran perfectly fine locally, with the same __init__.py in the root folder. From within the docker container, however, this caused the flask-migrate commands to become unresponsive. This remains a point of curiosity.
In any case, if you feel like throwing an __init__.py into the root directory of a project, here is a data point that should discourage you from doing so, and it would probably be poor design in most cases anyway.
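A quick sanity check for stray package markers in the project root, before building the image, might be (just a sketch):
find . -maxdepth 1 -name "__init__.py"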
I need to start two services/commands in Docker. From Google I learned that I can use ENTRYPOINT and CMD to pass different commands, but when I start the container only the ENTRYPOINT script runs and CMD does not seem to run. Since I am new to Docker, can you help me understand how to run two commands?
Dockerfile :
FROM registry.suse.com/suse/sle15
ADD repolist/*.repo /etc/zypp/repos.d/
RUN zypper refs && zypper refresh
RUN zypper in -y bind
COPY docker-entrypoint.d/* /docker-entrypoint.d/
COPY --chown=named:named named /var/lib/named
COPY --chown=named:named named.conf /etc/named.conf
COPY --chown=named:named forwarders.conf /etc/named.d/forwarders.conf
ENTRYPOINT [ "./docker-entrypoint.d/startbind.sh" ]
CMD ["/usr/sbin/named","-g","-t","/var/lib/named","-u","named"]
startbind.sh:
#! /bin/bash
/usr/sbin/named.init start
Thanks & Regards,
Mohamed Naveen
You can use supervisor to manage multiple services inside a single docker container.
Check out the example below (running Redis and a Django server using a single CMD):
Dockerfile:
# Base Image
FROM alpine
# Installing required tools
RUN apk --update add nano supervisor python3 redis
# Adding Django Source code to container
ADD /django_app /src/django_app
# Adding supervisor configuration file to container
ADD /supervisor /src/supervisor
# Installing required python modules for app
RUN pip3 install -r /src/django_app/requirements.txt
# Exposing container port for binding with host
EXPOSE 8000
# Using Django app directory as home
WORKDIR /src/django_app
# Initializing Redis server and Gunicorn server from supervisors
CMD ["supervisord","-c","/src/supervisor/service_script.conf"]
service_script.conf file
## service_script.conf
[supervisord] ## This is the main process for the Supervisor
nodaemon=true ## This setting is to specify that we are not running in daemon mode
[program:redis_script] ## This is the part where we give the name and add config for our 1st service
command=redis-server ## This is the main command to run our 1st service
autorestart=true ## This setting specifies that the supervisor will restart the service in case of failure
stderr_logfile=/dev/stdout ## This setting specifies that the supervisor will log the errors in the standard output
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout ## This setting specifies that the supervisor will log the output in the standard output
stdout_logfile_maxbytes = 0
## same setting for 2nd service
[program:django_service]
command=gunicorn --bind 0.0.0.0:8000 django_app.wsgi
autostart=true
autorestart=true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes = 0
Final output:
Redis and Gunicorn service in same docker container
Options for running more than one service within a container are described really well in this official docker article:
multi-service_container.
I'd recommend reviewing why you need two services in one container (shared data volume, init, etc.), because by properly separating the services you'll get a ready-to-scale architecture, more useful logs, easier lifecycle/resource management, and easier testing.
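The wrapper-script approach described in that article looks roughly like this (the process names are placeholders):
#!/bin/bash
# start the first process in the background
./my_first_process &
# start the second process in the background
./my_second_process &
# wait for either process to exit, then propagate its exit status
wait -n
exit $?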
Within startbind.sh
you can do:
#! /bin/bash
#start the second service here, and push it to the background:
/usr/sbin/secondservice.init start &
#then run the last command:
/usr/sbin/named.init start
Your /usr/sbin/named.init start (the last command in the entrypoint) must NOT go into the background; you need to keep it in the foreground.
If this last command is not kept in the foreground, Docker will exit.
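A related pattern worth knowing: Docker appends the CMD arguments to the ENTRYPOINT, so the entrypoint script receives them as "$@". Ending startbind.sh with exec "$@" hands control to the CMD and keeps it in the foreground (a sketch, not what the answer above does):
#!/bin/bash
# one-off setup can go here, then hand over to the CMD from the Dockerfile
exec "$@"   # with the Dockerfile above this becomes: /usr/sbin/named -g -t /var/lib/named -u named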
You can add both service starts to startbind.sh. You can also use RUN, which executes commands while the docker image is being built. If that doesn't work, you can ask me to keep helping you.
I have an application (deployed in a docker container) managed via supervisord.
My supervisord.conf looks like:
[supervisord]
nodaemon=true
logfile=/var/log/supervisor/supervisord.log
loglevel=INFO
[program:anjay]
priority=1
#USE SOME_CLI for different run configurations
command=/some/binary %(ENV_SOME_CLI)s
stdout_logfile=/dev/fd/1
stderr_logfile=/dev/fd/2
stdout_logfile_maxbytes=0
stderr_logfile_maxbytes=0
autostart=true
autorestart=false
stopsignal=INT
user=root
I want to be able to restart /some/binary with different arguments (driven by the SOME_CLI env variable).
Starting the application for the first time works perfectly, and the arguments are expanded. E.g.:
export SOME_CLI=A
/some/binary A
Then I export a new SOME_CLI=B and expect after a restart:
export SOME_CLI=B
/some/binary B
Unfortunately, it still runs as
/some/binary A
Is it possible to restart the configured application with different arguments that way?
If not, how can I achieve such functionality?
Remark: I know my application is deployed in a container and I could just restart the container with different arguments. It just doesn't seem to be the right thing to do (restarting the whole container just to change some arguments).
Correct me if I'm wrong
Add the environment variable directly in your conf file in the [program] section, such as:
[program:anjay]
environment=ENV_SOME_CLI=your_value
Apply changes by telling supervisord there's a change in that file with supervisorctl update.
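A sketch of the workflow after editing the value in the conf file:
supervisorctl reread    # shows which program sections changed
supervisorctl update    # applies the change and restarts the affected programs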
I'm trying to launch a container using docker-compose services, but unfortunately the container exits with code 0.
The containers are built from a repository that comes from a .tar.gz archive. This archive is a CentOS VM.
I want to create 6 containers from the same archive.
Instead of typing the docker command 6 times, I would like to create a docker-compose.yml file where I can summarize their commands and tags.
I have started to write a docker-compose.yml file just to create one container.
Here is my docker-compose.yml:
version: '2'
services:
  dvpt:
    image: compose:test.1
    container_name: cubop1
    command: mkdir /root/essai/
    tty: true
Do not pay attention to the command, as I just had to specify one.
So my question is: why is the container exiting? Is there another solution to build these containers at the same time?
Thanks for your responses.
The answer is actually the first comment. I'll explain Miguel's comment a bit.
First, we need to understand that a Docker container runs a single command. The container keeps running as long as the process started by that command is running. Once the process completes and exits, the container stops.
With that understanding, we can make an assumption about what is happening in your case. When you start your dvpt service it runs the command mkdir /root/essai/. That command creates the folder and then exits. At this point the Docker container stops, because the process exited (with status 0, indicating that mkdir completed with no error).
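You can see the same behaviour outside Compose by running the command directly (image and container name taken from your compose file):
docker run --name cubop1 compose:test.1 mkdir /root/essai/
docker ps -a    # cubop1 is listed with STATUS "Exited (0)"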
Run your container in the background with -d
$ docker-compose up -d
and on docker-compose.yml add:
mydocker:
tty: true
You can end with a command like tail -f /dev/null
It often works in my docker-compose.yml with
command: tail -f /dev/null
And it is easy to see how I keep the container running.
docker ps
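Applied to the service from the question, a minimal sketch would be:
version: '2'
services:
  dvpt:
    image: compose:test.1
    command: tail -f /dev/null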
We had a problem where two of the client services (vitejs) exited with code 0. I added tty: true and they started to work.
dashboard:
  tty: true
  container_name: dashboard
  expose:
    - 8001
  image: tilt.dev/dashboard
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.dashboard.tls=true"
    - "traefik.http.routers.dashboard.entrypoints=web"
    - "traefik.http.routers.dashboard-wss.tls=true"
    - "traefik.http.routers.dashboard-wss.entrypoints=wss"
One solution is to create a process that doesn't end, such as an infinite loop or something else that can run continuously in the background. This will keep the container open because the process won't exit.
This is very much a hack though. I'm still looking for a better solution.
The Zend Server image does something like this. In their .sh script they have a final command:
exec /usr/local/bin/nothing
Which executes a file that continuously runs in the background. I've tried to copy the file contents here but it must be in binary.
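A shell equivalent of such a do-nothing program might be (just a sketch, not the actual Zend binary):
#!/bin/bash
# block forever without burning CPU
while true; do sleep 3600; done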
EDIT:
You can also end your file with /bin/bash, which begins a new terminal process in the container and keeps it from closing.
It can be the case that the program (from ENTRYPOINT/CMD) ran successfully and exited (without daemonizing itself). So check your ENTRYPOINT/CMD in the Dockerfile.
Create a Dockerfile and add the line below to execute any shell scripts or commands without the exit code 0 problem. In your case, it would be:
RUN mkdir /root/essai/
However, use the line below to execute a shell script:
RUN /<absolute_path_of_container>/demo.sh
I know I am too late with this answer, but a few days ago I also ran into the same problem and nothing mentioned above was working. The real problem, as mentioned in the answers above, is that docker stops after the command exits.
So I did a hack for this.
Note: I have used a Dockerfile for creating the image; you can do it your own way, below is just an example.
I used Supervisor for monitoring the process. As long as supervisor is monitoring it, the docker container will not exit.
Those who ran into the same problem can do the following to solve the issue:
#1 Install supervisor in Dockerfile
RUN apt-get install -y supervisor
#2 Create a config file (named supervisord.conf) for supervisor like this
[include]
files = /etc/supervisor/conf.d/*.conf
[program:app]
command=bash
#directory can be any folder where you want supervisor to cd before executing.
directory=/project
autostart=true
autorestart=true
startretries=3
#user can be anyone you want, but make sure that user has enough privileges.
user=root
[supervisord]
nodaemon=true
[supervisorctl]
#3 Copy the supervisor conf file to docker
COPY supervisord.conf /etc/supervisord.conf
#4 Define an entrypoint
ENTRYPOINT ["supervisord","-c","/etc/supervisord.conf"]
That's it. Now just build the image and run the container; it will keep the container running.
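For example (the image tag is hypothetical):
docker build -t myapp .
docker run -d myapp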
Hope it helps you to solve the problem.
And Happy coding :-)
It seems that I am not understanding something about variable substitution as described on the following page (my variable NUM is not registering): https://github.com/compose-spec/compose-spec/blob/master/spec.md#Interpolation
See the screenshot below. Running this on macOS.
Regarding docker-compose variable substitution, it can depend on how NUM has been set.
set NUM=5 would only set it in the current shell, not for another process.
Make sure to type:
export NUM=5
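You can verify what Compose will actually see by rendering the file with the variable interpolated:
export NUM=5
docker-compose config    # prints the compose file with ${NUM} already substituted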
It is mentioned in the docs:
You can use a $$ (double-dollar sign) when your configuration needs a
literal dollar sign. This also prevents Compose from interpolating a
value, so a $$ allows you to refer to environment variables that you
don’t want processed by Compose.
web:
build: .
command: "$$VAR_NOT_INTERPOLATED_BY_COMPOSE"
If you forget and use a single dollar sign ($), Compose interprets the
value as an environment variable and will warn you:
The VAR_NOT_INTERPOLATED_BY_COMPOSE is not set. Substituting an empty
string.
According to that, line 03 of your compose file should be:
command: echo $$NUM
In addition to the $$ solution provided by @ayman-nedjmeddine above, you also need the following. To make shell variables available in Compose you have two options:
Option 1
log in as root, set your variable, and execute docker-compose
root>export NUM=5
root>docker-compose up
Option 2
use sudo -E from the user shell; -E will propagate the user's shell env to sudo
provide sudo access to docker/docker-compose
add :SETENV: to the command in the sudoers file to allow the -E option with sudo
eg:
sudo visudo -f /etc/sudoers.d/docker-compose
ALL ALL=(ALL:ALL) NOPASSWD:SETENV: /usr/local/bin/docker-compose
sudo visudo -f /etc/sudoers.d/docker
ALL ALL=(ALL:ALL) NOPASSWD:SETENV: /usr/bin/docker
finally use
user1>export NUM=5
user1>sudo -E docker-compose up