I have a service that I am bringing up through Rancher via docker-compose. The issue I am running into is that I need to set a password after the container has been deployed.
The way Rancher secrets work is that I set my secret in Rancher, and Rancher mounts a volume on my container with a file containing my secret. I was hoping to execute a script that grabs that secret and sets it as a password in my config file.
I don't believe I have a way to get that secret in through the Dockerfile, as I don't want the secret to be in git, so I'm left looking at doing it via docker-compose.
Does anyone know if this is possible?
This is the approach I use to call a script after a container has started, without overriding the entrypoint.
In my example, I use it to initialize the replica set of my local MongoDB:
services:
  mongo:
    image: mongo:4.2.8
    hostname: mongo
    container_name: mongodb
    entrypoint: ["/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0"]
    ports:
      - 27017:27017
  mongosetup:
    image: mongo:4.2.8
    depends_on:
      - mongo
    restart: "no"
    entrypoint: [ "bash", "-c", "sleep 10 && mongo --host mongo:27017 --eval 'rs.initiate()'" ]
In the first part, I simply launch my service (mongo).
The second service uses a bash entrypoint AND restart: "no" (this is important).
I also use depends_on between the service and the setup service to manage the launch order.
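If the fixed sleep 10 ever turns out to be flaky, the same one-shot setup service can poll until mongo responds before running the init command. A variant sketch (not part of the original setup):
mongosetup:
  image: mongo:4.2.8
  depends_on:
    - mongo
  restart: "no"
  entrypoint:
    - bash
    - -c
    - |
      # keep pinging mongo until it answers, then initiate the replica set
      until mongo --host mongo:27017 --eval 'db.runCommand({ ping: 1 })' > /dev/null 2>&1; do
        sleep 2
      done
      mongo --host mongo:27017 --eval 'rs.initiate()'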
The trick is to overwrite the compose COMMAND to perform whatever init action you need before calling the original command.
Add a script to your image that performs the init work you need, such as setting a password or changing internal config files. Let's call it init.sh and add it to the image.
Dockerfile:
FROM sourceimage:tag
COPY init.sh /usr/local/bin/
ENTRYPOINT []
The above clears whatever ENTRYPOINT is defined in the source image; that keeps this example simpler. Make sure you understand what the ENTRYPOINT in the source image's Dockerfile does, and call it yourself in the command: of the docker-compose.yml file.
docker-compose.yml:
services:
  myservice:
    image: something:tag
    ...
    command: sh -c "/usr/local/bin/init.sh && exec myexecutable"
It's important to use exec before calling the main command. That makes the command the first process (PID 1), so it receives signals such as TERM, INT (Ctrl-C on the keyboard) or HUP.
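For reference, init.sh itself could look something like this; the config path, placeholder and variable name below are made up for illustration and are not from the original answer:
#!/bin/sh
# init.sh - one-time setup before the main process starts (illustrative sketch)
set -e
# e.g. inject a password from an environment variable into a config file
CONFIG_FILE=/etc/myapp/config.ini
sed -i "s/^password=.*/password=${APP_PASSWORD}/" "$CONFIG_FILE"
echo "init.sh: configuration updated"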
You can also use volumes to do this:
services:
  example:
    image: <whatever>
    volumes:
      - ./init.sh:/init.sh
    entrypoint: sh -c "/init.sh"
Note that this mounts init.sh into the container rather than copying it (if that matters; usually it doesn't). Processes within the container can modify init.sh, and the changes will also show up in the file on your host machine.
docker-compose specifies how to launch containers, not how to modify an already-running container.
The Rancher documentation mentions that, for default usage of secrets, you can reference the secret by name in the secrets array in the docker-compose.yml.
The target filename will be the same name as the name of the secret.
By default, the target file is created with user ID and group ID 0 and a file mode of 0444.
Setting external to true in the secrets section tells Rancher that the secret has already been created.
Example of a basic docker-compose.yml:
version: '2'
services:
  web:
    image: sdelements/lets-chat
    stdin_open: true
    secrets:
      - name-of-secret
    labels:
      io.rancher.container.pull_image: always
secrets:
  name-of-secret:
    external: true
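Inside the container the secret then shows up as a plain file, so an entrypoint wrapper can read it and write it into the application's config before starting the real process. A minimal sketch, assuming the secret is mounted at /run/secrets/name-of-secret (Rancher's default location); the config path and placeholder token are made up for illustration:
#!/bin/sh
# read the mounted secret and substitute it into the config file
PASSWORD="$(cat /run/secrets/name-of-secret)"
sed -i "s|__DB_PASSWORD__|${PASSWORD}|" /etc/myapp/config.yml
# hand off to the image's original command so it runs as PID 1
exec "$@"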
As illustrated in "How to Update a Single Running docker-compose Container", updating a container would involve a "build, kill, and up" sequence.
docker-compose up -d --no-deps --build <service_name>
Related
Let's say I have a docker-compose file with two containers:
version: "3"
services:
app:
image: someimage:fpm-alpine
volumes:
- myvolume:/var/www/html
web:
image: nginx:alpine
volumes:
- myvolume:/var/www/html
volumes:
myvolume:
The app container contains the application code in the /var/www/html directory which gets updated with each version of the image, so I don't want this directory to be persistent.
Yet I need to share the data with the nginx container. If I use a named volume or a host bind mount, the data is persistent and doesn't get updated with a new version of the image. Maybe there is a way to automatically delete a volume whenever I pull a new image? Or a way to share an anonymous volume?
I think it's better for you to use an anonymous volume:
volumes:
  - /var/www/html
You would have to be willing to drop back to docker-compose file version 2 and use data containers with the volumes_from directive, which is equivalent to --volumes-from on a docker run command.
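A minimal compose v2 sketch of that approach, reusing the images from the question (the anonymous volume path and service names are assumptions):
version: '2'
services:
  app:
    image: someimage:fpm-alpine
    volumes:
      - /var/www/html   # anonymous volume populated from the image's content
  web:
    image: nginx:alpine
    volumes_from:
      - app             # nginx mounts the same volumes as the app service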
This should work fine. The problem isn't with docker. You can use volumes to communicate in this way. If you run docker-compose up in a directory with the following compose file:
version: "3"
services:
one:
image: ubuntu
command: sleep 100000
volumes:
- vol:/vol
two:
image: ubuntu
command: sleep 100000
volumes:
- vol:/vol
volumes:
vol:
Then, in a second terminal, run docker exec -it so_one_1 bash (you might have to do a docker ps to find the exact name of the container; it can change). You'll find yourself in a bash shell inside the container. Change to the /vol directory with cd /vol, run echo "wobble" > wibble.txt, then exit the shell (Ctrl-D).
In the same terminal you can then type docker exec -it so_two_1 bash (again, check the names). Just like last time, you can cd /vol and type ls -gAlFh; you'll see the wibble.txt file we created in the other container. You can even cat wibble.txt to see the contents. It'll be there.
So if the problem isn't docker, what can it be? I think the problem is that nginx isn't seeing the changes on the filesystem. For that, I believe that setting expires -1; inside a location block in the config will actually disable caching completely and may solve the problem (dev only).
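For example, something along these lines in the nginx config (the location block shown is illustrative):
location / {
    expires -1;   # dev only: send no-cache/expired headers so responses aren't cached
}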
My project runs with supervisor in a docker container. All the stdout_logfile files are stored in the "logs" folder (inside the docker container), and I need them to be saved to the same directory on my local machine. I added volumes, but got a socket error. docker-compose.yml:
version: '3'
services:
  web:
    build: ./
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py collectstatic --noinput --i rest_framework && cd supervisor && supervisord -c supervisord.conf && tail -f /dev/null"
    ports:
      - "${webport}:8080"
    env_file:
      - ./.env
    links:
      - redis
    volumes:
      - ./logs:/orion-amr/logs
  redis:
    image: 'bitnami/redis:latest'
    container_name: $redishostname
    environment:
      - REDIS_PASSWORD=$redispassword
volumes:
  logs:
But I got the following error:
Error: Cannot open an HTTP server: socket.error reported errno.EIO (5)
However, a new supervisord.log file appeared in the logs folder (on the host machine) and it contains:
2020-07-30 11:15:07,528 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2020-07-30 11:15:07,529 INFO Included extra file "/orion-amr/supervisor/conf.d/amr.conf" during parsing
What is happening, and how can I fix it?
As Adiii hinted in the comment, this is not a problem with the docker configuration. The answer to the question Adiii is referring to states:
Supervisord switches to UNIX user account before any processing.
You need to specify what kind of user account it should use, run the daemon as root but specify user in the config file
Therefore you have three different options as your solution:
1. Change the supervisor configuration to run as some other user:
[program:myprogram]
...
user=user1
You will have to map the config file from outside the container to the inside through volumes in your service in docker-compose.yaml, or COPY it in during the build.
2. Specify the user directive in your docker-compose.yaml. Take a look here.
version: '3'
services:
  web:
    ...
    user: user1
3. Specify the USER directive in your Dockerfile. As per docs it states:
The USER instruction sets the user name (or UID) and optionally the user group (or GID) to use when running the image and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile.
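A minimal Dockerfile sketch of option 3 (the base image and user name are illustrative):
FROM python:3.8-slim
RUN useradd --create-home appuser
USER appuser
# everything after this point, and the running container itself, uses appuser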
Since you do not share the intent of the web container, you alone have to decide the best course of action. I would go with option one, though; it's more maintainable.
I'm building a docker image from a project where I have a file with default credentials for the database. At container run time, I want to pass the real credentials and replace the variables defined in that file. What is the best way to do this? I tried environment variables, but it's not working.
db_config.yml:
host: ${HOST}
user: ${USER}
pass: ${PASS}
port: ${PORT}
db: ${DB_NAME}
docker-compose.yml:
version: '2.3'
services:
  test_ctr:
    container_name: test
    image: container:latest
    network_mode: "host"
    environment:
      - HOST=${HOST}
      - USER=${USER}
      - PASS=${PASS}
      - PORT=${PORT}
      - DB_NAME=${DB_NAME}
db_config.yml is baked into the image and the language is Python. Basically, when I run the container, db_config.yml is read by a script that uses the file's credentials. When I build the image, db_config.yml contains default credentials, but when I run the container I want to replace them.
To debug this try running:
docker exec -it <name-of-the-container> <command>
In your case this translates to:
docker exec -it test sh
This should open a shell inside the container.
Then type:
printenv
This will print all environment variables and their values (that way you will see whether the values you passed are present).
There will be a problem if the container is crashing at startup (in this case it's not possible to use docker exec).
TIP:
Use .env file located in the same directory as docker-compose.yml (or whatever your docker-compose file is) to pass variables.
.env:
KEY1=value1
KEY2=value2
In your case this might look something like:
HOST=1.2.3.4
USER=sa
PASSWORD=42
PORT=4242
DB_NAME=mydb
When you run:
docker-compose up
docker-compose will look for this .env file and inject the values from it.
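To actually rewrite db_config.yml from those variables at container start, one option is a small entrypoint step using envsubst. A sketch, assuming envsubst (from the gettext package) is installed in the image and that the file lives at /app/db_config.yml (a made-up path):
#!/bin/sh
# replace ${HOST}, ${USER}, ... in the template with the values passed via environment:
envsubst < /app/db_config.yml > /tmp/db_config.yml && mv /tmp/db_config.yml /app/db_config.yml
# start the original command
exec "$@"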
Good luck
So I need rolling updates with Docker on my single-node server. Until now I was using docker-compose, but unfortunately I can't achieve what I need with it. Reading around the web, docker swarm seems to be the way to go.
I have found how to run an app with multiple replicas on a single node using swarm:
docker service create --replicas 3 --name myapp-staging myapp_app:latest
myapp:latest being built from my docker-compose.yml:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
working_dir: /app
depends_on:
- "postgres"
env_file:
- ".env"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Unfortunately, this doesn't work, since it doesn't pick up the config from the docker-compose.yml file: the .env file, the command entry, etc.
Searching deeper, I find that using
docker stack deploy -c docker-compose.yml <name>
will create a service using my docker-compose.yml config.
But then I get the following error message:
failed to update service myapp-staging_postgres: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
So it seems I have to use a registry and push my image there for this to work. I understand this need in the case of a multi-node architecture, but in my case I don't want to do that. (Images are heavy to move around, I don't want my image to be public, and after all, the image is already here, so why should I push it to the internet?)
How can I set up my docker service using local image and config written in docker-compose.yml?
I could probably manage my way using docker service create options, but that wouldn't use my docker-compose.yml file so it would not be DRY nor maintainable, which is important to me.
docker-compose is a great tool for developers; it is sad that we have to dive into DevOps tools to achieve such a common feature as rolling updates. This whole swarm architecture seems too complicated for my needs at this stage.
You don't have to use registries in your single-node setup. You can build your "app" image on the node from a local Dockerfile using this command (cd to the directory of your Dockerfile first):
docker build . -t my-app:latest
This will create a local docker image on your node. The image is only visible to that single node, which works for your use case, but I wouldn't recommend this in a production setup.
You can now edit the compose file to be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
image: "my-app:latest"
depends_on:
- "postgres"
env_file:
- ".env"
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
And now you can run your stack from this node; it will use your local app image and benefit from image-based features (updates, rollbacks, etc.).
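For example, once the stack is deployed, an image-based rolling update and rollback would look roughly like this (the stack name "mystack" is an assumption):
docker build . -t my-app:latest                          # rebuild the image locally
docker service update --image my-app:latest mystack_app  # roll the service over to it
docker service rollback mystack_app                      # revert to the previous spec if needed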
One side note on your stack file: you are using the same env file for both services. Please keep in mind that swarm will look for the ".env" file next to the ".yml" file, so if this is not intentional, please revise the location of your env files.
Also note that this solution is only feasible on a single-node cluster. If you scale your cluster you will have to use a registry, but registries don't have to be public: you can deploy a private registry on your cluster so that only your nodes can access it, or you can make it public; the accessibility of your registry is your choice.
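If you do grow beyond one node later, a minimal private registry can be run on the cluster itself, roughly like this (a sketch; add TLS and authentication before exposing it anywhere):
docker service create --name registry --publish 5000:5000 registry:2
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest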
Hope this will help with your issue.
Instead of an image, you can directly use a Dockerfile there. Please check the example below.
version: "3.7"
services:
webapp:
build: ./dir
The error occurs because Compose is unable to find the image on the Docker public registry.
The above method should solve your issue.
Basically, you need to use docker images in order to make rolling updates work in docker swarm. I would also like to clarify that you can host a private registry and use it instead of the public one.
Detailed Explanation:
When you try a rolling update, docker swarm checks whether there is a change in the image used for the service; if so, it schedules a service update based on the update criteria that have been configured and works through it.
What happens if there is no change to the image? Docker simply will not apply the rolling update. Technically you can specify the --force flag to force an update, but that will just redeploy the service.
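Those update criteria live under deploy.update_config in the compose file; a sketch (the values shown are illustrative):
services:
  app:
    image: "image-name:tag"
    deploy:
      replicas: 3
      update_config:
        parallelism: 1      # update one task at a time
        delay: 10s          # wait 10s between batches
        order: start-first  # start the new task before stopping the old one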
Hence, create a local registry, store the images in it, and use those image names in the docker-compose file for the swarm. You can secure the registry with SSL, user credentials, or firewall restrictions; that is up to you. Refer to this for more details on deploying a docker registry server.
Corrections in your compose file:
Since docker stack uses the image to create the service, you need to specify image: "<image name>" in the app service, as is done in the postgres service. When a build instruction is present, an image name is mandatory anyway, as docker-compose otherwise doesn't know what to name the built image. Reference.
A registry server is needed if you are going to deploy the application across multiple servers. Since you have mentioned it's a single-node deployment, just having the image pulled/built on the server is enough, but the private-registry approach is the recommended one.
My recommendation is not to club all the services into a single docker-compose file, because when you deploy/destroy using that file, all the services are taken down together; this is a kind of tight coupling. Of course, I understand that all the other services depend on the DB; in such cases, make sure the DB service is brought up before the other services.
Instead of specifying the env file in the compose file, make it part of the Dockerfile instructions: either copy the env file and source it in the entrypoint, or use ENV instructions to define the variables.
Also, just a note: a stack is simply a way to group services in swarm.
So your compose file should be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
image: "image-name:tag" #the image built will be tagged as image-name:tag
working_dir: /app # note here I've removed .env file
depends_on:
- "postgres"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Dockerfile:
FROM baseimage:tag
COPY .env /somelocation
# your further instructions go here
RUN ... && \
    ... && \
    ... && chmod a+x /somelocation/.env
ENTRYPOINT source /somelocation/.env && ./file-to-run
Alternative Dockerfile:
FROM baseimage:tag
ARG a
ARG b
ARG c # a, b and c have to be passed with --build-arg when building the image
ENV a=$a b=$b c=$c
ENTRYPOINT ./file-to-run
And you may need to run
docker-compose build
docker-compose push (optional: needed to push the image into a registry, in case a registry is used)
docker stack deploy -c docker-compose.yml <stackname>
NOTE:
Even though you can create the services as mentioned by @M.Hassan, I've explained the recommended approach above.
Hi, I want to use the haproxy exporter provided here: https://github.com/prometheus/haproxy_exporter in a docker container.
I am using docker-compose for managing containers and want to recreate this command:
$ docker run -p 9101:9101 prom/haproxy-exporter -haproxy.scrape-uri="http://user:pass@haproxy.example.com/haproxy?stats;csv"
in my docker-compose.yml.
I am not sure how to pass the argument, after viewing the docker-compose documentation I tried it like this:
haproxy-exporter:
  image: prom/haproxy-exporter
  ports:
    - 9101:9101
  network_mode: "host"
  build:
    args:
      - haproxy.scrape-uri="http://user:pass@haproxy.example.com/haproxy?stats;csv"
But this gives me a "file is invalid" message because a build requires a context.
Thanks for any help in advance
The image is already built and pulled from the hub (unless you have your own Dockerfile), so you don't need the build option. Instead, pass your arguments as the command, since the image appears to use an entrypoint (if their Dockerfile only had a CMD option, you'd also need to pass /bin/haproxy-exporter in your command):
haproxy-exporter:
  image: prom/haproxy-exporter
  ports:
    - 9101:9101
  network_mode: "host"
  command: -haproxy.scrape-uri="http://user:pass@haproxy.example.com/haproxy?stats;csv"