I have a yaml file called original.yml with the following contents:
version: '2'
services:
vendor-app:
image: vendor/vendor-app#sha256:f9db328fe5eed87f89cc0390a91c65eb92351fedda8a0c833b30c9147bd7ba07
logging:
driver: json-file
options:
max-size: "50m"
max-file: "2"
volumes:
- /my_vol
labels:
io.rancher.container.pull_image: always
com.vendor-app.builder.version: "3bc55d00c"
com.vendor-app.built: "Mon, 01 Jan 2021 19:32:24 -0000"
myservice:
image: myregistry/myservice
logging:
driver: json-file
options:
max-size: "50m"
max-file: "2"
volumes:
- /my_vol2
labels:
io.rancher.container.pull_image: always
I have another YAML file, which I will call vendor.yml, that I get from the vendor periodically, with the following content:
version: '2'
services:
vendor-app:
image: vendor/vendor-app#sha256:ba065e9a04478fc17fd5b61b8c558616b7e9acba105668ef786f084545c32778
logging:
driver: json-file
options:
max-size: "50m"
max-file: "20"
volumes:
- /my_vol
labels:
com.vendor-app.builder.version: "7766jjc"
com.vendor-app.built: "Fri, 14 Dec 2018 20:50:35 -0000"
I would like to replace the image in original.yml with the image from vendor.yml, and merge the labels from vendor.yml with the existing labels in original.yml. Basically, the end result should be the following:
services:
vendor-app:
image: vendor/vendor-app#sha256:ba065e9a04478fc17fd5b61b8c558616b7e9acba105668ef786f084545c32778
logging:
driver: json-file
options:
max-size: "50m"
max-file: "2"
volumes:
- /my_vol
labels:
io.rancher.container.pull_image: always
com.vendor-app.builder.version: "7766jjc"
com.vendor-app.built: "Fri, 14 Dec 2018 20:50:35 -0000"
myservice:
image: myregistry/myservice
logging:
driver: json-file
options:
max-size: "50m"
max-file: "2"
volumes:
- /my_vol2
labels:
io.rancher.container.pull_image: always
I almost reached my goal by doing this:
Map merge(Map... maps) {
Map result = [:]
maps.each { map ->
map.each { k, v ->
result[k] = result[k] instanceof Map ? merge(result[k], v) : v
}
}
result
}
def latest_yml = readYaml file:"vendor.yml"
def current_yml = readYaml file:"original.yml"
def new_yml = readYaml file:"original.yml"
current_yml.services['vendor-app'].image = latest_yml.services['vendor-app'].image
current_yml.services['vendor-app'].labels = latest_yml.services['vendor-app'].labels
writeYaml file:"beforemerge.yml", data: current_yml
println "merge(new_yml, current_yml): " + merge(new_yml, current_yml)
writeYaml file:"aftermerge.yml", data: merge(new_yml, current_yml)
Almost, because this is what I get in aftermerge.yml:
services:
vendor-app:
image: vendor/vendor-app#sha256:ba065e9a04478fc17fd5b61b8c558616b7e9acba105668ef786f084545c32778
logging:
driver: json-file
options:
max-size: 50m
max-file: '2'
volumes:
- /my_vol
labels:
io.rancher.container.pull_image: always
com.vendor-app.builder.version: 7766jjc
com.vendor-app.built: Fri, 14 Dec 2018 20:50:35 -0000
myservice:
image: myregistry/myservice
logging:
driver: json-file
options:
max-size: 50m
max-file: '2'
volumes:
- /my_vol2
labels:
io.rancher.container.pull_image: always
As you can see, it has messed up the formatting somewhat: the double quotes around com.vendor-app.built and com.vendor-app.builder.version have disappeared, and this will obviously cause havoc when I use the updated YAML file (the examples here are redacted versions of docker-compose.yml). The quoting around max-size and max-file has also changed, to single quotes, and only for max-file?? Weird...
Why is it doing this and how can I make the logic preserve the quoting?
Is there a better way of doing this? Really all I want to do is take the values of image and labels from vendor.yml and use them to update the same values in original.yml (in the case of labels, appending to the current value in original.yml as well as updating it).
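For reference, here is a stripped-down (untested) sketch of the direct update I am describing, using the same readYaml/writeYaml steps; it still round-trips through the YAML parser (SnakeYAML under the hood, I believe), so I assume the output quoting is chosen by the serializer rather than copied from the input:
def latest_yml  = readYaml file: "vendor.yml"
def current_yml = readYaml file: "original.yml"

def vendorApp = current_yml.services['vendor-app']

// take the new image verbatim from vendor.yml
vendorApp.image = latest_yml.services['vendor-app'].image

// merge labels: keep the existing keys, let vendor.yml add new ones and overwrite duplicates
vendorApp.labels = (vendorApp.labels ?: [:]) + latest_yml.services['vendor-app'].labels

writeYaml file: "aftermerge.yml", data: current_yml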
Any help would be appreciated, thanks.
Related
Can I define a global restart policy on all the containers inside of a single docker-compose file instead of adding it individually in each service?
As already pointed out by @larsks, there is no such feature yet. But it is possible (since compose file format version 3.4) to define common properties using x- keys and avoid repetition using YAML merge syntax:
version: "3.9"
x-common-options:
&common-options
restart: always
logging:
options:
max-size: '12m'
max-file: '5'
driver: json-file
services:
service_one:
<< : *common-options
image: image1
The above is the same as this:
version: "3.9"
services:
service_one:
image: image1
restart: always
logging:
options:
max-size: '12m'
max-file: '5'
driver: json-file
Is it possible to specify global settings for services in a Docker compose file?
For example, take this Docker Compose file:
version: "3.9"
services:
test1:
env_file: /path/to/env/file
image: test
container_name: test1
ports:
- "1234:22"
networks:
- dmz
restart: always
test2:
env_file: /path/to/env/file
image: test
container_name: test2
ports:
- "2345:22"
networks:
- trust
restart: always
networks:
dmz:
driver: bridge
trust:
driver: bridge
I don't want to have env_file: /path/to/env/file for every service and would like to make it apply to all services. I know I can pass it on the docker-compose command line, but I'm hoping to do it from within the Compose file.
Although @timsmelik's answer points in the right direction and shows how to use a YAML anchor and alias with scalar values, you can probably take advantage of the YAML merge key feature here to set overridable default values for your services.
Here is an example to illustrate:
version: "3.9"
x-service_defaults: &service_defaults
env_file: /path/to/env/file
image: test
restart: always
services:
test1:
<< : *service_defaults
container_name: test1
ports:
- "1234:22"
networks:
- dmz
test2:
<< : *service_defaults
container_name: test2
ports:
- "2345:22"
networks:
- trust
test3:
<< : *service_defaults
env_file: /some/override/env/file
container_name: test3
volumes:
- /some/bind/dir:/whatever/target
networks:
dmz:
driver: bridge
trust:
driver: bridge
You can find a pretty good, comprehensive explanation of all possible YAML anchor/alias usage applied to docker-compose files in the following blog post.
Try using extensions as fragments.
With support for extension fields, a Compose file can be written as follows to improve the readability of reused fragments.
This is the example from the README.md:
x-logging: &default-logging
options:
max-size: "12m"
max-file: "5"
driver: json-file
services:
frontend:
image: awesome/webapp
logging: *default-logging
backend:
image: awesome/database
logging: *default-logging
I have created the following docker-compose.yml file to build docker containers for a Django application:
version: "2.4"
services:
db:
image: postgres:11
env_file:
- .env_prod_db
volumes:
- db:/var/lib/postgresql/data/
networks:
- net
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
web:
build:
context: .
dockerfile: Dockerfile
env_file:
- .env_prod_web
command: gunicorn roster_project.wsgi:application --disable-redirect-access-to-syslog --error-logfile '-' --access-logfile '-' --access-logformat '%(t)s [GUNICORN] %(h)s %(l)s %(u)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"' --workers 3 --bind 0.0.0.0:8000
volumes:
- static:/roster/webserver/static/
networks:
- net
expose:
- 8000
depends_on:
- db
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
nginx:
build: ./nginx
ports:
- 80:80
volumes:
- static:/roster/webserver/static/
networks:
- net
depends_on:
- web
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
networks:
net:
enable_ipv6: true
driver: bridge
ipam:
driver: default
config:
- subnet: fd02::/64
gateway: fd02::1
volumes:
db:
static:
Potential users of my app could use this file to deploy my app if they first download all the source code from GitHub. However, I would like them to be able to deploy the app just by using docker-compose to download the Docker images that I have stored in my Docker Hub repo.
If I upload my Docker images to a Docker Hub repo, do I need to create an additional docker-compose.yml that refers to the repo images, so that others can deploy my app on their own Docker hosts? Or can I somehow combine the build and deploy requirements into a single docker-compose.yml file?
You can use multiple Compose files when you run docker-compose commands. The easiest way to do this is to have a main docker-compose.yml file that lists out the standard (usually production-oriented) settings, and a docker-compose.override.yml file that overrides its settings.
For example, the base docker-compose.yml file could look like:
version: "2.4"
services:
db:
image: postgres:11
volumes:
- db:/var/lib/postgresql/data/
web:
image: me/web
depends_on:
- db
nginx:
image: me/nginx
ports:
- 80:80
depends_on:
- web
volumes:
db:
Note that I've removed all of the deployment-specific setup (logging configuration, manual IP overrides, environment files); I'm using the Compose-provided default network; I avoid overwriting the static assets with old content from a Docker volume; and I provide an image: name even for things I build locally.
The differences between this and the "real" production settings can be put in a separate docker-compose.production.yml file:
version: "2.4"
services:
db:
# Note, no image: or other similar settings here;
# they come from the base docker-compose.yml
env_file:
- .env_prod_db
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
web: # similarly
  nginx: # similarly
networks:
default:
enable_ipv6: true
# and other settings as necessary
For development, on the other hand, you need to supply the build: information. docker-compose.development.yml can contain:
version: "2.4"
services:
web:
build: .
nginx:
build: ./nginx
Then you can use a symbolic link to make one of these the current override file:
ln -sf docker-compose.production.yml docker-compose.override.yml
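With that symlink in place, a plain docker-compose up -d merges docker-compose.yml and docker-compose.override.yml automatically. If you'd rather not use a symlink, you can also name the files explicitly with -f (later files override earlier ones), for example:
docker-compose -f docker-compose.yml -f docker-compose.production.yml up -d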
A downstream deployer will need the base docker-compose.yml, which mentions the Docker Hub images. They can use your production values if those make sense for them, or they can use a different setup. They shouldn't need the rest of the application source code or Dockerfiles (though it's probably all in the same GitHub repository).
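So, roughly, all a downstream deployer has to run is something like the following (the URL is only a placeholder for wherever you publish the Compose file):
curl -fsSL https://example.com/myapp/docker-compose.yml -o docker-compose.yml
docker-compose pull
docker-compose up -d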
I am using a docker-compose file to run an ELK stack. I am currently running version 7.5 of the stack and I want to update it to 7.8 without stopping the services. I've tried docker-compose pull, but it doesn't pull the latest images of Elasticsearch, Logstash and Kibana, so I tried another way: manually pulling the latest images with the docker pull command and then updating the image names in my docker-compose file.
docker-compose.yml
version: "3.3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
container_name: elasticsearch
environment:
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
volumes:
- elasticsearch:/usr/share/elasticsearch/data
secrets:
- source: elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
ulimits:
memlock:
soft: -1
hard: -1
nproc: 20480
nofile:
soft: 160000
hard: 160000
logging:
driver: "json-file"
options:
max-file: "9"
max-size: "6m"
restart: always
ports:
- 9200:9200
networks:
- esnet
kibana:
image: docker.elastic.co/kibana/kibana:7.5.0
container_name: kibana
depends_on:
- elasticsearch
restart: always
logging:
driver: "json-file"
options:
max-file: "9"
max-size: "6m"
secrets:
- source: kibana.yml
target: /usr/share/kibana/config/kibana.yml
networks:
- esnet
logstash:
image: docker.elastic.co/logstash/logstash:7.5.0
container_name: logstash
volumes:
- ./logstash/pipeline:/usr/share/logstash/pipeline
- ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
- ./logstash/config/jvm.options:/usr/share/logstash/config/jvm.options
- ./logstash/plugins:/usr/share/logstash/plugins
restart: always
logging:
driver: "json-file"
options:
max-file: "9"
max-size: "6m"
networks:
- esnet
When the docker-compose pull command didn't work, I tried this:
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.8.0
docker pull docker.elastic.co/kibana/kibana:7.8.0
docker pull docker.elastic.co/logstash/logstash:7.8.0
After that I made some changes to my docker-compose file: I changed the image version, so that the docker-compose command would not spend time downloading the images, since I had already pulled the latest ones:
version: "3.3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
And finally I used this command:
docker-compose restart
You can't do that. When you want to update to a new image, you have to run new containers from that image; Docker does not support updating the image of a running container in place. You can only update by changing the image tag in the compose file and bringing the services up again (docker-compose restart only restarts the existing containers, it does not recreate them from the new image).
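So the usual flow is roughly the following; note that docker-compose up -d will recreate (briefly stop and replace) any container whose image has changed:
# edit docker-compose.yml and bump the image tags from 7.5.0 to 7.8.0, then:
docker-compose pull     # fetch the new images referenced in the file
docker-compose up -d    # recreate only the services whose configuration changed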
I've got an issue with Marathon. There are 2 situations:
Run docker-compose up -d from the Ubuntu command line.
It runs and deploys the application successfully.
Run docker-compose up -d via a Marathon JSON file:
{
"id":"/piggy-demo-beta",
"cmd":"cd /home/ubuntu/spring-demo2 && sudo docker-compose up -d ",
"cpus":1,
"mem":4200,
"disk":0,
"instances":1,
"acceptedResourceRoles":[
"slave_public"
],
"portDefinitions":[
{
"port":10000,
"protocol":"tcp",
"labels":{}
}
]
}
Then it can't deploy, and Marathon keeps cycling through the Waiting, Delayed and Running states.
When I run sudo ps -a on the server, it appears that the containers are restarting ceaselessly.
And in Mesos, the same task has finished a lot of times.
Here is the compose.yml file:
version: '2'
services:
rabbitmq:
image: rabbitmq:3-management
restart: always
ports:
- 15672:15672
logging:
options:
max-size: "10m"
max-file: "10"
config:
environment:
CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
image: sqshq/piggymetrics-config
restart: always
logging:
options:
max-size: "10m"
max-file: "10"
registry:
environment:
CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
image: sqshq/piggymetrics-registry
restart: always
ports:
- 8761:8761
logging:
options:
max-size: "10m"
max-file: "10"
gateway:
environment:
CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
image: sqshq/piggymetrics-gateway
restart: always
ports:
- 80:4000
logging:
options:
max-size: "10m"
max-file: "10"
auth-service:
environment:
CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
NOTIFICATION_SERVICE_PASSWORD: $NOTIFICATION_SERVICE_PASSWORD
STATISTICS_SERVICE_PASSWORD: $STATISTICS_SERVICE_PASSWORD
ACCOUNT_SERVICE_PASSWORD: $ACCOUNT_SERVICE_PASSWORD
MONGODB_PASSWORD: $MONGODB_PASSWORD
image: sqshq/piggymetrics-auth-service
restart: always
logging:
options:
max-size: "10m"
max-file: "10"
auth-mongodb:
environment:
MONGODB_PASSWORD: $MONGODB_PASSWORD
image: sqshq/piggymetrics-mongodb
restart: always
logging:
options:
max-size: "10m"
max-file: "10"
account-service:
environment:
CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
ACCOUNT_SERVICE_PASSWORD: $ACCOUNT_SERVICE_PASSWORD
MONGODB_PASSWORD: $MONGODB_PASSWORD
image: sqshq/piggymetrics-account-service
restart: always
logging:
options:
max-size: "10m"
max-file: "10"
account-mongodb:
environment:
INIT_DUMP: account-service-dump.js
MONGODB_PASSWORD: $MONGODB_PASSWORD
image: sqshq/piggymetrics-mongodb
restart: always
logging:
options:
max-size: "10m"
max-file: "10"
statistics-service:
environment:
CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
MONGODB_PASSWORD: $MONGODB_PASSWORD
STATISTICS_SERVICE_PASSWORD: $STATISTICS_SERVICE_PASSWORD
image: sqshq/piggymetrics-statistics-service
restart: always
logging:
options:
max-size: "10m"
max-file: "10"
statistics-mongodb:
environment:
MONGODB_PASSWORD: $MONGODB_PASSWORD
image: sqshq/piggymetrics-mongodb
restart: always
logging:
options:
max-size: "10m"
max-file: "10"
notification-service:
environment:
CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
MONGODB_PASSWORD: $MONGODB_PASSWORD
NOTIFICATION_SERVICE_PASSWORD: $NOTIFICATION_SERVICE_PASSWORD
image: sqshq/piggymetrics-notification-service
restart: always
logging:
options:
max-size: "10m"
max-file: "10"
notification-mongodb:
image: sqshq/piggymetrics-mongodb
restart: always
environment:
MONGODB_PASSWORD: $MONGODB_PASSWORD
logging:
options:
max-size: "10m"
max-file: "10"
monitoring:
environment:
CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
image: sqshq/piggymetrics-monitoring
restart: always
ports:
- 9000:8080
- 8989:8989
logging:
options:
max-size: "10m"
max-file: "10"
To run a group of applications that were set up with docker-compose on Marathon, you should translate each of your services into a Marathon application.
Every field from compose.yaml has an equivalent in a Marathon app definition.
Example
Let's assume we want to run some big application called piggy that is built from many smaller services. One service is defined as
rabbitmq:
image: rabbitmq:3-management
restart: always
ports:
- 15672:15672
logging:
options:
max-size: "10m"
max-file: "10"
will result in
{
"id":"piggy/rabbitmq",
"container":{
"docker":{
"image":"rabbitmq:3-management",
"network":"BRIDGE",
"portMappings":[{
"containerPort":8761,
"hostPort":0
}],
"parameters":[
{
"key":"max-size",
"value":"10,"
},
{
"key":"max-file",
"value":"10"
}
]
},
"type":"DOCKER"
},
"cpus":1.0,
"mem":512.0,
"instances":1
}
This process needs to be repeated for each service defined in compose.yaml. The prepared JSONs should be POSTed to /v2/apps, or grouped together in a Marathon group.
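For example, assuming the rabbitmq definition above is saved as rabbitmq.json and Marathon is reachable at marathon.example.com:8080 (a placeholder host), it could be submitted with:
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d @rabbitmq.json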
If you take a close look, you can see it's not a 1-to-1 translation. compose.yaml does not define resources (cpus/mem). Another difference is port mapping: when you run a service on Mesos you shouldn't statically allocate ports, which is why the host port is set to 0, so a random port will be assigned. Another important thing is health checking: you should define health checks for your application. And last is volumes: when services running on Mesos need files to persist, you should look into persistent volumes.
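As a rough illustration, a health check could be added to such an app definition with something like the snippet below; the path and timing values are only examples to adapt:
"healthChecks":[
  {
    "protocol":"HTTP",
    "path":"/",
    "portIndex":0,
    "gracePeriodSeconds":300,
    "intervalSeconds":60,
    "timeoutSeconds":20,
    "maxConsecutiveFailures":3
  }
]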