I have an issue with Marathon. There are two situations:
1. Run docker-compose up -d from the Ubuntu command line.
It runs and deploys the application successfully.
2. Run docker-compose up -d via a Marathon JSON file:
{
  "id": "/piggy-demo-beta",
  "cmd": "cd /home/ubuntu/spring-demo2 && sudo docker-compose up -d ",
  "cpus": 1,
  "mem": 4200,
  "disk": 0,
  "instances": 1,
  "acceptedResourceRoles": [
    "slave_public"
  ],
  "portDefinitions": [
    {
      "port": 10000,
      "protocol": "tcp",
      "labels": {}
    }
  ]
}
Then it can't deploy, and Marathon keeps cycling the state between Waiting, Delayed, and Running.
When I run sudo docker ps -a on the server, it shows the container restarting ceaselessly.
And in Mesos, the same task has finished many times.
Here is the compose.yml file:
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management
    restart: always
    ports:
      - 15672:15672
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  config:
    environment:
      CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
    image: sqshq/piggymetrics-config
    restart: always
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  registry:
    environment:
      CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
    image: sqshq/piggymetrics-registry
    restart: always
    ports:
      - 8761:8761
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  gateway:
    environment:
      CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
    image: sqshq/piggymetrics-gateway
    restart: always
    ports:
      - 80:4000
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  auth-service:
    environment:
      CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
      NOTIFICATION_SERVICE_PASSWORD: $NOTIFICATION_SERVICE_PASSWORD
      STATISTICS_SERVICE_PASSWORD: $STATISTICS_SERVICE_PASSWORD
      ACCOUNT_SERVICE_PASSWORD: $ACCOUNT_SERVICE_PASSWORD
      MONGODB_PASSWORD: $MONGODB_PASSWORD
    image: sqshq/piggymetrics-auth-service
    restart: always
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  auth-mongodb:
    environment:
      MONGODB_PASSWORD: $MONGODB_PASSWORD
    image: sqshq/piggymetrics-mongodb
    restart: always
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  account-service:
    environment:
      CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
      ACCOUNT_SERVICE_PASSWORD: $ACCOUNT_SERVICE_PASSWORD
      MONGODB_PASSWORD: $MONGODB_PASSWORD
    image: sqshq/piggymetrics-account-service
    restart: always
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  account-mongodb:
    environment:
      INIT_DUMP: account-service-dump.js
      MONGODB_PASSWORD: $MONGODB_PASSWORD
    image: sqshq/piggymetrics-mongodb
    restart: always
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  statistics-service:
    environment:
      CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
      MONGODB_PASSWORD: $MONGODB_PASSWORD
      STATISTICS_SERVICE_PASSWORD: $STATISTICS_SERVICE_PASSWORD
    image: sqshq/piggymetrics-statistics-service
    restart: always
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  statistics-mongodb:
    environment:
      MONGODB_PASSWORD: $MONGODB_PASSWORD
    image: sqshq/piggymetrics-mongodb
    restart: always
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  notification-service:
    environment:
      CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
      MONGODB_PASSWORD: $MONGODB_PASSWORD
      NOTIFICATION_SERVICE_PASSWORD: $NOTIFICATION_SERVICE_PASSWORD
    image: sqshq/piggymetrics-notification-service
    restart: always
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  notification-mongodb:
    image: sqshq/piggymetrics-mongodb
    restart: always
    environment:
      MONGODB_PASSWORD: $MONGODB_PASSWORD
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  monitoring:
    environment:
      CONFIG_SERVICE_PASSWORD: $CONFIG_SERVICE_PASSWORD
    image: sqshq/piggymetrics-monitoring
    restart: always
    ports:
      - 9000:8080
      - 8989:8989
    logging:
      options:
        max-size: "10m"
        max-file: "10"
To run a group of applications that were set up with Docker Compose on Marathon, you should translate each of your services into a Marathon application. (The restart loop you are seeing happens because docker-compose up -d detaches and exits immediately, so Marathon treats the task as finished and schedules it again.)
Every field from compose.yaml has its equivalent in a Marathon app.
Example
Let's assume we want to run a big application called piggy that is built from many smaller services. One of the services is defined as:
rabbitmq:
  image: rabbitmq:3-management
  restart: always
  ports:
    - 15672:15672
  logging:
    options:
      max-size: "10m"
      max-file: "10"
This will result in:
{
  "id": "piggy/rabbitmq",
  "container": {
    "docker": {
      "image": "rabbitmq:3-management",
      "network": "BRIDGE",
      "portMappings": [{
        "containerPort": 15672,
        "hostPort": 0
      }],
      "parameters": [
        {
          "key": "log-opt",
          "value": "max-size=10m"
        },
        {
          "key": "log-opt",
          "value": "max-file=10"
        }
      ]
    },
    "type": "DOCKER"
  },
  "cpus": 1.0,
  "mem": 512.0,
  "instances": 1
}
Each entry in parameters is passed to the Docker command line as a --key=value flag, which is why the logging options above are expressed as --log-opt max-size=10m and --log-opt max-file=10. This process needs to be repeated for each service defined in compose.yaml. The prepared JSONs should then be POSTed to /v2/apps one by one, or grouped together into a Marathon group.
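For instance, assuming Marathon's API is reachable at localhost:8080 (adjust to your master's address) and the definition above is saved as rabbitmq.json, it can be submitted like this:
curl -X POST http://localhost:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d @rabbitmq.json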
If you take a close look, you can see it's not a one-to-one translation. compose.yaml defines no cpus/mem resources, so you have to choose them yourself. Another difference is port mapping: when you run a service on Mesos you shouldn't statically allocate ports, which is why hostPort is set to 0 so that a random host port gets assigned. Another important thing is health checking: you should define health checks for your applications (a sketch follows below). And last is volumes: when services run on Mesos and require files to persist, you should investigate persistent volumes.
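As a minimal sketch, an HTTP health check added to the rabbitmq app above might look like this, assuming the management UI answers on the first mapped port (the timing values are illustrative):
"healthChecks": [
  {
    "protocol": "HTTP",
    "path": "/",
    "portIndex": 0,
    "gracePeriodSeconds": 300,
    "intervalSeconds": 60,
    "timeoutSeconds": 20,
    "maxConsecutiveFailures": 3
  }
]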
Related
I was given a task to convert a site from HTTP to HTTPS. The project is running in ASP.NET with React, and it's Dockerised. Below is the Dockerfile code:
FROM mcr.microsoft.com/dotnet/aspnet:3.1
WORKDIR /app
COPY bin/Release/netcoreapp3.1/publish/ /app
ENTRYPOINT ["dotnet", "some.dll"]
My docker-compose.yml file contains:
version: '3.4'
services:
  timescaledb:
    image: timescale/timescaledb:latest-pg12
    environment:
      POSTGRES_PASSWORD: "******"
    volumes:
      - silver_timescaledb_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    restart: always
    logging:
      options:
        max-size: "10m"
        max-file: "5"
  silver_ui:
    image: dev.azurecr.io/silver/silver_ui:1.0.1
    ports:
      - "80:80"
    volumes:
      - /etc/silver:/etc/silver
      - /var/lib/silver/:/var/lib/silver/
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    logging:
      options:
        max-size: "10m"
        max-file: "5"
    restart: always
volumes:
  silver_timescaledb_data:
    external: true
Can anyone help me to fix this?
Here is an official Microsoft article about exactly what you are asking for: Hosting ASP.NET Core images with Docker over HTTPS.
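In short, that article has you hand Kestrel a certificate and an HTTPS endpoint through environment variables. A minimal sketch for the silver_ui service, assuming a PFX certificate exported to ~/.aspnet/https/aspnetapp.pfx (the path and password here are placeholders):
silver_ui:
  image: dev.azurecr.io/silver/silver_ui:1.0.1
  ports:
    - "80:80"
    - "443:443"
  environment:
    # listen on both HTTPS and HTTP
    ASPNETCORE_URLS: "https://+:443;http://+:80"
    # certificate location and password inside the container (placeholders)
    ASPNETCORE_Kestrel__Certificates__Default__Path: /https/aspnetapp.pfx
    ASPNETCORE_Kestrel__Certificates__Default__Password: "changeit"
  volumes:
    # mount the host directory holding the exported certificate
    - ~/.aspnet/https:/https:ro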
We discovered that some of our containers write so many log messages that it will become a problem in the future if we don't limit the size. I found this article, which seems to be the perfect solution to our problem.
So I took this docker-compose file:
version: "3.7"
services:
rabbit:
image: rabbitmq:1.0.4
container_name: rabbitmq
volumes:
- mysql-designer-rabbitmq:/var/lib/rabbitmq/mnesia
environment:
- "RABBITMQ_DEFAULT_USER=user"
- "RABBITMQ_DEFAULT_PASS=pw"
ports:
- "15674:15674"
- "15672:15672"
- "5672:5672"
- "1883:1883"
restart: unless-stopped
mysql:
image: mysql:1.0.5
container_name: mysql
volumes:
- mysql-designer-db:/var/lib/mysql
environment:
- "MYSQL_ALLOW_EMPTY_PASSWORD=true"
- "MYSQL_DATABASE=db"
- "MYSQL_USER=user"
- "MYSQL_PASSWORD=pw"
- "MYSQL_ROOT_PASSWORD="
depends_on:
- rabbit
restart: unless-stopped
ports:
- "3306:3306"
sitestructure:
image: sitestructure:319
container_name: sitestructure
volumes:
- ./.docker/sitestructure/appsettings.json:/app/appsettings.json
depends_on:
- mysql
- rabbit
links:
- mysql
- rabbit
ports:
- "5000:5000"
restart: unless-stopped
deploy:
restart_policy:
condition: on-failure
max_attempts: 10
and edited the sitestructure service, adding these lines:
    logging:
      driver: "json-file"
      options:
        max-file: 5
        max-size: 10m
Now when I try to update the containers, the command line just says
Recreating sitestructure ...
and this never seems to end. Only if I remove those lines from the compose file can I use it again.
I got it working, though not per container but for all containers at the same time. I edited my daemon.json file and added:
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
After restarting Docker and recreating all containers, it now works.
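For what it's worth, the per-service form should also work when the option values are written as quoted strings, which is the form Docker's documentation uses; a sketch for the sitestructure service:
  sitestructure:
    # ...existing settings unchanged...
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "5"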
I have created the following docker-compose.yml file to build docker containers for a Django application:
version: "2.4"
services:
db:
image: postgres:11
env_file:
- .env_prod_db
volumes:
- db:/var/lib/postgresql/data/
networks:
- net
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
web:
build:
context: .
dockerfile: Dockerfile
env_file:
- .env_prod_web
command: gunicorn roster_project.wsgi:application --disable-redirect-access-to-syslog --error-logfile '-' --access-logfile '-' --access-logformat '%(t)s [GUNICORN] %(h)s %(l)s %(u)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"' --workers 3 --bind 0.0.0.0:8000
volumes:
- static:/roster/webserver/static/
networks:
- net
expose:
- 8000
depends_on:
- db
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
nginx:
build: ./nginx
ports:
- 80:80
volumes:
- static:/roster/webserver/static/
networks:
- net
depends_on:
- web
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
networks:
net:
enable_ipv6: true
driver: bridge
ipam:
driver: default
config:
- subnet: fd02::/64
gateway: fd02::1
volumes:
db:
static:
Potential users of my app could use this file to deploy it if they first download all the source code from GitHub. However, I would like them to be able to deploy the app just by using docker-compose to download Docker images that I have stored in my Docker Hub repo.
If I upload my Docker images to a Docker Hub repo, do I need to create an additional docker-compose.yml that refers to the repo images so that others can deploy my app on their own Docker hosts? Or can I somehow combine the build and deploy requirements into a single docker-compose.yml file?
You can use multiple Compose files when you run docker-compose commands. The easiest way to do this is to have a main docker-compose.yml file that lists out the standard (usually production-oriented) settings, and a docker-compose.override.yml file that overrides its settings.
For example, the base docker-compose.yml file could look like:
version: "2.4"
services:
db:
image: postgres:11
volumes:
- db:/var/lib/postgresql/data/
web:
image: me/web
depends_on:
- db
nginx:
image: me/nginx
ports:
- 80:80
depends_on:
- web
volumes:
db:
Note that I've removed all of the deployment-specific setup (logging configuration, manual IP overrides, environment files); I'm using the Compose-provided default network; I avoid overwriting the static assets with old content from a Docker volume; and I provide an image: name even for things I build locally.
The differences between this and the "real" production settings can be put in a separate docker-compose.production.yml file:
version: "2.4"
services:
db:
# Note, no image: or other similar settings here;
# they come from the base docker-compose.yml
env_file:
- .env_prod_db
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
web: # similarly
db: # similarly
networks:
default:
enable_ipv6: true
# and other settings as necessary
For development, on the other hand, you need to supply the build: information. docker-compose.development.yml can contain:
version: "2.4"
services:
web:
build: .
nginx:
build: ./nginx
Then you can use a symbolic link to make one of these the current override file:
ln -sf docker-compose.production.yml docker-compose.override.yml
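Alternatively, you can skip the symlink and pass both files explicitly; docker-compose merges the files left to right:
docker-compose -f docker-compose.yml -f docker-compose.production.yml up -d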
A downstream deployer will need the base docker-compose.yml, that mentions the Docker Hub image. They can use your production values if it makes sense for them, or they can use a different setup. They shouldn't need the rest of the application source code or Dockerfiles (though it's probably all in the same GitHub repository).
I am using a docker-compose file to run an ELK stack, currently at version 7.5, and I want to update it to 7.8 without stopping the services. I've tried docker-compose pull, but it doesn't pull the latest images of Elasticsearch, Logstash, and Kibana. So I tried another way: manually pulling the latest images with docker pull, and then updating the image names in the compose file.
docker-compose.yml
version: "3.3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
container_name: elasticsearch
environment:
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
volumes:
- elasticsearch:/usr/share/elasticsearch/data
secrets:
- source: elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
ulimits:
memlock:
soft: -1
hard: -1
nproc: 20480
nofile:
soft: 160000
hard: 160000
logging:
driver: "json-file"
options:
max-file: "9"
max-size: "6m"
restart: always
ports:
- 9200:9200
networks:
- esnet
kibana:
image: docker.elastic.co/kibana/kibana:7.5.0
container_name: kibana
depends_on:
- elasticsearch
restart: always
logging:
driver: "json-file"
options:
max-file: "9"
max-size: "6m"
secrets:
- source: kibana.yml
target: /usr/share/kibana/config/kibana.yml
networks:
- esnet
logstash:
image: docker.elastic.co/logstash/logstash:7.5.0
container_name: logstash
volumes:
- ./logstash/pipeline:/usr/share/logstash/pipeline
- ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
- ./logstash/config/jvm.options:/usr/share/logstash/config/jvm.options
- ./logstash/plugins:/usr/share/logstash/plugins
restart: always
logging:
driver: "json-file"
options:
max-file: "9"
max-size: "6m"
networks:
- esnet
When the docker-compose pull command didn't work, I tried this:
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.8.0
docker pull docker.elastic.co/kibana/kibana:7.8.0
docker pull docker.elastic.co/logstash/logstash:7.8.0
After that I made some changes to my docker-compose file: I changed the image versions, so that the docker-compose command would not spend time downloading the images since I had already pulled the latest ones:
version: "3.3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
And finally I used this command:
docker-compose restart
You can't do that without recreating the containers. When you want to update the image, you have to run new containers from the new image; Docker does not support upgrading a running container in place. You can only update by changing the image tag in the compose file and bringing the services up again. Note that docker-compose restart merely restarts the existing containers; it never picks up a new image.
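For reference, the usual upgrade flow is to edit the tags and recreate; docker-compose up -d only recreates containers whose image or configuration changed:
# after changing the image tags to 7.8.0 in docker-compose.yml
docker-compose pull
docker-compose up -d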
I'm starting my Docker stack with the command:
docker stack deploy --with-registry-auth -c docker-compose.yml app
My docker-compose.yml contains this entry for Mongo:
mongodb:
  image: mongo:3.6
  volumes:
    - mongodb:/var/lib/mongodb
  ports:
    - 27017:27017
  networks:
    - backend
  environment:
    - AUTH=yes
  logging:
    driver: "json-file"
    options:
      max-size: "100m"
      max-file: "5"
  deploy:
    replicas: 1
    placement:
      constraints: [node.hostname == hostname]
networks:
  frontend:
  backend:
volumes:
  mongodb:
I'm stopping the Docker stack with docker stack rm app. Why am I losing data in Mongo after the second start with the same command, docker stack deploy --with-registry-auth -c docker-compose.yml app? How can I avoid it?
Thanks, smola
OK, I've found the answer.
Based on the mongo:3.6 image's Dockerfile, there are already two volumes specified: VOLUME /data/db /data/configdb. My compose file mounted its volume at /var/lib/mongodb instead, so the actual data went into an anonymous volume that was not reused on redeploy. So in docker-compose.yml you need to mount host directories onto those two paths:
mongodb:
  image: mongo:3.6
  volumes:
    - /sampledir/db:/data/db                # <-----
    - /sampledir/configdb:/data/configdb    # <-----
  ports:
    - 127.0.0.1:27017:27017
  networks:
    - backend
  environment:
    - AUTH=yes
  logging:
    driver: "json-file"
    options:
      max-size: "100m"
      max-file: "5"
  deploy:
    replicas: 1
    placement:
      constraints: [node.hostname == hostname]
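Named volumes mounted at the same paths work as well, since docker stack rm does not remove volumes; a sketch (the volume names are illustrative, and with the default local driver the data stays on the node selected by the placement constraint):
mongodb:
  image: mongo:3.6
  volumes:
    - mongodb-data:/data/db
    - mongodb-config:/data/configdb
  # ...other settings as above...
volumes:
  mongodb-data:
  mongodb-config: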