How to remove all data in a Docker container after restart

I have this docker-compose file:

x11:
  network_mode: host
  container_name: x11
  image: dorowu/ubuntu-desktop-lxde-vnc
  restart: on-failure
  environment:
    - RESOLUTION=1920x1080
my_app:
  network_mode: host
  container_name: my_app
  image: my-image-path
  restart: on-failure
  depends_on:
    - x11
  environment:
    - LANG=ru_RU.UTF-8
    - DISPLAY=:1
  entrypoint: ./start_script
Sometimes my application crashes during my tests, and because of the "restart: on-failure" option it restarts automatically. The problem is that the application's state is preserved from the moment of the crash; for example, some windows remain open. Is there a way to delete the data in the container on restart, or to have a new container created each time?

As far as I know, it is not possible to do that automatically (I mean purely in docker-compose.yaml). You will need to rely on something external to enforce it, such as a bash script that automatically removes containers, as shown here. However, you still need to hook it up to your app's failure condition yourself.
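One possible shape for such a script is a small watch loop (a sketch only: it assumes the service and container are both named my_app as in the compose file above, and that you remove the restart: on-failure line so Compose does not race the script):

```shell
#!/bin/sh
# Sketch: recreate my_app from a fresh container whenever it exits with
# a non-zero status. Requires "restart: on-failure" to be removed from
# the compose file so only this script does the restarting.
while true; do
  status=$(docker wait my_app)        # blocks until the container exits
  if [ "$status" != "0" ]; then
    docker-compose rm -f my_app       # throw away the crashed container
    docker-compose up -d my_app       # create and start a fresh one
  else
    break                             # clean exit: stop watching
  fi
done
```

Because the crashed container is removed rather than restarted, the new one starts from the image's original filesystem, so no window state survives.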

Related

docker-compose down wait for container X finish before stop container Y

I have to deploy a web app with a Jetty server. The app needs a database, running on MariaDB. Here is the docker-compose file used to deploy the app:

version: '2.1'
services:
  jetty:
    build:
      context: .
      dockerfile: docker/jetty/Dockerfile
    container_name: app-jetty
    ports:
      - "8080:8080"
    depends_on:
      mariadb:
        condition: service_healthy
    networks:
      - app
    links:
      - "mariadb:mariadb"
  mariadb:
    image: mariadb:10.7
    container_name: app-mariadb
    restart: always
    environment:
      MARIADB_ROOT_PASSWORD: myPassword
      MARIADB_DATABASE: APPDB
    ports:
      - "3307:3306"
    healthcheck:
      test: [ "CMD", "mariadb-admin", "--protocol", "tcp", "ping", "-u", "root", "-pmyPassword" ]
      interval: 10s
      timeout: 3m
      retries: 10
    volumes:
      - datavolume:/var/lib/mysql
    networks:
      - app
networks:
  app:
    driver: bridge
volumes:
  datavolume:
I use a volume to keep MariaDB's data even after docker-compose down. In my Jetty app, data is stored to the database when the contextDestroyed function runs (i.e., when the container is stopped).
But I have another problem: when I execute docker-compose down, all the containers are stopped and deleted. Although the MariaDB container is the last one stopped (that's what the terminal says), the save in contextDestroyed is interrupted and I lose some information, because the MariaDB container stops while the Jetty container is still saving data. I tested stopping every container except MariaDB, and my data was saved successfully without loss, so the problem is clearly the MariaDB container stopping too early.
How can I tell the mariadb container to wait for all other containers to stop before stopping itself?
According to the depends_on documentation, your dependency will force the following shutdown order:
jetty
mariadb
You might want to look into what happens during the shutdown of these containers and add some custom logic to guarantee your data stays consistent.
You can influence what happens during the shutdown of a container in two main ways:
adding a custom script as an entrypoint
handling the SIGTERM signal yourself
Here is the relevant documentation on this.
Maybe the simplest, though not necessarily the smartest, way would be to add a sleep(5) to the database shutdown, so your app has enough time to flush its writes.
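A sketch of that delayed shutdown, written as a custom entrypoint for the mariadb service (the 5-second grace period is arbitrary, and the use of the official image's docker-entrypoint.sh is an assumption; adapt to your image):

```shell
#!/bin/sh
# Sketch: trap SIGTERM, wait a grace period so dependent containers can
# flush their last writes, then forward the signal to the database.
term_handler() {
  sleep 5                          # grace period for Jetty's final writes
  kill -TERM "$child" 2>/dev/null  # now shut the database down
  wait "$child"
}
trap term_handler TERM

docker-entrypoint.sh mysqld &      # the image's normal startup, backgrounded
child=$!
wait "$child"
```

Keep in mind that Compose only waits stop_grace_period (10 seconds by default) before sending SIGKILL, so either keep the sleep well under that or raise stop_grace_period for the mariadb service.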

How to bind folders inside docker containers?

I have this docker-compose.yml on my local machine:

version: "3.3"
services:
  api:
    build: ./api
    volumes:
      - ./api:/api
    ports:
      - 3000:3000
    links:
      - mysql
    depends_on:
      - mysql
  app:
    build: ./app
    volumes:
      - ./app:/app
    ports:
      - 80:80
  mysql:
    image: mysql:8.0.27
    volumes:
      - ./mysql:/var/lib/mysql
    tty: true
    restart: always
    environment:
      MYSQL_DATABASE: db
      MYSQL_ROOT_PASSWORD: qwerty
      MYSQL_USER: db
      MYSQL_PASSWORD: qwerty
    ports:
      - '3306:3306'

The api service is a NestJS app; app and mysql are Angular and MySQL respectively. I need to work with this setup locally.
How can I make my changes apply without rebuilding the containers every time?
You don't have to build an image with your sources in it for a development environment. For NestJS, and since you're using Docker (I point this out deliberately, because other container runtimes exist), you can simply run a Node.js image from the Docker Hub registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$(pwd)/app:/app" node:12-alpine node /app/index.js
N.B.: I chose 12-alpine for the example, and I assume the file that starts your app is index.js; replace it with yours. Note that docker run needs an absolute host path for the bind mount, hence $(pwd).
You must install the Node dependencies yourself, and they must be present in the ./app directory.
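If Node is not installed on the host, the dependencies can be installed using the same image, so the versions match what the container will run (a sketch under that assumption):

```shell
# Run npm install inside the node image; node_modules lands in ./app
# on the host through the bind mount.
docker run --rm -v "$(pwd)/app:/app" -w /app node:12-alpine npm install
```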
For docker-compose, it could look like this:

version: "3.3"
services:
  app:
    image: node:12-alpine
    command: node /app/index.js
    volumes:
      - ./app:/app
    ports:
      - "80:80"

Do the same for your API project.
For a production image, it is still recommended to build the image with the sources in it.
Say you're working on your front-end application (app). It needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application configuration for this different environment; if you would have proxied to http://api:3000 before, for example, change that to http://localhost:3000 to connect through the container's published ports:.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.
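Putting the above together (service names come from the question's compose file; API_BASE_URL is a hypothetical variable name, substitute whatever your front-end actually reads):

```shell
# Start only the dependencies in Docker...
docker-compose up -d mysql api
# ...then run the front-end natively against the published ports.
API_BASE_URL=http://localhost:3000 yarn run dev
```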

How to use IP addresses instead of container names in Docker Compose networking

I'm using Docker Compose for a web application that I'm creating with ASP.NET Core, Postgres and Redis. I have everything set up in Compose to connect to Postgres using the service name I've specified in the docker-compose.yml file. When I try to do the same with Redis, I get an exception. After doing research, it turns out this exception is a known issue, and the workaround is to use the IP address of the machine instead of a host name. However, I cannot figure out how to get the IP address of the redis service from the compose file. Is there a way to do that?
Edit
Here is the compose file:

version: "3"
services:
  postgres:
    image: 'postgres:9.5'
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5433:5432'
  redis:
    image: 'redis:3.0-alpine'
    command: redis-server --requirepass devpassword
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6378:6379'
  web:
    build: .
    env_file:
      - '.env'
    ports:
      - "8000:80"
    volumes:
      - './src/edb/Controllers:/app/Controllers'
      - './src/edb/Views:/app/Views'
      - './src/edb/wwwroot:/app/wwwroot'
      - './src/edb/Lib:/app/Lib'
volumes:
  postgres:
  redis:
OK, I found the answer. It was something I had been trying, but I didn't realize the address can change every time you restart the containers.
Run docker ps to get a list of running containers, then copy the ID of your container and run docker inspect {container_id}; the output includes the IP address that you can use from within the other running containers.
The reason I was confused is that this address may change when the containers are started, so I had to guess what the IP address was going to be before starting the containers. Luckily, after five tries I guessed correctly.
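Rather than scanning the full docker inspect output, a Go-template format string can print just the address (standard docker inspect --format syntax; the container name redis matches the service above):

```shell
# Print only the container's IP address on its network(s).
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' redis
```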

How can I run configuration commands after startup in Docker?

I have a Dockerfile set up to run a service that requires some subsequent commands to be run in order to initialize properly. I have created a startup script with the following structure and set it as my entrypoint:
1. Set environment variables for the service, generate certificates, etc.
2. Run the service in background mode.
3. Run configuration commands to finish initializing the service.
Obviously, this does not work, since the service was started in the background and the entrypoint script will exit with code 0. How can I keep this container running after the configuration has been done? Is it possible to do so without a busy loop running?
How can I keep this container running after the configuration has been done? Is it possible to do so without a busy loop running?
Among your many options:
Use something like sleep inf, which is not a busy loop and does not consume CPU time.
You could use a process supervisor like supervisord to start your service and start the configuration script.
You could run your configuration commands in a separate container after the service container has started.
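The wrapper-entrypoint shape of the first two options looks roughly like this (a sketch: the real service and configuration commands are placeholders, with sleep and echo standing in so the structure is runnable as-is):

```shell
#!/bin/sh
# Start the long-running service in the background; "sleep 1" is a
# stand-in for the real service binary.
sleep 1 &
pid=$!

# Run the one-time configuration against the now-running service;
# "echo" is a stand-in for the real configuration commands (in practice
# you would also poll the service until it is ready to accept them).
echo "configuring service $pid"

# Keep PID 1 alive for as long as the service runs, so the container
# does not exit once configuration is done.
wait "$pid"
```

The final wait is what prevents the entrypoint from exiting with code 0 after the configuration step, without any busy loop.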
You can look at this GitHub issue, and this comment in particular:
https://github.com/docker-library/wordpress/issues/205#issuecomment-278319730
To summarize, you do something like this:
version: '2.1'
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    image: wordpress:latest
    volumes:
      - "./wp-init.sh:/usr/local/bin/apache2-custom.sh"
    depends_on:
      db:
        condition: service_started
    ports:
      - 80:80
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
    command:
      - apache2-custom.sh
wp-init.sh is where you write the code you want executed.
Note the command tag:
command:
  - apache2-custom.sh
Because we bound the two together in the volumes tag, the container will actually run the code in wp-init.sh.
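A possible shape for wp-init.sh (a sketch built on one assumption: the official wordpress image, whose default entrypoint is docker-entrypoint.sh with apache2-foreground as the command):

```shell
#!/bin/bash
# Hypothetical one-time setup; replace with your real initialization.
echo "running custom WordPress initialization"

# Hand off to the image's normal entrypoint so Apache still starts and
# stays in the foreground as PID 1.
exec docker-entrypoint.sh apache2-foreground
```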

Restart Docker Containers when they Crash Automatically

I want to restart a container automatically if it crashes, but I am not sure how to go about doing this. I have a compose file, docker-compose-deps.yml, that has elasticsearch, redis, nats, and mongo. I run this in the terminal to set it up: docker-compose -f docker-compose-deps.yml up -d. After this I set up my containers by running docker-compose up -d. Is there a way to make these containers restart if they crash? I noticed that Docker has a built-in restart, but I don't know how to implement it.
After some feedback I added restart: always to my docker-compose file and my docker-compose-deps.yml file. Does this look correct, or is this how you would implement restart: always?
docker-compose sample

myproject-server:
  build: "../myproject-server"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5880:5880
    - 6971:6971
  volumes:
    - "../myproject-server/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
myproject-associate:
  build: "../myproject-associate"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5870:5870
  volumes:
    - "../myproject-associate/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
docker-compose-deps.yml sample

nats:
  image: nats
  container_name: nats
  restart: always
  ports:
    - 4222:4222
mongo:
  image: mongo
  container_name: mongo
  restart: always
  volumes:
    - "./data:/data"
  ports:
    - 27017:27017
If you're using Compose, it has a restart option analogous to the one in the docker run command, so you can use that. Here is a link to the documentation about this part:
https://docs.docker.com/compose/compose-file/
When you deploy, it depends where you deploy to. Most container clusters, such as Kubernetes, Mesos, or ECS, have configuration you can use to auto-restart your containers. If you don't use any of these tools, you are probably starting your containers manually and can then just use the restart option as you would locally.
Looks good to me. What you want to understand when working with Docker restart policies is what each one means. The always policy means that if the container stops for any reason, Docker automatically restarts it.
So why would you ever want always as opposed to, say, on-failure?
In some cases you have a container that you always want running, such as a web server. If you are running a public web application, chances are you want that server available 100% of the time, so for a web application I expect you want always. On the other hand, if you are running a worker process that operates on a file and then naturally exits, that is a good use case for on-failure: the worker container finishes processing, and you want to let it close out rather than restart.
That's where I would expect on-failure to be used. So it is not just about knowing the syntax, but about when to apply which policy and what each one means.
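To illustrate the two policies side by side (a sketch; my-worker is a hypothetical image name):

```yaml
services:
  web:
    image: nginx
    restart: always        # public server: always bring it back up
  worker:
    image: my-worker       # hypothetical one-shot job image
    restart: on-failure    # a clean exit (status 0) is left alone
```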
