We have been running a fabric network for a while and the docker containers ran out of disk space because of the logs. How can we trim the logs so that they don't take up more than e.g., 1GB of disk space? Older messages should be discarded.
Since you are running Fabric in Docker, you can use Docker's native logging options. It sounds like you are using the default logging configuration, which means the json-file driver. You can specify either Docker-wide settings or per-container settings.
Here's an example of a daemon.json file to set global options to limit log file size to 10m and limit the number of log files to keep to 3:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "production_status",
    "env": "os,customer"
  }
}
If you are using docker-compose to run your containers, you can set per-container logging options in your YAML config file, as shown in the sketch below.
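For example, a per-service logging section could look like this (a minimal sketch; the service name and image are placeholders rather than taken from your setup):
services:
  peer0:
    image: hyperledger/fabric-peer
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"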
If you are starting containers using docker run ..., you can use the --log-opt flag, e.g. docker run --log-opt max-file=3 --log-opt max-size=10m .... Keep in mind that logging options are applied when a container is created (and daemon-wide defaults require a daemon restart), so containers that already exist need to be recreated to pick up the new limits.
I have some containers, all of which have restart: always set in the docker-compose file, like this:
version: "3.7"
services:
  container:
    image: ghost:latest
    container_name: some_container
    restart: always
    depends_on:
      - ...
    ports:
      - ...
    ...
As soon as the OS (Flatcar Linux / CoreOS) has updated itself, none of the containers restart. But if I just do $ sudo docker ps, all of the containers start at once. What's up with that, and how do I fix it so my containers automatically restart after an update?
EDIT:
Not sure what is unclear about my question: restart: always is turned on. Unless I'm missing some vital thing in the documentation, this option should restart the container even if the Docker daemon is restarted (after an OS reboot).
Copy of one of my comments from below:
Ok, so help me out here. As you can see in my question, I have restart: always turned on. All these containers are started successfully and are running well. Then the OS updates itself automatically and restarts. After this restart the Docker daemon is restarted. But for some reason the containers I had running WITH RESTART: ALWAYS turned on DO NOT START. If I enter my server at this moment and type sudo docker ps to list my running containers, suddenly all containers are booted up and I see the list. So why weren't the containers started, even though the daemon is running?
From the comments, it appears the Docker service was not configured to start automatically on boot. Docker is a client-server app: the server runs from systemd, with a separate unit for the Docker socket that the client uses to talk to the server. Because of that, any invocation of the docker command can cause the server to be launched by hitting the Docker socket.
The service state in systemd can be checked with:
systemctl status docker
or you may want to check:
systemctl is-enabled docker
It can be manually started with:
systemctl start docker
And it can be enabled to start with:
systemctl enable docker
All of the above commands need to be run as root.
This requires the Docker service to be started on boot instead of using the default socket activation, which starts it on demand as you described with the execution of "docker ps".
Here is the required Container Linux Config to enable the Docker service while disabling socket activation:
systemd:
units:
# Ensure docker starts automatically instead of being socket-activated
- name: docker.socket
enabled: false
- name: docker.service
enabled: true
always: Always restart the container if it stops. If it is manually stopped, it is restarted only when the Docker daemon restarts, or the container itself is manually restarted.
unless-stopped: Similar to always, except that when the container is stopped (manually or otherwise), it is not restarted even after the Docker daemon restarts.
If you have an already running container whose restart policy you want to change, you can use the docker update command; the command below will ensure all currently running containers are restarted unless stopped:
$ docker update --restart unless-stopped $(docker ps -q)
NOTE: Keep the following in mind when using restart policies
A restart policy only takes effect after a container starts successfully. In this case, starting successfully means that the container is up for at least 10 seconds and Docker has started monitoring it. This prevents a container that does not start at all from going into a restart loop.
If you manually stop a container, its restart policy is ignored until the Docker daemon restarts, or the container is manually restarted. This is another attempt to prevent a restart loop.
Restart policies only apply to containers. Restart policies for swarm services are configured differently.
Documentation
If the Docker container was created earlier, its [restart policy][1] may not be updated automatically by changing it in the docker-compose YAML file.
If you change the restart policy in the YAML file:
# cat docker-compose.yml
version: "3"
services:
  <your-service>:
    restart: always
You can see in the container details that RestartPolicy still has the old value:
# docker inspect <your-container> | fgrep -i restart -A 5
"RestartCount": 0,
--
"RestartPolicy": {
"Name": "",
Name is the restart policy name. Here it has no value, which means no restart policy is set and the default value [no][1] is used.
So you may need not only to update the restart policy in the file, but also to update the previously created container manually:
# docker update <your-container> --restart always
Now the new value is set:
# docker inspect <your-container> | fgrep -i restart -A 5
"RestartCount": 0,
--
"RestartPolicy": {
"Name": "always",
"MaximumRetryCount": 0
},
Just this :(
[1]: https://docs.docker.com/config/containers/start-containers-automatically/
It's the container restart policy. restart: always restarts the container if it stops. If it is manually stopped, it is restarted only when the Docker daemon restarts or the container itself is manually restarted. Please check this link: restart_policy.
I am trying to use a local Docker container registry as my imageRegistry.
I created a registry with:
docker run -d -p 5000:5000 --name registry registry:2
I have tagged and pushed my image to localhost:5000, and I can see it listed with docker images.
I have modified my launch.json as follows:
{
  "configurations": [
    {
      "name": "Run/Debug on Kubernetes",
      "type": "cloudcode.kubernetes",
      "request": "launch",
      "skaffoldConfig": "${workspaceFolder}/skaffold.yaml",
      "watch": true,
      "cleanUp": true,
      "portForward": true,
      "imageRegistry": "localhost:5000"
    }
  ]
}
But when I do a Run on Kubernetes,
I get an error waiting for rollout to finish: 0 of 1 updated replicas are available...
pod/serviceb-847d79694c-6lxbd: container server is waiting to start: localhost:5000/serviceb:latest#sha256:*** can't be pulled.
I'm guessing you are using a local cluster solution such as minikube, Docker's built-in Kubernetes cluster, kind, microk8s, or k3s. Although these clusters run on your machine, they typically run within a virtual machine (or their own containers), so their localhost is not your machine's localhost. That is why your private registry is not accessible from within the cluster.
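One possible workaround (a sketch, not a definitive recipe: 192.168.1.100 is a placeholder for your host's LAN IP, and serviceb is the image name from your error message) is to tag and push the image under an address the cluster nodes can actually reach, and point imageRegistry at that same address:
# Push using a host-reachable address instead of localhost
docker tag serviceb:latest 192.168.1.100:5000/serviceb:latest
docker push 192.168.1.100:5000/serviceb:latest
You would then set "imageRegistry": "192.168.1.100:5000" in launch.json. Because the registry speaks plain HTTP, it also needs to be listed under insecure-registries in the Docker daemon configuration on both your machine and the cluster nodes. Some local clusters also ship their own registry integration (for example minikube's registry addon), which can avoid this setup.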
I'm new to log4j2 and the elastic stack.
I have a filebeat docker container that doesn't work exactly how I want, and now I want to take a look at the logs. But when I do docker-compose logs I get a lot of debug messages and JSON objects. There is so much output that it's unreadable.
How can I set up a log4j2 properties configuration that writes rolling log files, maybe putting the old logs into a monthly folder or something? And where do I put this log4j2.properties file?
It's generating a lot of logs because you're running docker-compose logs, which will get the logs for all containers in your docker compose file.
What you want is probably:
docker logs <name-of-filebeat-container>. The name of the filebeat container can be found by running docker ps.
docker compose logs <name-of-filebeat-service>. The name of the service can be found in your docker-compose.yml file.
Regarding the JSON outputs, you can query your Docker engine default logging driver with:
# docker info | grep 'Logging Driver'
Logging Driver: json-file
If your container has a different logging driver, you can check it with:
docker inspect -f '{{.HostConfig.LogConfig.Type}}' <name-or-id-of-the-container>
You can find all log drivers in this link
To run containers with a different log-driver you can do:
With docker run: docker run -it --log-driver <log-driver> alpine ash
With docker-compose:
logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"
Regarding your log rotation question, I'd say the easiest way is to configure the syslog logging driver, point it at your local machine (or your syslog server), and then logrotate the files.
You can find several logrotate articles for Linux (which I assume you're using), for example this one
I've got a daemon.json file, stored in /etc/docker/daemon.json, that configures the Docker daemon with the following contents:
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-facility": "local1",
    "tag": "{{.Name}}"
  },
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.fs=xfs",
    "dm.thinpooldev=/dev/mapper/vg00-docker--pool",
    "dm.use_deferred_removal=true"
  ]
}
None of the docker-compose services have logging options configured, nor are any of the docker containers configured to start with --log-driver in their cmd or entrypoint.
Inspecting the output of the docker info command, I can verify that the logging driver is set to syslog.
However, when running a docker-compose stack, all of the containers still show json-file when inspected with docker inspect --format='{{.HostConfig.LogConfig.Type}}'. It seems to me as if docker-compose is not respecting the /etc/docker/daemon.json config file, but only for the logging config, since the storage driver is set correctly.
The docker version I used to run this is 17.12.0, docker-compose is at 1.19.0
/etc/docker/daemon.json is the default config file, and the Docker daemon should read it, if it exists, when it starts. Maybe there's something wrong in your file at the configuration level (it looks OK as far as the syntax goes).
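To quickly rule out a JSON syntax error, you could run the file through any JSON parser (a sketch; python3 is just one commonly available option):
python3 -m json.tool /etc/docker/daemon.json
It prints the parsed JSON on success and an error message pointing at the offending line otherwise.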
Let's try to force a config-file read with debug enabled and see which error it shows.
systemctl stop docker
/usr/bin/dockerd -D --config-file /etc/docker/daemon.json
Running dockerd in the foreground like this prints its debug output straight to the terminal. When Docker is run through systemd as usual, you can see its logs with journalctl -u docker.
Alternatively, you can easily test each config param by passing them one by one via the CLI instead of the JSON config file, in order to figure out which of them causes the configuration not to load.
systemctl stop docker
/usr/bin/dockerd -D --log-driver syslog --storage-driver devicemapper ...
Adding them one by one, you will be able to check whether, for example, it fails on storage-opts because /dev/mapper/vg00-docker--pool is not mounted, or whatever else.
My final conclusion is that I wasn't able to use the /c/users/... location because that drive wasn't shared in Docker's settings.
After this I was able to see the /c/users/.. directory in all my container instances. I was then able to use the -v flag with this directory on every instance, basically writing files to my host machine.
What I still don't get is that I don't think I'm actually using volumes at the moment... But it works...
I'm trying to have my Docker-hosted Redis instance persist its data, but the mounted volume doesn't seem to be used. I was using Docker with VirtualBox/boot2docker, where the composition worked; however, I have since moved to Docker for Windows, where the compose file still works, but I'm not sure about the volumes property.
My docker-compose.yml file:
vq-redis:
  image: redis:latest
  ports:
    - "6379:6379"
  volumes:
    - /c/users/r/.docker/data/redis/data:/data
It doesn't matter if I add or remove the volumes definition, because it will always show something like this with docker inspect:
"Mounts": [
{
"Name": "40791b26771b5d62778d85b0ef24e74e516f95d32cf217424232ce8f8a1b8c6f",
"Source": "/var/lib/docker/volumes/40791b26771b5d62778d85b0ef24e74e516f95d32cf217424232ce8f8a1b8c6f/_data",
"Destination": "/data",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
Is the volumes property still working with Docker for Windows or am I missing a point?
Edit:
If I run...
docker run --name vq-redis -d -v //c/users/r/.docker/data/vq-redis:/data redis redis-server --appendonly yes
... I can see the container appearing in Kitematic and with docker inspect I can see a mount going to my local folder. However the local folder isn't shown in Kitematic...
If I add data to the Redis server hosted in the container, then stop the container and start it again the data is gone.
I tried setting the local folder manually in Kitematic. This seems to restart the container, but I'm unsure if the initial parameters are passed again. You say:
"If the volumes aren't networked on run"
I guess they were actually networked on run as seen in the console.
Still, I can add data to the Redis instance hosted in the container. But as soon as I restart the container it's gone...
It should work. I assume you didn't get any errors (e.g., permission issues, etc.) and that you are removing old builds before rebuilding. Does the "/var/lib/docker/volumes/4079..." directory get created?
You could try using double leading slashes on Windows, which was a work-around for some versions:
volumes:
  - //c/users/r/.docker/data/redis/data:/data
Redis wouldn't have anything to do with the volume not being created but have you tried other services or even basic docker create -v ... or docker run -v ...?
UPDATE:
There may be some gaps in your understanding of how Docker works that may be getting in the way here.
If you do docker run --name some-redis -d redis redis-server --appendonly yes it will create a volume similar to the one you have in your docker inspect output. Clearly you don't have a /var/lib/docker/volumes/... directory on your Windows machine -- that's in the VM docker host (e.g., boot2docker). How you get to the Docker host volumes differs depending on a number of factors.
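If you want to look inside that VM-side volume without entering the VM itself, one common trick (a sketch, reusing the volume name from your docker inspect output) is to mount it into a throwaway container:
# List the contents of the named volume by mounting it into a temporary alpine container
docker run --rm -v 40791b26771b5d62778d85b0ef24e74e516f95d32cf217424232ce8f8a1b8c6f:/from alpine ls -la /from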
If the volumes aren't networked on run, restarting won't help. Do docker stop some-redis && docker rm some-redis and re-run.
E.g., running this command
docker run --name some-redis -d -v $(pwd)/data:/data redis redis-server --appendonly yes
should work as you expect.
ls ./data => appendonly.aof.
It will obviously be empty at first. Destroying the container and creating a new one with the same directory will show the data is still there:
docker exec some-redis redis-cli set bar baz
docker stop some-redis
docker rm some-redis
docker run --name some-redis2 -d -v $(pwd)/data:/data redis redis-server --appendonly yes
docker exec some-redis2 redis-cli get bar
=> "baz"
(the previous value for "bar" set in the destroyed container).
If this doesn't work for you there could be some issues specific to your environment -- perhaps try a Vagrant-based solution or beta Docker or a native Linux host.