What do the labels and env parameters in Docker's daemon.json do?
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "test",
    "env": "os,customer"
  }
}
After reading the Docker documentation, I could not find a description of what they do. And after setting them up, I didn't see any effect.
Are they just markers for the Docker daemon?
Reference documents: dockerd, Docker object labels
Update 01/12/2023:
Regarding the passage you quoted from the documentation: after my testing, no additional fields were added to the log.
If the logging driver supports it, this adds additional fields to the logging output. The following output is generated by the json-file logging driver.
So I created a test container (hello-world), but I don't see any information about the env or labels fields in it.
# docker run hello-world
# less /var/lib/docker/containers/<Container_ID>/<Container_ID>-json.log
The only way I found out about env and labels was to run docker inspect:
# docker inspect <Container_ID>
So, they are just arbitrary labels.
Just an arbitrary label you can set; a log driver might use it to configure its behavior.
In this case:
If the logging driver supports it, this adds additional fields to the logging output. The following output is generated by the json-file logging driver:
https://docs.docker.com/config/containers/logging/configure/#configure-the-logging-driver-for-a-container
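For what it's worth, the extra fields only appear when the container itself carries a label or environment variable matching what daemon.json names; hello-world has neither a "test" label nor "os"/"customer" variables, which would explain the empty result. A minimal sketch (the label value, env value, and alpine image are arbitrary choices for illustration, and the log line is a sketch rather than captured output):
# run a container that actually has the configured label and env var
docker run --label test=demo -e os=linux alpine echo hi
A line in the resulting <Container_ID>-json.log should then carry an extra "attrs" object, roughly:
{"log":"hi\n","stream":"stdout","attrs":{"test":"demo","os":"linux"},"time":"..."}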
Related
I'm building a management system and want to manage Docker container logs with fluentd.
What I really want to do is save logs dynamically, based on the parameter in --log-opt tag.
For example, when I deploy a container, I use a command like this:
docker run --log-driver=fluentd --log-opt fluentd-address=some_addr --log-opt tag={task_id} some_image
What I'm trying to do is classify logs by the task_id given in the log-opt tag.
In fluent.conf, I want to set a path like this: /fluent/log/{task_id}/data.*.log
How can I pass variables or placeholder into fluentd conf file?
You can try adding an environment variable in the command. Please find below a link to a Fluentd DaemonSet deployment file in YAML (Kubernetes); I pass an environment variable in the Fluentd DaemonSet file (the Fluentd deployment) and use the same variable in fluentd.conf.
How to get ${kubernetes.namespace_name} for index_name in fluentd?
Pass environment variables in Docker: https://stackoverflow.com/questions/30494050/how-do-i-pass-environment-variables-to-docker-containers
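Beyond environment variables, fluentd itself can expand the record tag inside an output path, which may be enough to split files per task_id. A minimal sketch for fluent.conf (the match pattern and paths are assumptions, not tested):
<match **>
  @type file
  # ${tag} is expanded per buffer chunk because "tag" is listed as a chunk key,
  # so records tagged with a task_id land in their own directory
  path /fluent/log/${tag}/data
  <buffer tag,time>
    timekey 1d
  </buffer>
</match>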
I log the activity of my docker containers via journald. The hostnames provided by the containers are non-descriptive. An example for a Minecraft docker container:
Jul 25 16:51:38 srv c34ebd053ff5[19692]: [14:51:38 ERROR]: Could not pass event ArmorEquipEvent to Carmor v1.2.2
c34ebd053ff5 is hardly informative, and I fear that it will change with time (with a new image for instance, if it is some kind of hash).
Is there a way to force the name of a container for logging purposes?
I tried to set a tag in /etc/docker/daemon.json but it did not help:
{
  "log-driver": "journald",
  "log-opts": {
    "tag": "{{.Name}}"
  }
}
EDIT: the containers are managed by docker-compose and each entry has a meaningful container_name (which therefore is not used in the logs by default)
The solution was to add a hostname entry to docker-compose.yml:
mc-mi:
  image: itzg/minecraft-server
  container_name: mc-mi
  hostname: mc-mi
From that point on, the logs were seen as coming from mc-mi instead of c34ebd053ff5.
It is worth noting that container_name is not used as {{.Name}}.
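As a side note, the journald driver also attaches metadata fields such as CONTAINER_NAME to every entry, so entries can be filtered by the compose name regardless of the hostname shown in the message, e.g. (a sketch using the mc-mi name from above):
journalctl CONTAINER_NAME=mc-mi -f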
Thank you to @johnharris85 for showing the way.
I'm trying to pass default parameters such as volumes or envs to my Docker containers, which I create through Marathon and Apache Mesos. This should be possible through arguments passed to mesos-slave. I've created the file /etc/mesos-slave/default_container_info with the following JSON content (mesos-slave reads this file and passes it as its argument):
{
  "type": "DOCKER",
  "volumes": [
    {
      "host_path": "/var/lib/mesos-test",
      "container_path": "/tmp",
      "mode": "RW"
    }
  ]
}
Then I restarted mesos-slave and created a new container in Marathon, but I cannot see the mounted volume in my container. Where did I make a mistake? How else can I pass default values to my containers?
This will not work for you. When you schedule a task on Marathon with Docker, Marathon creates a TaskInfo with a ContainerInfo, and that is why Mesos does not fill in your default.
From the documentation
--default_container_info=VALUE JSON-formatted ContainerInfo that will be included into any ExecutorInfo that does not specify a ContainerInfo
You need to add the volumes to every Marathon task you have, or create a RunSpecTaskProcessor that augments all tasks with your volumes.
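For reference, a rough sketch of declaring the volume directly in the Marathon app definition instead (the app id and image are placeholders):
{
  "id": "/my-app",
  "container": {
    "type": "DOCKER",
    "docker": { "image": "some_image" },
    "volumes": [
      { "hostPath": "/var/lib/mesos-test", "containerPath": "/tmp", "mode": "RW" }
    ]
  }
}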
I was setting up some materials for a training when I came across this sample compose file:
https://github.com/dockersamples/example-voting-app/blob/master/docker-compose.yml
and I couldn't find out how this volume is mounted, on lines 48 and 49 of the file:
volumes:
  db-data:
Can someone explain to me where this volume is on the host? I couldn't find it, and I wouldn't like to keep any PostgreSQL data dangling around after the containers are gone. A similar thing happens with the networks:
networks:
  front-tier:
  back-tier:
Why does docker compose accept empty network definitions like this?
Finding the volumes
Volumes like this are internal to Docker and stored in the Docker store (which is usually all under /var/lib/docker). You can get a list of volumes:
$ docker volume ls
DRIVER VOLUME NAME
local 1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465
local 2f13b0cec834a0250845b9dcb2bce548f7c7f35ed9cdaa7d5990bf896e952d02
local a3d54ec4582c3c7ad5a6172e1d4eed38cfb3e7d97df6d524a3edd544dc455917
local e6c389d80768356cdefd6c04f6b384057e9fe2835d6e1d3792691b887d767724
You can find out exactly where the volume is stored on your system if you want to:
$ docker inspect 1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465/_data",
        "Name": "1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465",
        "Options": {},
        "Scope": "local"
    }
]
Cleaning up unused volumes
As far as just ensuring that things are not left dangling, you can use the prune commands, in this case docker volume prune. That will give you this output, and you choose whether to continue pruning or not.
$ docker volume prune
WARNING! This will remove all volumes not used by at least one container.
Are you sure you want to continue? [y/N]
"Empty" definitions in docker-compose.yml
Compose accepts these "empty" definitions for things like volumes and networks when you don't need to do anything other than declare that a volume or network should exist. That is, if you want it created but are fine with the default settings, there is no particular reason to specify any parameters.
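For contrast, a sketch of what a non-empty definition might look like if you did want the data pinned to a known host directory (the /srv/db-data path is an assumption and must already exist):
volumes:
  db-data:
    driver: local
    driver_opts:
      # bind a host directory into the named volume instead of using Docker's default location
      type: none
      o: bind
      device: /srv/db-data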
First method
List your volumes:
docker volume ls
then run this command:
sudo docker inspect <volume-name> | grep Mountpoint | awk '{ print $2 }'
Second method
Alternatively, first run docker ps to get your container ID, then run:
docker inspect --format="{{.Mounts}}" $containerID
This will print the volume path.
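If you only want the host path itself, the template can be narrowed, e.g. (a sketch that prints the Source of every mount of the container):
docker inspect --format '{{ range .Mounts }}{{ println .Source }}{{ end }}' $containerID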
Background: I'm using docker-compose to place a Tomcat service into a Docker Swarm cluster, but I'm presently struggling with how to handle the logging directory, given that I want to scale the service up yet keep each instance's logging directory unique.
Consider the (obviously) made-up docker-compose file below, which simply starts Tomcat and mounts a logging filesystem in which to capture the logs.
version: '2'
services:
  tomcat:
    image: "tomcat:latest"
    hostname: tomcat-example
    command: /start.sh
    volumes:
      - "/data/container/tomcat/logs:/opt/tomcat/logs,z"
Versions
docker 1.11
docker-compose 1.7.1
API version 1.21
Problem: I'm looking to understand how I would insert a variable into the volume's log path so that the log directory is unique for each instance of the scaled service, say:
volumes:
  - "/data/container/tomcat/${container_name}/logs:/opt/tomcat/logs,z"
I see that, based on the project name (or the directory I'm in), the container name is actually known, so could I use this?
E.g., setting the project name to 'tomcat' and running docker-compose scale tomcat=2, I would see the following containers:
hostname/tomcat_1
hostname/tomcat_2
So is there any way I could leverage this as a variable in the logging volume? Other suggestions or approaches are welcome. I realise that I could just specify a relative path and let the container ID take care of this, but then if I attach Splunk or Logstash to the logging devices I'd need to know which ones are indeed logging volumes as opposed to the base container's filesystem. Ideally, however, I'm looking to use a specific absolute path here.
Thanks in advance dockers!
R.
You should really NOT log to the filesystem; use a specialized log management tool like Graylog/Logstash/Splunk/... instead. Either configure your logging framework in Tomcat with a specific appender, or log to stdout and configure a logging driver in Docker to redirect your logs to the external destination.
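For example, shipping a container's stdout to Graylog via the GELF driver could look roughly like this (the address and port are assumptions for your environment):
docker run --log-driver=gelf --log-opt gelf-address=udp://graylog.example.com:12201 tomcat:latest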
That said, if you really want to go the filesystem way, simply use a regular unnamed volume, then call docker inspect on your container to find the volume's path on the filesystem:
[...snip...]
"Mounts": [
    {
        "Type": "volume",
        "Name": "b8c...SomeHash...48d6e",
        "Source": "/var/lib/docker/volumes/b8c...SomeHash...48d6e/_data",
        "Destination": "/opt/tomcat/logs",
[...snip...]
If you want to have nice-looking names in a specific location, use a script to create symlinks.
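A rough sketch of such a symlink script, assuming the logs are mounted at /opt/tomcat/logs inside the containers and /data/container/tomcat is where you want the links (both paths are assumptions):
#!/bin/sh
# for every running container, link its Tomcat log volume under a readable name
for id in $(docker ps -q); do
  name=$(docker inspect --format '{{ .Name }}' "$id" | tr -d '/')
  src=$(docker inspect --format '{{ range .Mounts }}{{ if eq .Destination "/opt/tomcat/logs" }}{{ .Source }}{{ end }}{{ end }}' "$id")
  [ -n "$src" ] && ln -sfn "$src" "/data/container/tomcat/$name"
done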
Yet, I'm still doubtful about this solution, especially in a multi-host Swarm context. Logging to an external, specialized service is the way to go for your use case.