I log the activity of my docker containers via journald. The hostnames provided by the containers are non-descriptive. An example from a Minecraft docker container:
Jul 25 16:51:38 srv c34ebd053ff5[19692]: [14:51:38 ERROR]: Could not pass event ArmorEquipEvent to Carmor v1.2.2
c34ebd053ff5 is hardly informative, and I fear that it will change with time (with a new image for instance, if it is some kind of hash).
Is there a way to force the name of a container for logging purposes?
I tried to set a tag in /etc/docker/daemon.json, but it did not help:
{
  "log-driver": "journald",
  "log-opts": {
    "tag": "{{.Name}}"
  }
}
EDIT: the containers are managed by docker-compose and each entry has a meaningful container_name (which is evidently not what the logs use by default)
The solution was to add a hostname entry to docker-compose.yml:
mc-mi:
  image: itzg/minecraft-server
  container_name: mc-mi
  hostname: mc-mi
From that point on, the logs were seen as coming from mc-mi instead of c34ebd053ff5.
It is worth noting that container_name was not picked up as {{.Name}}.
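As an aside, the journald driver attaches structured fields such as CONTAINER_NAME to each entry, so once the name is meaningful you can filter on it directly (a quick sketch using the mc-mi container above):

# Follow journal entries for this container only; CONTAINER_NAME is set
# by the journald log driver to the container's name
journalctl CONTAINER_NAME=mc-mi -f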
Thank you to @johnharris85 for showing the way.
What we want to do:
We want to use docker-compose to link an already running container (A) to another container (B) by container name. We use external_links because the two containers are started from different docker-compose.yml files.
Problem:
Container B fails to start with the error below, although a container with that name is running.
ERROR: for container_b Cannot start service container_b: Cannot link to a non running container: /PREVIOUSLY_LINKED_ID_container_a_1 AS /container_b_1/container_a_1
Output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
RUNNING_ID container_a "/docker-entrypoint.s" 15 minutes ago Up 15 minutes 5432/tcp container_a_1
Sample code:
docker-compose.yml of Container B:
container_b:
  external_links:
    - container_a_1
What distinguishes this question from the other "how to fix" questions:
We can't use sudo service docker restart (which works), as this is a production environment.
We don't want to fix this manually every time, but find the reason, so that we can
understand what we are doing wrong
understand how to avoid this
Assumptions:
It seems like two instances of container_a exist (RUNNING_ID and PREVIOUSLY_LINKED_ID).
This might happen because we
rebuilt the container via docker-compose build and
changed the forwarded external port of the container (80801:8080)
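One way to check this assumption (a sketch; the stale name below is the hypothetical PREVIOUSLY_LINKED one from the error) is to list all containers, running or not, and remove the leftover by hand instead of restarting the daemon:

# List every container (including stopped ones) whose name matches container_a
docker ps -a --filter "name=container_a"
# Remove a stale duplicate by its ID or name
docker rm PREVIOUSLY_LINKED_ID_container_a_1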
Comment
Do not use docker-compose down as suggested in the comments; this removes volumes!
Docker links are deprecated, so unless you need some functionality they provide or are on an extremely old version of Docker, I'd recommend switching to Docker networks.
Since the containers you want to connect appear to be started in separate compose files, you would create that network externally:
docker network create app_net
Then in your docker-compose.yml files, you connect your containers to that network:
version: '3'

networks:
  app_net:
    external:
      name: app_net

services:
  container_a:
    # ...
    networks:
      - app_net
Then in your container_b, you would connect to container_a as "container_a", not "container_a_1".
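For completeness, container_b's compose file would do the same on its side (a sketch; the image name is a placeholder):

version: '3'

networks:
  app_net:
    external:
      name: app_net

services:
  container_b:
    image: container_b-image   # placeholder image name
    networks:
      - app_net
    # container_a is now reachable as "container_a" over app_net; no links needed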
As an aside, docker-compose down is not documented to remove volumes unless you pass the -v flag. Perhaps you are using anonymous volumes, in which case I'm not sure that docker-compose up would know where to find your data; a named volume is preferred. More than likely, your data was not being stored in a volume at all, which is dangerous and removes your ability to update your containers:
$ docker-compose down --help
By default, the only things removed are:

- Containers for services defined in the Compose file
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used

Networks and volumes defined as `external` are never removed.

Usage: down [options]

Options:
    --rmi type          Remove images. Type must be one of:
                          'all': Remove all images used by any service.
                          'local': Remove only images that don't have a custom tag
                          set by the `image` field.
    -v, --volumes       Remove named volumes declared in the `volumes` section
                        of the Compose file and anonymous volumes
                        attached to containers.
    --remove-orphans    Remove containers for services not defined in the
                        Compose file
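To illustrate the named-volume point above (a sketch; the mount path is an assumption, based on the postgres-like 5432/tcp port in your docker ps output):

version: '3'

services:
  container_a:
    # ...
    volumes:
      - app_data:/var/lib/postgresql/data

volumes:
  # a named volume survives docker-compose down (without -v)
  app_data: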
Background: I'm using docker-compose to place a Tomcat service into a Docker Swarm cluster, but I'm presently struggling with how to approach the logging directory, given that I want to scale the service up yet keep each instance's logging directory unique.
Consider the (obviously) made-up docker-compose file below, which simply starts Tomcat and mounts a logging filesystem in which to capture the logs.
version: '2'
services:
  tomcat:
    image: "tomcat:latest"
    hostname: tomcat-example
    command: /start.sh
    volumes:
      - "/data/container/tomcat/logs:/opt/tomcat/logs,z"
Versions
docker 1.11
docker-compose 1.7.1
API version 1.21
Problem: I'm looking to understand how I would approach inserting a variable into the 'volume' log path so that the log directory is unique for each instance of the scaled service
say,
volumes:
  - "/data/container/tomcat/${container_name}/logs:/opt/tomcat/logs,z"
I see that, based on the project name (or the directory I'm in), the container name is actually known, so could I use this?
E.g., setting the project name to 'tomcat' and running docker-compose scale tomcat=2, I would see the following containers:
hostname/tomcat_1
hostname/tomcat_2
So, is there any way I could leverage this as a variable in the logging volume? Other suggestions or approaches are welcome. I realise that I could just specify a relative path and let the container ID take care of this, but then if I attach Splunk or Logstash to the logging devices I'd need to know which ones are actually logging devices as opposed to the base container filesystems. Ideally, however, I'm looking to use a specific absolute path here.
Thanks in advance dockers!
R.
You should really NOT log to the filesystem; use a specialized log management tool like Graylog/Logstash/Splunk/... instead. Either configure your logging framework in Tomcat with a specific appender, or log to stdout and configure a logging driver in Docker to redirect your logs to the external destination.
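For example, a logging driver can be set per service in docker-compose (a sketch; gelf is one of Docker's built-in drivers, and the address below is a placeholder for your own Graylog/Logstash input):

version: '2'
services:
  tomcat:
    image: "tomcat:latest"
    logging:
      driver: gelf
      options:
        # placeholder address -- point this at your GELF collector
        gelf-address: "udp://logs.example.com:12201"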
This said, if you really want to go the filesystem way, simply use a regular unnamed volume, then call docker inspect on your container to find the volume's path on the filesystem:
[...snip...]
"Mounts": [
{
"Type": "volume",
"Name": "b8c...SomeHash...48d6e",
"Source": "/var/lib/docker/volumes/b8c...SomeHash...48d6e/_data",
"Destination": "/opt/tomcat/logs",
[...snip...]
If you want to have nice-looking names in a specific location, use a script to create symlinks.
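A minimal sketch of such a script (assuming the log volume is the container's only mount; the target directory is illustrative):

#!/bin/sh
# Usage: ./link-logs.sh <container-name>
c="$1"
# Resolve the host path backing the container's first volume mount
src=$(docker inspect -f '{{ (index .Mounts 0).Source }}' "$c")
mkdir -p "/data/container/tomcat/$c"
# Give the anonymous volume a stable, human-readable path
ln -sfn "$src" "/data/container/tomcat/$c/logs"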
Yet, I'm still doubtful about this solution, especially in a multi-host Swarm context. Logging to an external, specialized service is the way to go in your use case.
I am using a bash script to spin up a virtual network with two docker containers on it. This feels prehistoric. Is there some tool that can spin such an ensemble up and down & show its current status, or does one have to take care of that on their own?
In the case of docker-compose, it is unclear from the Docker documentation whether docker-compose is self-contained or tied to Swarm, and an authoritative example of a compose definition file, with commands for starting and stopping the ensemble, would be very helpful.
E.g. here is what a bash script would do to define/start an application of two interrelated containers. Needless to say, this script does not help with managing the application's lifecycle beyond just starting it up once.
docker network create --driver bridge FooAppNet
docker run --rm --net=FooAppNet --name=component1 -p 9000:9000 component1-image
docker run --rm --net=FooAppNet --name=component2 component2-image
Also in this example, container component1 exposes port 9000 to the host, and its contained application has it hardwired in its configuration file to consume the service of component2 by name (following the common Docker networking practice that relies on the internal DNS of Docker networks).
For the example you've given, the following Docker Compose file would give you what you want:
component1:
  image: component1-image
  net: FooAppNet
  container_name: component1
  ports:
    - "9000:9000"

component2:
  image: component2-image
  net: FooAppNet
  container_name: component2
If you store this in a docker-compose.yml file and then run docker-compose up -d it will create/start/restart your containers and assign them to your FooAppNet network.
The -d flag runs the containers in detached mode and prevents the logging output from being printed to your terminal window when you start the containers. You can still get their logs via docker logs -f ... like with any other container.
You can then use docker-compose down and docker-compose restart etc to control the ensemble's lifecycle. As an aside, using variables can spice up the definition file towards greater flexibility.
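For example, compose substitutes environment variables in the definition file (a sketch; the TAG variable is made up for illustration):

component1:
  # TAG is read from the shell environment when docker-compose runs
  image: "component1-image:${TAG}"
  net: FooAppNet
  container_name: component1
  ports:
    - "9000:9000"

Then start it with, e.g., TAG=1.2 docker-compose up -d.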
See the comments below about using the network that docker-compose spins up automatically.
TL;DR ― see the beginning section of https://docs.docker.com/compose/networking/ for the solution. It walks you through the entire necessary configuration. It works nicely, though you need to master the various docker-compose command-line options to be productive with it.
This question is coming from an issue on the Docker's repository:
https://github.com/docker/compose/issues/942
I can't figure out how to create a data container (no process running) with docker-compose.
UPDATE: Things have changed in the last years. Please refer to the answer from Frederik Wendt for a good and up-to-date solution.
My old answer: Exactly how to do it depends a little on what image you are using for your data-only container. If your image has an entrypoint, you need to overwrite it in your docker-compose.yml. For example, this is a solution for the official MySQL image from Docker Hub:
DatabaseData:
  image: mysql:5.6.25
  entrypoint: /bin/bash

DatabaseServer:
  image: mysql:5.6.25
  volumes_from:
    - DatabaseData
  environment:
    MYSQL_ROOT_PASSWORD: blabla
When you do a docker-compose up on this, you will get a container like ..._DatabaseData_1, which shows a status of Exited when you call docker ps -a. Further investigation with docker inspect will show that it has a timestamp of 0, meaning the container was never run, as stated by the owner of docker-compose here.
Now, as long as you don't do a docker-compose rm -v, your data-only container (..._DatabaseData_1) will not lose its data. So you can do docker-compose stop and docker-compose up as often as you like.
In case you'd like to use a dedicated data-only image like tianon/true, this works the same way. Here you don't need to overwrite the entrypoint, because the image doesn't have one. It seems like there are some problems with that image and docker-compose; I haven't tried it, but this article could be worth reading in case you experience any.
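For reference, the tianon/true variant would look roughly like this (a sketch; the volume path matches the MySQL example above, and must be declared explicitly since tianon/true defines no volumes of its own):

DatabaseData:
  image: tianon/true
  volumes:
    - /var/lib/mysql

(DatabaseServer stays the same as above.)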
In general it seems to be a good idea to use the same image for your data-only container that you are using for the container accessing it. See Data-only container madness for more details.
The other answers to this question are quite out of date, and data volumes have been supported for some time now. Example:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
See https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose for details and options.
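If you need to find where a named volume lives on disk, you can inspect it directly (note that compose prefixes the volume name with the project name; <project> below is a placeholder):

$ docker volume ls
$ docker volume inspect <project>_myapp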
A data-only container (DOC) is a container created only to serve as a volume provider. The container itself has no function other than letting other containers mount its volumes via the volumes_from directive.
The DOC has to run only once to create the volume. Other containers can reference the volumes in it even when it is stopped.
The OP Question:
The docker-compose.yml starts the DOC every time you do a docker-compose up. The OP asks for an option to only create the container and volume, and not run it, using some sort of a create_only: true option.
As mentioned in the issue from the OP's question:
you either create a data container with the same name as the one specified in the docker-compose.yml, and run docker-compose up --no-recreate (the one specified in docker-compose.yml won't be recreated),
or you run a container with a simple command which never returns.
Like: tail -f /dev/null
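A sketch of that second option, reusing the MySQL image from the earlier answer (any long-running no-op command works):

DatabaseData:
  image: mysql:5.6.25
  # a no-op command that never exits keeps the container "up"
  command: tail -f /dev/null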
I want to be able to add hostnames to my laptop's /etc/hosts that map to docker containers. Since container IPs are not static, each time I start/restart a container I would need to update the /etc/hosts file manually, which is not very practical.
I am looking for a simple way to solve this.
I could write some sort of script which listens to docker events, checks the container IP, and updates /etc/hosts, but I don't want to reinvent the wheel (something like this: https://github.com/discordianfish/docker-spotter, though I couldn't really understand how it works).
Does anybody have a suggestion?
Thank you.
You can do that in your docker-compose.yml using extra_hosts. For instance:
version: '2'
services:
  daemon:
    build: ./
    extra_hosts:
      - "example.com:127.0.0.1"