I am running a new container using the following command:
docker run -d --log-driver=gelf --log-opt gelf-address=tcp://<my_log_server> nginx
Looking at the documentation, this should send the logs to my_log_server, and if I run the docker logs <my_container> command I should NOT see any logs.
But actually I do, and I don't want this.
Do you have any idea why this is happening?
I'm testing it with Docker version 20.10.8, build 3967b7d
According to the docker documentation - https://docs.docker.com/config/containers/logging/configure/
When using Docker Engine 19.03 or older, the docker logs command is only functional for the local, json-file and journald logging drivers. Docker 20.10 and up introduces “dual logging”, which uses a local buffer that allows you to use the docker logs command for any logging driver. Refer to reading logs when using remote logging drivers for details.
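Conversely, if you want that local buffer turned off so docker logs goes back to showing nothing for the gelf driver, the dual-logging page describes a cache-disabled log option; a hedged sketch of what that could look like for your run command (your gelf address substituted as before):
docker run -d --log-driver=gelf --log-opt gelf-address=tcp://<my_log_server> --log-opt cache-disabled=true nginx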
Hope it clarifies.
I am using CentOS 7.9 for my Docker setup. I'm running a C++ program in my Docker image which writes its logging via syslog.
However, I cannot find these logs anywhere when I run it in Docker. On my CentOS machine the logging would by default go to /var/log/messages, but in Docker /var/log/messages is empty. I've tried setting the Docker logging driver to syslog (in the docker run command), and I can see rsyslogd running in the Docker container as well with ps -aux.
On my host machine, /var/log/syslog does not receive the logs either.
How do I get the log files to write to /var/log/messages in the Docker environment (or to my host machine, if storing them in the Docker environment is not advisable)?
Thanks.
Since Docker containers do not run systemd, it's very likely that the syslog daemon is not running in your container.
My suggestion: start a shell in your container and make sure syslog is installed, using
yum list installed
or
rpm -qa
Then run the syslog daemon from the shell manually; after that, start a new terminal and run your code.
PS: since I'm on my phone, I can't be more helpful right now. I'll try to edit this as soon as possible.
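Putting those steps together, a rough sketch (the container name, the rsyslog package, and the program path are assumptions; adjust them for your image):
docker exec -it my_container /bin/bash   # open a shell in the running container
rpm -qa | grep -i rsyslog                # confirm the syslog daemon is actually installed
rsyslogd                                 # start it manually, since there is no systemd/init inside the container
./my_cpp_program                         # run the program that calls syslog()
tail /var/log/messages                   # the messages should now show up here inside the container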
I have my logging driver set up as journald. Does the log-level config in the daemon.json file impact logs when using a logging driver, or only the container logs when using docker logs <container_name>?
For example, docker and journald have documentation showing how to set log level/priority.
Docker's default setting is info: log-level: info.
With journald I can also use -p to set the log priority to info: journalctl -p info.
If my docker logging driver is journald with log priority set to info, do I even need to worry about setting log-level to info in daemon.json file?
I think you may have confused the following concepts: the logs of the Docker daemon, the logs of the container(s), and the logs printed by the journalctl command.
The configuration in the daemon.json file impacts the logs of the Docker daemon.
The logs of the container(s) are only impacted by your application's configuration in that container.
The command journalctl -p ONLY impacts the logs shown on your screen, which means -p only does the filtering. No matter what level you've indicated, err or info, the logs are already there.
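To make the distinction concrete, a small sketch (the container name myapp and the values shown are just examples, not recommendations):
# /etc/docker/daemon.json controls the default driver and the daemon's own verbosity:
#   { "log-driver": "journald", "log-level": "info" }
# The journald driver tags container entries with a CONTAINER_NAME field;
# -p only filters what journalctl displays, the stored entries themselves are untouched:
journalctl CONTAINER_NAME=myapp -p info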
Hope this helps.
I used the below command to start the splunk server using Docker.
docker run -d -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_USER=root" -p "8000:8000" splunk/splunk
But when I open the URL localhost:8000, I get a "Server can't be reached" message.
What am I missing here?
I followed a tutorial from this source: https://medium.com/@caysever/docker-splunk-logging-driver-c70dd78ad56a
Depending on your Docker version and host OS, you could be missing the port mapping for 8000 from the VirtualBox VM.
This should not be needed if you are using Hyper-V (Windows host) or xhyve (Mac host), but can still be needed with VirtualBox.
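If it is VirtualBox (e.g. Docker Toolbox / boot2docker), a hedged example of adding that forwarding rule with VBoxManage (the VM name "default" is an assumption; substitute yours):
VBoxManage controlvm "default" natpf1 "splunk-web,tcp,127.0.0.1,8000,,8000"   # forward host port 8000 to the VM's port 8000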
The link to the Docker image is https://hub.docker.com/r/splunk/splunk/. Referring to this, we can see some details related to pulling and running the image. According to the link, the right command is:
docker run -d -p 8000:8000 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=<password>" --name splunk splunk/splunk:latest
This works correctly for me. The image uses Ansible to do the configuration once the container has been created. If you do not specify a password, the respective Ansible task will fail and your container will not be configured.
To follow the progress of the container configuration, you may run this command after running the above command:
docker logs -f splunk
This assumes the name of your container is splunk. Here you will be able to see Ansible's progress in configuring Splunk.
In case you are looking to create a clustered Splunk deployment, you might want to have a look at this: https://github.com/splunk/docker-splunk
Hope this helps!
I use Docker on CentOS and test Docker on Mac OS X, but I want to see the log information when Docker runs. How do I find the log?
The daemon on Boot2docker or CentOS should have its log in /var/log/docker.log.
For boot2docker, you need to be in a boot2docker session.
(This isn't always the path: on Ubuntu, for instance, it would be /var/log/upstart/docker.log.)
If you need to see more logs, you can start the daemon with the -D (debug) option.
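For example, something like the following on the CentOS host (treat it as a sketch, since the daemon binary and startup mechanism vary between Docker versions):
tail -f /var/log/docker.log   # follow the daemon log
docker -d -D                  # older releases: run the daemon in the foreground with debug output
dockerd -D                    # newer releases ship the daemon as a separate dockerd binary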
I have an Ubuntu machine, which is a VM where I have installed Docker. I am using this machine from my local Windows machine by doing ssh and opening a terminal to the Ubuntu machine.
Now I am going to take a Docker image which contains all the necessary software, e.g. Apache, installed in it. Later I am going to deploy a sample application (which is a web application) onto it and save it.
Now I am confused about how to check whether the deployed application is running properly, i.e. what would be the address of the container which contains the deployed application?
For example, if I type http://127.x.x.x, which is the address of the Ubuntu machine, I just get a timeout.
Can anyone tell me how to verify the deployed application? Printing the output of a program on the console works seamlessly, as the output gets printed; the only thing I have some doubts about is the web application.
There are some possibilities to check whether your app is running.
Remote API
As JimiDini said, one possibility is the Docker remote API. You can use it to see all running containers (which would be your use case, right?), inspect a certain container, or start and stop containers. The API is a REST API with several bindings for programming languages (at https://docs.docker.io/reference/api/remote_api_client_libraries/). Some of them are very outdated. To use the Docker remote API from another machine, I needed to open it explicitly:
docker -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock -d &
Note that the API is open to the world now! In a real scenario you would need to secure it in some way (e.g. see the example at http://java.dzone.com/articles/securing-docker%E2%80%99s-remote-api).
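For example, once the daemon is listening on that TCP port, you could list the running containers from another machine with something like this (the host name is a placeholder):
curl http://<docker-host>:4243/containers/json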
Docker PS
To see all running containers, run docker ps on your host. If you do not see your app, it is not running. It also shows you the ports your app is exposing. You can also do this via the remote API.
Logs
You can also check the logs. You can run docker attach <container id> to attach to a certain container and see its stdout. You can also run docker logs <container id> to receive the Docker logs. What I prefer is to write the logs to a certain directory, e.g. all logs to /var/log, and mount this folder to my host machine. Then all your logs will end up in /home/ubuntu/docker-logs on your host.
docker run -p 80:8080 -v /home/ubuntu/docker-logs:/var/log:rw my/application
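With that mount in place you can follow the application's logs directly on the host, e.g. (the file name is hypothetical; it depends on what your app writes under /var/log):
tail -f /home/ubuntu/docker-logs/my-app.log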
A word on ports and IPs
Every container will get its own IP address. You can check this IP address via the remote API or via Docker on the host machine directly. You can also specify a certain hostname for the container (by passing --hostname="test42" to the run command). However, you mostly do not need that.
To access the application in the container, you need to open the port in the container and bind to a port on the host.
In your Dockerfile you need to EXPOSE the port your app runs on:
FROM ubuntu
...
EXPOSE 8080
CMD run-my-app.sh
When you start your container, you need to bind this port to a port of the host:
docker run -p 80:8080 my/application
Now you can access your app on http://localhost:80 or http://127.0.0.1:80.
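A quick way to verify this from inside the Ubuntu VM itself (from your Windows machine, replace localhost with the VM's IP):
curl -I http://localhost:80   # should print an HTTP status line if the app is answering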
If your app does not respond, check whether the container is running by typing docker ps or via the remote API. If it is not running, check the logs for the reason.
(Note: If you run your Ubuntu VM in something like VirtualBox and you try to access it from your Windows machine, make sure you opened the ports in VirtualBox too!).
A Docker container has a separate IP address. By default it is private (accessible only from the host machine).
Docker provides all metadata (including IP address) via its API:
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#inspect-a-container
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#monitor-docker-s-events
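A hedged example of pulling the IP out of the inspect endpoint (host, port, and container id are placeholders; it assumes the API was opened on TCP as shown above):
curl -s http://localhost:4243/containers/<container-id>/json | grep IPAddress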
You can also take a look at a little tool called docker-gen for inspiration. It monitors Docker events and creates configuration files on the host machine using templates.
To obtain the IP address of a Docker container, if you know its id (a long hex string) or if you named it:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-id-or-name>
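As a usage sketch, you can feed that straight into curl (the container name and port 8080 are just assumptions for the example):
IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' my-container)
curl "http://$IP:8080/"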
Docker runs its own network, and to get information about it you can run the following commands:
docker network ls
docker network inspect <network name>
docker inspect <container id>
In the output, you should be able to find the IP.
But there are also a couple of things you need to be aware of regarding the Dockerfile and the docker run command:
when you EXPOSE a port in the Dockerfile, the service in the container is not accessible from outside Docker, only from inside other Docker containers
and when you EXPOSE the port and use the docker run -p ... flag, the service in the container is accessible from anywhere, even outside Docker
So, for example, if your Apache is running on port 8080, you should expose it in the Dockerfile, and then you can run it as:
docker run -d -p 8080:8080 <image name>
and you should be able to access it from your host at http://localhost:8080.
It is an old question/answer but it might help somebody else ;)
Working as of 2020:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id