See instance journalctl logs in docker container

I want to be able to run a docker container and see all of the instance's journalctl logs.
In other words, I want to see the same journalctl output inside the Docker container as on the instance.
I tried mounting the journald socket, but I still don't see the journal logs from the instance.
Thanks for the help.

If journalctl -u <service>.service is not giving you the journal logs you want from your container, you can run machinectl -l to find the container's UUID and then run journalctl -M $UUID against that UUID to see its logs.
~# machinectl -l
MACHINE CLASS SERVICE OS VERSION ADDRESSES
rkt-6d427a1c-6961-45a2-a055-721edddb8558 container rkt - - -
~# journalctl -M rkt-6d427a1c-6961-45a2-a055-721edddb8558
If the systemd service starting your docker container is not listed under machinectl list, then add the following to the systemd service file that starts your container:
[Service]
Slice=machine.slice
Cheers!
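Alternatively, since journalctl can read journal files straight off disk, you can bind-mount the host's journal directory into the container read-only and point journalctl at it with -D. A minimal sketch, assuming persistent journaling is enabled on the host (journals under /var/log/journal; if your host only logs to /run/log/journal, mount that path instead), with fedora as a stand-in image:
docker run -it --rm -v /var/log/journal:/var/log/journal:ro \
    fedora journalctl -D /var/log/journal -f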

Related

How to see the logs of docker container running on different ports

I am running a single docker container on two different ports using the command below:
docker run -p ${EXTERNAL_PORT_NUMBER}:${INTERNAL_PORT_NUMBER} -p ${EXTERNAL_PORT_NUMBER_SECOND}:${INTERNAL_PORT_NUMBER_SECOND} --network ${NETWORK} --name ${SERVICE_NAME} --restart always -m 1024M --memory-swap -1 -itd ${ORGANISATION}/${SERVICE_NAME}:${VERSION}
I can see the container is running fine.
My question is: how can I see the logs of this docker container?
Every time I do sudo docker logs database-service -f I can see only the logs of the service on port 9003.
How can I view the logs of the service running on port 9113?
You are getting all the logs that were written to stdout or stderr in the container.
This has nothing to do with which ports the processes are exposed on.
If two processes are running inside the container and both write their logs to the console, then you will get both logs in the docker logs output for the container.
You can try the multitail utility to tail more than one log file via docker exec.
For that you have to install it in the container.
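For example (a sketch, assuming a Debian-based image and two hypothetical log files for the services on ports 9003 and 9113):
docker exec database-service sh -c 'apt-get update && apt-get install -y multitail'
docker exec -it database-service multitail /var/log/service-9003.log /var/log/service-9113.log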
You can also bind external volumes to the container's service logs and view the logs on the host:
docker run -v 'path_to_your_host_logs':'container_service_log_path'
docker run -v '/home/user/app/apache_access.log':'/var/log/apache_access.log'
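Then you can follow the log directly on the host:
tail -f /home/user/app/apache_access.log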

docker network port binding

I was playing around with docker and containers.
I have the docker engine set up on an Ubuntu box (running in VMware Player) and am trying to bind the daemon to a network port with the following command:
root@ubuntu:~# docker -H 10.0.0.7:2375 -d &
[1] 10046
root@ubuntu:~# flag provided but not defined: -d
See 'docker --help'.
Why is the -d parameter throwing it off? I am very new to Linux, so any suggestion is welcome.
Thanks in advance.
You're looking for docker daemon, not docker -d. This was moved to dockerd in 1.12, but calling docker daemon still works there (it's just a pass-through to the new command).
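So, binding the daemon to that address would look like this (a sketch; exposing the daemon unencrypted like this is only sensible inside a test VM):
sudo dockerd -H tcp://10.0.0.7:2375 &
On releases just before 1.12, the equivalent is sudo docker daemon -H tcp://10.0.0.7:2375 &.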

Docker swarm-manager displays old container information

I am using docker-machine with Google Compute Engine (GCE) to run a docker swarm cluster. I created a swarm successfully with 2 nodes (swnd-01 & swnd-02) in the cluster. I created a daemon container like this in the swarm-manager environment:
docker run -d ubuntu /bin/bash
docker ps shows the container running on swnd-01. When I tried executing a command in the container using docker exec, I got an error saying the container is not running, while docker ps showed otherwise. I ssh'ed into swnd-01 via docker-machine and found that the container had exited as soon as it was created. I tried the docker run command inside swnd-01 but it still exits. I don't understand this behavior.
Any suggestions will be thankfully received.
The reason it exits is that the /bin/bash command completes immediately, and a Docker container only runs as long as its main process (if you run such a container with the -it flags, the process will keep running while the terminal is attached).
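For example, the same image stays up if the main process doesn't exit:
docker run -itd ubuntu /bin/bash
docker run -d ubuntu sleep infinity
The first keeps bash alive by allocating a tty and keeping stdin open; the second simply gives the container a long-running main process.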
As to why the swarm manager thought the container was still running, I'm not sure. I guess there is a short delay while Swarm updates the status of everything.

How to detect a docker daemon port

I have installed Ubuntu and Docker. I am trying to launch a Riak container:
$ DOCKER_RIAK_AUTOMATIC_CLUSTERING=1 DOCKER_RAIK_CLUSTER_SIZE=5 DOCKER_RIAK_BACKEND=leveldb make start-cluster ./bin/start
and get the error message:
It looks like the environment variable DOCKER_HOST has not been set.
The Riak cluster cannot be started unless this has been set
appropriately. For example:
export DOCKER_HOST="tcp://127.0.0.1:2375"
If I set
export DOCKER_HOST="tcp://127.0.0.1:2375"
all my other containers stop working, saying that they cannot find the Docker daemon.
It looks like my Docker daemon uses a port other than 2375. How can I check this?
By default, the docker daemon listens on the unix socket unix:///var/run/docker.sock (you can check that this is the case for you by running sudo netstat -tunlp and noting that no docker daemon process is listening on any TCP port). It's recommended to keep this setting for security reasons, but it sounds like Riak requires the daemon to be listening on a TCP socket.
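That check looks like this (no output from the grep means the daemon is unix-socket-only):
sudo netstat -tunlp | grep dockerd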
To start the docker daemon with a TCP socket that anybody can connect to, use the -H option:
sudo dockerd -H tcp://0.0.0.0:2375 &
(On releases before 1.12 this was sudo docker -d -H tcp://0.0.0.0:2375 &.)
Warning: this means any machine that can talk to the daemon through that TCP socket can get root access to your host machine.
Related docs:
http://basho.com/posts/technical/running-riak-in-docker/
https://docs.docker.com/install/linux/linux-postinstall/#configure-where-the-docker-daemon-listens-for-connections
Prepare an extra configuration file. Create a file named /etc/systemd/system/docker.service.d/docker.conf and paste the content below into it:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
Note that if the docker.service.d directory or the docker.conf file does not exist yet, you should create it.
Restart Docker. After saving this file, reload the configuration with systemctl daemon-reload and restart Docker with systemctl restart docker.service.
Check your Docker daemon. After restarting the docker service, you can see the port in the output of systemctl status docker.service, e.g. /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock.
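You can also verify the TCP endpoint directly; /version is a standard Docker Engine API endpoint:
curl http://localhost:2375/version
docker -H tcp://127.0.0.1:2375 version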
Hope this helps!
Reference docs of docker: https://docs.docker.com/install/linux/linux-postinstall/#configure-where-the-docker-daemon-listens-for-connections
There are two ways to configure the docker daemon port:
1) Via the /etc/default/docker file (note that on systemd-based installs this file is only honored if the unit actually sources it):
DOCKER_OPTS="-H tcp://127.0.0.1:5000 -H unix:///var/run/docker.sock"
2) Via /etc/docker/daemon.json:
{
"debug": true,
"hosts": ["tcp://127.0.0.1:5000", "unix:///var/run/docker.sock"]
}
If the default docker socket is not configured, Docker will wait indefinitely, i.e.:
Waiting for /var/run/docker.sock
Waiting for /var/run/docker.sock
Waiting for /var/run/docker.sock
Waiting for /var/run/docker.sock
Waiting for /var/run/docker.sock
NOTE: don't configure the daemon in both files at once, or the following error may occur:
Waiting for /var/run/docker.sock
unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: hosts: (from flag: [tcp://127.0.0.1:5000 unix:///var/run/docker.sock], from file: tcp://127.0.0.1:5000)
The reason for adding both the user port [tcp://127.0.0.1:5000] and the default docker socket [unix:///var/run/docker.sock] is that the TCP port enables access to the docker APIs, whereas the default socket keeps the CLI working. If the default socket [unix:///var/run/docker.sock] is not mentioned in the /etc/default/docker file, the following error may occur:
# docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
This error does not mean that docker is not running; it means that the default docker socket is not enabled.
Once the configuration is in place, restart the docker service and verify that the docker port is enabled:
# netstat -tunlp | grep -i 5000
tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 31661/dockerd
Applicable to Docker version 17.04; this may vary with different versions of docker.
I also had the same problem of "How to detect a docker daemon port", but on OSX; after a little digging I found the answer, and I thought I'd share it here for people coming from OSX.
If you visit the known-issues page for Docker for Mac and the related GitHub issue, you will find that by default the docker daemon only listens on the unix socket /var/run/docker.sock and not on TCP. The default ports for docker are 2375 (unencrypted) and 2376 (encrypted) communication over TCP (although you can choose any other port).
On OSX it's not straightforward to run the daemon on a TCP port. One way to do it is to use a socat container to redirect the Docker API exposed on the unix domain socket to a host port on OSX:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock -p 127.0.0.1:2375:2375 bobrik/socat TCP-LISTEN:2375,fork UNIX-CONNECT:/var/run/docker.sock
and then
export DOCKER_HOST=tcp://localhost:2375
However, for a local client on macOS you don't need to export the DOCKER_HOST variable to test the API.
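For instance, you can hit the API through the forwarded port directly; /_ping is a standard Docker Engine API endpoint that returns OK:
curl http://localhost:2375/_ping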
If you run ps aux | grep dockerd you should see the endpoints the daemon is listening on.
Try adding -H tcp://0.0.0.0:2375 (at the end of the ExecStart line) instead of -H 0.0.0.0:2375.

Restarting host from docker container

As the title indicates: is it possible to restart the host from a container? I have a docker container running with systemd, as described here, started as:
$ docker run -privileged --net host -ti -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro <image_name>
Once I issue the systemctl reboot command, I see:
# systemctl reboot
[root@dhcp-40-115 /]#
[3]+ Stopped
The host doesn't reboot. However, I see [1915595.016950] systemd-journald[17]: Received SIGTERM from PID 1 (systemd-shutdow). in the host's kernel buffer.
Use case:
I am experimenting with running the restraint test harness in a container, and some of the tests reboot the host; if this is possible from a container, the tests can run unchanged.
Update
As I mention in my answer:
There is a detail I missed in my question above: once I have
systemd running in the container itself, systemctl reboot
(roughly speaking) connects to systemd in the container itself,
which is not what I want.
The accepted answer has the advantage that it does not depend on the host and the container distro having compatible systemd versions. However, on a setup where they do, my answer is, I think, the more convenient one, since you can just use the usual reboot command.
Other init systems such as upstart are untested.
I was able to send sysrq commands to the host by mounting /proc/sysrq-trigger as a volume.
This rebooted the host.
docker-server# docker run -i -t -v /proc/sysrq-trigger:/sysrq centos bash
docker-container# echo b > /sysrq
You can set a bitmask in /proc/sys/kernel/sysrq on the host to allow only, e.g., syncing the disks and rebooting. There is more information at http://en.wikipedia.org/wiki/Magic_SysRq_key, but something like this (untested) should set those permissions (144 = 128 + 16, i.e. allow reboot/poweroff plus sync):
echo 144 > /proc/sys/kernel/sysrq
Also remember to add kernel.sysrq = 144 to /etc/sysctl.conf to persist this across reboots.
There is a detail I missed in my question above: once I have systemd running in the container itself, systemctl reboot (roughly speaking) connects to systemd in the container itself, which is not what I want.
On a colleague's hint, here is what I did with a "stock" fedora image (nothing special in it):
$ docker run -ti -v /run/systemd:/run/systemd fedora /bin/bash
Then in the container:
bash-4.2# systemctl status docker
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
Active: active (running) since Tue 2014-07-01 04:57:22 UTC; 2 weeks 0 days ago
Docs: http://docs.docker.io
Main PID: 2589
CGroup: /system.slice/docker.service
Here, the container is able to access systemd on the host. Then, issuing a reboot command actually reboots the host:
bash-4.2# reboot
Thus, it is possible to reboot the host from the container.
The point to note here is that the host is running Fedora 20 and so is the container. If the host were a different distro not running systemd, this would not be possible. Generally speaking, if the host and the container run distros that do not use systemd, or use incompatible versions of systemd, this will not work.
Adding to user59634's answer:
-v /run/systemd:/run/systemd works on Fedora 27 and Ubuntu 16, but the only socket you actually need is:
docker run -it --rm -v /run/systemd/private:/run/systemd/private fedora reboot
You can also use /run/dbus, but I like this systemd method more. I do not fully understand how much power this gives the container; I suspect it is enough to take over your host. So I would only suggest using this in a container that you wrote, and then communicating with any other container, see here.
Unrelated similar information
Sleeping/suspending/hibernating can be done with only -v /sys/power/state:/sys/power/state, using /lib/systemd/systemd-sleep suspend for example. If you know how to, you can echo a string directly to /sys/power/state, for example echo mem > /sys/power/state; see here for more explanation of the different options you get from cat /sys/power/state.
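Put together, that looks something like this (a sketch; suspending the host from a container is disruptive, so treat it as an experiment):
docker run --rm -v /sys/power/state:/sys/power/state alpine \
    sh -c 'echo mem > /sys/power/state'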
docker run -d --name network_monitor --net host --restart always --privileged --security-opt apparmor=unconfined --cap-add=SYS_ADMIN \
-v /proc:/proc \
$IMAGE_URI
The docker container must be granted enough permissions to mount /proc.
