Kubernetes - How to get pod logs within container - docker

I have two containers within my pod: one container is an app that writes logs (stdout/stderr), and the second container is supposed to read the app container's logs and write them to a logs.txt file.
My question is, how can I tell the second container to read the logs of the app container?
When I'm on the host (with k8s), I can run:
$ kubectl logs -f my_pod >> logs.txt
and get the logs of the pod.
But how can I tell one container to read the logs of another container inside the same pod?
I did something similar with plain Docker:
I mounted the host's docker.sock into the container and ran $ docker logs -f app_container >> logs.txt
But I can't (as far as I know) mount kubectl into a container in the pod to run kubectl commands.
So how do you think I can do it?

You could mount a volume into both containers, write a logfile to that volume in your "app" container, then read from that logfile in the other container.
The preferred solution is probably to use some kind of log shipper or sidecar to scrape the logs from stdout.
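A minimal sketch of the shared-volume approach as a pod spec, assuming the app can be configured to write its logfile under a path of your choosing (the names my-pod, my-app:latest, log-reader and /var/log/app are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  volumes:
    - name: shared-logs          # emptyDir lives as long as the pod and is visible to both containers
      emptyDir: {}
  containers:
    - name: app
      image: my-app:latest       # placeholder image; configured to write its logfile to /var/log/app/logs.txt
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
    - name: log-reader
      image: busybox
      command: ["sh", "-c", "tail -n+1 -F /var/log/app/logs.txt"]   # follow the app's logfile
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
          readOnly: true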
(Side note: you should probably never mount docker.sock into a container or try to run kubectl from inside a container against the cluster, because you essentially give that container control over your cluster, which is a security no-no imho.
If you still have to do that for whatever reason, make sure the cluster security is REALLY tight.)

You can use an HTTP request to obtain another pod's logs from inside the cluster. Here are the docs.
Basically you can use curl to make the following request:
GET /api/v1/namespaces/{namespace}/pods/{name}/log
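For example, from inside a container whose service account has been granted permission to read pod logs (the namespace, pod and container names below are placeholders):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sS \
  --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/default/pods/my-pod/log?container=app&follow=true" >> logs.txt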

Related

How to auto-remove Docker container while persisting logs?

Is there any way to use Docker's --rm option, which auto-removes the container once it exits, but still allow the container's logs to persist?
I have an application that creates containers to process jobs, and then once all jobs are complete, the container exits and is deleted to conserve space. However, in case a bug caused the container's process to exit prematurely, I'd like to persist the log files so I can confirm it exited cleanly or diagnose a faulty exit.
However, the --rm option appears to remove the container's logs along with the container.
Log to somewhere outside of the container.
You could mount a host directory into your container, so logs will be written to the host directory and kept after rm.
Or you can mount a volume into your container, which will persist after rm.
Or you can setup rsyslog - or some similar log collection agent - to export your logs to a remote service. See https://www.simulmedia.com/blog/2016/02/19/centralized-docker-logging-with-rsyslog/ for more on this solution.
The first 2 are hacks but easier to get up and running on your workstation/server. If this is all cloud hosted, there might be a decent log-offloading option (CloudWatch on AWS) which saves you the hassle of configuring rsyslog.
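A minimal sketch of the bind-mount variant, assuming the application writes its logs under /app/logs inside the container (the image name and both paths are placeholders):
docker run --rm -v /var/log/myjobs:/app/logs my-job-image
# the container is removed when it exits, but the log files remain under /var/log/myjobs on the host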

When mounting /var/run/docker.sock into a container, which file system is used for volume mounting?

I have a container that contains logic for coordinating the deployment of the microservices on the host - let's call this service the deployer. To achieve that, I have mounted the /var/run/docker.sock file from the host into that deployer container.
So, when performing docker run hello-world from within the deployer container, the host runs it.
This system works as expected, except for one thing I have become unsure about now, since I have seen some unexpected behaviour.
When performing docker run -v "/path/to/src:/path/to/dest" hello-world, what folder will Docker be looking at?
I'm seeing two valid reasonings:
A) It will mount /path/to/src from within the deployer to the hello-world container, since that is the shell that performs the command.
B) It will mount /path/to/src from the host to the hello-world container, since docker.sock determines the context and the command is effectively run on the host.
Which of those is correct?
Moreover, when using relative paths (e.g. in docker-compose), what will be the path that is being used?
It will always use the host filesystem. There isn’t a way to directly mount one container’s filesystem into another.
For example:
host$ sudo docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker sh
0123456789ab# docker run -v /:/host --rm -it busybox sh
13579bdf0246# cat /host/etc/shadow
The last command will print out the host’s encrypted password file, not anything in the intermediate container.
If it isn’t obvious from the example, mounting the Docker socket to programmatically run Docker commands has massive security implications, and you should carefully consider whether it’s actually a good approach for you.
I’m pretty sure relative paths in docker-compose.yml won’t actually work with this setup (because you can’t bind-mount things out of the intermediate container). You’d have to mount the same content into both containers for one to be able to send files to the other. Using named volumes can be helpful here (because the volume names aren’t actually dependent on host paths); depending on what exactly you’re doing, a roundabout path of docker create and then docker cp could work.
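For illustration, the docker create / docker cp roundabout could look roughly like this (image name, container name and paths are made up):
docker create --name worker my-worker-image                  # create the target container without starting it
docker cp /data/config.yml worker:/etc/worker/config.yml     # copy a file from the deployer's own filesystem into it
docker start worker                                          # start it with the file already in place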
At an implementation level there is only one Docker daemon and it runs on the host. You can publish its socket to various places, but ultimately that daemon receives requests like “create a container that mounts host directory /x/y” and the daemon interprets those requests in the context of the host. It doesn’t know that a request came from a different container (or, potentially, a different host; but see above about security concerns).

How to mount commands or busybox into a docker container?

The image pulled from Docker Hub is a minimal system, without commands like vim, ping, etc., which I sometimes need in a debug environment.
For example, I need ping to test the network or vim to modify a config file, but I don't want to install them in the container or in the Dockerfile, as they are not necessary at run time.
I have tried installing the commands in my container, which is not convenient.
So, is it possible to mount commands from the host into the container, or even "mount" a busybox into the container?
You should install these tools in your Docker container, because this is how things are done. I can't find a single reason not to do so, but in case you can't do it (why??), you can put the necessary binaries into a volume and mount that volume into your container. Something like:
docker run -it -v /my/binaries/here:/binaries:ro image sh
$ ls /binaries
and execute them inside using container path /binaries.
But what you have to keep in mind is that these binaries usually have dependencies on system paths like /var/lib and others, and when calling them from inside the container you have to somehow resolve those.
If running on Kubernetes, the kubectl command has support for running a debug container that has access to the running container. Check kubectl debug.
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container
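For example (pod and container names are placeholders), this attaches an ephemeral busybox container that targets the app container:
kubectl debug -it my-pod --image=busybox --target=app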

How can I set up AWS CloudWatch Logs with a Docker ECS container

I am using Amazon ECS and my Docker image runs a PHP application.
Everything is running fine.
In the entry point I am using supervisord in the foreground, and those logs are currently sent to CloudWatch Logs.
In my Docker image I have logs written to these files:
/var/log/apache2/error.log
/var/log/apache2/access.log
/var/app/logs/dev.log
/var/app/logs/prod.log
Now I want to send those logs to AWS CloudWatch. What's the best way to do that?
Also, I have multiple containers for a single app, so for example all four containers will have these logs.
Initially I thought of installing the AWS Logs agent in the container itself, but I have to use the same Docker image for local, CI and non-prod environments, so I don't want to use CloudWatch Logs there.
Is there any other way to do this?
In your task definition, specify the logging configuration as the following:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "LogGroup",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "Prefix"
}
}
awslogs-stream-prefix is optional for the EC2 launch type but required for Fargate.
In the UserData section when you launch a new instance, register the instance with the cluster and make sure you specify awslogs as an available logging driver as well:
#!/bin/bash
echo 'ECS_CLUSTER=ClusterName' > /etc/ecs/ecs.config
echo ECS_AVAILABLE_LOGGING_DRIVERS='[\"json-file\", \"awslogs\"]' >> /etc/ecs/ecs.config
start ecs
More Info:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
You have to do two things:
Configure the ECS Task Definition to take logs from the container output and pipe them into a CloudWatch Logs group/stream. To do this, you add a LogConfiguration property to each ContainerDefinition property in your ECS task definition; the awslogs documentation linked above covers the details.
Instead of writing logs to a file in the container, write them to /dev/stdout and /dev/stderr. You can just use these paths in your Apache configuration and you should see the Apache log messages in the container's log.
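One common way to do that without changing the Apache configuration itself (this is the trick used by e.g. the official nginx image) is to symlink the existing log paths to the container's stdout/stderr in your Dockerfile:
RUN ln -sf /dev/stdout /var/log/apache2/access.log \
 && ln -sf /dev/stderr /var/log/apache2/error.log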
You can use the awslogs logging driver of Docker
Refer to the documentation on how to set it up
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
Given your defined use case:
Collect logs from 4 different files from within a container
Apply the Docker log driver awslogs for the task
In previous answers you have already seen that awslogs uses stdout as the logging mechanism. Further, it has been stated that awslogs is applied per container, which means one AWS CloudWatch log stream per running container.
To fulfill your goal when switching to stdout for all logging is not an option for you:
You use a separate container as the logging mechanism (remember: one log stream per container) for the main container.
This leads to a separate container which applies the awslogs driver, reads the files from the other container sequentially (async is also possible, but more complex) and pushes them into a separate AWS CloudWatch log stream of your choice.
This way you have separate logging streams, or groups if you like, for every file.
Prerequisites:
The main container and a separate logging container with access to a volume of the main container or the HOST.
See this question for how shared volumes between containers are realized via docker compose (a minimal compose sketch follows below):
Docker Compose - Share named volume between multiple containers
The logging container needs to talk to the host Docker daemon. Running Docker inside Docker is not recommended and also not needed here!
Here is a link showing how the logging container can talk to the host Docker daemon: https://itnext.io/docker-in-docker-521958d34efd
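A minimal docker-compose sketch of such a shared named volume (service and image names are placeholders):
version: "3"
services:
  app:
    image: my-app-image
    volumes:
      - app-logs:/var/app/logs          # the main container writes its log files here
  logger:
    image: my-logging-image
    volumes:
      - app-logs:/var/app/logs:ro       # the logging container only reads them
volumes:
  app-logs: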
Create the logging docker container with a Dockerfile like this:
FROM ubuntu
...
ENTRYPOINT ["cat"]
CMD ["loggingfile.txt"]
You can apply this container as a function with an input parameter logging_file_name to write to stdout and directly into AWS CloudWatch:
docker run -it --log-driver=awslogs \
  --log-opt awslogs-region=<region> \
  --log-opt awslogs-group=<your defined group name> \
  --log-opt awslogs-stream=<your defined stream name> \
  --log-opt awslogs-create-group=true \
  <Logging_Docker_Image> <logging_file_name>
With this setup you have a separate Docker logging container which talks to the Docker host and spins up another container to read the log files of the main container and push them to AWS CloudWatch, fully customized by you.

Using SMB shares as docker volumes

I'm new to docker and docker-compose.
I'm trying to run a service using docker-compose on my Raspberry PI. The data this service uses is stored on my NAS and is accessible via samba.
I'm currently using this bash script to launch the container:
sudo mount -t cifs -o user=test,password=test //192.168.0.60/test /mnt/test
docker-compose up --force-recreate -d
Where the docker-compose.yml file simply creates a container from an image and binds its own local /home/test folder to the /mnt/test folder on the host.
This works perfectly fine, when launched from the script. However, I'd like the container to automatically restart when the host reboots, so I specified 'always' as restart policy. In the case of a reboot then, the container starts automatically without anyone mounting the remote folder, and the service will not work correctly as a result.
What would be the best approach to solve this issue? Should I use a volume driver to mount the remote share (I'm on an ARM architecture, so my choices are limited)? Is there a way to run a shell script on the host when starting the docker-compose process? Should I mount the remote folder from inside the container?
Thanks
What would be the best approach to solve this issue?
As #Frap suggested, use systemd units to manage the mount and the service and the dependencies between them.
This document discusses how you could set up a Samba mount as a systemd unit. Under Raspbian, it should look something like:
[Unit]
Description=Mount Share at boot
After=network-online.target
Before=docker.service

[Mount]
What=//192.168.0.60/test
Where=/mnt/test
Options=credentials=/etc/samba/creds/myshare,rw
Type=cifs
TimeoutSec=30

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
Place this in /etc/systemd/system/mnt-test.mount, and then:
systemctl enable mnt-test.mount
systemctl start mnt-test.mount
The After=network-online.target line should cause systemd to wait until the network is available before trying to access this share. The Before=docker.service line will cause systemd to only launch docker after this share has been mounted. The RequiredBy=docker.service means that if you start docker.service, this share will be mounted first (if it wasn't already), and that if the mount fails, docker will not start.
This is using a credentials file rather than specifying the username/password in the unit itself; a credentials file would look like:
username=test
password=test
You could just replace the credentials option with username= and password=.
Should I mount the remote folder from inside the container?
A standard Docker container can't mount filesystems. You can create a privileged container (by adding --privileged to the docker run command line), but that's generally a bad idea (because that container now has unrestricted root access to your host).
I finally "solved" my own issue by defining a script to run in the /etc/rc.local file. It will launch the mount and docker-compose up commands on every reboot.
Being just 2 lines of code and not dependent on any particular Unix flavor, it felt to me like the most portable solution, barring a docker-only solution that I was unable to find.
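For reference, the /etc/rc.local addition would look roughly like this (the compose project directory is a placeholder):
mount -t cifs -o user=test,password=test //192.168.0.60/test /mnt/test
cd /home/pi/my-service && docker-compose up --force-recreate -d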
Thanks all for the answers
