I have a Django application running inside a Docker container on an EC2 instance. When I SSH into my EC2 instance, I run docker ps to find my container and then run docker exec -it [container-id] sh. Once inside, there is a directory called /logs that holds a log I want to send to a dashboard I'm running on a remote server. From inside the container I was able to scp the log.file to the remote server. But I want to be able to send this log.file to my user directory on the EC2 host, for example /home/ec2-user, so that I can use rsync or some other method to continuously update my logs on the remote server.
I'm a bit of a n00b with Docker. Can someone explain how I can get the log.file out of my container, for example into my EC2 home directory? From there, I would just write a bash script that runs on a cron and scps the log.file to my remote server every minute (or whatever interval of time).
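(A minimal sketch of that kind of cron script, assuming the container is named "web" and the log lives at /logs/log.file inside it; the names and destination path are placeholders to adjust:)
#!/bin/sh
# ship-logs.sh: copy the log out of the container to the host,
# then sync it to the dashboard server.
docker cp web:/logs/log.file /home/ec2-user/log.file
rsync -az /home/ec2-user/log.file user@dashboard-host:/var/dashboard/logs/
A crontab entry like "* * * * * /home/ec2-user/ship-logs.sh" would then run it every minute. Alternatively, starting the container with -v /home/ec2-user/logs:/logs would make the log appear on the host directly, with no copy step at all.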
Second, I cannot, for the life of me, find where my nginx log files are. On the EC2 host, I don't have an nginx directory in /var/log or in /etc. Does anyone know where the nginx access.log files are inside the Docker container?
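(Worth checking: if your nginx runs from the official nginx image, /var/log/nginx/access.log and error.log are symlinked to the container's stdout/stderr, so they show up in docker logs rather than as regular files. A quick way to confirm, assuming the container is named "web":)
docker exec web ls -l /var/log/nginx
docker logs web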
Thanks!
Related
I'm using an Nginx Docker container as the base image for an application. I'm redirecting Nginx logs to syslog, but I'm not sure of the best way to get the busybox syslogd started. It all works if I start it manually; I just need it to run as a daemon automatically when the container starts.
Seeing that nginx is in init.d, I tried this in my Dockerfile:
RUN ln -s /bin/busybox syslogd /etc/init.d/syslogd || :
But syslogd still didn't run on start-up. Since the documentation says that only one CMD is allowed, I have the following hack:
FROM nginx:mainline-alpine
CMD nginx & busybox syslogd -n
This works, locally at least, but I'm wondering what the proper solution is. By default the container already symlinks the log files to stdout and stderr, but I don't want to use Docker's syslog logging driver, because the application will be deployed to Kubernetes, so I need a self-contained solution that will work in the pod. Thank you!
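(For reference, and only as a sketch rather than a canonical answer: one common pattern for starting a background daemon next to the main process is a small start script, left as CMD so the image's own entrypoint still runs:)
FROM nginx:mainline-alpine
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]
with start.sh:
#!/bin/sh
# busybox syslogd forks into the background by default
busybox syslogd
# exec makes nginx PID 1 so it receives container signals
exec nginx -g 'daemon off;'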
Have your container log to stdout, but collect the logs elsewhere.
One option is to configure Docker itself to send container logs to syslog:
docker run --log-driver=syslog --log-opt syslog-address=udp://... ... nginx
Since the Docker daemon itself is configuring this, the syslog-address needs to be something that can be reached from the host. If you're running syslogd in a separate container, this option needs to point at a published port.
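(If you want this for every container on the host, the same settings can also go in /etc/docker/daemon.json instead of on each docker run; a sketch with a placeholder address, which takes effect after restarting the Docker daemon:)
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://syslog-host:514"
  }
}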
Another option is to use the standard Docker JSON-format logging, but use another tool to forward the logs to somewhere else. This has the downside of needing an additional tool, but the upside of docker logs working unmodified. Fluentd is a prominent open-source option. (Logstash is another, but doesn't seem to directly have a Docker integration.)
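(Docker also ships a fluentd log driver, so the per-container setup for that route might look like this, assuming a Fluentd instance listening on the default forward port 24224:)
docker run --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag=nginx \
  nginx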
I'm looking for the appropriate way to monitor application logs produced by nginx, Tomcat, and Spring Boot running in Docker, using Filebeat and ELK.
In the container model, a container should be used for only one purpose.
One nginx per container and one Tomcat per container, meaning we can't run an additional Filebeat inside an nginx or Tomcat container.
From what I have read on the Internet, we could have the following setup:
a dedicated volume for storing logs
an nginx container that mounts the dedicated logs volume
a Tomcat / Spring Boot container that mounts the dedicated logs volume
a Filebeat container that also mounts the dedicated logs volume
This works fine, but when it comes to scaling out the nginx and Spring Boot containers, it gets a little more complex for me.
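(For reference, a rough docker-compose sketch of the shared-volume setup described above; the image names and mount paths are illustrative:)
version: "3.8"
services:
  nginx:
    image: nginx
    volumes:
      - logs:/var/log/nginx
  api:
    image: my-spring-api        # hypothetical application image
    volumes:
      - logs:/var/log/app
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.17.9
    volumes:
      - logs:/var/log/shared:ro # read-only view of everything written
volumes:
  logs:
Note that both application services see the same volume root, so their log filenames must not collide, which is exactly the per-container-filename concern raised below.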
Which pattern should I use to push my logs to Logstash with Filebeat, given the following configuration:
several load-balanced nginx containers with the same configuration (the log configuration is the same: same path)
several Spring Boot REST API containers behind the nginx containers, also with the same configuration (the log configuration is the same: same path)
Should I create one volume per set of nginx + Spring Boot REST API containers and add a Filebeat container for each set?
Or should I create a single global log volume shared by all my containers, with a different log filename per container (the container name in the log filename?), and have only one Filebeat container?
In the second proposal, how would I scale Filebeat?
Is there another way to do this?
Many thanks for your help.
The easiest thing to do, if you can manage it, is to set each container process to log to its own stdout (you might be able to specify /dev/stdout or /proc/1/fd/1 as a log file). For example, the Docker Hub nginx Dockerfile specifies
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
so the ordinary nginx logs become the container logs. Once you do that, you can plug in the Filebeat container input to read those logs and process them. You could also see them from outside the container with docker logs; they are the same logs.
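(A sketch of what that Filebeat configuration might look like; the container-log path below is the Linux default, which the Filebeat container would need mounted, and the Logstash endpoint is a placeholder:)
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log
output.logstash:
  hosts: ["logstash:5044"]   # hypothetical Logstash endpoint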
What if you have to log to the filesystem? Or there are multiple separate log streams you want to be able to collect?
If the number of containers is variable, but you have good control over their configuration, then I'd probably set up a single global log volume as you describe and use the filebeat log input to read every log file in that directory tree.
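(Roughly, assuming the shared volume is mounted at /var/log/shared inside the Filebeat container and each service logs into its own subdirectory:)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/shared/*/*.log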
If the number of containers is fixed, then you can set up a volume per container and mount it in each container's "usual" log storage location. Then mount all of those directories into the filebeat container. The obvious problem here is that if you do start or stop a container, you'll need to restart the log manager for the added/removed volume.
If you're actually on Kubernetes, there are two more possibilities. If you're trying to collect container logs out of the filesystem, you need to run a copy of filebeat on every node; a DaemonSet can manage this for you. A Kubernetes pod can also run multiple containers, so your other option is to set up pods with both an application container and a filebeat "sidecar" container that ships the logs off. Set up the pod with an emptyDir volume to hold the logs, and mount it into both containers. A template system like Helm can help you write the pod specifications without repeating the logging sidecar setup over and over.
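(A trimmed-down sketch of such a pod; the image names and mount paths are placeholders:)
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging
spec:
  volumes:
    - name: logs
      emptyDir: {}
  containers:
    - name: app
      image: my-app:latest          # hypothetical application image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app   # the app writes its log files here
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:7.17.9
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true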
We're trying to set up a GitLab Runner, which is responsible for building and testing our web application. For running the jobs we use the Docker executor with Docker-in-Docker (DinD).
Our problem: when trying to access certain services from inside the Runner container (a Docker image), we get a timeout and no response. This includes:
logging in to our own Docker registry, which is hosted on the same system
wget on our domain (which is also hosted on the same system)
What we can do:
ping our domain as well as the registry
ping other domains
wget other domains
Logging into the registry and running wget against our domain both succeed when tried natively on the server rather than in a Docker container.
So it looks like it may be a Docker problem.
Hope someone can help us.
I am running Buildbot, a CI tool, on an EC2 machine. It's currently running as two Docker containers: one for the Buildbot master and one for the Buildbot worker. Inside the Buildbot worker, I have to run Docker again for building images and running containers.
After doing some research on how best to do this, I have mounted the Docker socket file from the host machine into the Buildbot worker container. Now, from inside the Buildbot worker, I am able to connect to the host Docker daemon and use the build cache.
The main problem now is that inside the Buildbot worker I have a docker-compose file in which, for one service, I am mounting a file like this:
./configs/my.cnf:/etc/my.cnf
but it is failing. After some more research, it's because configs/my.cnf is relative to the Buildbot worker directory, and since I am using the host Docker daemon, which resolves files using host paths, it cannot find the file.
I am not able to figure out how best to do this. There were some suggestions about using data volumes for this, but I am not sure how best to use those.
Any ideas on how we can do this?
Do you have any control over the creation of the Buildbot worker? Can you control the Buildbot worker directory?
export BUILD_BOT_DIR=$(mktemp -d) && \
docker container create \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "${BUILD_BOT_DIR}:${BUILD_BOT_DIR}" \
  -e BUILD_BOT_DIR ...
In this scenario, the path './configs/my.cnf' resolves to the same file in both the container and on the host.
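(Illustratively, if the worker then checks out and runs compose from somewhere under ${BUILD_BOT_DIR}, relative bind mounts resolve identically for the host daemon:)
cd "${BUILD_BOT_DIR}/build"   # hypothetical checkout location inside the shared path
docker-compose up -d          # ./configs/my.cnf now resolves to the same
                              # absolute path on the host and in the worker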
I'm trying to do an automatic deploy, so...
I have a .sh script to automatically pull docker images, for example:
docker pull mongo
docker stop db
docker rm db
docker run --name db -d mongo
And I am waiting for a POST request to trigger it.
So I have a container (with nginx) acting as a server. But I have to run that script outside the container, because it needs to be able to update any container.
Is that possible? If so, how?
It sounds to me like you are looking for the Docker UNIX socket. See some explanation here (it might be best to scroll down to the 'The Solution' part of that page).
Basically, you would start your nginx container with the UNIX socket mounted. This allows you to use the docker command from inside the nginx container to manage its sibling containers.
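(A sketch of that docker run; the image name is a placeholder for an nginx image that also has the docker CLI installed, so your deploy script can call it:)
docker run -d --name deploy-server \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-nginx-with-docker-cli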
Important security note:
Using the UNIX socket is a definite security issue, especially if you expose it to the wider web. See [1] and [2]. Other alternatives might include Docker-in-Docker, though I am not certain that's suitable for your case here. Docker did publish a blog post on how to secure the UNIX socket here, if that is the path you want to take.