Graylog stream rule with application running on Docker

I have an application running in a Docker container that logs to our Graylog server; however, the Graylog source field is actually the container ID:
source: 97c0212d3d75
Since the container ID changes frequently, I cannot use the source field in my stream rules.
I had a look at the message fields and it seems there is not much I can rely on to create stream rules for this application.
Can someone please share some experience on this case? My problem here is that I cannot identify the application nor environment.
I am looking for ideas like:
Is there a way to make the container ID static? (probably not)
Is there a way to send more information to Graylog without making code changes or hard-coding specific values in the application?
Any better ideas?

I just realised I can set the hostname on my Docker containers, so adding the following to my docker-compose file should work:
hostname: billing-rq-${ENV_NAME}
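For context, a sketch of how that might sit in the compose file; the service name and image below are assumptions, only the hostname line comes from the answer above:
version: "3"
services:
  billing-rq:                          # hypothetical service name
    image: mycompany/billing-rq:latest # hypothetical image
    hostname: billing-rq-${ENV_NAME}   # becomes the Graylog source field
A stream rule such as "source must match regular expression ^billing-rq-" (or an exact match per environment, e.g. billing-rq-prod) should then catch these messages regardless of which container ID is currently running.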

Related

Docker - Accessing host from private network

My team and I have a lot of small services that work with each other to get the job done. We came up with some internal tooling and some solutions that work for us, however we are always trying to improve.
We have created a docker setup where we do something like this:
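For illustration, a minimal docker-compose sketch of such a setup (service-b and service-c with ports 6002/6003 come from the question; service-a, the images and the network name are assumptions):
version: "3"
services:
  service-a:
    image: service-a            # assumed image name
    ports:
      - "6001:6001"             # reachable from the host as localhost:6001
    networks:
      - internal
  service-b:
    image: service-b
    ports:
      - "6002:6002"
    networks:
      - internal
  service-c:
    image: service-c
    ports:
      - "6003:6003"
    networks:
      - internal
networks:
  internal:                     # private network; services resolve each other by name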
That is, we have a private network in place where the services can call each other by name and that works.
In this situation, if service-a tried to call service-b as http://localhost:6002 it wouldn't work; it has to use the service name instead.
We have a scenario where we want to work on one module and run the rest on Docker. So we could run service-a directly out of our IDE, for example, and leave service-b and service-c on Docker. The last service (service-c) we would then reference as localhost:6003 from the host network.
This works fine!
However, things can get "out of hand" the further we go down the line. In the example we only have 3 services, but our longest chain is more like 6 services. Supposing I want to work on the one before the last, I have to start all the services that come before it in order to simulate a complete chain. (In most cases we work through APIs, which renders the point moot, but bear with me.)
The QUESTION
Is there a way to allow a situation such as this one?
To run some of the services as Docker containers and maybe one service from my IDE, for example?
Or would it be necessary to put all of them on the host network so they can all call each other through localhost?
I appreciate any help!
You can use --add-host service-b:$(hostname -i) to push the host's IP address into a container, so the container doesn't need to know which service is not running in Docker.
You could set up your docker-compose file to accept an argument for WHICH service you want to run on the host...
extra_hosts:
  - "${HOST_SVC:-fakehost}:${HOST_IP}"
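In context that could look something like the following sketch; the surrounding service definition is an assumption, and HOST_SVC/HOST_IP would be exported before running compose (for example HOST_SVC=service-b HOST_IP=$(hostname -i)):
services:
  service-a:
    image: service-a
    networks:
      - internal
    extra_hosts:
      # resolve the name of whichever service runs on the host to the host's IP
      - "${HOST_SVC:-fakehost}:${HOST_IP}"
With HOST_SVC=service-b, calls from service-a to http://service-b:6002 would reach the copy running in your IDE instead of a container.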
But honestly, the easiest solution is probably just to set up the service you want to work on with the source code mounted in, run them all in Docker, and restart that container as needed.
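A sketch of that mounted-source approach, assuming a Node-based service-b with its code in ./service-b (the paths, image and command are assumptions):
services:
  service-b:
    image: node:18              # assumed runtime image
    working_dir: /app
    volumes:
      - ./service-b:/app        # live source mounted from the host
    command: npm start
    networks:
      - internal
Code changes made on the host are visible inside the container immediately, and restarting just that one container picks them up.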

logging nginx events from a docker container managed by kubernetes

Currently, to my understanding, Kubernetes offers no logging solution of its own, and it also does not allow one to specify the logging driver when using Docker as the container technology, due to scope-encapsulation concerns.
This leaves folks with the ugly solution of tailing JSON logs from shared volumes using fluentd, filebeat, or some other file-tailing daemon, parsing them, and then sending them to the desired storage backend.
My question is: is there any repo or public store of configs for this kind of scenario, from people who have gone through this before? My use case involves tailing the logs of an nginx Docker image, and writing out the fluentd/grok patterns myself seems really painful; besides, I wouldn't want to struggle with an issue already solved by someone else.
Thanks
We tried LogDNA and the integration with k8s is pretty solid. Most of the time I just tail the log of some container using kubectl logs -f [POD_NAME]. I'm guessing you're looking for a persistent approach.
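For the file-tailing route the question describes, the usual pattern is a log-collector DaemonSet that mounts the node's container log directories; a minimal sketch, with the image tag and the output/backend configuration left as assumptions to adapt:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          # assumption: pick a fluent/fluentd-kubernetes-daemonset tag built for your backend
          image: fluent/fluentd-kubernetes-daemonset
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: dockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: dockercontainers
          hostPath:
            path: /var/lib/docker/containers
The stock nginx image already sends its access and error logs to stdout/stderr, so they end up in the per-container JSON files this daemon tails, and the prebuilt fluentd images ship with Kubernetes-aware parsing, so you usually don't have to write the patterns yourself.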

How should I create Dockerfile to run multiple services through docker-compose?

I'm new to Docker. I wanted to create a Dockerfile to start services like RabbitMQ, an FTP server and Elasticsearch, but I'm not able to figure out where I should start.
I have asked a similar question here: How should I create a Dockerfile to run more than one services in one instance?
There I was told to create different containers: one for RabbitMQ, one for the FTP server and another for Elasticsearch, and to run them using a docker-compose file. You'll find my Dockerfile code there.
It will be great if someone can help me out with this thing. Thanks!
They are correct. Each container, and by extension each image, should be responsible for one concern, and that typically maps to a single process. So if you need to run more than one thing (more than one process, generally speaking, though not strictly), you most probably need to build separate images. One of the easiest and recommended ways of creating an image is writing a Dockerfile. This is expected to be an extremely simple process, and most of it will be a copy-paste of the same commands you would have used to install that component.
Once you write the Dockerfiles for each service, you build them using the docker build command, which results in the images.
When you run an image you get what is known as a container. Think of it roughly like this: an ISO file is the image, and the actual VM or running machine is the container.
Now you can use docker-compose to orchestrate these various containers so they can communicate with (or be isolated from) each other. A docker-compose.yml file is a plain-text file in YAML format that describes the relationships between the different components within the app. Apps can be made up of several services: a webserver, appserver, search engine, database server, cache engine, etc. Each of these is a service and runs as a container, but it is not necessary to run everything as a container; some can remain running in the traditional way, on VMs or on bare-metal servers.
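As a rough sketch of what such a docker-compose.yml could look like for the services mentioned in the question (the version tags, the third-party FTP image and the port choices are assumptions to adapt):
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management        # official image; management UI on 15672
    ports:
      - "5672:5672"
      - "15672:15672"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.10   # assumed version
    environment:
      - discovery.type=single-node      # dev-friendly single-node mode
    ports:
      - "9200:9200"
  ftp:
    image: stilliard/pure-ftpd          # assumed third-party FTP image
    ports:
      - "21:21"                         # passive-mode ports omitted for brevity
docker-compose up then starts all three, and your own application image, built from its Dockerfile, simply becomes another service that reaches these by name (rabbitmq, elasticsearch, ftp).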
I'll check your other post and add if there is anything needed. But I hope this helps you get started at least.

How exactly does docker work? (Theory)

I am venturing into using docker and trying to get a firm grasp of the product.
While I love everything it promises it is a big change from doing things manually.
Right now I understand how to build a container, attach your code, commit and push it to your repo.
But what I am really wondering is how I update my code once deployed. For example, I have some minor bug fixes but no change to dependencies, and I also run a database in the same container.
Container:
Node & NPM
Nginx
MySQL
PHP
Right now the only way I understand you can do it is to stop the container, pull the new image and run it again, but I am thinking you will lose database data.
I have been reading https://docs.docker.com/engine/tutorials/dockervolumes/
and thinking maybe the container mounts a data volume that persists between containers.
What I am trying to do is run a web app/website with the above container layout and just change code with latest bugfixes/features.
You're quite correct. Docker images are something you should be rebuilding and discarding with each update; avoid docker commit wherever possible (outside your build scripts, anyway).
Persistent state should be managed via data containers that you then mount with your image. Thus your "data" is decoupled from that specific version and instance of the application.
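As an illustration of that decoupling, a hedged compose sketch in which the MySQL data lives in a named volume, so the application container can be rebuilt and replaced without touching the database files (image tags, ports and the password handling are assumptions):
version: "3"
services:
  web:
    build: .                            # your app image; rebuild this on every code change
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      - MYSQL_ROOT_PASSWORD=example     # assumed; use proper secrets in real setups
    volumes:
      - dbdata:/var/lib/mysql           # persists across container recreation
volumes:
  dbdata:
Rebuilding and recreating the web container (for example with docker-compose up --build web) leaves the dbdata volume, and hence the database contents, untouched.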

How do I setup a docker image to dynamically pull app code from a repository?

I'm using Docker Cloud at the moment. I'm trying to figure out a development-to-production workflow using Docker with docker-compose to pull application code for multiple applications of the same type, simply changing the repository each one pulls from. I understand the concept of mounting a volume, but all the examples show the source code in the same repo as the Dockerfile and docker-compose file (example). I want the app code from this example to come from a remote, dynamic repo. Would I set an environment variable in the Docker image? If so, how?
Any example or link to a workflow example is appreciated.
If done right, the code "baked" into Docker images should be immutable, and the only things that should change at runtime are configurable parameters like environment variables (e.g. to set the port the app will listen on).
Ideally, you should bake your code into the image. Otherwise you're losing a lot of the benefit of using Docker in the first place.
The problem is..
.. your use case does not match the best practice. You want an image without any code embedded in it, with the code fetched at each update instead. If you browse Docker Hub you'll find many images named service:version. That's one of the benefits of Docker: offering different versions of the same service. If you want to always get the most up-to-date code, your workflow may have some downsides.
One solution could be
Webhooks, especially if your code is versioned on GitHub, or any continuous-integration tool.
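Tying those together: if CI builds and pushes a tagged image on every change (the registry name, image name and version variable below are assumptions), deploying new code means changing a tag rather than pulling code at runtime:
services:
  app:
    image: registry.example.com/myapp:${APP_VERSION:-latest}   # assumed registry and image name
    environment:
      - PORT=8080            # runtime configuration stays outside the image
    ports:
      - "8080:8080"
Running something like APP_VERSION=1.4.2 docker-compose up -d then rolls the service forward to that build; a webhook from the Git host can be what triggers the CI job that produces the tag.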
