Is it possible to run ELK as a Docker service?

I am new to the Docker world and am trying to run the Elasticsearch (ELK) stack on Docker. I am able to start ELK as a single container and it works perfectly:
docker run -v /var/lib/docker/volumes/elk-data:/var/lib/elasticsearch \
-v /var/lib/docker/volumes/elk-data:/var/log/elasticsearch \
-p 5601:5601 -p 9200:9200 -p 5044:5044 \
--name elk sebp/elk
I am using journalbeat to forward metrics to the Elasticsearch service and visualize them in Kibana.
I was able to run journalbeat as a service using the following command:
sudo docker service create --replicas 2 \
--mount type=bind,source=/opt/apps/shared/dev/docker/volumes/journalbeat/config/journalbeat.yml,target=/journalbeat.yml \
--mount type=bind,source=/run/log/journal,target=/run/log/journal \
--mount type=bind,source=/etc/machine-id,target=/etc/machine-id \
--constraint node.labels.nodename==devlabel \
--name journalbeat-svc mheese/journalbeat:v5.5.2
Is there a way to run ELK as a service, so that we can start two containers, one on the Swarm manager node and the other on a worker node?

An example of running the full ELK stack as separate docker containers is available here: https://github.com/elastic/examples/tree/master/Miscellaneous/docker/full_stack_example
This uses docker-compose so you can easily bring the containers up and down.
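With the compose file from that repository checked out, bringing the stack up and down looks like this (assuming docker-compose is installed and you are in the example's directory):
docker-compose up -d
docker-compose down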

ELK means Elasticsearch, Logstash, and Kibana, so there are three services that must be running. In Docker Swarm a service has zero or more replicas, and every replica is a container based on the same image.
So, in order to run ELK as a single service you would have to start Elasticsearch, Logstash, and Kibana in the same container. Although this is theoretically possible, it is not recommended (there should be one process per container).
Instead, you should create three services, one each for Elasticsearch, Logstash, and Kibana.
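A minimal sketch of those three services on a shared overlay network (the image tags, network name, published ports, and the Kibana environment variable are assumptions to adapt to your setup):
docker network create --driver overlay elk-net
docker service create --name elasticsearch --network elk-net -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.5.2
docker service create --name logstash --network elk-net -p 5044:5044 docker.elastic.co/logstash/logstash:5.5.2
docker service create --name kibana --network elk-net -p 5601:5601 -e ELASTICSEARCH_URL=http://elasticsearch:9200 docker.elastic.co/kibana/kibana:5.5.2
As with your journalbeat command, --replicas and --constraint can then control how many tasks run and on which nodes.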

Related

Why is one Docker container not able to access another container?

I have 3 Docker applications (containers), one of which communicates with the other two. If I run the containers using the commands below, container 3 is able to access containers 1 and 2.
docker run -d --network="host" --env-file container1.txt -p 8001:8080 img1:latest
docker run -d --network="host" --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --network="host" --env-file container3.txt -p 8000:8080 img3:latest
But this works only with the host network. If I remove the --network="host" option, then I am not able to access the application from outside (in a web browser). In order to access it from outside, I need to make the host port and container port the same, as below:
docker run -d --env-file container1.txt -p 8001:8001 img1:latest
docker run -d --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --env-file container3.txt -p 8000:8000 img3:latest
With the above commands I am able to access my application in a web browser, but container 3 is not able to communicate with container 1. Container 3 can access container 2 because host port and container port 8080 are both exposed there, but I can't expose host port 8080 a second time for container 3.
How can I resolve this issue?
Ultimately, my goal is for this application to be accessible in a browser without using the host network; it should use a bridge network, and container 3 needs to communicate with containers 1 and 2.
On user-defined networks, containers can not only communicate by IP address but can also resolve a container name to an IP address. This capability is called automatic service discovery.
Read the Docker documentation on container networking for more details.
You can perform the following steps to achieve the desired result.
Create a private bridge network:
docker network create --driver bridge private-net
Now start your application containers with --network private-net added to your docker run commands:
docker run -d --env-file container1.txt -p 8001:8001 --network private-net img1:latest
docker run -d --env-file container2.txt -p 8080:8080 --network private-net img2:latest
docker run -d --env-file container3.txt -p 8000:8000 --network private-net img3:latest
This way, all three containers will be able to communicate with each other and with the internet.
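For example, if you also give the containers explicit names (the names below are illustrative, and the in-container port depends on what each app actually listens on), container 3 can reach container 1 by name through Docker's embedded DNS:
docker run -d --name app1 --env-file container1.txt -p 8001:8080 --network private-net img1:latest
# from a shell inside container 3, "app1" resolves to container 1's IP:
# curl http://app1:8080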
In this case, when you use --network=host, you are telling Docker not to isolate the container's network but to use the host's network instead. So all the containers are on the same network and can communicate with each other without any issues. However, when you remove --network=host, Docker isolates the containers' networks again, thereby preventing container 3 from communicating with container 1.
You will need some sort of orchestration tool, such as Docker Compose or Docker Swarm.

How to integrate Elassandra into my JHipster app using the Docker image?

I want to integrate Elassandra into my JHipster app, in which I'm using Cassandra as the database.
I'm following the official Elassandra installation process with the Docker image, but it is unclear which container name has to be used in which command.
here is official link: http://doc.elassandra.io/en/latest/installation.html#docker-image
Also, port 9200 is not reachable for me.
docker run --name some-elassandra -d strapdata/elassandra
docker run --name some-app --link some-elassandra:elassandra -d app-that-uses-elassandra
Elasticsearch ports 9200 and 9300 are exposed for communication between containers.
So when an Elassandra container has been started like this:
docker run --name some-elassandra -d strapdata/elassandra
contacting the REST API could be done with something like:
docker run -it --link some-elassandra --rm strapdata/elassandra curl some-elassandra:9200
If it is still not working, make sure you have pulled a recent version of the image, and feel free to open an issue on the GitHub repository strapdata/docker-elassandra.
This Elassandra image is based on the official Cassandra image; you may refer to its documentation for advanced setup.
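Note that EXPOSE in the image only makes ports 9200 and 9300 reachable from other containers; to reach Elasticsearch from the host itself (one possible reason port 9200 seems not enabled), publish the port explicitly, for example:
docker run --name some-elassandra -p 9200:9200 -d strapdata/elassandra
curl http://localhost:9200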

Docker Networking - to link or not to link?

We have thousands of Python unit tests, and to run them efficiently we parallelize them in batches. Each batch has its own Docker environment consisting of the core application and a mongo instance. It's set up something like this:
docker network create --driver bridge ut_network-1
docker network create --driver bridge ut_network-2
docker run -d --name mongo-unittest-1 --network ut_network-1 mongo:3.4.2
docker run -d --name mongo-unittest-2 --network ut_network-2 mongo:3.4.2
docker run --rm --network ut_network-{%} --link mongo-unittest-{%}:db untapt_ut python discover.py
The connection string is "mongodb://db:27017/mydb"
{%} is the number associated with the environment - so on ut_network-1, the database would be mongo-unittest-1. Note the alias to 'db'.
This works fine, but I read that --link will be deprecated.
I thought the solution would be as simple as removing --link and setting the hostname:
docker run -d --hostname db --network ut_network-1 mongo:3.4.2
docker run -d --hostname db --network ut_network-2 mongo:3.4.2
docker run --rm --network ut_network-{%} untapt_ut python discover.py
However, if I do this then the application cannot find the mongo instance. Further:
I can't use --name db, because Docker would attempt to create multiple containers called 'db', which it obviously cannot do (even though they are on different networks).
The default hostname of the mongo container is the first few digits of the container ID. My unit tests all get the Mongo connection string from a secrets file, which assumes the database host is called 'db'.
As I said, if I use --hostname db the core app cannot find the mongo instance, but if I hard-code the container ID as the server, the core application finds the mongo instance fine.
I want to keep the alias 'db' so the unit tests can use one single source for the mongo database string that I don't need to mess with.
The documentation here implies I can use --link.
So am I doing this correctly? Or if not, how should I configure Docker networking such that I can create multiple networks and alias a 'static' hostname for 'db'?
Any advice would be much appreciated.
Thanks in advance!
Yes, links are being deprecated and should be avoided. For DNS discovery I thought the hostname would work, but I'm seeing the same results you are. You could use the container name with --name db, but that hits the unique-name issue, so I recommend against it for the same reasons you've found. The best solution is to go directly to the goal with a network alias, --network-alias db:
docker run -d --network-alias db --network ut_network-1 mongo:3.4.2
docker run -d --network-alias db --network ut_network-2 mongo:3.4.2
docker run --rm --network ut_network-{%} untapt_ut python discover.py
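You can check that the alias resolves independently on each network (a quick test using the busybox image as a throwaway container):
docker run --rm --network ut_network-1 busybox nslookup db
docker run --rm --network ut_network-2 busybox nslookup db
Each lookup should return the IP of the mongo container on that particular network.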

Docker reverse proxy doesn't work when changing network (via --net)

docker version: 17.05.0-ce
I have some containers that I started by hand using docker run ..., but recently, for a new project, I created a docker-compose.yml file based on this tutorial. However, when I run the following commands on my host:
docker network create --driver bridge reverse-proxy
docker-compose up
and
docker run -d --name nginx-reverse-proxy --net reverse-proxy -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
The proxy does not work for the old containers, and I am unable to use subdomains for those projects (they "stop working").
So what should I do?
I experimented with the --net parameter in docker run ... and used docker network inspect network_name. I got many different results, like "welcome to nginx", "HTTP 404 Not Found", or "HTTP 503 Service Temporarily Unavailable", and came to the following conclusions:
if there is no --net option, the container runs on the bridge network
if --net xxx is given, the container runs only on the 'xxx' network (not on bridge!)
if --net xxx --net yyy is given, the container runs only on 'yyy' (not on 'xxx' at all!)
bridge is the default Docker network for inter-container communication.
So when we run the proxy with only --net reverse-proxy, the proxy container does not see bridge and cannot communicate with the other containers. If we try --net reverse-proxy --net bridge (passing the flag two or more times on one line, as with -p), the container is connected only to the last network.
So the solution is to run the proxy in the following way:
docker run -d --name nginx-reverse-proxy -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker network connect reverse-proxy reverse-proxy
As you can see, we do not use the --net option at all. The docker network connect command allows a container to be connected to multiple networks. When you execute docker network inspect reverse-proxy and docker network inspect bridge, you will see that nginx-reverse-proxy is in both networks :)
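To confirm this from the command line, the inspect commands can be given a --format template that lists only the attached container names:
docker network inspect --format '{{range .Containers}}{{.Name}} {{end}}' reverse-proxy
docker network inspect --format '{{range .Containers}}{{.Name}} {{end}}' bridge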

Logging from one docker container to another

I think I am missing something about linking Docker containers.
I have two containers, one running Jenkins and one running the ELK stack.
From the host I can easily get logs to flow to ELK, and linking the Jenkins container to the ELK one via --link gets some generic events into the ELK stack.
But what I really want is for the Jenkins container (via the Jenkins Notification plugin) to log builds into ELK. No matter what I try, TCP or HTTP, on the port I use on the Docker host, nothing shows up.
On the host, port 3333 is the input to the ELK container (3333 is the port for Logstash).
From the Docker host I can just do something like echo "hello new World" | nc localhost 3333 and ELK picks it up.
I am starting elk first with this:
docker run -d --name elk-docker -p 8686:80 -p 3333:3333 -p 9200:9200 elk-docker
Then Jenkins with this:
docker run -p 8585:8080 -v $PWD/docker/jenkins/jenkins_home:/var/lib/jenkins -t jenkins-docker
I have also tried linking the two, with no success:
docker run -p 8585:8080 --link elk-docker:elk -v $PWD/docker/jenkins/jenkins_home:/var/lib/jenkins -t jenkins-docker
In Jenkins I have the notification plugin installed, and I was trying to use simple TCP to port 3333 to get the main events of the Jenkins job showing up in ELK, using the URL 172.17.0.5:3333 (172.17.0.5 is the IP of the Logstash container).
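One thing worth checking (a suggestion, not part of the original setup): container IPs like 172.17.0.5 can change between restarts, and with the --link elk-docker:elk flag above the alias elk should resolve from inside the Jenkins container, so the plugin could target elk:3333 instead of a hard-coded IP. A quick test from inside the Jenkins container, assuming nc is available in that image:
docker exec -it <jenkins-container> sh -c 'echo "hello from jenkins" | nc elk 3333'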
