tl;dr? Jump straight to the Question ;)
Context & Architecture
In this application, designed with a micro-service architecture in mind, there are notably two components:
monitor: probes system metrics and reports them via HTTP
controller: reads the metrics reported by monitor and takes actions according to rules defined in a configuration file.
+------------------------------------------------------+
| host / |
+-----/ |
| |
| +-----------------+ +-------------------+ |
| | monitor / | | controller / | |
| +--------/ | +-----------/ | |
| | +----------+ | | +-------------+ | |
| | | REST :80 |>--+--------+->| application | | |
| | +----------+ | | +-------------+ | |
| +-----------------+ +-------------------+ |
| |
+------------------------------------------------------+
Trouble with Docker
The only way I found for monitor to read network statistics that are not confined to its own Docker network stack was to start its container with --network=host. The following question assumes this is the only solution. If (fortunately) I am mistaken, please answer with an alternative.
version: "3.2"
services:
monitor:
build: monitor
network_mode: host
controller:
build: controller
network_mode: host
Question
Is there a way for monitor to serve its report on a docker network even though it reads statistics from the host network stack?
Or, is there a way for controller to not be on --network=host even though it connects to monitor which is?
(note: I use docker-compose but a pure docker answer suits me)
When I run docker-compose up, I get these logs:
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:46Z","tags":["reporting","browser-driver","warning"],"pid":6,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:46Z","tags":["reporting","warning"],"pid":6,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:46Z","tags":["status","plugin:reporting#7.3.1","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:46Z","tags":["info","task_manager"],"pid":6,"message":"Installing .kibana_task_manager index template version: 7030199."}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:46Z","tags":["info","task_manager"],"pid":6,"message":"Installed .kibana_task_manager index template: version 7030199 (API version 1)"}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:47Z","tags":["info","migrations"],"pid":6,"message":"Creating index .kibana_1."}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:47Z","tags":["info","migrations"],"pid":6,"message":"Pointing alias .kibana to .kibana_1."}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:47Z","tags":["info","migrations"],"pid":6,"message":"Finished in 254ms."}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:47Z","tags":["listening","info"],"pid":6,"message":"Server running at http://0:5601"}
Is there some configuration I can use so that it only emits JSON? I am looking for it to omit the "kibana_1 | " prefix before each line.
And of course, ideally it could fold that into the JSON, like {"source":"kibana_1", ...}
Note: I'm not sure whether docker-compose supports this out of the box, but you can look at Docker logging drivers.
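For reference, a logging driver is configured per service in the compose file. A minimal sketch (the max-size value here is purely illustrative):

```yaml
services:
  kibana:
    logging:
      driver: json-file   # the default driver; stores each log line on disk as JSON
      options:
        max-size: "10m"   # illustrative option: rotate the log file after 10 MB
```

Note that this controls what `docker logs` stores, not the "kibana_1 | " prefix, which docker-compose itself adds when multiplexing the output of several services.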
What you could do is pipe the output of docker-compose logs -f through the cut command.
Here is an example:
docker-compose logs -f kibana | cut -d"|" -f2
..
{"type":"log","#timestamp":"2019-08-11T03:44:01Z","tags":["status","plugin:xpack_main#6.8.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","#timestamp":"2019-08-11T03:44:01Z","tags":["status","plugin:graph#6.8.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","#timestamp":"2019-08-11T03:44:01Z","tags":["status","plugin:searchprofiler#6.8.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
..
The cut -d"|" -f2 command splits each line on the | character and outputs the second field, i.e. everything after the first |.
You can take it a step further (although I'm sure there are better ways to do this) by deleting the leading space. Note, though, that cut -d" " -f2 splits on every space, so the JSON gets truncated at its first internal space, as the output below shows:
docker-compose logs -f kibana | cut -d"|" -f2 | cut -d" " -f2
..
{"type":"log","#timestamp":"2019-08-11T03:47:53Z","tags":["status","plugin:maps#6.8.1","error"],"pid":1,"state":"red","message":"Status
{"type":"log","#timestamp":"2019-08-11T03:47:53Z","tags":["status","plugin:index_management#6.8.1","error"],"pid":1,"state":"red","message":"Status
{"type":"log","#timestamp":"2019-08-11T03:47:53Z","tags":["status","plugin:index_lifecycle_management#6.8.1","error"],"pid":1,"state":"red","message":"Status
..
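A way around the truncation is sed instead of cut. This is a sketch that assumes every line has the `name | ` prefix shape shown in the logs above:

```shell
# Strip the "kibana_1 | " prefix (and the space after the |) in one pass,
# without touching spaces inside the JSON:
docker-compose logs -f kibana | sed 's/^[^|]*| //'

# Or fold the service name into the JSON itself, e.g.
# 'kibana_1 | {"type":"log"}'  ->  '{"source":"kibana_1","type":"log"}':
docker-compose logs -f kibana | sed -E 's/^([^ |]+)[[:space:]]*\| \{/{"source":"\1",/'
```

The second expression captures the service name before the |, then rewrites the opening { of the JSON object to inject a "source" key.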
I am new to Docker. I am trying to dockerize an existing Tomcat-based application with the directory structure below:
ProductHomeDir
|
|
|----Tomcat
| |
| |---webapp
| |
| |---feature.war
| |---support.war
|
|
|----versionDir
|
|----ConfDir
|
|----log4j.properties
Is it possible to create a Docker image for the above structure?
Thanks
You can copy the whole ProductHomeDir into the image using the COPY instruction in a Dockerfile (https://docs.docker.com/engine/reference/builder/#copy),
then use CMD in the Dockerfile to run the startup script inside the container.
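As a sketch, a Dockerfile for the tree above could look like this (the tomcat:9 base image and the target paths are assumptions; adjust them to your Tomcat version and layout):

```dockerfile
# Assumption: the official Tomcat image; change the tag to match your version.
FROM tomcat:9

# Deploy both applications into Tomcat's webapps directory.
COPY ProductHomeDir/Tomcat/webapp/feature.war /usr/local/tomcat/webapps/
COPY ProductHomeDir/Tomcat/webapp/support.war /usr/local/tomcat/webapps/

# Ship the log4j configuration as well.
COPY ProductHomeDir/ConfDir/log4j.properties /usr/local/tomcat/conf/

# The base image's default CMD already starts Tomcat (catalina.sh run),
# so add a CMD here only if you have a custom startup script.
```

Build and run it with `docker build -t myapp .` and `docker run -p 8080:8080 myapp`.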
I am trying to set up Minikube and have run into a problem. Minikube is set up, and I started the Nginx pod. I can see that the pod is up, but the service doesn't appear as active. On the dashboard too, although the pod appears, the deployment doesn't show up. Here are my PowerShell command outputs.
I am learning this technology and may have missed something. My understanding is that when using Docker tools, no explicit configuration is necessary at the Docker level other than setting it up. Am I wrong here? If so, where?
relevant PS output
Let's deploy the hello-nginx deployment
C:\> kubectl.exe run hello-nginx --image=nginx --port=80
deployment "hello-nginx" created
View List of pods
c:\> kubectl.exe get pods
NAME READY STATUS RESTARTS AGE
hello-nginx-6d66656896-hqz9j 1/1 Running 0 6m
Expose as a Service
c:\> kubectl.exe expose deployment hello-nginx --type=NodePort
service "hello-nginx" exposed
List exposed services using minikube
c:\> minikube.exe service list
|-------------|----------------------|-----------------------------|
| NAMESPACE | NAME | URL |
|-------------|----------------------|-----------------------------|
| default | hello-nginx | http://192.168.99.100:31313 |
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| kube-system | kubernetes-dashboard | http://192.168.99.100:30000 |
|-------------|----------------------|-----------------------------|
Access Nginx from browser http://192.168.99.100:31313
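Incidentally, the URL from the service list can also be fetched on its own via minikube's --url flag (using the same service name as in the transcript above), which is handy for scripting:

```shell
# Print only the URL of the hello-nginx NodePort service, without opening a browser
minikube service hello-nginx --url
```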
This method can be used. It worked for me on CentOS 7:
$ systemctl enable nginx
$ systemctl restart nginx
or
$ systemctl start nginx
For a study I deployed a cloud architecture on my computer using Docker (Nginx for load balancing and some Apache servers running a simple PHP application).
I wanted to know whether it is possible to use several computers to deploy my containers in order to increase the available capacity.
(I'm using a MacBook Pro with Yosemite. I've installed boot2docker with VirtualBox.)
Disclosure: I was a maintainer on Swarm Legacy and Swarm mode
Edit: This answer mentions Docker Swarm Legacy, the first version of Docker Swarm. Since then a new version called Swarm mode was directly included in the docker engine and behaves a bit differently in terms of topology and features even though the big ideas remain.
Yes, you can deploy Docker on multiple machines and manage them all together as a single pool of resources. There are several solutions you can use to orchestrate your containers across multiple machines.
You can use Docker Swarm, Kubernetes, Mesos/Marathon, or Fleet (there might be others, as this is a fast-moving area). There are also commercial solutions like Amazon ECS.
In the case of Swarm, it uses the Docker remote API to communicate with the remote Docker daemons and schedules containers according to the load or some extra constraints (the other systems are similar, with more or fewer features). Here is an example of a small Swarm deployment:
Docker CLI
+
|
|
| 4000 (or else)
| server
+--------v---------+
| |
+------------> Swarm Manager <------------+
| | | |
| +--------^---------+ |
| | |
| | |
| | |
| | |
| client | client | client
| 2376 | 2376 | 2376
| | |
+---------v-------+ +--------v--------+ +--------v--------+
| | | | | |
| Swarm Agent | | Swarm Agent | | Swarm Agent |
| Docker | | Docker | | Docker |
| Daemon | | Daemon | | Daemon |
| | | | | |
+-----------------+ +-----------------+ +-----------------+
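For concreteness, a topology like the one above was typically brought up with Swarm Legacy roughly as follows. This is a sketch only; the <...> placeholders and the choice of a Consul discovery backend are assumptions:

```shell
# On the manager machine (port 4000, as in the diagram):
docker run -d -p 4000:4000 swarm manage -H :4000 \
    --advertise <manager-ip>:4000 consul://<consul-ip>:8500

# On each node, whose Docker daemon listens on 2376:
docker run -d swarm join --advertise=<node-ip>:2376 consul://<consul-ip>:8500

# Then point the regular Docker CLI at the manager instead of a single daemon:
docker -H <manager-ip>:4000 info
```

From there, ordinary `docker run` commands issued against the manager are scheduled onto one of the agents.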
Choosing one of those systems is basically a choice between:
Cluster deployment simplicity and maintenance
Flexibility of the scheduler
Completeness of the API
Support for running VMs
Higher abstraction for groups of containers: Pods
Networking model (Bridge/Host or Overlay or Flat network)
Compatibility with the Docker remote API
It depends mostly on the use case and what kinds of workload you are running. For more details on the differences between those systems, see this answer.
That sounds like clustering, which is what docker swarm does (see its github repo).
It turns a pool of Docker hosts into a single, virtual host.
See for example issue 247: How replication control and load balancing being taken care of?