When I run docker-compose up, I get these logs:
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:46Z","tags":["reporting","browser-driver","warning"],"pid":6,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:46Z","tags":["reporting","warning"],"pid":6,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:46Z","tags":["status","plugin:reporting#7.3.1","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:46Z","tags":["info","task_manager"],"pid":6,"message":"Installing .kibana_task_manager index template version: 7030199."}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:46Z","tags":["info","task_manager"],"pid":6,"message":"Installed .kibana_task_manager index template: version 7030199 (API version 1)"}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:47Z","tags":["info","migrations"],"pid":6,"message":"Creating index .kibana_1."}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:47Z","tags":["info","migrations"],"pid":6,"message":"Pointing alias .kibana to .kibana_1."}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:47Z","tags":["info","migrations"],"pid":6,"message":"Finished in 254ms."}
kibana_1 | {"type":"log","#timestamp":"2019-09-09T21:41:47Z","tags":["listening","info"],"pid":6,"message":"Server running at http://0:5601"}
Is there some configuration I can use so that it only spits out JSON? I am looking for it to omit the "kibana_1 | " part before each line.
And of course, ideally it could make that part of the JSON, like {"source":"kibana_1", ...}
Note: I'm not sure docker-compose supports this out of the box, but you can look at Docker logging drivers.
What you could do is pipe the output of docker-compose logs -f through the cut command. Here is an example:
docker-compose logs -f kibana | cut -d"|" -f2
..
{"type":"log","#timestamp":"2019-08-11T03:44:01Z","tags":["status","plugin:xpack_main#6.8.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","#timestamp":"2019-08-11T03:44:01Z","tags":["status","plugin:graph#6.8.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","#timestamp":"2019-08-11T03:44:01Z","tags":["status","plugin:searchprofiler#6.8.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
..
The cut -d"|" -f2 command will look for a | character and output everything after.
You can take it a step further (although I'm sure there are better ways to do this) by deleting the leading space. Note the trailing - in the second cut's -f2-: without it, cut would print only the second space-delimited word and truncate the JSON at its first space:
docker-compose logs -f kibana | cut -d"|" -f2 | cut -d" " -f2-
..
{"type":"log","#timestamp":"2019-08-11T03:47:53Z","tags":["status","plugin:maps#6.8.1","error"],"pid":1,"state":"red","message":"Status
{"type":"log","#timestamp":"2019-08-11T03:47:53Z","tags":["status","plugin:index_management#6.8.1","error"],"pid":1,"state":"red","message":"Status
{"type":"log","#timestamp":"2019-08-11T03:47:53Z","tags":["status","plugin:index_lifecycle_management#6.8.1","error"],"pid":1,"state":"red","message":"Status
..
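To fold the service name into the JSON itself, as in the {"source":"kibana_1", ...} idea from the question, a minimal sketch (assuming jq is installed, and that the JSON lines themselves contain no | character) is:
docker-compose logs --no-color -f kibana | while IFS='|' read -r source json; do
  [ -n "$json" ] || continue    # skip lines without a "name |" prefix
  # trim the padding around the service name, then merge it into the JSON object
  echo "$json" | jq -c --arg source "$(echo "$source" | xargs)" '. + {source: $source}'
done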
I'm trying to use the resources of other computers with python3-mpi4py, since my research involves a lot of computation.
My code and data are in a Docker container.
To use MPI I have to be able to SSH directly into the Docker container from other computers on the same network as the host. But I cannot SSH into it.
My setup looks like the diagram below:
+--------------------+                        +-----------------+
|        Host        | <- same network ->     | Other Computers |
|     port 10000     |                        |                 |
|         ^          |                        |                 |
|         |          |                        |                 |
|         v          |                        |                 |
|     port 10000     |                        |                 |
|  docker container  | <-------- ssh ---------+                 |
+--------------------+                        +-----------------+
Can anyone teach me how to do this?
You can run an SSH server on the host computer; then you can SSH to the host and use a docker command such as docker exec -i -t containerName /bin/bash to get an interactive shell.
example:
# 1. On Other Computers
ssh root@host_ip
>> enter into Host ssh shell
# 2. On Host ssh shell
docker exec -i -t containerName /bin/bash
>> enter into docker interactive shell
# 3. On docker interactive shell
do something.
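The example above goes through the host's shell. If, as in your diagram, the other computers must SSH straight into the container (which MPI generally requires), a common sketch is to run sshd inside the container and publish it on host port 10000. Note that mpi-node, my-mpi-image and user are placeholder names, and the image is assumed to have openssh-server installed and a login user configured:
# on the Host: publish container port 22 as host port 10000
docker run -d --name mpi-node -p 10000:22 my-mpi-image /usr/sbin/sshd -D
# on the other computers:
ssh -p 10000 user@host_ip
>> lands directly in the container's shell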
I am trying to get mount points and their respective paths on Linux. When I run the mount -v command, I get this example output:
//cifst/FSR on /mnt/share/cifst/FSR type cifs ...
//sydatsttbsq01/TheBooks statements to be parsed on /mnt/share/TheBooks type cifs ...
I am trying to parse this text to display this output
/mnt/share/cifst/FSR;//cifst/FSR
/mnt/share/TheBooks;//sydatsttbsq01/TheBooks
But the /mnt on the first row is in column 3, while on the second row it is in column 5, so how do I do this to get the /mnt part?
mount -v | grep mnt | awk '{ print $1 }' gets me the path, but how do I get the mount points?
Lots of assumptions, but this works for your sample input/output:
$ cat << EOF | awk '{print $(NF-2), $1}' OFS=\;
> //cifst/FSR on /mnt/share/cifst/FSR type cifs
> //sydatsttbsq01/TheBooks statements to be parsed on /mnt/share/TheBooks type cifs
> EOF
/mnt/share/cifst/FSR;//cifst/FSR
/mnt/share/TheBooks;//sydatsttbsq01/TheBooks
The trick is to notice that it's not columns 3 and 5 you're interested in; in each case it is column NF - 2.
In this particular case, the grep is redundant because it matches every line of input, and in general grep is (almost) always redundant with awk. If you need the filter, do it in awk and use:
awk '/mnt/{print $(NF-2), $1}' OFS=\;
If the fields you are interested in are field 1 and the field right after the first field equal to "on", and they do not contain spaces, you could try this:
mount -v | awk '{a="";for(i=2;i<=NF;i++){if(a=="on")break;a=$i};print $i";"$1}'
If we add one more hypothesis, that there is only one field equal to "on", another possibility is to use gensub (GNU awk):
mount -v | awk '{print gensub(/^(\S+).*\<on\>\s+(\S+).*/,"\\2;\\1",1)}'
Which brings us to a sed equivalent:
mount -v | sed -r 's/^(\S+).*\<on\>\s+(\S+).*/\2;\1/'
For this particular output something like this will work; bear in mind that it will break if any of your space-containing paths include the word "on" or "type" surrounded by spaces.
mount -v | awk 'BEGIN{FS="( on | type )"; OFS=";"} $3 ~ /cifs/ {print $2,$1}'
/mnt/share/cifst/FSR;//cifst/FSR
/mnt/share/TheBooks;//sydatsttbsq01/TheBooks statements to be parsed
P.S.: You'd be much better off if you didn't use spaces in paths; replace them with ., or _, or camelCase them ...
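As an aside, if util-linux's findmnt is available, you can skip parsing mount output altogether; in raw mode it hex-escapes embedded spaces (as \x20), so the two columns split cleanly:
findmnt -rn -t cifs -o TARGET,SOURCE | awk '{print $1 ";" $2}'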
I am working on building a high availability setup using keepalived, where each server will have its own set of containers that will get handled appropriately depending on whether it is in BACKUP or MASTER. However, for testing purposes, I don't have two boxes available that I can turn on and off. So, is there a good (preferably lightweight) way I can set up multiple containers with the same name on the same machine?
Essentially, I would like it to look something like this:
Physical Server A
-----------------------------------------
| Virtual Server A |
| -------------------------------------- |
| | keepalived - htmld - accessd - mysql |
| -------------------------------------- |
| ^ |
| | |
| v |
| Virtual Server B |
| -------------------------------------- |
| | keepalived - htmld - accessd - mysql |
| -------------------------------------- |
-----------------------------------------
Thanks
You cannot have multiple containers with the exact same name, but you can use docker-compose files in separate directories to get containers with the same service names (with some differences that I explain below).
You can read more about this in the Docker Compose docs.
In your case, I would create two directories: vsa and vsb. Now let's go into these two directories.
Each contains these files (at least; you can have more per your requirements):
-----------------------------------------
| /home/vsa/docker-compose.yml          |
| /home/vsa/keepalived/Dockerfile       |
| /home/vsa/htmld/Dockerfile            |
| /home/vsa/accessd/Dockerfile          |
| /home/vsa/mysql/Dockerfile            |
|---------------------------------------|
| /home/vsb/docker-compose.yml          |
| /home/vsb/keepalived/Dockerfile       |
| /home/vsb/htmld/Dockerfile            |
| /home/vsb/accessd/Dockerfile          |
| /home/vsb/mysql/Dockerfile            |
-----------------------------------------
Note the file names exactly: Dockerfile starts with a capital D.
Let's look at docker-compose.yml:
version: '3.9'
services:
  keepalived:
    build: ./keepalived
    restart: always
  htmld:
    build: ./htmld
    restart: always
  accessd:
    build: ./accessd
    restart: always
  mysql:
    build: ./mysql
    restart: always
networks:
  default:
    external:
      name: some_network
volumes:
  some_name: {}
Let's dig into docker-compose.yml first:
The version part defines which Compose file format to use. The services part defines the services and containers you want to create and run.
I've used names like keepalived under services. You can use any name you want there; it's your choice.
Under keepalived, the keyword build specifies the path where the Dockerfile lives. As the path is /home/vsa/keepalived, we use ., which means "here", followed by the keepalived directory, where Compose searches for a Dockerfile (in the docker-compose.yml for vsb, it searches /home/vsb/keepalived).
The networks part specifies the external network these containers use, so that when all of the containers from both docker-compose files are running, they're on the same Docker network and can see and talk to each other. The name key holds some_network; you can choose any name you want, as long as that network was created beforehand.
To create a network called some_network on Linux, run docker network create some_network before running docker-compose.
The volumes part declares the named volumes for these services.
And here is an example of a file called Dockerfile in the keepalived directory:
# See the Dockerfile reference for more info.
FROM ubuntu:latest
# After FROM, you can use the other
# available instructions to reach your own goal.
Now let's go through the Dockerfile:
The FROM instruction specifies which base image to use. In this case we want to use ubuntu, so our image is built on top of ubuntu.
There are other instructions; you can see them all in the Dockerfile reference.
After you have finished both the Dockerfile and docker-compose.yml files with your own instructions and keywords, you can create and run everything with these commands:
docker-compose -f /home/vsa/docker-compose.yml up -d
docker-compose -f /home/vsb/docker-compose.yml up -d
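If you run these commands from a different working directory, it may be safer to pin the project name explicitly with -p, since the project name is what prefixes the container names:
docker-compose -f /home/vsa/docker-compose.yml -p vsa up -d
docker-compose -f /home/vsb/docker-compose.yml -p vsb up -d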
Now we'll have eight containers named like this (Compose derives the names automatically as project_service_index, with the project name defaulting to the directory name, unless you name the containers explicitly with container_name):
vsa_keepalived_1
vsa_htmld_1
vsa_accessd_1
vsa_mysql_1
vsb_keepalived_1
vsb_htmld_1
vsb_accessd_1
vsb_mysql_1
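To check that both stacks are up, one option (a sketch, assuming the default naming above) is:
docker ps --format '{{.Names}}\t{{.Status}}' | grep -E '^(vsa|vsb)_'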
After installing ShellHub and starting the containers using docker-compose, I got this error message on the console:
./bin/docker-compose up
shellhub_mongo_1 is up-to-date
shellhub_ssh_1 is up-to-date
shellhub_api_1 is up-to-date
shellhub_ui_1 is up-to-date
Starting shellhub_gateway_1 ... done
Attaching to shellhub_mongo_1, shellhub_ssh_1, shellhub_api_1, shellhub_ui_1, shellhub_gateway_1
api_1 |
api_1 | ____ __
api_1 | / __/___/ / ___
api_1 | / _// __/ _ \/ _ \
api_1 | /___/\__/_//_/\___/ v3.3.10-dev
api_1 | High performance, minimalist Go web framework
api_1 | https://echo.labstack.com
api_1 | ____________________________________O/_______
api_1 | O\
api_1 | ⇨ http server started on [::]:8080
mongo_1 | 2021-02-24T14:48:50.370+0000 I COMMAND [conn3] CMD: dropIndexes main.users: "tenant_id"
mongo_1 | 2021-02-24T14:48:50.403+0000 I COMMAND [conn3] CMD: dropIndexes main.users: "session_record"
mongo_1 | 2021-02-24T14:53:32.846+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version: <unsharded>
shellhub_gateway_1 exited with code 132
shellhub_gateway_1 exited with code 132
shellhub_gateway_1 exited with code 132
It seems that shellhub_gateway uses AVX (Advanced Vector Extensions), which is not supported on my old Intel Atom CPU.
Any idea how to get ShellHub working on old CPUs?
This issue was solved by the ShellHub core team; their response is quoted below:
Looks good, no errors.
It seems it's fixed in 8a14707.
You can create a docker-compose.override.yml with the following config to work around the issue until we release the next version with a fix:
version: '3.7'
services:
  gateway:
    ports:
      - "${SHELLHUB_HTTP_PORT}:80"
Hi, I have just built my Zabbix server and am in the process of configuring some checks currently set up in Nagios.
One of these checks is check_load. Can anyone explain what this check means in Nagios and how I can replicate it in Zabbix?
In Nagios, check_load monitors server load. Server load is a good indication of what your overall utilisation looks like: http://en.wikipedia.org/wiki/Load_(computing)
You can view server load easily on most *nix servers using the top command. The three numbers at the top right show your 1, 5 and 15 minute load averages. As a brief guide, the load should be less than your number of processors. So for instance, if you have a 4-CPU server, I would expect your load average to sit below 4.00.
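On Linux you can also read the same three numbers straight from the kernel, which is easier to parse in scripts than top's header (the sample values below are made up):
cat /proc/loadavg
0.42 0.37 0.30 1/211 12345
The first three fields are the 1, 5 and 15 minute averages; the remaining two are runnable/total scheduling entities and the most recently created PID.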
I recently did a quick load monitor in Nagios script format for http://www.dataloop.io. It was done quickly and needs a fair bit of work to run across other systems, but it gives a feel for how to scrape the output of top:
#!/bin/bash
# Scrape the 1/5/15 minute load averages from the first line of top's
# batch output. The field numbers depend on top's header layout, which
# is part of why this needs work to be portable.
onemin=$(top -b -n1 | sed -n '1p' | cut -d ' ' -f 13 | sed 's/%//')
fivemin=$(top -b -n1 | sed -n '1p' | cut -d ' ' -f 14 | sed 's/%//')
fifteenmin=$(top -b -n1 | sed -n '1p' | cut -d ' ' -f 15 | sed 's/%//')
# Round the 15-minute average to an integer for the threshold test.
int_fifteenmin=$( printf "%.0f" $fifteenmin )

alert=10
if [ "$int_fifteenmin" -gt "$alert" ]
then
    # Report CRITICAL and exit 2 so Nagios raises an alert.
    echo "CRITICAL | 1min=$onemin;;;; 5min=$fivemin;;;; 15min=$fifteenmin;;;;"
    exit 2
fi
echo "OK | 1min=$onemin;;;; 5min=$fivemin;;;; 15min=$fifteenmin;;;;"
exit 0
Hope this explains enough for you to create a Zabbix equivalent.
In Zabbix, this is a built-in Zabbix agent check: look for system.cpu.load in the Zabbix agent item documentation.
As for what it measures, the Wikipedia article already linked above is a great read.
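For example, a per-CPU-normalised 15-minute load item would use the key below; the trigger is classic Zabbix trigger syntax, with myserver as a placeholder host name and a threshold of 1 mirroring the "load below your processor count" guideline above:
system.cpu.load[percpu,avg15]
{myserver:system.cpu.load[percpu,avg15].last()}>1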