I have a MySQL, Logstash, and ES setup, but I need to set some fields to the keyword type instead of text. I've read that it is not possible to do this in Logstash (logstash.conf), so it needs to be done in ES. I followed a similar question here and slightly modified it to PUT a mapping, but I got this error: "stacktrace": ["org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: unknown setting [es.path.data] please check that any required plugins are installed, or check the breaking changes documentation for removed settings",
I am using docker-compose to start all the services at once under the same network, so the mapping must be in place before Logstash sends the data to ES (a mapping can't be changed on a non-empty index).
I have seen other questions and they do seem a bit old so I wanted to ask if there is a better approach to doing this now.
My mapping.json
{
  "mappings": {
    "properties": {
      "authors": { "type": "keyword" },
      "tags": { "type": "keyword" }
    }
  }
}
Dockerfile
FROM elasticsearch:7.5.1
COPY ./docker-entrypoint.sh .
COPY ./mapping.json .
RUN mkdir /data && chown -R elasticsearch:elasticsearch /data && echo 'es.path.data: /data' >> config/elasticsearch.yml && echo 'path.data: /data' >> config/elasticsearch.yml
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh /utils/wait-for-it.sh
# Copy the files you may need and your insert script
RUN ./docker-entrypoint.sh elasticsearch -p /tmp/epid & /bin/bash /utils/wait-for-it.sh -t 0 localhost:9200 -- curl -X PUT 'http://localhost:9200/cnas_publications' -d @./mapping.json; kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;
Edit: I've used the docker-entrypoint.sh from the official repo here
It seems that I was mistaken, and it is actually possible to define the mapping in Logstash. Assuming you're using the official elasticsearch image, create an ES template and mount it as a volume into the Logstash container.
Here's a sample of the output section of my logstash.conf
output {
  stdout { codec => "rubydebug" }
  elasticsearch {
    hosts => "http://elasticsearch:9200"
    index => "test"
    template => "/logstash/mapping.json"
    template_name => "mapping"
    document_id => "%{[@metadata][_id]}"
  }
}
and don't forget to set index_patterns in your ES template.
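For reference, a minimal sketch of what /logstash/mapping.json could look like for the fields above (the index_patterns value is assumed to match the test index used in the output):
{
  "index_patterns": ["test"],
  "mappings": {
    "properties": {
      "authors": { "type": "keyword" },
      "tags": { "type": "keyword" }
    }
  }
}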
I started a docker image anapsix/webdis:
sudo docker run -d -p 7379:7379 -e LOCAL_REDIS=true anapsix/webdis
and changed /etc/webdis.json to allow websockets and committed it with
sudo docker commit <container-id>
however, when I used the new image to start a container, it does not keep the changes. Is there something I'm doing wrong?
Thanks!
In this case your problem is that the anapsix/webdis image has an entrypoint script (/entrypoint.sh) that generates /etc/webdis.json when the container starts.
Looking at the script, you can set the value of websockets by setting the WEBSOCKETS variable when you start the container:
docker run -d -p 7379:7379 \
-e LOCAL_REDIS=true \
-e WEBSOCKETS=true \
anapsix/webdis
When we run it like this, the generated /etc/webdis.json looks like:
{
  "redis_host": "127.0.0.1",
  "redis_port": 6379,
  "redis_auth": null,
  "http_host": "0.0.0.0",
  "http_port": 7379,
  "threads": 5,
  "pool_size": 10,
  "daemonize": false,
  "websockets": true,
  "database": 0,
  "acl": [
    {
      "disabled": ["DEBUG", "FLUSHDB", "FLUSHALL"]
    },
    {
      "http_basic_auth": "user:password",
      "enabled": ["DEBUG"]
    }
  ],
  "verbosity": 8,
  "logfile": "/dev/stdout"
}
More broadly, using docker commit is almost always the wrong thing to do; you should generate custom images using a Dockerfile (this gives you a much more manageable, reproducible process for creating container images).
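For this image, a minimal Dockerfile sketch would just bake that variable into a derived image and let the existing entrypoint generate /etc/webdis.json at startup:
FROM anapsix/webdis
# assumption: the stock entrypoint reads WEBSOCKETS at container start
# and writes "websockets": true into the generated /etc/webdis.json
ENV WEBSOCKETS=true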
I maintain an application written in C that runs as microservices managed by systemd. The services communicate with each other via Linux shared memory (IPCS) and use HTTP to communicate with the outside world. My question is: is it a good idea to move all of these services into one Docker container? I'm new to containers, and people recommended that I learn and use them.
The simple design of my application is below:
Note: MS is Microservice
The official Docker documentation says:
It is generally recommended that you separate areas of concern by using one service per container
When a Docker container starts, it needs to be tied to a live foreground process. If that process ends, the entire container ends. Also, Docker's default behavior for logs is to capture the stdout of that single process.
Several processes
If you have several processes and none of them is the "main" one, I think it is possible to start them as background processes, but you will need a busy while loop in bash to simulate a foreground process. Inside this loop you could check whether your services are still running, because it makes no sense to keep a container alive when its internal processes have exited or are failing.
while sleep 60; do
  ps aux | grep my_first_process | grep -q -v grep
  PROCESS_1_STATUS=$?
  ps aux | grep my_second_process | grep -q -v grep
  PROCESS_2_STATUS=$?
  # If the greps above find anything, they exit with 0 status
  # If they are not both 0, then something is wrong
  if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
    echo "One of the processes has already exited."
    exit 1
  fi
done
One process
As Apache and other tools do, you could create one long-lived main process and then start your other processes from inside it as children. This is called spawning a process. Also, since you mentioned HTTP, this main process could expose HTTP endpoints to exchange information with the outside.
I'm not a C expert, but the system function could be one option to launch another process:
#include <stdlib.h>

int main(void)
{
    /* system() waits for the command to finish, so long-running
       services should be put in the background with '&' */
    system("commands to launch service1 &");
    system("commands to launch service2 &");
    /* something must then keep this process alive in the foreground,
       e.g. the HTTP server shown below */
    return 0;
}
Here are some links:
How do you spawn another process in C?
https://suchprogramming.com/new-linux-process-c/
http://cplusplus.com/forum/general/250912/
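If you want more control than system gives you, a minimal fork/exec sketch (the service paths below are placeholders) could look like this:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Fork a child and replace it with the given program.
   The parent keeps running (for example, serving HTTP). */
static void spawn(const char *path)
{
    pid_t pid = fork();
    if (pid == 0) {
        execl(path, path, (char *)NULL);  /* child: become the service */
        perror("execl");                  /* only reached if exec fails */
        _exit(1);
    } else if (pid < 0) {
        perror("fork");
    }
}

int main(void)
{
    spawn("/opt/services/ms1");  /* placeholder paths */
    spawn("/opt/services/ms2");
    pause();  /* keep the main process alive; a real server would run its HTTP loop here */
    return 0;
}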
Also, to create a basic HTTP server in C, you could check this:
https://stackoverflow.com/a/54164425/3957754
#include <restinio/all.hpp>

int main()
{
    restinio::run(
        restinio::on_this_thread()
            .port(8080)
            .address("localhost")
            .request_handler([](auto req) {
                return req->create_response().set_body("Hello, World!").done();
            }));
    return 0;
}
This program keeps running once it starts because it is a server (restinio is a C++ library), so it is a good fit for Docker as the main foreground process.
A REST API is the most common strategy for exchanging information over the internet between servers and/or devices.
If you achieve this, your C program will have these features:
start the other required processes (ms1, ms2, ms3, etc.)
expose REST HTTP endpoints to send and receive information between your services and the world. Sample:
method: GET
url: https://alexsan.com/domotic-services/ms1/message/1
description: REST endpoint which returns message 1 from the ms1 service queue
returns:
{
  "command": "close gateway=5"
}

method: POST
url: https://alexsan.com/domotic-services/ms2/message
description: REST endpoint which receives a message containing a command to be executed on service ms2
receives:
{
  "id": 100,
  "command": "open gateway=2"
}
returns:
{
  "command": "close gateway=5"
}
These HTTP endpoints could be invoked from web apps, mobile apps, etc.
Use high-level languages
You could use Python, Node.js, or Java to start a server and, from inside it, launch your services and, if you want, expose some HTTP endpoints. Here is a basic example with Python:
FROM python:3
WORKDIR /usr/src/app
# create requirements
RUN echo "bottle==0.12.17" > requirements.txt
# app.py is creating with echo just for demo purposes
# in real scenario, app.py should be another file
RUN echo "from bottle import route, run" >> app.py
RUN echo "import os" >> app.py
RUN echo "os.spawnl(os.P_DETACH, '/opt/services/ms1.acme')" >> app.py
RUN echo "os.spawnl(os.P_DETACH, '/opt/services/ms2.acme')" >> app.py
RUN echo "os.spawnl(os.P_DETACH, '/opt/services/ms3.acme')" >> app.py
RUN echo "os.spawnl(os.P_DETACH, '/opt/services/ms4.acme')" >> app.py
RUN echo "#route('/domotic-services/ms2/message')" >> app.py
RUN echo "def index():" >> app.py
RUN echo " return 'I will query the message'" >> app.py
RUN echo "run(host='0.0.0.0', port=80)" >> app.py
RUN pip install --no-cache-dir -r requirements.txt
CMD [ "python", "./app.py" ]
You can also use Node.js:
https://github.com/jrichardsz/nodejs-express-snippets/blob/master/01-hello-world.js
https://nodejs.org/en/knowledge/child-processes/how-to-spawn-a-child-process/
I don't suppose anyone knows if it's possible to call the docker run or docker compose up commands from a web app?
I have the following scenario: I have a React app that uses OpenLayers for its maps. I have it set up so that when the user loses their internet connection, it falls back to making the requests to a map server running locally in Docker. The issue is that the user needs to manually start the server via the command line. To make things easier for the user, I added the following bash script and docker compose file to boot up the server with a single command, but I was wondering if I could incorporate that functionality into the web app and have the user boot the map server with the click of a button?
Just for reference's sake, these are my bash and compose files.
#!/bin/sh
dockerDown=`docker info | grep -qi "ERROR" && echo "stopped"`
if [ $dockerDown ]
then
    echo "\n ********* Please start docker before running this script ********* \n"
    exit 1
fi
skipInstall="no"
read -p "Have you imported the maps already and just want to run the app (y/n)?" choice
case "$choice" in
y|Y ) skipInstall="yes";;
n|N ) skipInstall="no";;
* ) skipInstall="no";;
esac
pbfUrl='https://download.geofabrik.de/asia/malaysia-singapore-brunei-latest.osm.pbf'
#polyUrl='https://download.geofabrik.de/asia/malaysia-singapore-brunei.poly'
#-e DOWNLOAD_POLY=$polyUrl \
docker volume create openstreetmap-data
docker volume create openstreetmap-rendered-tiles
if [ $skipInstall = "no" ]
then
echo "\n ***** IF THIS IS THE FIRST TIME, YOU MIGHT WANT TO GO GET A CUP OF COFFEE WHILE YOU WAIT ***** \n"
docker run \
-e DOWNLOAD_PBF=$pbfUrl \
-v openstreetmap-data:/var/lib/postgresql/12/main \
-v openstreetmap-rendered-tiles:/var/lib/mod_tile \
overv/openstreetmap-tile-server \
import
echo "Finished Postgres container!"
fi
echo "\n *** BOOTING UP SERVER CONTAINER *** \n"
docker compose up
My docker compose file
version: '3'
services:
  map:
    image: overv/openstreetmap-tile-server
    volumes:
      - openstreetmap-data:/var/lib/postgresql/12/main
      - openstreetmap-rendered-tiles:/var/lib/mod_tile
    environment:
      - THREADS=24
      - OSM2PGSQL_EXTRA_ARGS=-C 4096
      - AUTOVACUUM=off
    ports:
      - "8080:80"
    command: "run"
volumes:
  openstreetmap-data:
    external: true
  openstreetmap-rendered-tiles:
    external: true
There is the Docker Engine API, and you are able to start containers with it.
It is described in the Docker documentation:
https://docs.docker.com/engine/api/
To start containers using the Docker API, see:
https://docs.docker.com/engine/api/v1.41/#operation/ContainerStart
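As a rough sketch (assuming the Docker socket is reachable from whatever backend handles the request, and that a container named map-server already exists), starting it comes down to a single API call:
curl --unix-socket /var/run/docker.sock \
  -X POST http://localhost/v1.41/containers/map-server/start
Note that a browser page cannot talk to the local Docker socket directly, so in practice the button in the React app would call a small local service (or the Docker Engine API exposed over TCP), which then makes this request.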
I want to put the at daemon (atd) in a separate Docker container to run it as an external, environment-independent scheduler service.
I can run atd with the following Dockerfile and docker-compose.yml:
$ cat Dockerfile
FROM alpine
RUN apk add --update at ssmtp mailx
CMD [ "atd", "-f" ]
$ cat docker-compose.yml
version: '2'
services:
  scheduler:
    build: .
    working_dir: /mnt/scripts
    volumes:
      - "${PWD}/scripts:/mnt/scripts"
But there are problems:
1) There is no built-in option to redirect atd logs to /proc/self/fd/1 so they show up via the docker logs command. at only has the -m option, which sends mail to the user.
Is it possible to redirect at output from user mail to /proc/self/fd/1 (maybe with some compile flags)?
2) Right now I add a new task via a command like docker-compose exec scheduler at -f test.sh now + 1 minute. Is this a good way? I think a better way would be to find the file where at stores its queue, add that file as a volume, update it externally and just run docker restart after the file changes.
But I can't find where at stores its data on Alpine Linux (I only found /var/spool/atd/.SEQ, where at stores the id of the last job). Does anyone know where at stores its data?
I would also be glad to hear any advice regarding dockerizing at.
UPD: I found where at stores its data on Alpine: it's the /var/spool/atd folder. When I create a task via the at command, it creates an executable file there with a name like a000040190a2ff and content like:
#!/bin/sh
# atrun uid=0 gid=0
# mail root 1
umask 22
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin; export PATH
HOSTNAME=e605e8017167; export HOSTNAME
HOME=/root; export HOME
cd /mnt/scripts || {
echo 'Execution directory inaccessible' >&2
exit 1
}
#!/usr/bin/env sh
echo "Hello world"
UPD2: the difference between running at with and without the -m option is the third line of the generated script:
with -m option:
#!/bin/sh
# atrun uid=0 gid=0
# mail root 1
...
without -m :
#!/bin/sh
# atrun uid=0 gid=0
# mail root 0
...
According to the official man page:
The user will be mailed standard error and standard output from his
commands, if any. Mail will be sent using the command
/usr/sbin/sendmail
and
-m
Send mail to the user when the job has completed even if there was no
output.
I tried to schedule a simple Hello World script and found that no mail was sent:
# mail -u root
No mail for root
I would like to run Filebeat as a Docker container in Azure IoT Edge, and I would like Filebeat to collect the logs of the other running containers.
I'm already able to run Filebeat as a Docker container by following the documentation (https://www.elastic.co/guide/en/beats/filebeat/6.8/running-on-docker.html#_volume_mounted_configuration):
docker run -d \
--name=filebeat \
--user=root \
--volume="$(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
docker.elastic.co/beats/filebeat:6.8.3 filebeat -e -strict.perms=false
With this command and the correct filebeat.yml file, I'm able to collect logs for every running container on my device.
Now I would like to deploy this configuration as an Azure IoT Edge module.
I created a Docker image that includes the filebeat.yml file, using the following Dockerfile:
FROM docker.elastic.co/beats/filebeat:6.8.3
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN chmod go-w /usr/share/filebeat/filebeat.yml
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
USER filebeat
From documentation: https://www.elastic.co/guide/en/beats/filebeat/6.8/running-on-docker.html#_custom_image_configuration
I tested this Dockerfile by running locally
docker build -t filebeat .
and
docker run -d \
--name=filebeat \
--user=root \
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
filebeat:latest filebeat -e -strict.perms=false
This works fine, logs from other containers are collected as they should.
Now my question is:
In Azure IoT Edge, how can I mount volumes to access the other Docker containers running on the device, as is done with
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro"
in order to collect logs?
Following this other SO post (Mount path to Azure IoT Edge module), in the Azure IoT Edge portal I tried the following:
"HostConfig": {
"Mounts": [
{
"Target": "/var/lib/docker/containers",
"Source": "/var/lib/docker/containers",
"Type": "volume",
"ReadOnly: true
},
{
"Target": "/var/run/docker.sock",
"Source": "/var/run/docker.sock",
"Type": "volume",
"ReadOnly: true
}
]
}
}
But when I deploy this module I have the following error:
2019-11-25T10:09:41Z [WARN] - Could not create module FilebeatAgent
2019-11-25T10:09:41Z [WARN] - caused by: create /var/lib/docker/containers: "/var/lib/docker/containers" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path
I don't understand this error. How can I specify a path using only [a-zA-Z0-9][a-zA-Z0-9_.-] ?
Thanks for your help.
EDIT
In the Azure IoT Edge portal, createOptions json:
{
  "HostConfig": {
    "Binds": [
      "/var/lib/docker/containers:/var/lib/docker/containers",
      "/var/run/docker.sock:/var/run/docker.sock"
    ]
  }
}
There is an article that describes how to mount storage from the host here: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-access-host-storage-from-module
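Alternatively, if you want to keep the Mounts syntax from the question, the error suggests the problem is "Type": "volume": with type volume the Source must be a named volume, while a host path needs "Type": "bind". A sketch of that createOptions (not verified on IoT Edge) would be:
{
  "HostConfig": {
    "Mounts": [
      {
        "Target": "/var/lib/docker/containers",
        "Source": "/var/lib/docker/containers",
        "Type": "bind",
        "ReadOnly": true
      },
      {
        "Target": "/var/run/docker.sock",
        "Source": "/var/run/docker.sock",
        "Type": "bind",
        "ReadOnly": true
      }
    ]
  }
}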