Problems running logstash with -f flag in docker

I'm trying to run the Logstash container on Red Hat 7 with this command:
docker run -v /home/logstash/config:/conf -v /home/docker/logs:/logs logstash logstash -f /conf/logstash.conf --verbose
and the response received is:
{:timestamp=>"2016-05-05T10:21:20.765000+0000", :message=>"translation missing: en.logstash.runner.configuration.file-not-found", :level=>:error}
{:timestamp=>"2016-05-05T10:21:20.770000+0000", :message=>"starting agent", :level=>:info}
and the logstash container is not running.
If I execute the following command:
docker run -dit -v /home/logstash/config:/conf -v /home/docker/logs:/logs --name=logstash2 logstash logstash -e 'input { stdin { } } output { stdout { } }'
enter the container with the command:
docker exec -ti logstash2 bash
and execute:
logstash -f /conf/logstash.conf
A new Logstash process is now running in the container and I can manage the log files set up in the config file.
Any idea why I'm having this strange behaviour?
Thanks to all.

Problem solved. It was a directory permissions issue on the host machine. Thanks for helping, #NOTtardy.
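For anyone hitting the same thing, a minimal sketch of the kind of fix that was needed (assuming the container user just needs read access to the mounted directories; on SELinux-enforcing hosts such as RHEL 7, relabelling the bind mounts with :Z may also be required):
# make the mounted config readable inside the container (assumption: plain permission problem)
chmod -R a+rX /home/logstash/config
# on SELinux hosts, :Z relabels the bind mount so the container is allowed to read it
docker run -v /home/logstash/config:/conf:Z -v /home/docker/logs:/logs:Z logstash logstash -f /conf/logstash.conf --verbose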

Try putting double quotes around logstash -f /conf/logstash.conf --verbose
Edit: I just tried your command myself and it works fine. It could be that your logstash.conf is the reason for the error.
I've used a very simple logstash.conf:
input {
  udp {
    port => 5000
    type => syslog
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
And got the following output
{:timestamp=>"2016-05-05T18:24:26.294000+0000", :message=>"starting agent", :level=>:info}
{:timestamp=>"2016-05-05T18:24:26.332000+0000", :message=>"starting pipeline", :id=>"main", :level=>:info}
{:timestamp=>"2016-05-05T18:24:26.370000+0000", :message=>"Starting UDP listener", :address=>"0.0.0.0:5000", :level=>:info}
{:timestamp=>"2016-05-05T18:24:26.435000+0000", :message=>"Starting pipeline", :id=>"main", :pipeline_workers=>1, :batch_size=>125, :batch_delay=>5, :max_inflight=>125, :level=>:info}
{:timestamp=>"2016-05-05T18:24:26.441000+0000", :message=>"Pipeline main started"}

Related

Logstash Crashes When Filebeat Running

Logstash runs fine over TCP without a Beats configuration, and I can see all the logs when I send them over TCP.
input {
  tcp {
    port => 8500
  }
}
output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
But I want to send logs to Logstash from Filebeat, so I changed the Logstash config to this:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
This is the docker run command for Logstash:
docker run -d -p 8500:8500 -h logstash --name logstash --link elasticsearch:elasticsearch -v C:\elk2\config-dir:/config-dir docker.elastic.co/logstash/logstash:7.5.2 -f /config-dir/logstash.conf
I am running Filebeat in Docker with the following:
docker run -d docker.elastic.co/beats/filebeat:6.8.6 setup --template -E output.logstash.enabled=true -E 'output.logstash.hosts=["127.0.0.1:5044"]'
But whenever I run Filebeat, the Logstash and Filebeat containers stop.
There is no meaningful Docker log:
[2020-01-24T14:13:37,104][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-01-24T14:13:37,978][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2020-01-24T14:13:38,657][INFO ][logstash.runner ] Logstash shut down.
You need to publish your Beats listening port (5044):
docker run -d -p 5044:5044 -h logstash --name logstash --link elasticsearch:elasticsearch -v C:\elk2\config-dir:/config-dir docker.elastic.co/logstash/logstash:7.5.2 -f /config-dir/logstash.conf
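Also keep in mind that 127.0.0.1 inside the Filebeat container is the Filebeat container itself, not the host or Logstash. A sketch of pointing Filebeat at the Logstash container instead, assuming you reuse the legacy --link approach already used for Elasticsearch (a user-defined network would work just as well):
# link the Filebeat container to the Logstash container and ship to logstash:5044
docker run -d --link logstash:logstash docker.elastic.co/beats/filebeat:6.8.6 \
  -E output.logstash.enabled=true \
  -E 'output.logstash.hosts=["logstash:5044"]'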

Flink cannot be run in Marathon

I have three physical nodes with Docker installed on them. I have one Docker container with Mesos, Marathon, Hadoop and Flink. I configured the master node and slave nodes for Mesos, Zookeeper and Marathon. I did this work step by step.
First, on the master node, I enter the Docker container with this command:
docker run -v /home/user/.ssh:/root/.ssh --privileged -p 5050:5050 -p 5051:5051 -p 5052:5052 -p 2181:2181 -p 8082:8081 -p 6123:6123 -p 8080:8080 -p 50090:50090 -p 50070:50070 -p 9000:9000 -p 2888:2888 -p 3888:3888 -p 4041:4040 -p 7077:7077 -p 52222:22 -e WEAVE_CIDR=10.32.0.2/12 -e MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins -e LIBPROCESS_IP=10.32.0.2 -e MESOS_RESOURCES=ports*:[11000-11999] -ti hadoop_marathon_mesos_flink_2 /bin/bash
Then run Mesos and Zookeeper :
/home/zookeeper-3.4.14/bin/zkServer.sh restart
/home/mesos-1.7.2/build/bin/mesos-master.sh --ip=10.32.0.1 --hostname=10.32.0.1 --roles=marathon,flink --quorum=1 --work_dir=/var/run/mesos --log_dir=/var/log/mesos
After that run Marathon in the same container:
/home/marathon-1.7.189-48bfd6000/bin/marathon --master 10.32.0.1:5050 --zk zk://10.32.0.1:2181/marathon --hostname 10.32.0.1 --webui_url 10.32.0.1:8080 --logging_level debug
And finally, I run hadoop:
/opt/hadoop/sbin/start-dfs.sh
Marathon, Mesos and Hadoop run without any problems.
The most important part of my work is running Flink on Marathon. I configured Flink in the Docker container like this:
env.java.home: /opt/java
jobmanager.rpc.address: 10.32.0.1
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: 10.32.0.1:2181,10.32.0.2:2181
recovery.zookeeper.path.mesos-workers: /mesos-workers
In the Marathon UI, I create an application and give it this JSON file, but it fails.
{
  "id": "flink",
  "cmd": "/home/flink-1.7.0/bin/mesos-appmaster.sh -Dmesos.master=10.32.0.1:5050,10.32.0.2:5050 -Dmesos.initial-tasks=1",
  "cpus": 1.0,
  "mem": 1024
}
The Flink application fails in the Mesos UI. It shows this error:
I0428 06:01:39.586699 6155 exec.cpp:162] Version: 1.7.2
I0428 06:01:39.596458 6154 exec.cpp:236] Executor registered on agent 984595ae-e811-48fb-a9f5-ca6128e1cc1a-S0
I0428 06:01:39.598870 6157 executor.cpp:188] Received SUBSCRIBED event
I0428 06:01:39.599761 6157 executor.cpp:192] Subscribed executor on 10.32.0.3
I0428 06:01:39.599963 6157 executor.cpp:188] Received LAUNCH event
I0428 06:01:39.601236 6157 executor.cpp:697] Starting task flink.16a7cc18-697b-11e9-928f-ce235caa831e
I0428 06:01:39.613719 6157 executor.cpp:712] Forked command at 6163
I0428 06:01:39.787395 6157 executor.cpp:1013] Command exited with status 1 (pid: 6163)
I0428 06:01:40.791885 6162 process.cpp:927] Stopped the socket accept loop
The strange thing is that I see this text on stdout, even though I set JAVA_HOME in /etc/environment and flink-conf.yaml:
Please specify JAVA_HOME. Either in Flink config ./conf/flink-conf.yaml or as system-wide JAVA_HOME.
Would you please tell me what I should do about this problem?
Many thanks.
You can check the Flink log on the slave node. Also, it is better to change your JSON file like this; it helps you follow your application.
{
  "id": "flink",
  "cmd": "/home/flink-1.7.0/bin/mesos-appmaster.sh -Djobmanager.heap.mb=1024 -Djobmanager.rpc.port=6123 -Drest.port=8081 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=1024 -Dtaskmanager.numberOfTaskSlots=2 -Dparallelism.default=2 -Dmesos.resourcemanager.tasks.cpus=1",
  "cpus": 1.0,
  "mem": 1024,
  "fetch": [
    {
      "uri": "/home/flink-1.7.0/bin/mesos-appmaster.sh",
      "executable": true
    }
  ]
}
Also, add JAVA_HOME to flink-conf.yaml on every node, master and slaves:
env.java.home: /opt/java
After adding JAVA_HOME you will no longer see that error on stdout.
I hope it is useful.

Vault Docker Image - Can't Get REST Response

I am deploying the Vault Docker image on Ubuntu 16.04. I can initialize it successfully from inside the container itself, but I can't get any REST responses, and even curl does not work.
I am doing the following:
Create the config file local.json:
{
  "listener": [{
    "tcp": {
      "address": "127.0.0.1:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}
under the /vault/config directory.
Run the command to start the image:
docker run -d -p 8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server
Enter a shell in the container:
docker exec -it containerId /bin/sh
Run the following inside the container:
export VAULT_ADDR='http://127.0.0.1:8200' and then vault init
This works fine, but when I try to send a REST request to check whether Vault is initialized:
GET request to the following URL: http://Ip-of-the-docker-host:8200/v1/sys/init
I get no response.
Even the curl command fails:
curl http://127.0.0.1:8200/v1/sys/init
curl: (56) Recv failure: Connection reset by peer
I didn't find a proper explanation anywhere online of what the problem is, or whether I am doing something wrong.
Any ideas?
If a server running in a Docker container binds to 127.0.0.1, it's unreachable from anything outside that specific container (and since containers usually only run a single process, that means it's unreachable by anyone). Change the listener address to 0.0.0.0:8200; if you need to restrict access to the Vault server, bind it to a specific host address in the docker run -p option.
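Applied to the local.json from the question, only the listener address changes:
"listener": [{
  "tcp": {
    "address": "0.0.0.0:8200",
    "tls_disable": 1
  }
}],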

Appending to file not showing up on logstash or elasticsearch output

I spun up Logstash and Elasticsearch Docker containers using images from elastic.co. When I append to the file I have set as my input file, I don't see any output from Logstash or Elasticsearch. This page didn't help much, and I couldn't find my exact problem on Google or Stack Overflow.
This is how I started my containers:
docker run \
--name elasticsearch \
-p 9200:9200 \
-p 9300:9300 \
-e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.3.1
docker run \
--name logstash \
--rm -it -v $(pwd)/elk/logstash.yml:/usr/share/logstash/config/logstash.yml \
-v $(pwd)/elk/pipeline.conf:/usr/share/logstash/pipeline/pipeline.conf \
docker.elastic.co/logstash/logstash:6.3.1
This is my pipeline configuration file:
input {
  file {
    path => "/pepper/logstash-tutorial.log"
  }
}
output {
  elasticsearch {
    hosts => "http://x.x.x.x:9200/"
  }
  stdout {
    codec => "rubydebug"
  }
}
Logstash and Elasticsearch seem to have started fine.
Sample logstash startup output:
[INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x6dc306e7 sleep>"}
[INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[INFO ][org.logstash.beats.Server] Starting server on port: 5044
[INFO ][logstash.inputs.metrics ] Monitoring License OK
[INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Sample elasticsearch startup output:
[INFO ][o.e.c.r.a.AllocationService] [FJImg8Z] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-logstash-6-2018.07.10][0]] ...]).
So when I make changes to logstash-tutorial.log, I don't see any terminal output from Logstash or Elasticsearch. How do I get output, or configure Logstash and Elasticsearch correctly?
The answer is on the same page you referred to. Take a look at start_position:
Choose where Logstash starts initially reading files: at the beginning or at the end. The default behavior treats files like live streams and thus starts at the end. If you have old data you want to import, set this to beginning.
Set the start position as below:
input {
  file {
    path => "/pepper/logstash-tutorial.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => "http://x.x.x.x:9200/"
  }
  stdout {
    codec => "rubydebug"
  }
}
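One more thing worth checking: start_position only applies to files Logstash has not seen before, because the read position is remembered in a sincedb file. If you keep re-testing with the same file, you can disable that bookkeeping with the file input's sincedb_path option, e.g.:
input {
  file {
    path => "/pepper/logstash-tutorial.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"   # testing only: forget read positions between restarts
  }
}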

How do I tail the logs of ALL my docker containers?

I can tail the logs of a single docker container by doing:
docker logs -f container1
But, how can I tail the logs of multiple containers on the same screen?
docker logs container1 container2
doesn’t work. It gives an error:
“docker logs” requires exactly 1 argument(s).
Thank you.
If you are using docker-compose, this will show all the logs from the different containers:
docker-compose logs -f
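You can also limit the backlog and follow only some of the services (service1 and service2 below are just placeholders for whatever your compose file defines):
# show the last 100 lines of each selected service, then keep following
docker-compose logs -f --tail=100 service1 service2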
If you have access and root to the docker server:
tail -f /var/lib/docker/containers/*/*.log
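Note that this relies on the default json-file logging driver, which is what writes those per-container *.log files; a quick way to check which driver the daemon is using:
# prints e.g. json-file, journald or syslog
docker info --format '{{.LoggingDriver}}'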
The docker logs command can't stream multiple log files.
Logging Drivers
You could use one of the logging drivers other than the default json-file to ship the logs to a common point. The systemd journald or syslog drivers would readily work on most systems. Any of the other centralised log systems would work too.
Note that configuring syslog on the Docker daemon means the docker logs command can no longer query the logs; they will only be stored wherever your syslog puts them.
A simple daemon.json for syslog:
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://10.8.8.8:514",
    "syslog-format": "rfc5424"
  }
}
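After editing the daemon configuration (typically /etc/docker/daemon.json on Linux), the Docker daemon has to be restarted for the new driver to take effect, e.g. on a systemd host:
# restart the daemon so the logging-driver change is picked up
sudo systemctl restart docker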
Compose
docker-compose is capable of streaming the logs for all containers it controls under a project.
API
You could write a tool that attaches to each container via the API and streams the logs via a websocket. Two of the Java libraries are docker-client and docker-java.
Hack
Or run multiple docker logs processes and munge the output together; in Node.js:
const { spawn } = require('child_process')

// Spawn a `docker logs --follow` process for one container and prefix
// each stdout/stderr line with the container id.
function run(id){
  let dkr = spawn('docker', [ 'logs', '--tail', '1', '-t', '--follow', id ])
  dkr.stdout.on('data', data => console.log('%s: stdout', id, data.toString().replace(/\r?\n$/,'')))
  dkr.stderr.on('data', data => console.error('%s: stderr', id, data.toString().replace(/\r?\n$/,'')))
  dkr.on('close', exit_code => {
    if ( exit_code !== 0 ) throw new Error(`Docker logs ${id} exited with ${exit_code}`)
  })
}

// Follow every container id/name passed on the command line.
let args = process.argv.splice(2)
args.forEach(arg => run(arg))
Which dumps data as docker logs writes it.
○→ node docker-logs.js 958cc8b41cd9 1dad69882b3d db4b844d9478
958cc8b41cd9: stdout 2018-03-01T06:37:45.152010823Z hello2
1dad69882b3d: stdout 2018-03-01T06:37:49.392475996Z hello
db4b844d9478: stderr 2018-03-01T06:37:47.336367247Z hello2
958cc8b41cd9: stdout 2018-03-01T06:37:55.155137606Z hello2
db4b844d9478: stderr 2018-03-01T06:37:57.339710598Z hello2
1dad69882b3d: stdout 2018-03-01T06:37:59.393960369Z hello
