Logstash runs fine without a Beats configuration, and I can see all the logs when I send them over TCP.
input {
  tcp {
    port => 8500
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
But I want to send logs to Logstash from Filebeat, so I changed the Logstash config to this:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
This is the docker run command for Logstash:
docker run -d -p 8500:8500 -h logstash --name logstash --link elasticsearch:elasticsearch -v C:\elk2\config-dir:/config-dir docker.elastic.co/logstash/logstash:7.5.2 -f /config-dir/logstash.conf
I am running Filebeat in Docker with the following:
docker run -d docker.elastic.co/beats/filebeat:6.8.6 setup --template -E output.logstash.enabled=true -E 'output.logstash.hosts=["127.0.0.1:5044"]'
But whenever I run Filebeat, the Logstash and Filebeat containers stop.
There is no meaningful Docker log:
[2020-01-24T14:13:37,104][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-01-24T14:13:37,978][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2020-01-24T14:13:38,657][INFO ][logstash.runner ] Logstash shut down.
You need to expose your Beats listening port:
docker run -d -p 5044:5044 -h logstash --name logstash --link elasticsearch:elasticsearch -v C:\elk2\config-dir:/config-dir docker.elastic.co/logstash/logstash:7.5.2 -f /config-dir/logstash.conf
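With the Beats port published, Filebeat also has to reach the listener at an address that is valid from inside its own container; 127.0.0.1 there refers to the Filebeat container itself, not the host. A sketch of the adjusted command, where HOST_IP is a placeholder (not something from the question) for the Docker host's reachable address:

```shell
# Point Filebeat at the host's published Beats port rather than at
# 127.0.0.1, which inside this container is the container itself.
# HOST_IP is a placeholder -- substitute the Docker host's LAN IP.
docker run -d docker.elastic.co/beats/filebeat:6.8.6 \
  -E output.logstash.enabled=true \
  -E 'output.logstash.hosts=["HOST_IP:5044"]'
```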
I have an Elasticsearch 7.6.1 Docker container which I want to run on ports 9400 and 9500.
This is the docker run command I have used:
docker run -d --name elasticsearch761v2 -v /data/dump/:/usr/share/elasticsearch/data \
  -p 9400:9400 -p 9500:9500 -e "discovery.type=single-node" elasticsearch:7.6.1
This gives the output below:
docker ps -a | grep elastic
idofcontainer elasticsearch:7.6.1 "/usr/local/bin/docke" 18 minutes ago Up 4 minutes
9200/tcp, 0.0.0.0:9400->9400/tcp, 9300/tcp, 0.0.0.0:9500->9500/tcp elasticsearch761v2
I have also set the following in elasticsearch.yml:
[root@idofcontainer config]# vi elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
transport.tcp.port: 9400
I have added iptables entries for the above ports too.
The log for this container is:
{"type": "server", "timestamp": "2020-04-06T08:25:22,684Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "4196b5b23",
"message": "Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.kibana_task_manager_1][0], [.kibana_1][0]]]).", "cluster.uuid": "gx7s8R_PTUK4lFPPGBZA", "node.id": "XfLZnNNnQnAOHJnWdDQg" }
The curl output is this:
curl http://eserver:9400/_cat
This is not an HTTP port
Because of this, my Kibana is also not able to reach the ES server.
I have set kibana.yml to point to the above port.
kibana.yml:
# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://eserver:9400/"]
xpack.monitoring.ui.container.elasticsearch.enabled: true
The log of this kibana container.
{"type":"log","@timestamp":"2020-04-06T08:49:14Z","tags":["warning","elasticsearch","admin"],"pid":7,"message":"Unable to revive connection: http://eserver:9400/"}
{"type":"log","@timestamp":"2020-04-06T08:49:14Z","tags":["warning","elasticsearch","admin"],"pid":7,"message":"No living connections"}
You have defined a custom transport port 9400 and are using it as the HTTP port in your curl command to check the Elastic server, which the error message clearly points out:
This is not an HTTP port
As you mentioned, you want to run your Elasticsearch on 9400 and 9500, so you need to bind the default HTTP port 9200 to 9500, using the command below:
docker run -d --name elasticsearch761v2 -v /data/dump/:/usr/share/elasticsearch/data \
  -p 9400:9400 -p 9500:9200 -e "discovery.type=single-node" elasticsearch:7.6.1
Note that the only change required is -p 9500:9200. After that, you can check your ES server using curl http://eserver:9500, i.e. using the HTTP port.
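To make the distinction concrete, here is a quick check against both ports (hostname and ports taken from the question; this assumes the corrected -p 9500:9200 mapping is in place):

```shell
# 9400 is the transport port: it speaks the binary node-to-node
# protocol, so an HTTP client is rejected with "This is not an HTTP port".
curl http://eserver:9400/_cat/health

# 9500 now maps to the container's HTTP port 9200, so the cat API answers.
curl http://eserver:9500/_cat/health
```

With this mapping in place, the kibana.yml entry should point at the HTTP port as well, i.e. elasticsearch.hosts: ["http://eserver:9500/"].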
I have three physical nodes with Docker installed on them. I have one Docker container with Mesos, Marathon, Hadoop and Flink. I configured the master node and slave nodes for Mesos, Zookeeper and Marathon. I did these steps one by one.
First, on the master node, I enter the Docker container with this command:
docker run -v /home/user/.ssh:/root/.ssh --privileged -p 5050:5050 -p 5051:5051 -p 5052:5052 -p 2181:2181 -p 8082:8081 -p 6123:6123 -p 8080:8080 -p 50090:50090 -p 50070:50070 -p 9000:9000 -p 2888:2888 -p 3888:3888 -p 4041:4040 -p 7077:7077 -p 52222:22 -e WEAVE_CIDR=10.32.0.2/12 -e MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins -e LIBPROCESS_IP=10.32.0.2 -e MESOS_RESOURCES=ports*:[11000-11999] -ti hadoop_marathon_mesos_flink_2 /bin/bash
Then I run Mesos and Zookeeper:
/home/zookeeper-3.4.14/bin/zkServer.sh restart
/home/mesos-1.7.2/build/bin/mesos-master.sh --ip=10.32.0.1 --hostname=10.32.0.1 --roles=marathon,flink --quorum=1 --work_dir=/var/run/mesos --log_dir=/var/log/mesos
After that I run Marathon in the same container:
/home/marathon-1.7.189-48bfd6000/bin/marathon --master 10.32.0.1:5050 --zk zk://10.32.0.1:2181/marathon --hostname 10.32.0.1 --webui_url 10.32.0.1:8080 --logging_level debug
And finally, I run Hadoop:
/opt/hadoop/sbin/start-dfs.sh
Marathon, Mesos and Hadoop run without any problems.
The most important part of my work is running Flink in Marathon. I configured Flink in the Docker container like this:
env.java.home: /opt/java
jobmanager.rpc.address: 10.32.0.1
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: 10.32.0.1:2181,10.32.0.2:2181
recovery.zookeeper.path.mesos-workers: /mesos-workers
In the Marathon UI, I create an application and put this JSON file in it, but it fails:
{
"id": "flink",
"cmd": "/home/flink-1.7.0/bin/mesos-appmaster.sh
-Dmesos.master=10.32.0.1:5050,10.32.0.2:5050
-Dmesos.initial-tasks=1",
"cpus": 1.0,
"mem": 1024
}
The Flink application fails in the Mesos UI. It shows this error:
I0428 06:01:39.586699 6155 exec.cpp:162] Version: 1.7.2
I0428 06:01:39.596458 6154 exec.cpp:236] Executor registered on agent 984595ae-e811-48fb-a9f5-ca6128e1cc1a-S0
I0428 06:01:39.598870 6157 executor.cpp:188] Received SUBSCRIBED event
I0428 06:01:39.599761 6157 executor.cpp:192] Subscribed executor on 10.32.0.3
I0428 06:01:39.599963 6157 executor.cpp:188] Received LAUNCH event
I0428 06:01:39.601236 6157 executor.cpp:697] Starting task flink.16a7cc18-697b-11e9-928f-ce235caa831e
I0428 06:01:39.613719 6157 executor.cpp:712] Forked command at 6163
I0428 06:01:39.787395 6157 executor.cpp:1013] Command exited with status 1 (pid: 6163)
I0428 06:01:40.791885 6162 process.cpp:927] Stopped the socket accept loop
The strange thing is that in stdout I see the following text, even though I set JAVA_HOME in /etc/environment and flink-conf.yaml:
Please specify JAVA_HOME. Either in Flink config ./conf/flink-conf.yaml or as system-wide JAVA_HOME.
Would you please tell me what I should do about this problem?
Many thanks.
You can check your Flink log on the slave node. Also, it is better to change your JSON file like this; it helps you follow your application:
{
"id": "flink",
"cmd": "/home/flink-1.7.0/bin/mesos-appmaster.sh -Djobmanager.heap.mb=1024
-Djobmanager.rpc.port=6123 -Drest.port=8081
-Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=1024
-Dtaskmanager.numberOfTaskSlots=2 -Dparallelism.default=2
-Dmesos.resourcemanager.tasks.cpus=1",
"cpus": 1.0,
"mem": 1024,
"fetch": [
{
"uri": "/home/flink-1.7.0/bin/mesos-appmaster.sh",
"executable": true
}
]
}
Also, add JAVA_HOME to flink-conf.yaml on every node, master and slaves:
env.java.home: /opt/java
After adding JAVA_HOME, you will no longer see that error in stdout.
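To confirm the setting actually landed on each node, a quick check inside the container can help (the Flink path is assumed from the question's layout; an empty result means the edit is missing on that node):

```shell
# Print the java home line from this node's Flink config.
# A missing line means mesos-appmaster.sh will again complain
# that JAVA_HOME is not set.
grep '^env.java.home' /home/flink-1.7.0/conf/flink-conf.yaml
```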
I hope this is useful.
I spun up Logstash and Elasticsearch Docker containers using images from elastic.co. When I append to the file which I have set as my input file, I don't see any output from Logstash or Elasticsearch. This page didn't help much, and I couldn't find my exact problem on Google or Stack Overflow.
This is how I started my containers:
docker run \
--name elasticsearch \
-p 9200:9200 \
-p 9300:9300 \
-e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.3.1
docker run \
--name logstash \
--rm -it -v $(pwd)/elk/logstash.yml:/usr/share/logstash/config/logstash.yml \
-v $(pwd)/elk/pipeline.conf:/usr/share/logstash/pipeline/pipeline.conf \
docker.elastic.co/logstash/logstash:6.3.1
This is my pipeline configuration file:
input {
  file {
    path => "/pepper/logstash-tutorial.log"
  }
}
output {
  elasticsearch {
    hosts => "http://x.x.x.x:9200/"
  }
  stdout {
    codec => "rubydebug"
  }
}
Logstash and Elasticsearch appear to have started fine.
Sample logstash startup output:
[INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x6dc306e7 sleep>"}
[INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[INFO ][org.logstash.beats.Server] Starting server on port: 5044
[INFO ][logstash.inputs.metrics ] Monitoring License OK
[INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Sample elasticsearch startup output:
[INFO ][o.e.c.r.a.AllocationService] [FJImg8Z] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-logstash-6-2018.07.10][0]] ...]).
So when I make changes to logstash-tutorial.log, I don't see any terminal output from Logstash or Elasticsearch. How do I get output, or configure Logstash and Elasticsearch correctly?
The answer is on the same page that you referred to. Take a look at start_position:
Choose where Logstash starts initially reading files: at the beginning or at the end. The default behavior treats files like live streams and thus starts at the end. If you have old data you want to import, set this to beginning.
Set the start position as below:
input {
  file {
    path => "/pepper/logstash-tutorial.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => "http://x.x.x.x:9200/"
  }
  stdout {
    codec => "rubydebug"
  }
}
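One more caveat worth checking: the file input records how far it has read in a sincedb file, so lines it has already seen are skipped on restart even with start_position => "beginning". For throwaway testing in a container, pointing the sincedb at /dev/null (a common Logstash idiom, not something from the question) forces a full re-read on every start:

```
input {
  file {
    path => "/pepper/logstash-tutorial.log"
    start_position => "beginning"
    # discard the stored read position so the whole file is
    # re-read on each restart (testing only, not for production)
    sincedb_path => "/dev/null"
  }
}
```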
I'm trying to run the Logstash container on Red Hat 7 with the command:
docker run -v /home/logstash/config:/conf -v /home/docker/logs:/logs logstash logstash -f /conf/logstash.conf --verbose
and the response received is:
{:timestamp=>"2016-05-05T10:21:20.765000+0000", :message=>"translation missing: en.logstash.runner.configuration.file-not-found", :level=>:error}
{:timestamp=>"2016-05-05T10:21:20.770000+0000", :message=>"starting agent", :level=>:info}
and the Logstash container is not running.
If I execute the following command:
docker run -dit -v /home/logstash/config:/conf -v /home/docker/logs:/logs --name=logstash2 logstash logstash -e 'input { stdin { } } output { stdout { } }'
then enter the container with the command:
docker exec -ti logstash2 bash
and execute:
logstash -f /conf/logstash.conf
a new Logstash process is now running in the container, and I can manage the log files set up in the config file.
Any idea why I am having this strange behaviour?
Thanks to all.
Problem solved. It was a directory permissions issue on the host machine. Thanks for helping @NOTtardy
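For reference, a minimal sketch of the kind of fix involved, assuming (as here) the mounted config directory was not readable by the user running Logstash inside the container:

```shell
# Make the mounted config readable by any user, and its directories
# traversable (+X sets execute only on directories), so the logstash
# user inside the container can open /conf/logstash.conf.
chmod -R a+rX /home/logstash/config
ls -l /home/logstash/config/logstash.conf
```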
Try putting double quotes around logstash -f /conf/logstash.conf --verbose.
Edit: I just tried your command myself and it works fine. It could be that your logstash.conf is the reason for the error.
I've used a very simple logstash.conf:
input {
  udp {
    port => 5000
    type => syslog
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
and got the following output:
{:timestamp=>"2016-05-05T18:24:26.294000+0000", :message=>"starting agent", :level=>:info}
{:timestamp=>"2016-05-05T18:24:26.332000+0000", :message=>"starting pipeline", :id=>"main", :level=>:info}
{:timestamp=>"2016-05-05T18:24:26.370000+0000", :message=>"Starting UDP listener", :address=>"0.0.0.0:5000", :level=>:info}
{:timestamp=>"2016-05-05T18:24:26.435000+0000", :message=>"Starting pipeline", :id=>"main", :pipeline_workers=>1, :batch_size=>125, :batch_delay=>5, :max_inflight=>125, :level=>:info}
{:timestamp=>"2016-05-05T18:24:26.441000+0000", :message=>"Pipeline main started"}
I'm trying to set up an HA Docker cluster on 3 dedicated PCs. I've successfully followed the instructions at docs.docker.com/engine/installation/linux/ubuntulinux and now I'm trying to follow the instructions at https://docs.docker.com/swarm/install-manual. Since I'm not using any virtualization, I start at "Set up a consul discovery backend". The PCs (running Ubuntu Trusty 14.04 server edition) are all in the LAN 192.168.2.0/24: ubuntu001 has .104, ubuntu002 has .106, and ubuntu003 has .105.
I did the following according to the instructions:
arnolde#ubuntu001:~$ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
arnolde#ubuntu001:~$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise 192.168.2.104:4000 consul://192.168.2.104
arnolde#ubuntu002:~# docker run -d swarm manage -H :4000 --replication --advertise 192.168.2.106:4000 consul://192.168.2.104:8500
arnolde#ubuntu003:~$ docker run -d swarm join --advertise=192.168.2.105:2375 consul://192.168.2.104:8500
But then, when trying the next step, the swarm manager does NOT show up as "Primary" like it says it should, and no primary is listed:
arnolde#ubuntu001:~$ docker -H :4000 info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: swarm/1.1.0
Role: replica
Primary:
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 0
Plugins:
Volume:
Network:
Kernel Version: 3.19.0-25-generic
Operating System: linux
Architecture: amd64
CPUs: 0
Total Memory: 0 B
And:
arnolde#ubuntu001:~$ docker -H :4000 run hello-world
docker: Error response from daemon: No elected primary cluster manager.
I searched and found https://github.com/docker/swarm/issues/1491, which recommends using dockerswarm/swarm:master instead. I did, but it didn't help:
arnolde#ubuntu001:~$ docker run -d -p 4000:4000 dockerswarm/swarm:master manage -H :4000 --replication --advertise 192.168.2.104:4000 consul://192.168.2.104
I didn't find any other input regarding swarm + consul + primary, so here I am... any suggestions? Unfortunately, I'm not sure how to troubleshoot, since I don't even know where to look for logging/debugging info, i.e. whether the manager is connecting to Consul successfully, etc.
I was able to solve it myself after explicitly adding the port number to the consul:// parameter; apparently the Docker docs are incomplete:
arnolde#ubuntu001:~$ docker run -d -p 4000:4000 dockerswarm/swarm:master manage -H :4000 --replication --advertise 192.168.2.104:4000 consul://192.168.2.104:8500
arnolde#ubuntu001:~$ docker -H :4000 info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: swarm/1.1.0
Role: replica
Primary: 192.168.2.106:4000
Also, I added -p 4000:4000 to the command on the replica manager (on ubuntu002). I'm not sure if that was necessary (or even a good idea).
My friends, the first step is to edit the Docker daemon's startup configuration so that, among other settings, it listens on the cluster port on each node. My environment is CentOS 7, so my daemon configuration is in /usr/lib/docker/.... Edit the ExecStart line on each node to:
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://192.168.1.102:8500 --cluster-advertise=192.168.1.103:0
The second step is to start Consul:
docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
and so on.
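On a systemd-based distribution like CentOS 7, a drop-in override is a tidier way to apply those daemon flags than editing the unit file directly. A sketch under the answer's addresses (the drop-in path is hypothetical, and --cluster-advertise normally expects the daemon's real TCP port, so the :2375 below is an assumed correction of the :0 above):

```
# /etc/systemd/system/docker.service.d/cluster.conf (hypothetical path)
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375 \
  -H unix:///var/run/docker.sock \
  --cluster-store=consul://192.168.1.102:8500 \
  --cluster-advertise=192.168.1.103:2375
```

After saving, run systemctl daemon-reload and then systemctl restart docker on each node so the flags take effect.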