I'm new to the Elastic Stack. I've been able to install Elasticsearch and Kibana via Docker using the instructions on elastic.co. However, I'm having some difficulty installing Filebeat using the directions on elastic.co. After starting Elasticsearch and Kibana, when I run:
docker run docker.elastic.co/beats/filebeat:7.13.0 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
I get the following output:
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://localhost:9200: Get "http://localhost:9200": dial tcp [::1]:9200: connect: cannot assign requested address]
This is with a Docker setup. Any guidance on fixing this would be great. Thanks.
If you were following the instructions from the tutorial, you can see that Filebeat should use the same Docker network as Elasticsearch and Kibana.
So instead of
docker run docker.elastic.co/beats/filebeat:7.13.0 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
it should be
docker run --net {network_name} docker.elastic.co/beats/filebeat:7.13.0 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
Check your Elasticsearch container's network with the following command:
docker inspect -f '{{.NetworkSettings.Networks}}' {es-container-name}
If you try to run Kibana + Elasticsearch + Filebeat on Windows, I would suggest writing a Dockerfile (or docker-compose file) with your own filebeat.yml.
Of course, if you run your Elasticsearch outside of a container, you should use the host network, but that's another story.
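As a minimal sketch of the same-network idea (the names elastic, es01, and kib01 are just placeholders for whatever network and container names your Elasticsearch and Kibana setup actually uses):
docker network create elastic
# Elasticsearch and Kibana are assumed to have been started with --net elastic and the names es01 / kib01
docker run --net elastic docker.elastic.co/beats/filebeat:7.13.0 setup -E setup.kibana.host=kib01:5601 -E output.elasticsearch.hosts=["es01:9200"]
With all three containers on the same user-defined network, the container names resolve from inside the Filebeat container, which is exactly what localhost cannot do from there.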
I'm currently following Learn Docker in a Month of Lunches; Chapter 9 consists of running Prometheus to monitor, however, I am unable to connect to localhost.
Preliminary info:
Docker v. 20.10.17
WSL 2
daemon.json is as follows:
{
"metrics-addr" : "0.0.0.0:9323",
"experimental": true
}
The startup command had the following output:
systemctl --user start docker-desktop
> Failed to connect to bus: No such file or directory
I am able to run the Docker daemon using service docker start.
When trying to run Prometheus using the following commands:
hostIP=$(ip route get 1 | awk '{print $NF;exit}')
docker container run -e DOCKER_HOST=$hostIP -d -p 9090:9090 diamol/prometheus:2.13.1
This is the output:
What I've already tried is changing it to -e DOCKER_HOST=127.0.0.1; however, there I also get an error:
I attempted to follow the documentation provided by Prometheus, but that didn't lead to a desirable result.
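Since daemon.json binds the metrics endpoint to 0.0.0.0:9323, one sanity check worth doing (assuming the daemon was restarted after editing daemon.json) is to query that endpoint directly from the WSL shell before involving Prometheus at all:
# should print Prometheus-format engine metrics if the daemon picked up daemon.json
curl -s http://localhost:9323/metrics | head
If that returns nothing, the Prometheus container has no chance of scraping the engine either, no matter what DOCKER_HOST is set to.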
I am trying to do
docker run -d --name artifactory -v D://JFrog/artifactory/var/:/var/opt/jfrog/artifactory -p 8281:8081 docker.bintray.io/jfrog/artifactory-oss
By following this tutorial: https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory#InstallingArtifactory-DockerComposeInstallation
However, even after setting permissions and adding localhost as shared.node.ip in $JFROG_HOME/artifactory/var/etc/system.yaml, the problem persists. The logs from the Docker console are as follows: https://pastebin.com/dQAckECE
Any leads/help will be appreciated. TIA
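One thing worth checking with a bind mount like this is whether the user inside the container can actually write to the mounted directory. A rough check, assuming the container stays up long enough to exec into it (paths taken from the -v mapping above):
# list ownership of the mounted directory as the container sees it
docker exec artifactory ls -ld /var/opt/jfrog/artifactory
# hypothetical write test; if this fails, it is a permissions problem rather than a system.yaml one
docker exec artifactory touch /var/opt/jfrog/artifactory/.write-test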
I read the blog: https://www.rubix.nl/blogs/tibco-monitoring-docker-how-create-instantiate-and-start-tibco-businessworks-container-edition
The blog entry is very interesting. Unfortunately, it does not work for me: my TIBCO service does not connect to the monitoring application.
Here is some data:
Bwce Version: 2.3
Bwce Mon Version: 2.4
Log entry from my TIBCO service:
Failed to register with Monitoring application - response code [400] and Reason Phrase [Bad Request]
Log entry from my bwce-mon:
INFO:{"host":"172.17.0.4","port":"8090","instanceName":"6866a20e7bd6","appName":"6866a20e7bd6"
WARN : Container is not running for (host, port):(172.17.0.4, 8090). Please register running container
Docker run command for the TIBCO service:
docker run -d -p 7575:7575 --link bwceadmin --name helloworld -e EMS_URL=tcp://ubdev-ws-003:7223 -e EMS_QUEUE=docker.queue -e BW_APP_MONITORING_CONFIG='{"url":"http://bwceadmin:8080"}' helloworld:1.0.0
Docker run command for bwce-mon:
docker run -p 8080:8080 -e persistence_DB="dockerpostgres" -e DB_URL="postgres://postgres:#172.17.0.2:5432/postgres" -e PERSISTENCE_TYPE=postgres --name bwceadmin bwcemon:2.4.0
Do you have any idea why this did not work for me?
I didn't write the blog post, but I think your issue might be in the configuration of the property "BW_APP_MONITORING_CONFIG".
Can you check whether you can access the URL http://bwceadmin:8080? If you can't access it, the issue is most likely with the configuration of that property.
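A quick way to test that from another container on the same default bridge is a throwaway curl with the same --link as your run command (curlimages/curl is just an arbitrary image that ships curl):
# should print an HTTP status code if bwceadmin is reachable by name on port 8080
docker run --rm --link bwceadmin curlimages/curl -s -o /dev/null -w '%{http_code}\n' http://bwceadmin:8080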
To find the setting for that URL, you'll need to know the IP address of the container running your app:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <your container name>
After getting the IP address (like 10.100.22.1), you can start a new BWCE app and add a property for the monitoring URL:
BW_APP_MONITORING_CONFIG='{"url":"http://10.100.22.1:8080"}'
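Putting that together with the run command from the question, a hedged rerun of the app container would look like the following (10.100.22.1 is only the example address from above; substitute whatever docker inspect reports for your bwceadmin container):
docker run -d -p 7575:7575 --name helloworld -e EMS_URL=tcp://ubdev-ws-003:7223 -e EMS_QUEUE=docker.queue -e BW_APP_MONITORING_CONFIG='{"url":"http://10.100.22.1:8080"}' helloworld:1.0.0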
I am using two images: sheepkiller/kafka-manager (a tool from Yahoo Inc.; the image was made by someone with a weird sense of humor, but it has good reviews) and zookeeper.
I start ZooKeeper
docker run -it --restart always -d zookeeper
then try to start Kafka Manager:
docker run -it --rm -p 9000:9000 -e ZK_HOSTS="your-zk.domain:2181" -e APPLICATION_SECRET=letmein sheepkiller/kafka-manager
The documentation says:
(if you don't define ZK_HOSTS, default value has been set to "localhost:2181")
Error:
Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState#7bf272d3
[info] o.a.z.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
[info] k.m.a.KafkaManagerActor - zk=localhost:2181
[info] k.m.a.KafkaManagerActor - baseZkPath=/kafka-manager
[warn] o.a.z.ClientCnxn - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
I am using Docker version 17.12.0-ce, build c97c6d6, on Windows 10. I have tried several different things but was unsuccessful. I am assuming there is an issue with the ports, the ZooKeeper config file, or the sheepkiller/kafka-manager Dockerfile, but I am not sure how to change these images after I have already pulled them, if that really is the case.
The following should work fine:
docker network create zookeeper-net
docker run -it --restart always -p 2181:2181 --network zookeeper-net --name zookeeper -d zookeeper
docker run -it --rm -p 9000:9000 --network zookeeper-net -e ZK_HOSTS="zookeeper:2181" -e APPLICATION_SECRET=letmein sheepkiller/kafka-manager
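To confirm the wiring, a couple of quick checks once both containers are up (the network and container names are the ones used above):
# both container names should appear on the shared network
docker network inspect zookeeper-net --format '{{range .Containers}}{{.Name}} {{end}}'
# the Kafka Manager UI should answer on the published port
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:9000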
Update:
There is also a compose file to set up everything. I suggest you use that.
docker-compose up -d
I have installed the UCP master on a node and am able to log in using admin/orca.
The UCP admin fingerprint is as below:
23:B9:9E:08:B5:67:73:0A:C7:17:07:76:56:B2:F3:D2:73:CE:B5:74
Now, I am trying to configure a UCP replica on a separate node using the following command:
docker run --rm -i --env UCP_ADMIN_USER=admin --env UCP_ADMIN_PASSWORD=orca -v /var/run/docker.sock:/var/run/docker.sock --name ucp docker/ucp join --replica --url https://10.211.144.177 --san r1.ucp.abc.com --fingerprint 23:B9:9E:08:B5:67:73:0A:C7:17:07:76:56:B2:F3:D2:73:CE:B5:74
After I run the above command, I am getting the following error:
ERRO[0102] Orca didn't come up within 1m0s. Run "docker logs ucp-controller" for more details
FATA[0102] Unable to connect to system
Please help me resolve this error.
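The error message itself suggests the first diagnostic step; a hedged starting point (10.211.144.177 is the master URL from the join command above, and /_ping is the UCP controller health endpoint) would be:
# on the replica node: see why the controller did not come up within the timeout
docker logs ucp-controller
# from the replica node: check that the UCP master is reachable over HTTPS at all
curl -k https://10.211.144.177/_ping
If the _ping check fails, the join is being blocked by networking between the nodes rather than by UCP itself.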