Cannot reach Kibana remotely using ELK Docker images - docker

I have a remote Ubuntu 14.04 machine. I downloaded and ran a couple of ELK Docker images, but I seem to get the same behavior in all of them. I tried the images in these two repositories: spujadas/elk-docker and deviantony/docker-elk. The problem is that in both images Elasticsearch, Logstash and Kibana all work perfectly locally; however, when I try to reach Kibana from a remote computer using http://host-ip:5601, I get a connection timeout and can't reach Kibana. Meanwhile, I can reach Elasticsearch at http://host-ip:9200. As both repositories suggest, I injected some data into Logstash, but that didn't help either. Is there some tweak I need to make in order to reach Kibana remotely?
EDIT: I tried opening up port 5601 as suggested here, but that didn't work either.
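On Ubuntu 14.04 that would typically be a ufw rule along these lines (a sketch of what was tried, not the eventual fix):

sudo ufw allow 5601/tcp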

As @Rawkode suggested in the comments, the problem was the firewall. The VM I'm working on was created on Azure, and I had to create an inbound security rule to allow Kibana to be accessed on port 5601. More on this subject can be read here.
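As a sketch, assuming the Azure CLI is installed, such a rule can be created like this (my-resource-group and my-elk-vm are placeholders for your own names):

az vm open-port --resource-group my-resource-group --name my-elk-vm --port 5601 --priority 900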

Related

Docker containers cannot be accessed from the internet but work when accessing from local network

First of all, sorry if I am not following the correct format for StackOverflow; this is my first time asking something here.
I am running Docker in an Ubuntu LXC on Proxmox, but the Docker containers cannot be accessed from the internet. I am using Nginx Proxy Manager. Surprisingly, the containers worked well when I was running Docker Desktop on Windows 11, but I switched to Ubuntu to try to make things easier and it didn't work, so I tried Proxmox, which I had used before, and it is not working either. I have NGINX set up with Cloudflare, and when I try to access, for example, my Nextcloud container from the internet, I get a "Web server is down" error (code 521).
Everything works fine when I access from the local network. I can ping websites from both inside the container and the host with no lost packets, so I know the containers have internet access.
I forgot to add that I have opened all the ports necessary for my Nextcloud container to work, and I checked online with ismyportopen.com and it looks like the ports I need are open.
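Since Cloudflare's error 521 means it could not open a TCP connection to the origin server, a useful narrowing step (a sketch; lxc-ip is a placeholder for the LXC's LAN address) is to confirm that Nginx Proxy Manager's ports are actually bound inside the LXC and reachable from the rest of the LAN:

# Inside the LXC: which ports does each container publish?
docker ps --format '{{.Names}}\t{{.Ports}}'
# Inside the LXC: is anything listening on 80/443?
ss -tlnp | grep -E ':(80|443)\b'
# From another LAN machine: does the LXC answer directly?
curl -I http://lxc-ip:80

If this works on the LAN but fails from outside, the router's port forwarding may still point at the old machine's internal IP rather than at the Proxmox LXC.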

How do I link the sample Logstash Docker container to the sample Elasticsearch cluster from Elastic's website?

I was trying to do a quick bootstrap to see some sample data in Elasticsearch.
Here is where you use Docker Compose to get an ES cluster:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Next I needed to get Logstash in place. I did that with: https://www.elastic.co/guide/en/logstash/current/docker-config.html
When I curl my host with curl localhost:9200, it gives me the sample connection string, so I can tell Elasticsearch is exposed. But when I run the Logstash Docker image from above, I noticed that during bootstrap it can't connect to localhost:9200.
I was thinking that the private network created for Elasticsearch is fine for the cluster and that I didn't need to add Logstash to it. Do I have to do something different to get the default Logstash to talk to the default Elasticsearch?
I have been stuck on this for a while. My host system is Debian 9. I am trying to think of what the issues might be. I know that -p 9200:9200 would couple the ports together, but 9200 has already been claimed by Elasticsearch, so I'm not sure how I should be handling things. I didn't see anything on the website that says "to link the out-of-the-box Logstash to the out-of-the-box Elasticsearch you need to do X, Y, Z".
When I attempt to get a terminal into the Logstash container with -it, it just keeps bootstrapping Logstash and never gives me a shell to see what is going on from the inside.
What recommendations do you have?
Add --link your_elasticsearch_container_id:elasticsearch to the docker run command of Logstash. The Elasticsearch container will then be visible to Logstash under http://elasticsearch:9200, assuming you don't have TLS and the default port is used (which will be the case if you follow the docs you refer to).
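As a sketch, assuming the first node from Elastic's compose example is named es01 (check with docker ps) and using a placeholder version tag:

docker run --rm -it \
  --link es01:elasticsearch \
  docker.elastic.co/logstash/logstash:7.16.2

Note that legacy --link assumes both containers share a network; if the cluster was started with docker-compose, it may be simpler to attach Logstash to the compose-created network instead (docker run --network <compose network> ...), where containers reach each other by name anyway. Either way, your pipeline's elasticsearch output should point at http://elasticsearch:9200 (or http://es01:9200) rather than localhost:9200: inside the Logstash container, localhost refers to the container itself, not to the Docker host.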
If you need filebeat or kibana in the next step, see this question I answered recently: https://stackoverflow.com/a/60122043/7330758

Traefik causing very slow LAN speeds and router crash

I've recently been trying to migrate my home server to a Docker microservice-style setup. I installed a fresh Ubuntu Server 18.04 and set up a Traefik container and a Nextcloud container, but am experiencing a peculiar issue.
When I access Nextcloud over the internet it works OK. On the LAN, however, I connect to the website, attempt to download a file, and the download is extremely slow for a few seconds before making my router reboot itself. I have tried a Jellyfin container as well and the behavior is the same, so it's not an issue with Nextcloud. I have tried exposing the service containers' ports directly, which resolves the issue, so the problem is most probably with Traefik.
Here's my traefik.toml, docker-compose.yml, and Traefik container configuration.
I'd greatly appreciate any help, as I would like to use Traefik as a reverse proxy rather than expose any ports directly. :-)

Got AuthorizedOnly when pulling images behind corporate proxy

I've been trying to get Docker working behind a corporate proxy, following the documentation here:
https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
Basically adding:
[Service]
Environment="HTTP_PROXY=http://[username]:[password]@127.0.0.1:3128/"
under
/etc/systemd/system/docker.service.d/http-proxy.conf
Then I reloaded systemd and restarted Docker.
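For reference, the reload-and-restart step from the linked doc is:

sudo systemctl daemon-reload
sudo systemctl restart docker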
But when running docker pull hello-world or sudo docker pull hello-world, I got this error:
centos7 ~]$ docker pull hello-world
Using default tag: latest
Trying to pull repository docker.io/library/hello-world ...
Pulling repository docker.io/library/hello-world
Error while pulling image: Get https://index.docker.io/v1/repositories/library/hello-world/images: AuthorizedOnly
I looked around the web, but couldn't find any "AuthorizedOnly" error reported before.
docker -v
Docker version 1.12.6, build 3e8e77d/1.12.6
Any hints/help appreciated.
Found the issue: it wasn't a problem with the Docker proxy configuration. It was the proxy itself blocking hub.docker.com.
To resolve this particular problem, I used a different proxy with fewer restrictions.
Thanks all!
Double-check your enterprise proxy URL.
Usually, an enterprise proxy does not reside on localhost (127.0.0.1), but on a specific IP address.
Usually, HTTPS_PROXY needs to be set as well (to the same HTTP URL).
Usually, NO_PROXY needs to be set, at least to localhost, to avoid contacting the proxy for every remote query.
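Putting those three points together, a fuller http-proxy.conf might look like this (proxy.example.com:3128 is a placeholder for the actual proxy address):

[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128/"
Environment="HTTPS_PROXY=http://proxy.example.com:3128/"
Environment="NO_PROXY=localhost,127.0.0.1"

Remember to run systemctl daemon-reload and restart Docker after editing it.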

"java.net.NoRouteToHostException: No route to host" between two Docker Containers

Note: this question is related to Bluemix Docker support.
I am trying to connect two different Docker containers deployed in Bluemix. I am getting the exception:
java.net.NoRouteToHostException: No route to host
when I attempt the connection (a Java EE app running on Liberty trying to access MySQL). I tried using both the private and public IPs of the MySQL Docker container.
The point is that I am able to access the MySQL Docker container from outside Bluemix, so the IP, port, and MySQL itself are OK.
It seems to be something related to the internal networking of the Docker container support within Bluemix. If I try to access from inside Bluemix it fails; if I do it from outside, it works. Any help?
UPDATE: I continued investigating, as you can see in the comments, and it seems to be a timing issue: once the containers are up and running, there is apparently some connectivity work still undone. If I wait around one minute before trying the connection, it works.
60 seconds seems to be the rule of thumb for the networking to start working after container creation.
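Rather than hard-coding a one-minute sleep, a startup script can poll the MySQL port until the networking is actually ready (a sketch; mysql-container-ip is a placeholder):

# Wait until the MySQL port accepts TCP connections before starting the app
until nc -z mysql-container-ip 3306; do
  echo "waiting for MySQL networking..."
  sleep 5
done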
