Elasticsearch starts but is unreachable - Docker

I'm just starting to work with Elasticsearch and followed this article here.
The article is quite straightforward and explains how to start Elasticsearch on Docker. To test, I try to do
curl -X GET "localhost:9200/_cat/nodes?v=true&pretty"
Also, tried to just browse to http://localhost:9200 in a browser.
Neither returns a response; they just hang for quite a while and then nothing.
I also ran a Logstash container pointed at http://localhost:9200, and it exits because it doesn't find Elasticsearch.
I've also tried both options in the article, i.e. single node and cluster; neither seems to work.
I'm new to Elasticsearch and not a Docker expert either.
Please let me know if anyone has any idea of what's going on.
Thanks,

This had nothing to do with Docker or Elasticsearch, it turns out. I was running on a proxy network, and that was making the default URLs unreachable. They became available after I disabled the proxy.
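For anyone who lands here with the same symptoms: a quick way to check whether a proxy is intercepting the request is to look for proxy environment variables and then retry with the proxy bypassed for that one call (the --noproxy flag tells curl to ignore any configured proxy):

env | grep -i proxy
curl --noproxy '*' -X GET "localhost:9200/_cat/nodes?v=true&pretty"

If the second command returns the node list, the proxy is the culprit; exempting localhost in your proxy configuration (for shell tools, something like export NO_PROXY=localhost,127.0.0.1), or disabling the proxy entirely as above, should get the browser and Logstash working too.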

Related

Why is Drupal for Docker returning an empty reply from server?

I am somewhat new to Docker. I'm trying to get it set up on my machine, but I can't seem to connect from the host.
My run command
docker run -p 8080:80 drupal:9.1-php7.4-fpm-alpine3.13
Expected result
Based on the image documentation, I would expect to see some kind of default Drupal page on 8080.
Actual result
$ curl http://localhost:8080
curl: (52) Empty reply from server
In Firefox this renders as, "The connection was reset."
What I've tried
There are other questions that have similar symptoms, but the solutions don't seem to work for me.
One common suggestion is to curl a different IP such as 0.0.0.0:8080. I'm a little skeptical because that conflicts with the image-specific instructions above, but I tried it and didn't find evidence that anything is listening there. Also, when the container isn't running, I'm not able to connect to that URL at all, which is slightly different from not getting a response, so I think I'm on the right track with http://localhost:8080/.
The other common suggestion is to make sure I'm binding a port outside the container, but in my case it's right there as -p 8080:80.
Always double-check that image tag, kids!
There are a ton of variants of the official Docker image, and I accidentally pulled the wrong one. You'll notice the image tag includes "fpm". I meant to pull the Apache variant. When I run an Apache version of the image, it works out of the box. Facepalm.
That'll do it!
I am leaving this here as a monument to my shame.
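For anyone who copies that run command without reading the tag closely: the -fpm variants run only PHP-FPM (listening on port 9000) and expect a separate web server in front of them, while the Apache variants serve HTTP on port 80 directly. Something along these lines responds straight away (the exact tag is just an illustration; pick whichever Apache tag matches your Drupal version):

docker run -p 8080:80 drupal:9.1-apache
curl http://localhost:8080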

How to expose Docker and/or Kubernetes ports on DigitalOcean

First off, I want to say I am in no way inexperienced; I am a professional, and I have been Googling this issue for a week. I've followed tutorials and have also largely found threads on this site that tell people they're asking for free labor and that the answer is on Google. The answer is not on Google, so please bear with me. I have been working on my "homework," as people like to say here, and I am missing something significant.
My use case: I want to run code-server and JupyterLab as browser-accessible services on a DigitalOcean droplet OR a Kubernetes cluster. I would like to do this in a way that allows as much of my hosting budget as possible to go toward processing (I write Python machine learning/natural language code). My ideal setup is a subdomain, with SSL (LetsEncrypt is fine), for code-server and another for JupyterLab. Ideally they can access the same storage, but that's a secondary concern for the moment. I'd be okay with not having a domain and just passing traffic through OpenVPN to an IP and ports, but code-server just won't run fully featured without SSL.
The actual problem: on nearly every attempt to implement this, I have found that I cannot access ports. On a good attempt, I manage to get one service (often something like Python's http.server) where going to my domain or IP/port gets me anything other than an instant "connection refused". I've checked firewall settings (I don't use DigitalOcean's, and I have consistently opened the ports that my native services and/or Docker containers are listening on or being forwarded to). The best I pulled off was with Kubernetes, following this tutorial: I got code-server and two example sites running in separate subdomains (pointed using a load balancer, and yes, I have a fully registered domain on DO's name servers).
There was a problem, however: I couldn't get LetsEncrypt to issue a certificate on Kubernetes, and I didn't know how to get it into the container for code-server.
That gets me to my next problem, which is relevant because I'm not sure this is entirely a Kubernetes problem: I have not successfully exposed a port on any Linux distro in the past four years. I used to administer multiple sites on a single Linode, from 2012-16 or so, and it was no problem (although probably quite insecure), but now I'm not even able to expose ports on IP addresses. Something in how cloud providers handle things has changed. I know AWS, GCloud, etc. isolate their VMs on private networks, but that's not what DO, Linode, or Vultr do, and yet I can't so much as expose a port successfully, even if I follow port-exposing tutorials for the distro in question. I've literally used Rancher to launch a Docker container on a port, managed by the OS, verified that the port is exposed, and it just doesn't work. With Kubernetes, sometimes the load balancer helps. I was also able to get a full server up on FreeBSD, but too much of what I need to run depends on Docker and Node, which sadly haven't been ported well to that system.
I want to note that I've also searched StackOverflow and found other people with similar issues, but their questions were all closed and they were told to Google; Googling turns up DO tutorials and the closed StackOverflow threads. I should note I've also tried to do this on Google Cloud and Linode with similar results.
ALSO: I'm aware Docker containers are isolated by default from the OS network and have followed guidelines for deployment to make sure their OS-native ports are forwarded.
tl;dr: I'm having trouble exposing ports despite following OS procedures; I'm also not sure whether my personal development server, for just me to use, should be a Kubernetes cluster or a single server with a Docker deployment, and I don't know how to route ports to subdomains for the two apps I want to expose if I'm not using a Kubernetes load balancer. Please don't close this as somehow "too broad" when it's an incredibly narrow situation, other people have had it, and I've been doing my research for a week.
You can find how to set this up here:
https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/#ssl-certificates
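For what it's worth, the rough shape of what that doc describes is a Service of type LoadBalancer with DigitalOcean's TLS annotations on it; sketched with kubectl below (the deployment name, ports, and certificate ID are placeholders you'd substitute for your own):

kubectl expose deployment code-server --name=code-server-lb --type=LoadBalancer --port=443 --target-port=8080
kubectl annotate service code-server-lb service.beta.kubernetes.io/do-loadbalancer-protocol=https
kubectl annotate service code-server-lb service.beta.kubernetes.io/do-loadbalancer-certificate-id=<your-certificate-id>

The certificate ID comes from a certificate created in the DigitalOcean control panel (LetsEncrypt certificates can be issued there for domains on DO's name servers), which sidesteps having to run cert-manager inside the cluster or get the certificate into the code-server container at all.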

No response from Docker service

I tried following the tutorial here
https://docs.docker.com/get-started/part3/.
The first issue I ran into was when I called docker swarm init: it asked me to rerun it as docker swarm init --advertise-addr with one of two possible IPv6 addresses.
I tried initializing the swarm on both and then starting the service. The service starts successfully, but I can't get any response when accessing localhost:4000. It just loads forever.
I have tried rebuilding the image, creating the swarm on both IPs, and checking the logs (there was nothing there), but I have kind of run out of ideas. If it helps, the computer dual-boots two operating systems, which might affect the networking in ways I am unable to figure out.
How can I receive a response on my request?
The issue I was facing was an interaction between Google Chrome and Docker swarm mode, documented better here:
https://forums.docker.com/t/google-chrome-and-localhost-in-swarm-mode/32229/9.
There is no apparent solution.
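A workaround worth trying (reported for similar symptoms, though the thread above ends without a definitive fix) is to skip the localhost name entirely and hit the published port over plain IPv4, since the hang seems tied to how the browser resolves localhost versus where swarm mode publishes the port:

curl http://127.0.0.1:4000

or browsing to http://127.0.0.1:4000 directly instead of localhost:4000.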

logging nginx events from a docker container managed by kubernetes

Currently, to my understanding, Kubernetes offers no logging solution on its own, and it also does not allow one to specify the logging driver when using Docker as the container runtime, due to scope-encapsulation concerns.
This leaves folks with the ugly solution of tailing JSON logs from shared volumes using fluentd, filebeat, or some other file-tailing daemon, parsing them, and then sending them to the desired storage backend.
My question is: is there any repo or public knowledge base of configs for this type of scenario, from people who have gone through this before? My use case involves tailing the logs of an nginx Docker image, and writing the fluentd/grok pattern myself seems really painful; plus, I wouldn't want to struggle with an issue already solved by someone else.
Thanks
We tried LogDNA and the integration with k8s is pretty solid. Most of the time I just tail the log of some container using kubectl logs -f [POD_NAME]. I'm guessing you're looking for a persistent approach.
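For ad-hoc inspection (as opposed to shipping logs to a backend), the built-in commands cover a lot; a few example invocations, with placeholder names for the pod, deployment, and label:

kubectl logs -f <nginx-pod-name>
kubectl logs -f deployment/<nginx-deployment> --tail=100
kubectl logs -l app=nginx --all-containers=true

For the persistent setup the question is really about, the usual pattern is a fluentd or filebeat DaemonSet that tails /var/log/containers on each node; the nginx-specific part then comes down to reusing an existing access-log parsing config rather than writing the grok pattern from scratch.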

docker fails in pushing local image to repository

I am just learning Docker (I'm on Windows 7 with Docker Toolbox installed), and when I tried to use the push command to push a local image to a repository, it kept pushing for a long time without any prompts or error messages, so I had to use Ctrl+C to stop it. I tried many times but got the same result.
The screenshot is as follows:
I am not sure what is wrong with it. Maybe it's because I am currently in China and it's due to the firewall?
I'm glad you pointed out that you're in China! Yes, this is very likely a Great Firewall issue.
docker push goes to docker.io, as you can see, which resolves to the IP address 34.234.103.99.
A WHOIS lookup of that address shows it belongs to Amazon Web Services (AWS), which the Great Firewall blocks. After a cursory search, it looks like you're not the first to hit this either.
I'd recommend setting up a VPN or proxy in order to bypass this.
You can also try the Docker registry mirror that is hosted in China; see
https://docs.docker.com/registry/recipes/mirror/#use-case-the-china-registry-mirror
https://www.docker-cn.com/registry-mirror (Chinese)
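On a Linux host, pointing the daemon at a mirror is a one-file change; a sketch below, using the docker-cn mirror from the second link (on Windows 7 with Docker Toolbox the daemon runs inside the boot2docker VM, so the setting has to be applied there instead):

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
sudo systemctl restart docker

Note that a registry mirror only helps with pulls; pushes still go to the upstream registry, so for docker push specifically you will most likely still need the VPN/proxy route (or push to a registry hosted inside China).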
