Expose a random port - docker

I'm using Docker to deploy software that uses a random RTP port. How can I EXPOSE a large range of ports? Can I possibly expose all ports of the Docker instance?
I haven't been able to find a way to do this in the Docker documentation.

This is currently (Docker 1.0.1) not possible.
Other people have expressed strong interest in being able to expose and publish port ranges, and the Docker team is okay with that, see here.
Some code has even been proposed (see pull request here).
I expect it will be implemented soon; maybe get in touch with them to find out whether someone plans to work on that again.
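For readers on a newer Docker release: port-range publishing did eventually land, so something along these lines works there (the image name and port numbers are placeholders):
# Publish a contiguous block of UDP ports in one flag
# (this range syntax is not in 1.0.1; it arrived in later releases):
docker run -d -p 10000-10100:10000-10100/udp my-rtp-app

# Alternatively, -P publishes every port the image EXPOSEs
# to random high ports on the host:
docker run -d -P my-rtp-app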

Related

How to ask Kafka about its status - number of consumers, etc.? Particularly when it's running in a Docker container?

I am looking at how to get information on the number of consumers from a Kafka server running in a Docker container.
But I'll also take almost any info that points me in a direction that is forward movement. I've been trying with Python and URI requests, but I'm getting the feeling I need to go back to Java to ask Kafka questions about its status.
In the threads I've seen so far, many handy scripts from $KAFKA_HOME are referenced, but the systems I have access to do not have $KAFKA_HOME defined - nor do they have the contents of that directory. My world is a Docker container without CLI access, so I haven't been able to apply the solutions requiring shell scripts or other tools from $KAFKA_HOME to my running system.
One of the things I have tried is a Python script using requests.get(uri...), where the URI looks like:
http://localhost:9092/connectors/
The code looks like:
r = requests.get("http://%s:%s/connectors" % (config.parameters['kafkaServerIPAddress'],config.parameters['kafkaServerPort']))
currentConnectors=r.json()
So far I get a "nobody's home at that address" response.
I'm really stuck, and a pointer to something akin to a "Beginner's Guide to Getting Kafka Monitoring Information" would be great. Also, if there's a way to grab the helpful Kafka shell scripts & tools, that would be great too - where do they come from?
One last thing - I'm new enough to Kafka that I don't know what I don't know.
Thanks.
running in a Docker container
That shouldn't matter, but Confluent maintains a few pages that go over how to configure the containers for monitoring
https://docs.confluent.io/platform/current/installation/docker/operations/monitoring.html
https://docs.confluent.io/platform/current/kafka/monitoring.html
number of consumers
Such a metric doesn't exist as a single broker statistic; the closest thing is inspecting consumer-group membership, as shown in the examples below.
Python and URI requests
You appear to be using the /connectors endpoint of the Kafka Connect REST API (which runs on port 8083, not 9092). It is not a monitoring endpoint for brokers or for consumers outside the Connect API.
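If you want to stay in Python, the broker-side view of consumer groups comes from an admin client rather than HTTP. A minimal sketch, assuming the third-party kafka-python package and a broker reachable at localhost:9092 (attribute names follow that library's admin API):
# Sketch using the third-party kafka-python package (an assumption; any
# client library with an admin API would work). The broker speaks the Kafka
# binary protocol on 9092, not HTTP, which is why requests.get() finds
# nobody home there.
from kafka.admin import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

# List the consumer groups the cluster knows about ...
groups = admin.list_consumer_groups()        # [(group_id, protocol_type), ...]
print(groups)

# ... and describe them to see how many members (consumers) each one has.
for info in admin.describe_consumer_groups([g[0] for g in groups]):
    print(info.group, "->", len(info.members), "members")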
way to grab the helpful kafka shell scripts & tools
https://kafka.apache.org/downloads > Binary downloads
You don't need container shell access, but you will need external network access, just as all clients outside of a container would.
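Once you've unpacked that download on any machine that can reach the broker, the bundled consumer-groups tool gives you group membership from the command line (the broker address and group name below are placeholders):
# List every consumer group known to the broker
bin/kafka-consumer-groups.sh --bootstrap-server broker-host:9092 --list

# Describe one group: its members (consumers), partition assignments and lag
bin/kafka-consumer-groups.sh --bootstrap-server broker-host:9092 \
    --describe --group my-consumer-group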

How to expose Docker and/or Kubernetes ports on DigitalOcean

First off, I want to say I am in no way inexperienced; I am a professional, and I have been Googling this issue for a week. I've followed tutorials and have also largely found threads on this site that tell people they're asking for free labor and that the answer is on Google. The answer is not on Google, so please bear with me. I have been working on my "homework," as people like to say here, and I am missing something significant.
My use case: I want to run code-server and JupyterLab as browser-accessible services on a DigitalOcean droplet OR Kubernetes cluster. I would like to do this in a way that allows as much of my budget for hosting as possible to be used for processing software (I write Python machine learning/natural language code). My ideal setup is that I have a subdomain, with SSL (LetsEncrypt is fine), for code-server and another for JupyterLab. Ideally they can access the same storage, but that's a secondary concern for the moment. I'd be okay with not having a domain and just passing traffic through OpenVPN to an IP and ports, but code-server just won't run full featured without SSL.
The actual problem: on nearly every attempt to implement this, I have found that I cannot access ports. On a good attempt, I manage to get one service (often something like Python's http.server) where going to my domain or IP/port gets me anything other than an instant "connection refused". I've checked firewall settings (I don't use DigitalOcean's, and I have consistently opened the ports that my native services and/or Docker containers are listening on or being forwarded to). The best I pulled off was using Kubernetes, following this tutorial: I got code-server and two example sites running on separate subdomains (pointed using a load balancer, and yes, I have a fully registered domain on DO's name servers).
There was a problem however: I couldn't get LetsEncrypt to issue a certificate on Kubernetes and I didn't know how to get it into the container for code-server.
That gets me to my next problem, which is relevant because I'm not sure this is entirely a Kubernetes problem: I have not successfully exposed a port in any Linux distro in the past four years. I used to administer multiple sites on a single Linode, from 2012-16 or so, and it was no problem (although probably quite insecure), but now I'm talking about not even being able to expose ports on IP addresses. Something in how cloud providers handle things has changed. I know AWS, GCloud, etc. isolate their VMs on private networks, but that's not what DO, Linode, or Vultr do, and yet I can't so much as expose a port successfully - even when I follow port-exposing tutorials for the distro in question. I've literally used Rancher to launch a Docker container on a port, managed by the OS, verified that the port is exposed, and it just doesn't work. With Kubernetes, SOMETIMES the load balancer helps here. I was also able to get a full server up on FreeBSD, but too much of what I need to run depends on Docker and Node, which sadly haven't been ported well to that system.
I want to note that I've also Googled StackOverflow and found other people with similar issues, but their questions were all closed and they were told to Google; Googling turns up DO tutorials and the closed StackOverflow threads. I should note I've also tried to do this on Google Cloud and Linode, with similar results.
ALSO: I'm aware Docker containers are isolated by default from the OS network and have followed guidelines for deployment to make sure their OS-native ports are forwarded.
tl;dr: I'm having trouble exposing ports despite following OS procedures; I'm also not sure whether my personal development server (just for my own use) should be a Kubernetes cluster or a single server with a Docker deployment, and I don't know how to route ports to subdomains for the two apps I want to expose if I'm not using a Kubernetes load balancer. Please don't close this as somehow "too broad" when it's an incredibly narrow situation, other people have had it, and I've been doing my research for a week.
You can find how to do it here:
https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/#ssl-certificates
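In outline, the linked page amounts to attaching a DigitalOcean-managed certificate to the LoadBalancer Service through annotations. A rough sketch follows, with the caveat that the annotation names and values are recalled from those docs and should be verified against the linked page; the certificate ID, service name and ports are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: code-server
  annotations:
    # Terminate HTTPS at the DigitalOcean load balancer using a
    # certificate managed in your DO account (the ID is a placeholder).
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
spec:
  type: LoadBalancer
  selector:
    app: code-server          # assumes your Deployment is labelled app=code-server
  ports:
    - name: https
      port: 443
      targetPort: 8080        # code-server listens on 8080 inside the pod by default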

How to forward host traffic to more than one container?

I have a Windows machine. I am running Ubuntu in VirtualBox on top of it. From Windows, I am sending certain information to Ubuntu over UDP on a specific port. I am running multiple Docker containers in Ubuntu. I want to forward this data from Ubuntu to all the containers. Could someone please suggest a method to achieve this?
I am answering my own question.
I have written a script in Python which listens on the specified port and re-broadcasts the data over the Docker network. Every container created on that network receives it.
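For future readers, a minimal sketch of such a relay (the port and the container hostnames are placeholders; it assumes the containers sit on a user-defined Docker network so their names resolve):
import socket

LISTEN_PORT = 5005                         # port the Windows host sends to
TARGETS = [("container-a", 5005),          # hypothetical container names on the
           ("container-b", 5005)]          # shared user-defined Docker network

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("0.0.0.0", LISTEN_PORT))   # accept the datagrams arriving in Ubuntu

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    data, _addr = recv_sock.recvfrom(65535)
    for target in TARGETS:
        send_sock.sendto(data, target)     # fan each datagram out to every container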
Despite your own answer, you could use nginx to achieve such behavior. There's no need to rewrite what is already implemented, but since your script works, I guess you will stick with your solution. Consider this answer mainly for future readers, therefore.

How to connect and encrypt traffic between Docker containers running on different servers?

I currently have six Docker containers that were started from a docker-compose file. Now I wish to move some of them to a remote machine and enable remote communication between them.
The problem now is that I also need to add a layer of security by encrypting their traffic.
This is for a production website and needs to be very stable, so I am unsure which protocols/approaches would be best for this scenario.
I have used port forwarding over SSH and know I could also gain some stability through autossh. But I am unsure whether there are other approaches that achieve the same goal while also taking stability and performance into account.
What protocols/approaches could help on this aim? How do they differ?
I would not recommend manually configuring Docker container connections across physical servers, because Docker already includes a solution for that called Docker Swarm. Follow this documentation to configure your containers to use a Docker swarm. I've done it and it's very cool!
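In outline (the addresses, tokens and names below are placeholders), swarm mode lets you join the two machines and create an overlay network whose traffic is encrypted on the wire:
# On the first server: initialise swarm mode.
docker swarm init --advertise-addr <MANAGER-IP>

# On the remote server: join using the token printed by the command above.
docker swarm join --token <TOKEN> <MANAGER-IP>:2377

# Create an overlay network with encryption of application traffic enabled.
docker network create --driver overlay --opt encrypted app-net

# Run the services on that network; traffic between nodes crosses the wire encrypted.
docker service create --name web --network app-net nginx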

Is it feasible to control Docker from inside a container?

I have experimented with packaging my site-deployment script in a Docker container. The idea is that my services will all be inside containers and that I then use a special management container to manage the other containers.
The idea is that my host machine should be as dumb as absolutely possible (currently I use CoreOS with the only state being a systemd config starting my management container).
The management container would be used as a push target for creating new containers based on the source code I send to it (using SSH, I think; at least, that is what I use now). The script also manages persistent data (database files, logs and so on) in a separate container and manages back-ups of it, so that I can tear down and rebuild everything without ever touching any data. To accomplish this, I forward the Docker Unix socket using the -v option when starting the management container.
Is this a good or a bad idea? Can I run into problems by doing this? I did not read anywhere that it is discouraged, but I also did not find a lot of examples of others doing this.
This is totally OK, and you're not the only one to do it :-)
Another example of use is to use the management container to handle authentication for the Docker REST API. It would accept connections on an EXPOSE'd TCP port, itself published with -p, and proxy requests to the UNIX socket.
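For concreteness, the socket forwarding the question describes typically looks like this (the image and container names are placeholders):
# Mount the host's Docker socket into the management container so that the
# Docker CLI or API client inside it talks to the host's daemon.
docker run -d \
  --name manager \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-management-image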
As this question is still of relevance today, I want to answer with a bit more detail:
It is possible to work with this setup, where you pass the Docker socket into a running container. This is done by many solutions and works well. BUT you have to think about the problems that come with it:
If you want to use the socket, you have to be root inside the container. This allows the execution of any command through the Docker daemon, so for example, if an intruder controls this container, he controls all other Docker containers.
If you expose the socket on a TCP port as suggested by jpetzzo, you will have the same problem, only worse, because now an attacker doesn't even have to compromise the container, just the network. If you filter the connections (as suggested in his comment), the first problem remains.
TLDR;
You could do this and it will work, but then you have to think about security for a bit.
