I am creating a SaaS project that lets end-users run and access a dashboard as a web app. Each user's dashboard is its own running Docker container, and I would like users to access their servers/containers through my domain (HTTP) as follows: user1: subdomain.mydomain.com/user1app, user2: subdomain.mydomain.com/user2app, etc.
Currently I am using LocalTunnel, but it is not stable and requires one subdomain per user, for example: user1.mydomain.com, user2.mydomain.com, etc.
But what if we scale and get more users? I need a dynamic, automatic way to create each user's custom URL and expose their running Docker container, such as subdomain.mydomain.com/user123, subdomain.mydomain.com/user456, etc.
I tried ngrok, but it is limited in many ways, e.g. a 40-requests-per-minute cap, and it is not free.
Thanks
I recommend running an nginx reverse proxy with Docker; then you can host as many web apps as you want behind one single IP.
https://github.com/nginx-proxy/nginx-proxy
There you can add per-host configs or even change the default configuration, so you can either write a config for each dashboard by hand or use variables and a bash script to generate the host configs.
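As a minimal sketch of how that can look, assuming the proxy listens on port 80 and each dashboard runs from a hypothetical image called dashboard-image: nginx-proxy watches the Docker socket and generates its routing config from environment variables on your containers, so onboarding a new user is just another docker run. Note that VIRTUAL_PATH (for path-based routing like subdomain.mydomain.com/user123) is only available in recent nginx-proxy releases, so check your version.
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro nginxproxy/nginx-proxy
# host-based routing: one subdomain per user
$ docker run -d -e VIRTUAL_HOST=user123.mydomain.com dashboard-image
# path-based routing on recent releases: one shared subdomain, one path per user
$ docker run -d -e VIRTUAL_HOST=subdomain.mydomain.com -e VIRTUAL_PATH=/user123 dashboard-image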
If you have multiple docker containers running locally, you can expose them to the internet using ngrok.
First, install ngrok:
$ npm install -g ngrok
Then, start ngrok and point it at the host port your Docker container is published on:
$ ngrok http 80
Ngrok will give you a URL that you can use to access your docker container from the internet.
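Since each ngrok tunnel forwards a single local port, with several containers mapped to different host ports you would start one tunnel per container, each getting its own public URL (the port numbers here are placeholders):
$ ngrok http 32771
$ ngrok http 32772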
I have a Python/Gunicorn web server that runs in Docker. I want to run multiple servers on the same machine, so I assigned a "random" port to each container like this:
$ docker run -d -p 0:80 image
If I run $ docker ps, I can see the port being used by the container:
0.0.0.0:32771->80/tcp
Now, I want to retrieve this number (32771) from within the container. Is there any way to do this?
Edit:
I want this information because I need to connect to these servers from another machine, and the way it is implemented requires that the server sends its URL via HTTP POST: IP:Port/path
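One hedged way to read the mapped port from inside the container is to mount the Docker socket into it and query the Docker API for the container itself (the hostname inside a container defaults to its container ID). This assumes you are comfortable exposing the socket to the container and that curl and jq are available in the image:
$ docker run -d -p 0:80 -v /var/run/docker.sock:/var/run/docker.sock image
# then, from inside the container:
$ curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$HOSTNAME/json | jq -r '.NetworkSettings.Ports["80/tcp"][0].HostPort'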
I have a web application running in a Docker container on a production server. Now I need to make API requests to this application. So, I have two possibilities:
1) Link a domain
2) Make requests directly by IP
I'm using a cloud server for that. In my previous experience I linked the domain to a folder, but now I don't know how to link the domain to a running container at ip_addr:port.
I found this link
https://docs.docker.com/v17.12/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services/
but it's for Docker Enterprise, which is not an option for me at the moment.
To expose a Docker application to the public without using Compose or other orchestration tools like Kubernetes, you can use the docker run -p hostPort:containerPort option to publish your container's port. Make sure your application is listening on 0.0.0.0:[container port] inside the container. To access the service externally, use the host's IP and the host port that the container port has been mapped to.
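A minimal example, assuming a hypothetical image my-api whose app listens on port 8080 inside the container:
$ docker run -d -p 80:8080 my-api
$ curl http://<host-ip>/    # reaches the app on container port 8080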
If you want to link to a domain, you can update your DNS records to point your domain to your host IP address.
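For example, assuming a placeholder host IP of 203.0.113.10, you would add an A record for your domain and can verify it resolves before pointing clients at it:
# DNS zone entry: api.mydomain.com.  A  203.0.113.10
$ dig +short api.mydomain.com    # should print 203.0.113.10
$ curl http://api.mydomain.com/  # now reaches the container published on host port 80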
Hope this helps!
The best way is to use Kubernetes, because it eases many operations, but docker-compose can also be used.
If you simply want to deploy using Docker, it can be done by mapping a hostPort to a containerPort.
Question: How can I change a Prometheus container's host address from the default 0.0.0.0:9090 to something like 192.168.1.234:9090?
Background: I am trying to get a Prometheus container to install and start in a production environment on a remote server. Since the server uses an IP other than Prometheus's default (0.0.0.0), I need to update the host address that the Prometheus container uses. If I don't, I can't sign in to the UI or see any of the metrics. The IP of the remote server is provided by the user during the app's installation.
From what I understand from Prometheus's config document and the output of ./prometheus -h, the host address is immutable and therefore needs to be updated using the --web.listen-address= command-line flag. My problem is I don't know how to pass that flag to my Prometheus container; I can't simply run ./prometheus --web.listen-address="<remote-ip>:9090" because that's not a Docker command. And I can't pass it to the docker run ... command because Docker doesn't recognize that flag.
Environment:
Using SaltStack for config management
I cannot use Docker Swarm (i.e. each container must use its own Dockerfile)
You don't need to change the containerized Prometheus' listen address. Inside the container, 0.0.0.0 simply means "listen on all interfaces".
By default, it won't even be accessible from your host's network, let alone any surrounding networks (like the Internet).
You can map it to a port on one of the host's interfaces, though. The command for that looks somewhat like this:
docker run --rm -p 127.0.0.1:8080:9090 prom/prometheus
which would expose the service at 127.0.0.1:8080 on your host (a plain -p 8080:9090 would bind it on all host interfaces instead).
You can bind it to a public (e.g. internet-facing) interface as well, although I'd generally advise against exposing containers like this, due to numerous operational implications that are somewhat beyond the scope of this answer. You should at least consider a reverse-proxy setup, where users are only allowed to talk to a hardened web server which then communicates with Prometheus, instead of letting them access your backend directly, even if this is just a small development deployment.
For general considerations on productionizing container setups, I suggest this. Despite its clickbaity title, it is a useful read.
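As an aside, if you ever do need to set --web.listen-address: anything placed after the image name in docker run replaces the image's default arguments and is passed to its entrypoint, which for prom/prometheus is the Prometheus binary itself. Since the defaults are replaced, the config-file flag has to be restated:
$ docker run --rm -p 8080:9090 prom/prometheus --config.file=/etc/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090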
I'm trying to set up a microservice deployment (deployment file at https://github.com/mojlighetsministeriet/groups/blob/master/docker-compose.example.yml) with several services that will use HTTP (hopefully HTTPS later on) to communicate internally without being exposed outside the network. Later on I will add a proxy service that exposes specific features. I want to do this specifically with Docker swarm mode, and I like being able to define the deployment in a docker-compose.yml so I can initiate it with:
$ docker stack deploy my-platform -c docker-compose.example.yml
I want the API URLs internally to be like GET http://identity-provider/public-key and GET http://groups/b0c44674-58e0-4a8a-87e0-e1de35088964. I have done this with Kubernetes setups before and it works great, but now I want to get it working with Docker swarm mode.
The DNS part works without any problems, but Docker swarm mode won't allow me to have each service listening on port 80 (later 443). It keeps complaining about port conflicts even though each service has its own unique domain name, like identity-provider or groups and so on.
Should I use a specific network driver to get this working? I currently use overlay.
Using domain names without random ports would make calls between the services much simpler to remember than e.g. http://identity-provider:1234 and http://groups:1235; the ports only add complexity to the setup.
I'm fine with using any super cutting edge version of docker-ce if that helps somehow.
This should be possible right?
Docker Swarm routes incoming requests based on the published port, so you can't have two applications published on the same port number in a single swarm.
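Note that this restriction only applies to published (ingress) ports; services that talk to each other purely over a shared overlay network don't need to publish anything, and each one can listen on its own port 80 internally, reachable by service name. A rough sketch with hypothetical image names:
$ docker network create -d overlay my-platform-net
$ docker service create --name identity-provider --network my-platform-net identity-image
$ docker service create --name groups --network my-platform-net groups-image
# from any container attached to my-platform-net, with no ports published:
$ curl http://identity-provider/public-key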
I have a LAMP stack with a lot of added domain names, so many different websites are hosted on it. I would like to separate them into Docker containers, with every website/webapp and all its related assets in its own container. File access is solved with the --volumes-from flag, but what about MySQL databases and VirtualHosts? How should I set them up in a per-container way?
For MySQL you could launch one container per website and link them together using the --link flag, or you could simply install MySQL as a server within the web container itself.
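A minimal sketch of the linking approach (the image name site1-image is a placeholder, and --link is a legacy feature, though it still works):
$ docker run -d --name site1-db -e MYSQL_ROOT_PASSWORD=secret mysql
$ docker run -d --name site1 --link site1-db:mysql site1-image
# inside site1, the database is now reachable at hostname "mysql" on port 3306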
You could also probably use docker-compose to orchestrate each stack as a whole.
As for virtual hosts, the following would probably meet your demands:
https://github.com/jwilder/nginx-proxy
You can use the already available MySQL image to start your DB and then connect your app to it through linking (the --link option when running your app); you can find more info in the link.
For your virtual hosts you can use nginx as a proxy, and it will route to your apps depending on your criteria (e.g. /admin is routed to app1 at 192.197.0.12).
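A rough sketch of such a path-based rule, written to an nginx vhost file from the shell (the upstream port 8080 is a placeholder):
$ cat > /etc/nginx/conf.d/apps.conf <<'EOF'
server {
    listen 80;
    # route /admin to app1's container
    location /admin {
        proxy_pass http://192.197.0.12:8080;
    }
}
EOF
$ nginx -s reload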
You can expose the MySQL port in the Dockerfile using the EXPOSE instruction and then bind your service so that MySQL-related queries go to that port.