I need to set up a server that does multiple things:
Hosts a Grafana instance on Docker (3000 is the default port)
Hosts a Flask service for printing Grafana reports (the default Grafana printing sucks, so I built a Selenium robot that grabs the objects on screen, creates a PDF, and downloads the result)
Hosts a Docker app built with Wappler (a PHP-based app builder)
I'd like to use free certs (Let's Encrypt).
I'm new to docker and new to linux server administration. What's the best resource for learning how to set this up?
It's super easy to set up reverse proxies using the LinuxServer LetsEncrypt container (it's an Nginx container that auto-manages free certs). The initial setup might seem a little intimidating if you're completely new to Docker, but it's easier than it looks, and once you get the hang of it, it's cake.
Other than that, you just need to make sure all 3 are on the same Docker network so they can talk to each other, and (if you want) also expose their ports to the host in your docker run command or docker-compose file.
e.g. (pseudo-code):
docker run -d --name grafana -p 3000:3000 grafana/grafana
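To extend that pseudo-code to all three services on a shared network, a rough sketch might look like this (only the Grafana image name is real; the Flask printer and Wappler app image names are placeholders for whatever you build, and the ports are examples):

# create a shared network once
docker network create web

# Grafana (official image)
docker run -d --name grafana --network web -p 3000:3000 grafana/grafana

# your Flask report-printer image (placeholder name and port)
docker run -d --name report-printer --network web -p 5000:5000 my-flask-printer

# your Wappler/PHP app image (placeholder name and port)
docker run -d --name wappler-app --network web -p 8080:80 my-wappler-app

The reverse proxy container then just needs to join the same "web" network to reach all three by container name.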
For anyone who ends up finding this via a search, I ended up using Traefik for routing/SSL setup. The best article I found on how to set this up is here.
(Note many articles reference Traefik 1.7, however, they changed a lot between 1.7 and version 2. The article above uses Traefik 2.0)
Basically, Traefik watches the other Docker containers on the same network; if a container has specific labels set in its Docker configuration, Traefik automatically generates Let's Encrypt SSL certs (see the docs) and routes traffic to that container.
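As a minimal sketch of what those labels look like in a docker-compose file (assuming Traefik 2.0; the domain and email address are placeholders):

version: "3"

services:
  traefik:
    image: traefik:v2.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      # Traefik watches the Docker engine through the socket (read-only)
      - /var/run/docker.sock:/var/run/docker.sock:ro

  grafana:
    image: grafana/grafana
    labels:
      - traefik.enable=true
      - traefik.http.routers.grafana.rule=Host(`grafana.example.com`)
      - traefik.http.routers.grafana.entrypoints=websecure
      - traefik.http.routers.grafana.tls.certresolver=le

Both services sit on the default compose network, so Traefik can see the Grafana container, read its labels, request a cert for the hostname in the rule, and route traffic to it.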
Related
I have a server application (that I cannot change) that, when you connect as a client, gives you other URLs to interact with. Those URLs are part of the same server, so the advertised URLs use the hostname of a Docker container.
We are running in a mixed economy (some Docker containers, some regular applications). We actually need a setup where the server runs as a Docker application on a single VM and is accessed by non-Docker clients (as well as Docker clients not running on the same Docker network).
So you have a server hostname (the docker container) and a docker hostname (the hostname of the VM running docker).
The client's initial connection is to dockerhostname:1234, but when the server sends URLs to the client, it sends serverhostname:5678 ... which is not resolvable by the client. So far we've addressed this by adding "serverhostname" to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server's Docker container to the same name as the Docker host, and that has mostly worked, but I've seen cases where a Docker container running on the same Docker network as the server had issues connecting to it.
I realize this is not an ideal Docker setup. We're migrating from a history of delivering RPMs to delivering containers, but it's a slow process. Our company has lots of applications.
I'm really curious if anyone has advice/lessons learned with this situation. What is the best solution to my URL problem? (I'm guessing it's the /etc/hosts approach we're already using.)
You can do port mapping: -p 8080:80
How do you build and run your container?
With a shell command, a Dockerfile, or a YAML file?
Check this:
docker port <container>
Call this and it will work:
[SERVER IP]:[PORT FROM DOCKER HOST]
To work with hostnames you need DNS or a hosts file.
The hosts file solution is not a good idea; it's how the internet started out back in the day ^^
If something changes, you have to update the hosts file on every client!
Or use a static IP for your container:
docker network ls                                            # list existing networks
docker network create my-network                             # create a user-defined network
docker network create --subnet=172.18.0.0/16 mynet123        # or create one with a fixed subnet
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash   # attach a container with a static IP
See: Assign static IP to Docker container
You're describing a situation that requires a ton of work. The shortest path to success is your "add things to the /etc/hosts file" process. You can use configuration management, like Ansible/Chef/Puppet, so you only have to update one location and distribute it out.
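As a sketch of the configuration-management route, an Ansible task like the one below would let you edit one playbook and push the hosts entry to every client (the inventory group name, IP, and hostname here are made-up examples):

- hosts: clients                # example inventory group of client machines
  become: true
  tasks:
    - name: Keep the advertised server hostname resolvable on every client
      ansible.builtin.blockinfile:
        path: /etc/hosts
        marker: "# {mark} ANSIBLE MANAGED - docker server entry"
        block: |
          192.0.2.10 serverhostname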
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this. You need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.
Now, implementing that database is the hardest part; there are a bunch of different ways, and you'll need to spend some time with your team to figure out which is best.
Normally, running an internal DNS server like dnsmasq or BIND should get you most of the way, but if you need something like Consul, that's a whole other conversation. There are a lot of options, and the best thing to do is research and audit what you actually need for your situation.
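For the "lazy mode is DNS" option, a dnsmasq config can be surprisingly small; a sketch (the IP and hostname are examples), with clients pointing their resolver at the box running dnsmasq:

# /etc/dnsmasq.conf (fragment)
# answer queries for the advertised container hostname with the VM's address
address=/serverhostname/192.0.2.10

# optionally keep additional names in a separate hosts-format file
addn-hosts=/etc/dnsmasq.d/docker-hosts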
I have a docker-compose setup, where an nginx container is being used as a reverse-proxy and load balancer for the rest of the containers that make up my application.
I can spin up the application using docker-compose up -d and everything works great. Then, I can scale up one of my services using docker-compose up -d --scale auth=3, and everything continues to work fine.
The only issue is that nginx is not yet aware of the two new instances, so I need to manually reload the nginx configuration inside the running container using docker exec revproxy nginx -s reload, "revproxy" being the name of the nginx container.
That's fine and dandy, I don't mind running an extra command when I decide to scale out one of my services. The real issue though is when there is a container failure somewhere... nginx needs to know as soon as this happens to stop sending traffic to the failed instance until the Docker engine is able to replace it with a healthy one.
With all that said, essentially I would like to accomplish what they are doing in the Traefik quickstart tutorial, except I would like to stick with nginx as my reverse-proxy.
While I personally think Traefik would be a real time saver in your case, there is another project which does what you want with nginx: jwilder/nginx-proxy.
It works by listening to Docker engine events; when containers are added or removed, it updates an nginx config using a template.
You could either use the jwilder/nginx-proxy Docker image as it is, or make your own flavor using the jwilder/docker-gen project, which is the part that produces a file from a template and Docker engine events.
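A minimal sketch of how jwilder/nginx-proxy is typically wired up (the backend image and hostname are just examples): the proxy watches the Docker socket, and each backend announces itself through the VIRTUAL_HOST environment variable.

version: "3"

services:
  revproxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # read-only access to Docker events so the config regenerates automatically
      - /var/run/docker.sock:/tmp/docker.sock:ro

  auth:
    image: jwilder/whoami              # stand-in for your real service
    environment:
      - VIRTUAL_HOST=auth.example.local

Because the backend publishes no host ports, docker-compose up -d --scale auth=3 should be picked up by the proxy on its own, with no manual reload.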
But again, I would recommend Traefik for the time and trouble saved and for all the features that come with it (different load-balancing strategies, health checks, circuit breakers, automatic SSL certificate setup with ACME/Let's Encrypt, ...).
You just need to write a service discovery script that looks for an updated list of containers every X interval and updates the nginx config accordingly.
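If you do roll your own, a sketch of that script could react to Docker events instead of polling on an interval (the "revproxy" container name matches the question above; everything else is an assumption):

#!/bin/sh
# Reload nginx whenever a container starts, dies, or stops,
# so its upstream lists stay in sync with the running containers.
docker events \
  --filter 'event=start' --filter 'event=die' --filter 'event=stop' \
  --format '{{.Actor.Attributes.name}} {{.Action}}' |
while read -r name action; do
  echo "container ${name}: ${action} -> reloading nginx"
  docker exec revproxy nginx -s reload
done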
I'm doing part 3 of the Docker tutorial. Because my computer runs Windows, I use Docker Toolbox. Before part 3, I used the command docker run -p 8080:80 test, and I could connect to 192.168.99.100:8080, so that was successful.
Then I created a swarm and deployed the docker-compose.yml, and the deployment itself reported success:
ID NAME MODE REPLICAS IMAGE PORTS
uskmy4zkflhf testswarm_web replicated 5/5 ***/get-started:test *:6666->80/tcp
However, when I used 192.168.99.100:6666 to connect, the page could not be displayed, even though ping showed that 192.168.99.100 was reachable.
Even when I uninstalled the Toolbox, reinstalled it, and deployed only once (so the port is published only once and no other container occupies it), it still didn't work.
What's the problem with that?
The port publishing mechanism works differently in standalone mode and in swarm mode. If you're using a compose file in swarm mode, you should not be using docker-compose up but docker stack deploy instead.
I would suggest taking it step by step: instead of using the stack deploy or Compose approach, first learn to use the docker service create command, and take it one service at a time.
Try docker service create --name proxy --publish 8080:80 nginx and see if you can reach NGINX at 192.168.99.100:8080. Once you're there, try scaling it with docker service update --replicas=5 proxy.
Once you feel comfortable with this, you should be able to tell what's going on with more precision.
If you want to delve deeper into how port publishing works in swarm mode, I suggest this docs article.
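Once the single-service commands behave as expected, the same setup as a stack is only a few lines. A minimal sketch matching the ports above (using the stock nginx image as a stand-in for the tutorial image):

# docker-compose.yml (deploy with: docker stack deploy -c docker-compose.yml testswarm)
version: "3"

services:
  web:
    image: nginx
    deploy:
      replicas: 5
    ports:
      - "6666:80"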
I'm trying to launch a Docker container that is running a Tornado app in Python 3.
It serves a few API calls and writes data to a RethinkDB service on the system. RethinkDB does not run inside a container.
The system it runs on is ubuntu 16.04.
Whenever I tried to launch the container with docker-compose, it would crash, saying the connection to localhost:28015 was refused.
I went researching the problem and realized that Docker has its own network and that external connections must be configured before launching the container.
I used this command from a question I found to make it work:
docker run -it --name "$container_name" -d -h "$host_name" -p 9080:9080 -p 1522:1522 "$image_name"
I've changed the container name, host name, ports and image name to fit my own application.
Now, the docker is not crashing, but I have two problems:
I can't reach it from a browser by pointing to https://localhost/login
I lose the docker-compose usage. This is problematic if we want to add more services that talk to each other in the future.
So, how do I launch a container that can talk to my RethinkDB database without putting that DB into a container?
Please, let me know if you need more information to answer this question.
I'd appreciate your guidance in this.
The end result is that the container will serve requests coming over HTTPS.
For example, I have an endpoint called /getURL.
The request includes a token verified in the DB. The URL is like this:
https://some-domain.com/getURL
After verification with the DB, it will send back the relevant response.
The container needs to be able to listen on 443 and also talk to the RethinkDB service on 28015.
(Since 443 and HTTPS involve certificates, I'd also appreciate a solution that handles this over plain HTTP on some random port, and I'll take it from there.)
Thanks!
P.S. The service works when I launch it without Docker from PyCharm; it's the Docker configuration I have problems with.
I found a solution.
I needed to add this so that the container can connect to the RethinkDB database running on the host:
--network="host"
Since this solution works for me right now but isn't the best one, I won't mark it as the answer for now.
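If you want to keep host networking but also keep docker-compose (the second problem above), the same flag can be expressed in the compose file. A rough sketch, with the image name as a placeholder:

version: "3"

services:
  api:
    image: my-tornado-app        # placeholder for your image
    network_mode: host           # container shares the host network, so localhost:28015 reaches the host's RethinkDB

Note that with network_mode: host the ports section is ignored; the app is reachable on whatever port it binds to on the host.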
I have a LAMP server with a lot of added domain names, so many different websites are hosted on it. I would like to separate them into Docker containers. Each website/webapp and all its related stuff should live in its own container. File access is solved with the --volumes-from flag, but what about MySQL databases and VirtualHosts? How should I set them up on a per-container basis?
For MySQL you could launch one instance for each container and then link them together using the --link flag. Or you could simply install the MySQL server within the application container itself.
You could also probably use docker-compose to orchestrate each site as a whole.
As for virtual hosts, the following would probably meet your needs:
https://github.com/jwilder/nginx-proxy
You can use the already available MySQL image to start your DB and then connect to it through linking (the --link option when running your app); you can find more info in the link.
For your virtual hosts you can use nginx as a proxy, and it will route to your apps depending on your criteria (e.g. /admin will be routed to app1-192.197.0.12).
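As a sketch of one site's stack (image names and credentials are placeholders), each site could get its own compose file with its app and database together, leaving routing to the proxy:

version: "3"

services:
  web:
    image: php:7-apache                    # or the site's own image
    volumes:
      - ./site:/var/www/html
    environment:
      - VIRTUAL_HOST=site1.example.com     # picked up by an nginx-proxy style front end
    depends_on:
      - db

  db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=changeme       # placeholder
      - MYSQL_DATABASE=site1
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data:

The proxy container would either need to share a network with each web service, or you'd publish each site's port and route to it by address as described above.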
You can expose the MySQL port in the Dockerfile with the EXPOSE command and then point your service at that port for its MySQL queries.