First off, I want to say I am in no way inexperienced; I am a professional, and I have been Googling this issue for a week. I've followed tutorials, and I've largely found threads on this site that tell people they're asking for free labor and that the answer is on Google. The answer is not on Google, so please bear with me. I have been doing my "homework," as people like to say here, and I am still missing something significant.
My use case: I want to run code-server and JupyterLab as browser-accessible services on a DigitalOcean droplet OR a Kubernetes cluster. I would like to do this in a way that leaves as much of my hosting budget as possible for the actual processing (I write Python machine learning/natural language code). My ideal setup is a subdomain, with SSL (LetsEncrypt is fine), for code-server and another for JupyterLab. Ideally they can access the same storage, but that's a secondary concern for the moment. I'd be okay with not having a domain and just passing traffic through OpenVPN to an IP and ports, but code-server just won't run full-featured without SSL.
The actual problem: on nearly every attempt to implement this, I have found that I cannot access ports. On a good attempt, I manage to get one service (often something like Python's http.server) where going to my domain or IP/port gets me anything other than an instant "connection refused". I've checked firewall settings (I don't use DigitalOcean's, and I have consistently opened the ports that my native services and/or Docker containers are listening on or being forwarded to). The best I pulled off was with Kubernetes, following this tutorial: I got code-server and two example sites running on separate subdomains (pointed at a load balancer, and yes, I have a fully registered domain on DO's name servers).
There was a problem, however: I couldn't get LetsEncrypt to issue a certificate on Kubernetes, and I didn't know how to get it into the code-server container.
That gets me to my next problem, which is relevant because I'm not sure this is entirely a Kubernetes problem: I have not successfully exposed a port on any Linux distro in the past four years. I used to administer multiple sites on a single Linode, from 2012-16 or so, and it was no problem, although probably quite insecure; now I'm talking about not even being able to expose ports on IP addresses. Something in how cloud providers handle things has changed. I know AWS, GCloud, etc. isolate their VMs on private networks, but that's not what DO, Linode, or Vultr do, and yet I can't so much as expose a port successfully, even if I follow port-exposing tutorials for the distro in question. I've literally used Rancher to launch a Docker container on a port, managed by the OS, verified that the port is exposed, and it just doesn't work. With Kubernetes, SOMETIMES the load balancer helps here. I also was able to get a full server up in FreeBSD, but too much of what I need to run depends on Docker and Node, which sadly haven't been ported well to that system.
I want to note that I've also searched Stack Overflow and found other people with similar issues, but their questions were all closed there and they were told to Google; Googling turns up DO tutorials and those closed Stack Overflow threads. I should note I've also tried to do this on Google Cloud and Linode with similar results.
ALSO: I'm aware Docker containers are isolated from the host network by default, and I have followed deployment guidelines to make sure their container ports are published to host ports.
tl;dr: I'm having trouble exposing ports despite following OS procedures; I'm also not sure whether my personal development server, for just me to use, should be a Kubernetes cluster or a single server with a Docker deployment; and I don't know how to route ports to subdomains for the two apps I want to expose if I'm not using a Kubernetes load balancer. Please don't close this as somehow "too broad" when it's an incredibly narrow situation, other people have had it, and I've been doing my research for a week.
You can find how to do it here:
https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/#ssl-certificates
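For reference, the approach on that page boils down to annotations on a Service of type LoadBalancer, with the certificate managed by DigitalOcean. A minimal sketch, assuming code-server pods labelled app: code-server and a certificate already created in DO (the certificate ID, labels, and ports here are placeholders, not a verified config):

apiVersion: v1
kind: Service
metadata:
  name: code-server
  annotations:
    # TLS is terminated at the DigitalOcean load balancer; plain HTTP goes to the pods.
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"   # placeholder
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
spec:
  type: LoadBalancer
  selector:
    app: code-server          # assumed pod label
  ports:
    - name: http
      port: 80
      targetPort: 8080        # code-server listens on 8080 by default
    - name: https
      port: 443
      targetPort: 8080

A second Service of the same shape, plus a DNS record for the other subdomain pointing at its load balancer, would cover JupyterLab.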
I used the official Zabbix docker-compose YAML to set up a Zabbix system, and I found that the server, as a monitoring target, was not available. I searched the Internet and found other people who had encountered this problem. Someone said the agent container's IP or DNS name should be used as the server's address. I tried that and found it works. But I'm confused by the agent. Does it monitor the server container, the agent container, or the host machine? If it only monitors the agent container itself, what's the purpose of it?
Does it monitor the server container, the agent container, or the host machine?
Agent container.
If it only monitors the agent container itself, what's the purpose of it?
For testing, and for monitoring external things with custom commands. You can also connect things from the host and monitor them, so basically for all the cases where you don't want to, or can't, install the agent on the host itself.
Everybody who configures a Dockerized Zabbix installation like yours bumps into this issue, and of course finds themselves on StackExchange looking for the answers that should have been in the documentation.
The reason that the Zabbix agent in the docker-compose install you're referring to can't initially connect is that both it and the server it monitors run in isolated containers. Separate containers cannot talk to each other on 127.0.0.1 (localhost) addresses. And that is actually a good thing!
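As a rough illustration (not the official compose file verbatim; image tags and names are placeholders), the key is that the agent and server address each other by their Compose service names rather than 127.0.0.1:

version: "3"
services:
  zabbix-server:
    image: zabbix/zabbix-server-pgsql:latest
    # ...database and other settings omitted...
  zabbix-agent:
    image: zabbix/zabbix-agent:latest
    environment:
      ZBX_HOSTNAME: "zabbix-agent"       # the name the server will know this agent by
      ZBX_SERVER_HOST: "zabbix-server"   # Compose DNS name of the server container

Correspondingly, in the Zabbix frontend the monitored host's agent interface should use the DNS name zabbix-agent (or the agent container's IP), not localhost.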
I've reviewed the documentation in the repo you're talking about and it's sparse, to say the least; it certainly could be better. But to be fair to Zabbix, their docker-compose install DOES work great once you get it running, and you can achieve pretty fair results quickly with little effort (and a bit of Googling ;-> ).
I actually found FURTHER pain connecting to containerized Zabbix Agents raised on different hosts outside of the docker-compose install you're referring to. Connectivity was being busted because the host the docker-compose install was raised on was NAT'ing out the traffic and presenting the wrong IP address. I've documented this issue HERE.
Dockerized Zabbix is a good thing; there is a purpose to it. I agree with you, though, that the documentation could be better. Stick with it!
I am posting this question due to lack of experience, and I need professional suggestions. The questions on SO are mainly about how to deploy or host multiple websites using Docker on a single web host. This can be done, but is it ideal for moderate-traffic websites?
I deploy Docker-based containers on my local machine for development. A software container has a copy of the primary application, as well as all of its dependencies: libraries, languages, frameworks, and everything else.
It becomes easy for me to simply migrate the docker-compose.yml or Dockerfile to any remote web server. All the software and dependencies get installed and run just like on my local machine.
Say I have a VPS and I want to host multiple websites using Docker. The only thing I need to configure is the ports, so that the domains can be mapped to port 80. For this I have to use an extra NGINX in front for routing.
But a VPS can be used to host multiple websites without containerisation. So, is there any special benefit to running Docker on web servers like AWS, Google, HostGator, etc., or is Docker only ideal for development on a local machine and not meant to be deployed on web servers for hosting?
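To make this concrete, here is roughly the setup I mean, sketched with the nginx-proxy image (the image names and domains are just placeholders):

version: "3"
services:
  proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"                                    # the only port published on the VPS
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets the proxy discover the site containers
  site-a:
    image: my/site-a            # hypothetical website image
    environment:
      VIRTUAL_HOST: site-a.example.com
  site-b:
    image: my/site-b            # hypothetical website image
    environment:
      VIRTUAL_HOST: site-b.example.com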
The main benefits of docker for simple web hosting are imo the following:
isolation: each website/service might have different dependency requirements (one might require PHP 5, another PHP 7, and another Node.js).
separation of concerns: if you split your setup into multiple containers, you can easily upgrade or replace one part of it. (Just consider a setup with two websites which each need a Postgres database. If each website has its own db container, you won't have any issue bumping the Postgres version of one of the websites without affecting the other.)
reproducibility: you can build the Docker image once, test it on acceptance, then promote the exact same image to staging and later to production. You'll also have the same environment locally as on your server.
environment and settings: each of your services might depend on a different environment (for example SMTP settings or a database connection). With containers you can easily supply each container its specific environment variables.
security: one can argue about this one, as containers themselves won't do much for you in terms of security. However, due to easier dependency upgrades, separated networking, etc., most people will end up with a setup which is more secure. (Just think about the db containers again here: these can share a network with your app/website container, and there is no need to expose their port on the host.)
Note that you should be careful with Docker's port mapping. It manipulates iptables directly and will bypass the settings of most firewalls (like ufw) by default. There is a repo with information on how to avoid this here: https://github.com/chaifeng/ufw-docker
Also, there are quite a few projects which make automating the routing of requests to the applications (in this case containers) very enjoyable and easy. They usually integrate a proper way to do SSL termination as well. I would strongly recommend looking into Traefik if you set up a web server with multiple containers which should all be accessible on ports 80 and 443.
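As a hedged sketch of that idea (the domain, email, version, and image names below are placeholders, not a verified setup), a Traefik v2 front end terminating SSL with Let's Encrypt looks roughly like this:

version: "3"
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # lets Traefik discover containers
      - ./letsencrypt:/letsencrypt                     # persists issued certificates
  website:
    image: my/website           # hypothetical application container
    labels:
      - traefik.http.routers.website.rule=Host(`www.example.com`)
      - traefik.http.routers.website.entrypoints=websecure
      - traefik.http.routers.website.tls.certresolver=le

Each additional site is just another service with its own router labels; Traefik picks them up from the Docker socket and requests certificates as needed.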
I currently have six Docker containers that are brought up by a docker-compose file. Now I wish to move some of them to a remote machine and enable remote communication between them.
The problem now is that I also need to add a layer of security by encrypting their traffic.
This is for a production website and needs to be very stable, so I am unsure which protocols/approaches would be better for this scenario.
I have used port forwarding over SSH and know I could also add some stability through autossh. But I am unsure whether there are other approaches that could achieve the same thing while also taking stability and performance into account.
What protocols/approaches could help on this aim? How do they differ?
I would not recommend manually configuring docker container connections across physical servers, because Docker already contains a solution for that called Docker Swarm. Follow this documentation to configure your containers to use a swarm. I've done it and it's very cool!
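If you go that route, the rough shape of a stack file with an encrypted overlay network is sketched below (service names and images are placeholders; check the overlay network docs for the exact option syntax on your Docker version):

# deployed with `docker stack deploy -c stack.yml mystack`
# after `docker swarm init` on one machine and `docker swarm join` on the other
version: "3.8"
services:
  web:
    image: my/web               # hypothetical service
    networks: [backend]
  worker:
    image: my/worker            # hypothetical service that can be scheduled on the remote node
    networks: [backend]
networks:
  backend:
    driver: overlay
    driver_opts:
      encrypted: "true"         # enables IPsec encryption of overlay traffic between nodes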
I'm using docker on a bare metal server. I'm pretty happy with docker-compose to configure and setup applications.
Still, some features are missing, like configuration management and monitoring. Maybe there are other solutions to these issues, but I'm a bit overwhelmed by the feature set of Kubernetes and can't judge whether it would help me here.
I'm also open for recommendations to solve the requirements separately:
Configuration / Secret management
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Remote container control (SSH is okay with only one server)
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in future - already thinking about networking/service discovery issues with a pure docker-compose setup
I'm sure Kubernetes covers some of these features, but I have the feeling that it's too focused on cloud platforms where machines are created on the fly (since I only have at most a few bare-metal servers).
I hope the question's scope is not too broad; otherwise, please use the comment section and help me narrow it down.
Thanks.
I think Kubernetes absolutely matches your requirements and is what you need.
Let's start one by one.
I have the feeling that it's too focused on cloud platforms where machines are created on the fly (since I only have at most a few bare-metal servers)
No, it is not focused on clouds. Kubernetes can be installed on almost any bare-metal platform (including ARM), and there are many tools and instructions that can help you do it. Also, it is easy to deploy it on your local PC using Minikube, which will prepare a local cluster for you within VMs or right in your OS (Linux only).
Configuration / Secret management
Kubernetes has powerful configuration and secret management based on special objects (ConfigMaps and Secrets) which can be attached to your containers. You can read more about configuration management in that article.
Moreover, tools like Helm can provide you with more automation and a range of preconfigured applications, which you can install using a single command. And you can prepare your own charts for it.
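As a small illustration of those objects (names and values are placeholders), the basic pattern is a ConfigMap and a Secret injected into a container as environment variables:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  DB_PASSWORD: "change-me"      # placeholder; real secrets should not live in the manifest
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: my/app:latest      # hypothetical image
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets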
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Kubernetes has its own dashboard where you can get many kinds of information: current application status, configuration, statistics, and much more. Also, Kubernetes has great integration with Heapster, which can be used with Grafana for powerful visualization of almost anything.
Remote container control (SSH is okay with only one server)
Kubernetes' control tool, kubectl, can fetch logs and connect to containers in the cluster without any problems. For example, to connect to a container in the pod "myapp" you just need to call kubectl exec -it myapp sh, and you will get an sh session in that container. Also, you can reach any application inside your cluster using kubectl port-forward, which will forward a port you need to your PC.
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in future - already thinking about networking/service discovery issues with a pure docker-compose setup
Kubernetes can be scaled up to thousands of nodes, or it can have only one; it is your choice. Regardless of cluster size, you will get production-grade networking, service discovery, and load balancing.
So, do not be afraid; just try it locally with Minikube. It will make many operational tasks simpler, not more complex.
I installed an 8-node Kubernetes cluster (1 master + 7 minions), but I faced a networking problem among the minions.
I installed my cluster according to this step-by-step Fedora manual, so I use Fedora 20 with its testing repository to get kubernetes binaries.
After installing, I wanted to try the guestbook example, but it seems to me there is a problem with the inter-container networking.
Although the containers/pods are in the running state and I can reach my 3 frontend containers (via browser) and the redis containers as well (via netcat), a frontend that is not on the same host as the redis master cannot reach it. The frontend's PHP throws a network exception.
Can anybody help me understand why the containers cannot reach each other across hosts?
I hope I have described my setup accurately enough; thanks in advance.
The Fedora guide you followed will only get you running on a single machine. It avoids the issues around setting up networking across nodes.
For kubernetes to work, the following network set up must be satisfied:
Every container should be able to talk to every other container, even across nodes. This also means that the bridge IP ranges for those containers must not overlap.
Code running on any node that isn't in a container should be able to reach every container (and vice versa), even across nodes.
It is not necessary (but useful) for computers on the network that aren't part of the cluster to be able to reach the containers directly.
There are a lot of ways to achieve this. For instance, the Vagrant setup creates GRE tunnels between each node. On GCE we use features of the platform to do the routing. If you are on physical machines on a switch, you can probably just do a big layer 2 network with bridges. A bulletproof way to get started (but perhaps not the most performant, depending on your setup) is to use something like flannel.
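For orientation, flannel's whole job here is to hand each node a non-overlapping subnet out of one cluster-wide range, which satisfies the first requirement above. Its network configuration is a small JSON document; in current kube-flannel manifests it sits in a ConfigMap (in older setups the same JSON was written into etcd), roughly like this (the CIDR is only an example):

kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg        # name and namespace vary with the manifest version
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": { "Type": "vxlan" }
    }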
We are working on making this stuff easier to start up (without using a mess of shell scripts) and are thinking of building something like flannel in so that there is a reasonable default.