What's the purpose of the officially provided Docker image of the Zabbix agent? - docker

I used the official Zabbix docker-compose YAML to set up a Zabbix system and found that the server, as a monitoring target, was shown as unavailable. I searched the Internet and found that other people have run into the same problem. Someone said the agent container's IP or DNS name should be used for the server host; I tried it and it works. But I'm confused by the agent. Does it monitor the server container, the agent container or the host machine? If it only monitors the agent container itself, what's the purpose of it?

Does it monitor the server container, the agent container or the host machine?
Agent container.
If it only monitors the agent container itself, what's the purpose of it?
For testing, and for monitoring external things via custom commands. Or you can connect things from the host and monitor them; in short, it covers the cases where you don't want to, or can't, install the agent on the host.
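For the custom-commands case, a minimal sketch of the idea (the item key, script path and mount point are illustrative, not something the official image ships with; double-check the include directory used by your agent image version):

# custom_params.conf, mounted into the agent container
UserParameter=custom.backup.status,/usr/local/bin/check_backup.sh

# docker-compose.yml excerpt for the agent service
#   volumes:
#     - ./custom_params.conf:/etc/zabbix/zabbix_agentd.d/custom_params.conf:ro

The server can then query the key custom.backup.status from that agent like any built-in item.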

Everybody who configures a Dockerized Zabbix installation like yours bumps into this issue, and of course finds themselves on StackExchange looking for the answers that should have been in the documentation.
The reason the Zabbix agent in the docker-compose install you're referring to can't initially connect is that it and the server it monitors run in isolated containers. Separate containers cannot talk to each other on 127.0.0.1 (localhost) addresses. And that is actually a good thing!
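As a rough sketch of the fix (the service names are illustrative; the official compose file uses its own), the server has to reach the agent by its Docker DNS name rather than localhost:

services:
  zabbix-server:
    image: zabbix/zabbix-server-mysql
    # ... database settings omitted ...
  zabbix-agent:
    image: zabbix/zabbix-agent
    environment:
      ZBX_SERVER_HOST: zabbix-server   # allow the server container to poll this agent

# In the frontend, the "Zabbix server" host's agent interface then points at
# DNS name "zabbix-agent" (port 10050) instead of 127.0.0.1.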
I've reviewed the documentation in the repo you're talking about and it's sparse, to say the least; it certainly could be better. But to be fair to Zabbix, their docker-compose install DOES work great once you get it running, and you can achieve pretty fair results quickly with little effort (and a bit of Googling ;-> ).
I actually found FURTHER pain connecting to containerized Zabbix Agents raised on different hosts outside of the docker-compose install you're referring to. Connectivity was being busted because the host the docker-compose install was raised on was NAT'ing out the traffic and presenting the wrong IP address. I've documented this issue HERE.
Dockerized Zabbix is a good thing; there is a purpose to it. I agree with you, though, that the documentation could be better. Stick with it!

Related

How to expose Docker and/or Kubernetes ports on DigitalOcean

First off I want to say I am in no way inexperienced, I am a professional, and I have been Googling this issue for a week; I've followed tutorials and also largely found threads on this site that tell people they're asking for free labor and the answer is on Google. The answer is not on Google, so please bear with me. I have been working on my "homework," as people like to say here, and I am missing something significant.
My use case: I want to run code-server and JupyterLab as browser-accessible services on a DigitalOcean droplet OR Kubernetes cluster. I would like to do this in a way that allows as much of my budget for hosting as possible to be used for processing software (I write Python machine learning/natural language code). My ideal setup is that I have a subdomain, with SSL (LetsEncrypt is fine), for code-server and another for JupyterLab. Ideally they can access the same storage, but that's a secondary concern for the moment. I'd be okay with not having a domain and just passing traffic through OpenVPN to an IP and ports, but code-server just won't run full featured without SSL.
The actual problem: on nearly every attempt to implement this, I have found that I cannot access ports. On a good attempt, I manage to get one service (often something like Python http.server) where going to my domain or IP/port gets me anything other than "connection refused" instantly. I've checked firewall settings (I don't use DigitalOcean's, and I have consistently opened the ports that my native services and/or Docker containers are listening on or being forwarded to). The best I pulled off was on Kubernetes, following this tutorial: I got code-server and two example sites running in separate subdomains (pointed using a load balancer, and yes, I have a fully registered domain on DO's name servers).
There was a problem, however: I couldn't get LetsEncrypt to issue a certificate on Kubernetes, and I didn't know how to get it into the container for code-server.
That gets me to my next problem, which is relevant because I'm not sure this is entirely a Kubernetes problem: I have not successfully exposed a port on any Linux distro in the past four years. I used to administer multiple sites on a single Linode from 2012-16 or so, and it was no problem (although probably quite insecure), but now I'm talking about not even being able to expose ports on IP addresses. Something in how cloud providers handle things has changed. I know AWS, GCloud etc. isolate their VMs on private networks, but that's not what DO, Linode, or Vultr do, and yet I can't so much as expose a port successfully, even when I follow port-exposing tutorials for the distro in question. I've literally used Rancher to launch a Docker container on a port, managed by the OS, verified that the port is exposed, and it just doesn't work. With Kubernetes the load balancer sometimes helps here. I was also able to get a full server up on FreeBSD, but too much of what I need to run depends on Docker and Node, which sadly haven't been ported well to that system.
I want to note that I've also Googled StackOverflow and found other people with similar issues, but their questions were all closed there and they were told to Google; Googling turns up DO tutorials and the closed StackOverflow threads. I should note I've also tried to do this on Google Cloud and Linode with similar results.
ALSO: I'm aware Docker containers are isolated by default from the OS network and have followed guidelines for deployment to make sure their OS-native ports are forwarded.
tl;dr: I'm having trouble exposing ports despite following OS procedures; I am not sure whether my personal development server (for just me to use) should be a Kubernetes cluster or a single server with a Docker deployment; and I don't know how to route ports to subdomains for the two apps I want to expose if I'm not using a Kubernetes load balancer. Please don't close this as somehow "too broad" when it's an incredibly narrow situation, other people have had it, and I've been doing my research for a week.
You can find where to configure SSL certificates on the DigitalOcean load balancer here:
https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/#ssl-certificates
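Roughly, per that page, you attach the certificate to the load balancer through annotations on a Service of type LoadBalancer. The sketch below is only illustrative: the service name, selector and ports are placeholders, and the annotation keys should be verified against the current DigitalOcean docs.

apiVersion: v1
kind: Service
metadata:
  name: code-server                      # hypothetical service name
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "<your-cert-id>"
spec:
  type: LoadBalancer
  selector:
    app: code-server
  ports:
    - name: https
      port: 443
      targetPort: 8080                   # code-server's default listen port

The certificate ID is the one DigitalOcean assigns to your uploaded or Let's Encrypt certificate (it can be listed with, e.g., doctl compute certificate list).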

Access to internal infrastructure from Kubernetes

If I run Docker (Docker for Desktop, 2.0.0.3 on Windows 10), then access to internal infrastructure and containers is fine. I can easily do
docker pull internal.registry:5005/container:latest
But once I enable Kubernetes there, I completely lose access to the internal infrastructure: [Errno 113] Host is unreachable appears in Kubernetes itself, or connect: no route to host from Docker.
I have tried several things, including switching the NAT from DockerNAT to the Default Switch. That one doesn't take effect without a restart, and a restart changes it back to DockerNAT, so no luck there. This option also seems not to work.
Let's start from the basics, following the official documentation:
Please make sure you meet all the prerequisites and have followed all the other instructions.
You can also use this guide; it has more details pointing to what might have gone wrong in your case.
If the above doesn't help, there are a few other things to consider:
In case you are using a virtual machine, make sure that the IP you are referring to is the one of the docker engine's host and not the one on which the client is running.
Try to add tmpnginx in docker-compose.
Try deleting the pki directory in C:\programdata\DockerDesktop (first stop Docker, delete the dir and then start Docker again). The directory will be recreated and the k8s-app=kube-dns labels should work fine.
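To narrow down where it breaks, it can also help to test name resolution and plain TCP reachability from inside the cluster itself (assuming kubectl is pointed at the docker-desktop context; internal.registry is the hostname from your question):

kubectl run net-test --rm -it --image=busybox --restart=Never -- sh
# inside the throwaway pod:
nslookup internal.registry      # does cluster DNS resolve the internal name?
telnet internal.registry 5005   # can a TCP connection to the registry be opened at all?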
Please let me know if that helped.

How to connect and encrypt traffic between Docker containers running on different servers?

I currently have six Docker containers that are started from a docker-compose file. Now I wish to move some of them to a remote machine and enable remote communication between them.
The problem now is that I also need to add a layer of security by encrypting their traffic.
This is for a production website and needs to be very stable, so I am unsure which protocols/approaches would be best for this scenario.
I have used port forwarding over ssh and know that I could also add some stability through autossh. But I am unsure whether there are other approaches that could achieve the same goal while also taking stability and performance into account.
What protocols/approaches could help on this aim? How do they differ?
I would not recommend manually configuring Docker container connections across physical servers, because Docker already includes a solution for that, called Docker Swarm. Follow this documentation to configure your containers to use a Docker swarm. I've done it and it's very cool!
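A rough sketch of the moving parts (the IPs, tokens and names below are placeholders): swarm mode gives you an overlay network that can encrypt container-to-container traffic between hosts.

# on the first server
docker swarm init --advertise-addr <manager-ip>

# on the remote server, using the join token printed by the command above
docker swarm join --token <worker-token> <manager-ip>:2377

# overlay network with encrypted data traffic between nodes
docker network create --driver overlay --opt encrypted --attachable secure-net

# deploy the existing compose file as a stack; its services attach to the
# overlay network you declare in the compose file
docker stack deploy -c docker-compose.yml mystack

Note that --opt encrypted encrypts the application traffic on the overlay; swarm management traffic is encrypted by default.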

When Should I use the host network with docker

I understand that if I use the host network driver for a container, that container’s network stack is not isolated from the Docker host.
I also believe I understand, conceptually, that a good reason to still do it might be when "security is not an issue or concern" and network throughput performance is important, but I am struggling to think of a real world example of when I can or should do this. A naive example I can think of is a public facing load balancer or static file web server.
I realize it may be possible to mitigate the security concerns outside of Docker, using hosted services like AWS or Google Cloud if hosted there, but what if that weren't an option?
When would or should you use it in a production environment?
How can you mitigate the security concerns regardless of hosting environment?
How should you interact with other services in other docker networks?
I am struggling to think of a real world example of when I can or should do this. ... When would or should you use it in a production environment?
Your application does not run on TCP or UDP, but another protocol
Your application requires a large range of incoming ports to be published (by default a docker-proxy process is spawned per published port; this can be excessive for a large range)
Your application works with multi-cast or broadcast network traffic
Your application needs to modify the networking layer of the host itself, e.g. a VPN
How can you mitigate the security concerns regardless of hosting environment?
You need to trust this application. You've removed a layer of docker namespacing and at that point, the container is a packaging format and likely fits in with the rest of your tooling, but doesn't require the same security approach you may have for other containers.
How should you interact with other services in other docker networks?
You would interact via published ports of the other containers, same as you would an application running outside of a container that needs to connect to an application inside of a container.
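For reference, attaching a container to the host network is just the following (nginx is only a stand-in image here); the process then binds directly to the host's interfaces, and -p/--publish mappings are ignored:

docker run --rm --network host nginx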
but I am struggling to think of a real world example of when I can or should do this.
Here is a real world example: we use the host network to speed up the build stage of our GitLab CI/CD pipeline.
The container in question is up and running only during the build phase, doesn't have any port exposed, and needs a fast network to download all the necessary pieces to build and push the Docker image. We experienced (on some intermittent occasions) throughput issues and inconsistent behavior during the build stage that we resolved with the host network. Although with the host network we "expose" the IP of such a container, we still don't expose any ports, and after the build phase is finished the container is discarded.
I know this doesn't answer all of your questions, but it is the requested real world example.
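For a CI use case like this, the host network can also be requested per build rather than per long-running container (the image name and registry below are placeholders):

# run the build-time RUN instructions on the host's network stack
docker build --network host -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest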

HAProxy for service discovery with Marathon/Mesos Docker linked containers

This is not asked anywhere that I have checked. Here is what I have done: I am able to deploy a single instance of Mesos, Marathon and Docker. Moving a step ahead, I want to have two Mesos slaves (Docker containers) linked to each other. Using Docker alone, the same can be achieved with the Docker link feature, but when using the orchestrator (Mesos) and scheduler (Marathon) it seems you need to use service discovery.
My setup is simple and running on a single host. I will have two Docker containers, one running a simple pub/sub and one running RabbitMQ. How can I use HAProxy in this setup? I have seen some documents provided by Mesosphere,
http://mesosphere.com/docs/getting-started/service-discovery/ but it is not clear how to go about it.
The canonical approach for service discovery with Mesos + Marathon + Docker is currently what is described in the document you linked.
I'm assuming you're able to get the two applications running in Marathon already.
Typically what happens is:
1) Configure your application definition to include the ports that your application requires.
2) You set up the provided haproxy-marathon-bridge script to run periodically using a utility like cron. This script scrapes Marathon's API to figure out what host and port the application instances are running on and what the known "friendly" port is.
In the example in the service discovery article, the first application has friendly ports of 80 and 443, whilst the second has a friendly port of 8081.
The script then generates a haproxy.cfg configuration that has rules mapping localhost:friendly_port to actual_host:actual_port.
3) Configure your applications to look for each other on localhost:friendly_port. HAProxy will route connections appropriately.
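Concretely, the generated haproxy.cfg contains blocks along these lines (the names, hosts and ports below are made up for illustration):

listen app_8081
  bind 127.0.0.1:8081                      # the "friendly" service port
  mode tcp
  option tcplog
  balance roundrobin
  server app_8081-1 10.0.0.11:31052 check  # actual host:port of one Marathon task
  server app_8081-2 10.0.0.12:31176 check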
Hope this helps your understanding!
I created an HAProxy service discovery Docker container that you can run in Mesos. It's not production ready, but I am using it in my development environment, doing exactly what you're trying to do. The reason I prefer this over what comes with Marathon is that I haven't found a good way to do complicated HAProxy configurations with haproxy-marathon-bridge. With spiderweb you can create a template for the HAProxy configuration, which enables you to do things such as ACL routing. It doesn't support health checks yet, which is something that will need to be done before it's production ready. You can see the project here: https://github.com/SBRDevelopment/spiderweb.
We have combined Mesos and Marathon with Consul and Registrator,
so in the end you have the HAProxy configuration auto-generated with consul-template.
try https://github.com/eBayClassifiedsGroup/PanteraS
All in one container.
With Mesos-DNS you can also do the following:
Set up mesos-dns as in this guide: http://programmableinfrastructure.com/guides/service-discovery/mesos-dns-haproxy-marathon/ (you can skip the HAProxy steps; they are not required)
When you start your Docker containers, make sure that they have "nameserver %slave_ip_with_mesos_dns%" (replace the placeholder with the IP address) in their /etc/resolv.conf files.
If, let's say, the name of an app is "peek", it should be reachable from other applications at peek.marathon.mesos.
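To sanity-check this from inside another container (assuming its resolv.conf points at Mesos-DNS as described above; SRV names follow the Mesos-DNS convention _task._protocol.framework.domain):

nslookup peek.marathon.mesos           # should return the task's host IP
dig _peek._tcp.marathon.mesos SRV      # the SRV record also carries the task's port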
