Can anyone tell me if it is a good idea to monitor Docker containers using SNMP? I'm thinking of installing an SNMP agent on each container and collecting the data through a Flink/Kafka stream, but I don't know whether installing an SNMP agent in every container is the right way to proceed.
Thank you!
There are Docker APIs that many tools use to collect this information, so you do not need to install anything inside the containers for these basic metrics. The most popular open-source tool for this is Prometheus, but there are dozens of commercial tools that use the same method.
https://docs.docker.com/config/thirdparty/prometheus/#configure-docker
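As the linked docs describe, the Docker daemon itself can expose a Prometheus-format metrics endpoint. A minimal sketch of the daemon config (/etc/docker/daemon.json; the address is an example, and older engine versions also required the experimental flag):

```json
{
  "metrics-addr": "127.0.0.1:9323",
  "experimental": true
}
```

Note that these are engine-level metrics; for per-container metrics, tools like cAdvisor (which Prometheus can also scrape) read the same Docker APIs from outside the containers.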
Related
I want to create a tool that couples together a lot (~10, perhaps more) of other CLI tools to automate some stuff. This tool needs to be able to just be dropped in on any VPS and work, hence the Docker containers. "Work" in this case means running a central program (made by me) that orchestrates all the other tools and aggregates their results in a single database to browse/export later. The tools' containers need to have network access.
In my limited knowledge of Docker I've concluded that a multi-stage build to fit all the tools in a single container is a bad design here, and very cumbersome. I've thought of networking the tools' containers to the central one and doing some sort of TCP piping, but that seems less than ideal too. What should the approach here be? Are there some ready-made solutions to this problem?
Thanks
How about docker-compose?
You can use this tool to deploy all your dockerized tools inside a Docker network and then communicate with them via your orchestrator. Additionally, you can pack the composed containers into another container to create a Docker-in-Docker environment and expose only your orchestrator as the gateway to your all-in-one tool.
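A minimal sketch of that layout as a docker-compose.yml (all service names, images, and the port are placeholders, not part of any real project):

```yaml
version: "3"
services:
  orchestrator:
    build: ./orchestrator       # your central program
    ports:
      - "8080:8080"             # the only port exposed to the host
    networks: [tools]
  tool-a:
    image: example/cli-tool-a   # placeholder image for one dockerized tool
    networks: [tools]
  tool-b:
    image: example/cli-tool-b   # placeholder image for another tool
    networks: [tools]
networks:
  tools:                        # private network; tools are reachable
                                # by service name (e.g. http://tool-a)
```

On the shared network, the orchestrator can reach each tool by its service name, which avoids the ad-hoc TCP piping described above.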
Cheers,
I want to set up a groupware server for my company. At the moment I use Zimbra, but I'm looking for an alternative. The requirement is that it must run in a Docker container, and ideally it would be a groupware solution with official Docker support.
Does anyone have an idea for a suitable product? One that is available as a Docker image and has features comparable to Zimbra.
Another good option would be a server that is easy to install via script and easy to configure.
Can Docker containers be used along with UI-based RPA tools like Blue Prism or UiPath? Blue Prism recommends using virtual machines but offers no support for Docker.
Yes, it should be possible. I'm unfamiliar with the specific products you describe, so I'm unable to provide you with specific examples.
Any Linux (and Windows) process can be run in a container.
Docker made containers into a thing, but they're really not new. They're just (very useful) conceptual "sugar" on top of Linux namespaces and cgroups that makes the functionality more accessible. They provide a way to segregate one or more Linux processes (and their resources).
So, unless someone else has done the "containerization" already (likely), you should be able to do this reasonably easily for yourself. The primary challenge will be in relaxing the container boundary to access machine or other process resources.
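If you do end up containerizing such a tool yourself, the starting point is usually a small Dockerfile. A minimal sketch (the base image, package name, and entrypoint are all placeholders for whatever the RPA vendor actually ships):

```dockerfile
# Sketch: wrapping an arbitrary Linux CLI tool in a container.
# "the-tool" is a hypothetical package/binary name.
FROM ubuntu:22.04
RUN apt-get update \
    && apt-get install -y the-tool \
    && rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["the-tool"]
```

The harder part, as noted above, is usually not the Dockerfile but relaxing the container boundary (display servers, devices, host resources) that desktop-oriented tools expect.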
I can confirm that Kofax RPA is fully supported on Docker (and Kubernetes)
Yes, containerization is possible for the BP server, but not for the client/resource, as it requires a terminal session to run the desktop applications.
What is the common practice for getting metrics from services running inside Docker containers, using tools like collectd or InfluxData's Telegraf?
These tools are normally configured to run as agents in the system and get metrics from localhost.
I have read the collectd docs, and some plugins allow collecting metrics from remote systems, so I could have, for example, an NGINX container and then a collectd container to gather the metrics, but isn't there a simpler way?
Also, I don't want to use Supervisor or similar tools to run more than one process per container.
I am thinking about this in conjunction with a system like DC/OS or Kubernetes.
What do you think?
Thank you for your help.
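The agent-alongside-the-service pattern described above can be sketched with Telegraf's Docker input, which reads container metrics from the Docker socket rather than from an agent inside each container (a sketch only; the InfluxDB address is a placeholder):

```toml
# telegraf.conf sketch: one Telegraf per host/node,
# collecting metrics for all containers via the Docker API.
[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]   # placeholder output address
```

This keeps each service container to a single process; only the Telegraf container needs the socket mounted.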
We're thinking about using mesos and mesosphere to host our docker containers. Reading the docs it says that a prerequisite is that:
Docker version 1.0.0 or later needs to be installed on each slave node.
We don't want to manually SSH into each new machine and install the correct version of the Docker daemon. Instead we're thinking about using something like Ansible to install Docker (and perhaps other services that may be required on each slave).
Is this a good way to solve it or does Mesosphere/DCOS or any of Mesos ecosystem components have other ways of dealing with this?
I've seen the quick intro where someone from Mesosphere just uses dcos resize to change the cluster size on the Google Cloud Platform. Is there a way to hook into this process and install additional services on the (Google) node once it has booted? Or is this something we should avoid, using a "pre-baked image" instead?
In your own datacenter, using your favorite configuration tool such as Ansible, Salt, etc. is probably a good choice.
In the cloud it might be easier to use virtual machine images that already provide Docker; for example, DC/OS on AWS uses CoreOS, which comes with Docker out of the box. It shouldn't be too difficult with Ubuntu either.
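For the datacenter case, an Ansible playbook along these lines would cover the prerequisite (a sketch, assuming Debian/Ubuntu slaves; the host group name and package are assumptions, not from the Mesos docs):

```yaml
# Hypothetical playbook: install and start Docker on all slave nodes.
- hosts: mesos_slaves
  become: true
  tasks:
    - name: Install Docker
      apt:
        name: docker.io
        state: present
        update_cache: true
    - name: Ensure the Docker daemon is running and enabled
      service:
        name: docker
        state: started
        enabled: true
```

The same playbook could be triggered from whatever hook provisions new nodes, or baked into the image beforehand.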