How to turn off notifications of docker dashboard? - docker

Well, maybe it seems a ridiculous question, but I need some help, and I figure that among the roughly 8 billion humans on the planet, someone else has the same issue. Whenever I stop or delete images or containers in the Docker dashboard, the notification fills so much of the screen that I cannot click on the 'Containers' or 'Images' section again.
I just want to delete/stop containers/images quickly by clicking "tiki-taka-tiki-taka", but that's not possible: I have to click 'X' to close the notification every single time, which makes the process very slow.
Products like this are usually made to let users do things more easily and faster than the command prompt, but with Docker, using the command prompt/terminal is much faster than using the dashboard.
Do you know how to disable these notifications (if such an option exists)?

You should contact Docker's developers about this issue.
In the meantime, you could try Portainer to manage your images, volumes, etc.
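Until such an option shows up, the terminal really is the fastest route for bulk cleanup. A few standard commands (the `-f` flag skips the confirmation prompt):

```shell
# Stop every running container, then delete all stopped containers.
docker stop $(docker ps -q)
docker rm $(docker ps -aq)

# Delete dangling images; add -a to remove every image not used by a container.
docker image prune -f
```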

Related

Docker rolling updates on a single node

So I have been using Docker with docker-compose for quite some time in a development environment, and I love how easy it is.
Until now, I also used docker-compose to serve my apps on my own server, as I could afford the short downtime of a docker-compose restart.
But in my current project, we need rolling updates.
We still have one node, and it shall remain as we don't plan on having scalability issue for quite some time.
I read that I need to use Docker Swarm. Fine, but when I look for tutorials on how to set it up along with my docker-compose.yml files, I can't find any developer-oriented (as opposed to devops-oriented) resources that would simply tell me the steps to get there. Even if I don't understand everything at first, that's OK, as I will along the way.
Are there any tutorials to learn how to set this up out there? If not, shouldn't we build it here?
I am sure plenty of us have this issue: Docker is now a must-have for devs, but we (devs) still don't want to dive too deep into the devops world.
Cheers, hope it gets attention instead of criticism.
After giving Docker Swarm multiple tries, I struggled a lot with concurrency and orchestration issues, so I decided to stick with docker-compose, which I'm much more comfortable with.
Here's how I achieved rolling updates: https://github.com/docker/compose/issues/1786#issuecomment-579794865
It actually works pretty nicely, though observers told me it is a similar strategy to what Swarm does by default.
So I guess most of my issues went away simply by removing node replication.
When I get time, I'll give swarm another try. For now, compose does a great job for me.
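For anyone who doesn't want to follow the link, the strategy there can be sketched roughly like this (the service name `web`, the reverse proxy in front, and the 30-second wait are all assumptions; a real health check beats the `sleep`):

```shell
# Rolling update of a single compose service, one node, no Swarm.
# Assumes a reverse proxy (e.g. nginx) load-balances across the service's containers.
docker-compose build web                          # build the updated image
old=$(docker-compose ps -q web)                   # remember the old container's id
docker-compose up -d --no-deps --scale web=2 --no-recreate web   # start a new container alongside it
sleep 30                                          # crude wait for warm-up; prefer a health check
docker stop "$old" && docker rm "$old"            # retire the old container
docker-compose up -d --no-deps --scale web=1 --no-recreate web   # settle back to one replica
```

The `--no-recreate` flag is what keeps compose from tearing down the old container while the new one starts, which is the whole trick.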

How to spin up 'n' instances of an app / container with pre-loaded memory

Background:
I have a language-processing Java app that requires about 16MB of memory and takes about 40 seconds to initialise resources into that memory before exposing a web service. I am new to containers and related technologies, so apologies if my question is obvious...
Objective:
I want to make available several hundred instances of my app on demand and in a pre-loaded/pre-configured state. (E.g., I could make a call to AWS to stand up 'n' instances of my app and they would be ready in <10 seconds.)
Question:
I anticipate that I 'may' be able to create a Docker image of the app, initialise it, and pause it, and hence be able to clone that on demand and 'un-pause' it. Could you advise whether what I am looking to do is possible and, if so, how you would approach it?
AWS is my platform of choice so any AWS flavoured specifics would be super helpful.
I'd split your question in two, if you don't mind:
1. Spinning up N containers (or, more likely, scale on demand)
2. Preloading memory.
#1 is Kubernetes's bread and butter and you can find a ton of resources about it online, so allow me to focus on #2.
The real problem is that you're too focused on a possible solution to see the bigger picture:
You want to "preload memory" in order to speed up launch time (well, what do you think Java is doing in those 40 seconds that the magic memory preloader wouldn't?).
A different approach would be to launch the container, let Java eat up resources for 40s, but not make that container available to the world during that time.
Kubernetes provides tools to achieve exactly that, see here:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
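For illustration, a readiness probe along these lines keeps traffic away from the pod until the app has finished its 40-second warm-up (the pod name, image, port, and /health endpoint are all made-up placeholders):

```yaml
# Hypothetical pod spec fragment; only the readinessProbe shape is the point.
apiVersion: v1
kind: Pod
metadata:
  name: nlp-app
spec:
  containers:
  - name: nlp-app
    image: example/nlp-app:latest
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /health        # assumed health endpoint in the app
        port: 8080
      initialDelaySeconds: 40  # roughly the app's warm-up time
      periodSeconds: 5
```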
Hope this helps!

Dockerizing a simple webapp: how to pick what goes in which container?

I have a very simple webapp. You can think that it's a webpage with an input box which sends the user input to the backend, the backend returns a json and then the front end plugs that json into a jinja2 template and serves some html. That's it. (Also there's a MySQL db on the backend)
I want to dockerize this. The reason is that this webapp has gotten some traction, and I've had scares before where I push something, the website breaks, I try to roll it back and it's still broken, and I end up spending a couple of hours sweating to fix it as fast as possible. I'm hoping that Docker solves this.
Question: how should I split the whole thing into different containers? Given what I have planned for the future, the backend will have to be turned into an API to which the frontend connects. So they will be two independent containers. My question is really how to connect them. Should the API container expose an http:80 endpoint which the frontend container GETs from? I guess my confusion comes from the fact that I will then have TWO Python processes running: one for the API, obviously, and another that does nothing but send input to the API and render the returned JSON into a jinja2 template (and then one container for the MySQL db).
OR should I keep both the renderer and the API in the same container, but with two routes: for example /search.html, which the user knows about, and /api.html, which is "secret" but which I will need in the future?
Does this picture make sense, or am I over complicating it?
There are no hard and fast rules for this, but a good rule of thumb is one process per container. This will allow you to reuse these containers across different applications. Conversely, some people are finding it useful to create "fat containers" where they have a single image for their whole app that runs in one container.
You have to also think about things like, "how will this affect my deploy process?" and "do I have a sufficient test feedback loop that allows me to make these changes easily?". This link seems useful: https://valdhaus.co/writings/docker-misconceptions/
If this really is a small application, and you're not operating in a SOA environment, one container will probably get you what you want.
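If you do go the three-container route, a minimal compose sketch of the layout could look like this (service names, build paths, ports, and the credential are all illustrative assumptions):

```yaml
# Hypothetical docker-compose.yml for the frontend / API / MySQL split.
version: "3"
services:
  frontend:              # renders jinja2 templates, calls the API over HTTP
    build: ./frontend
    ports:
      - "80:5000"        # the only service exposed to the outside world
    environment:
      API_URL: http://api:8000   # compose's network resolves "api" by service name
  api:                   # returns JSON; only reachable inside the compose network
    build: ./api
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder; use a secret in practice
```

The point of the sketch is that the API never needs a public port: compose puts all three services on one network where they reach each other by service name.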

AWS Spot Instances and ipcluster plugin

Currently what does the ipcluster plugin do when AWS shuts down one or more of the spot instance nodes? Is there any mechanism to re-start and then re-add these nodes back to the IPython cluster?
You need to use the loadbalance command in order to scale your cluster. If you know you want x nodes at all times, simply launch it with "--max_nodes x --min_nodes x" and it will try to add back the nodes as soon as they go away.
If your nodes go away, it's probably because of spot price market fluctuations, so you might have to wait for the price to go below your SPOT_BID value before seeing them appear back.
I use the load balancer a lot with the SGE plugin, but never tried it with ipcluster, so I do not know how well it will behave.
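Concretely, pinning the cluster at a fixed size would look something like this (the cluster name and node count are placeholders; the flags are the ones quoted above):

```shell
# Keep the cluster at 4 nodes; StarCluster re-adds nodes when spot capacity returns.
starcluster loadbalance --min_nodes 4 --max_nodes 4 mycluster
```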

Nagios: Make sure x out of y services are running

I'm introducing 24/7 monitoring for our systems. To avoid unnecessary pages in the middle of the night, I want Nagios NOT to page me if only one or two of the service checks fail, as this won't have any impact on users: the other servers run the same service, and the impact on users is almost zero, so fixing the problem can wait until the next day.
But: I want to get paged if too many of the checks fail.
For example: 50 servers run the same service and 2 fail -> I can still sleep.
The service fails on 15 servers -> I get paged because the impact is getting too high.
What I could do is add a lot (!) of notification dependencies that only trigger if numerous hosts are down. The problem: even though I can specify that I get paged when 15 hosts are down, I still have to define exactly which hosts need to be down for that alert to be sent. I'd rather specify that a page is sent if ANY 15 hosts are down.
I'd be glad if somebody could help me with that.
Personally I'm using Shinken, which has business rules for exactly that. Shinken is backward compatible with Nagios, so it's easy to drop your Nagios configuration into Shinken.
It seems there is a similar addon for Nagios, the Nagios Business Process Intelligence addon, but I have no experience with it.
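Another option is a small aggregate check run by a single Nagios service: it counts how many of the individual results are CRITICAL and only goes critical itself past a threshold. A toy sketch in plain shell (the hard-coded states stand in for however you actually collect per-host results, e.g. by parsing status.dat):

```shell
# Toy aggregate check: go CRITICAL only when at least $1 of the
# supplied per-host service states are CRITICAL.
aggregate_check() {
  threshold="$1"; shift
  failed=0; total=0
  for state in "$@"; do
    total=$((total + 1))
    if [ "$state" = "CRITICAL" ]; then
      failed=$((failed + 1))
    fi
  done
  if [ "$failed" -ge "$threshold" ]; then
    echo "CRITICAL: $failed of $total instances down"
    return 2   # Nagios CRITICAL exit code
  fi
  echo "OK: $failed of $total instances down"
  return 0     # Nagios OK exit code
}

# 2 of 5 down with a threshold of 3: still OK, no page.
aggregate_check 3 OK OK CRITICAL OK CRITICAL   # -> OK: 2 of 5 instances down
```

Because any 2 hosts of the 5 count toward the total, you no longer have to enumerate which specific hosts must be down.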
