Docker (Compose? Swarm?): how to run a health check before exposing a container

I have a web app (.NET Core) running in a Docker container. If I update it under load, it cannot handle requests until there is a gap in traffic. This might be a bug in my app or in .NET itself; I am looking for a workaround for now. If I warm the app up with a single HTTP request before exposing it to traffic, though, it works as expected.
I would like to get this behaviour:
On the running server, pull the latest release of the container image.
Launch the new container, detached from the network.
Run a health check on it; if the health check fails, stop.
Remove the old container.
Attach the new container and start processing traffic.
I am using Compose at the moment and have somewhat limited knowledge of Docker infrastructure. The problem should be something well understood, yet I've failed to find anything on Google about it.
It sounds a bit like Kubernetes at this stage, but I would like to keep things as simple as possible.

The thing I was looking for is Blue/Green deployment, and once you know the term it is quite easy to search for.
E.g.
https://github.com/Sinkler/docker-nginx-blue-green
https://coderbook.com/#marcus/how-to-do-zero-downtime-deployments-of-docker-containers/
Swarm has a feature which could be useful as well: https://docs.docker.com/engine/reference/commandline/service_update/
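For reference, the steps from the question can be sketched as a plain shell script; the image name, ports, health endpoint, and nginx config path are all illustrative assumptions, not a definitive implementation:

```shell
#!/bin/sh
# Blue/green sketch: start the new container on a spare port, health-check it
# (which also warms the app up), then switch the proxy and remove the old one.
set -e
IMAGE=myapp:latest
NEW=myapp-green
OLD=myapp-blue

docker pull "$IMAGE"

# 1. Launch the new container; it receives no traffic yet (only a local spare port).
docker run -d --name "$NEW" -p 127.0.0.1:8081:80 "$IMAGE"

# 2. Health check with retries; the first successful request warms the app.
healthy=
for i in $(seq 1 30); do
  if curl -fsS http://127.0.0.1:8081/health >/dev/null; then healthy=1; break; fi
  sleep 1
done
if [ -z "$healthy" ]; then
  echo "health check failed, keeping old container" >&2
  docker rm -f "$NEW"
  exit 1
fi

# 3. Point the proxy at the new port (here: rewrite an nginx upstream include).
sed -i 's/8080/8081/' /etc/nginx/conf.d/upstream.conf
nginx -s reload

# 4. Remove the old container.
docker rm -f "$OLD"
```

Compose itself also supports a healthcheck: block on a service (and depends_on: condition: service_healthy), but the actual traffic switch still has to happen in the proxy layer, which is why the script edits the nginx upstream.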

Related

Architectural question about user-controlled Docker instances

I have a website in Laravel with a button that sends a message to a Python daemon isolated in Docker. This works for an easy MVP to prove a concept, but it's not viable in production, because a user would most likely also want to pause, resume and stop that process; the service is otherwise designed never to stop, since it's a scanner running in a loop.
I have thought about a couple of solutions, such as handling it in the application layer, but that would add complexity to the program. Googling around, I found that what I want is actually possible with Docker itself, via the pause, unpause, run and kill commands.
It would be ideal to have a service that interacts with the Docker instances according to the criteria above and can take commands over HTTP. Is Docker Swarm the right solution for this problem, or is there an easier way?
There are both significant security and complexity concerns to using Docker this way and I would not recommend it.
The core rule of Docker security has always been: if you can run any docker command, you can easily take over the entire host. (You cannot prevent someone from using docker run to start a container as container root, bind-mounting any part of the host filesystem; they can then reset the host root password in the /etc/shadow file to something they know, allow remote root ssh access, and reboot the host, to give one example.) I'd be extremely careful about connecting this ability to my web tier. Strongly coupling your application to Docker will also make it harder to develop and test.
Instead of launching a process per crawling job, a better approach might be to set up some sort of job queue (perhaps RabbitMQ), and have a multi-user worker that pulls jobs from the queue to do work. You could have a queue per user, and a separate control queue that receives the stop/start/cancel messages.
If you do this:
You can run your whole application without needing Docker: you need the front-end, the message queue system, and a worker, but these can all run on your local development system
If you need more crawlers, you can launch more workers (works well with Kubernetes deployments)
If you're generating too many crawl requests, you can launch fewer workers
If a worker dies unexpectedly, you can just restart it, and its jobs will still be in the queue
Nothing needs to keep track of which process or container belongs to a specific end user
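As a sketch of that shape, a Compose file might wire the pieces together; the service names, build paths, and replica count are made-up placeholders:

```yaml
# Hypothetical layout: web front-end, RabbitMQ broker, and scalable workers.
services:
  web:
    build: ./web          # Laravel front-end; publishes jobs and control messages
    ports:
      - "8000:80"
  rabbitmq:
    image: rabbitmq:3-management
  worker:
    build: ./worker       # Python scanner; consumes a job queue and a control queue
    depends_on:
      - rabbitmq
    deploy:
      replicas: 3         # "if you need more crawlers, launch more workers"
```

Each worker listens on its job queue plus a per-user control queue for pause/resume/stop messages, so the web tier only ever talks to the message broker, never to the Docker API.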

Access rails console of an app deployed in Google Cloud Run

We deployed a rails app in Google Cloud Run using their managed platform. The app is working fine and it is able to serve requests.
Now we want to get access to the rails console of the deployed app. Can anyone suggest a way to achieve this?
I'm aware that currently Cloud Run supports only HTTP requests. If no other way is possible, I'll have to consider something like the Rails web console.
I think you cannot.
I'm familiar with Cloud Run but I'm not familiar with rails.
I assume you'd need to be able to shell into a container in order to be able to run IRB. Generally, you'd do this by asking the runtime (Docker Engine, Kubernetes, Cloud Run) to connect you to the container so that you could do this.
Cloud Run does not appear to permit this. I think it's a potentially useful feature request for the service. For containers that include a shell, this would be the equivalent of GCE's gcloud compute ssh.
Importantly, your app may be serviced by multiple, load-balanced containers and so you'd want to be able to console into any of these.
However, you may wish to consider alternative mechanisms for managing your app: monitoring, logging, tracing, etc. These mechanisms should give you sufficient insight into your app's state. Errant container instances should be terminated.
This follows the concept of "pets vs. cattle": instead of nurturing individual containers (is one failing?), you nurture the containers holistically (is the service, comprising many containers, failing?).
For completeness, if you think that there's an issue with a container image that you're unable to resolve through other means, you could run the image elsewhere (e.g. locally) where you can use IRB. Since the same container image will behave consistently wherever it's run, you should be able to observe the issue using IRB locally too.
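The local fallback in the last paragraph amounts to something like this; the image path is an illustrative placeholder, and you would also need to pass whatever database credentials the console session requires:

```shell
# Pull the exact image Cloud Run is serving and open an
# interactive Rails console inside a local container.
docker run --rm -it \
  -e RAILS_ENV=production \
  gcr.io/my-project/my-rails-app:latest \
  bundle exec rails console
```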

How to run WordPress on Google Cloud Run?

Google Cloud Run is new. Is it possible to run a WordPress Docker container on it? Perhaps using GCE for the MySQL/MariaDB database. I can't find any discussion on this.
Although I think this is possible, it's not a good use of your time to go through this exercise. Cloud Run might not be the right tool for the job.
UPDATE someone blogged a tutorial about this (use at your own risk): https://medium.com/acadevmy/how-to-install-a-wordpress-site-on-google-cloud-run-828bdc0d0e96
Here are a few points to consider;
(UPDATE: this is no longer true.) Currently Cloud Run doesn't support natively connecting to Cloud SQL (MySQL). There have been some hacks, like spinning up cloudsql_proxy inside the container (see: How to securely connect to Cloud SQL from Cloud Run?), which could work OK.
You need to prepare your wp-config.php beforehand and bake it into your container image. Since your container will be wiped away every now and then, you should install your blog (creates a wp-config.php) and bake the resulting file into the container image, so that when the container restarts, it doesn't lose your wp-config.php.
Persistent storage might be a problem: similar to the previous point, restarting a container deletes any files saved after it started. You need to make sure that things like installed plugins and image uploads are NOT written to the container's local filesystem. (I'm not sure whether WordPress lets you write such files to other places, like GCS/S3 buckets.) To achieve this, you'd probably end up using something like the https://wordpress.org/plugins/wp-stateless/ plugin or gcs-media-plugin.
Any file written to the local filesystem of a Cloud Run container also counts towards your container's available memory, so your application may run out of memory if you keep writing files.
Long story short, if you can make sure your WP installation doesn't write/modify files on your local disk, it should be working fine.
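A minimal sketch of the "bake wp-config.php into the image" point, assuming you have already generated the file with a local install (base image tag and file paths are illustrative):

```dockerfile
# Illustrative only: start from the official WordPress image and
# ship a pre-generated wp-config.php so restarts don't lose it.
FROM wordpress:php8.2-apache
COPY wp-config.php /var/www/html/wp-config.php
```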
I think Cloud Run might be the wrong tool for the job here since it runs "stateless" containers, and it's pretty damn hard to make WordPress stateless, especially if you're installing themes/plugins, configuring things etc. Not to mention, your Cloud SQL server won't be "serverless", and you'll be paying for it while it's not getting any requests as well.
(P.S. This would be a good exercise to try out and write a blog post about! If you do that, add it to the awesome-cloudrun repo.)

How to simply use docker for deployment?

Docker seems to be the incredible new tool to solve all developer headaches when it comes to packaging and releasing an application, yet I'm unable to find simple solutions for just upgrading an existing application without having to build or buy into whole "cloud" systems.
I don't want any kubernetes cluster or docker-swarm to deploy hundreds of microservices. Just simply replace an existing deployment process with a container for better encapsulation and upgradability.
Then maybe upgrade this in the future, if the need for more containers increases so manual handling would not make sense anymore
Essentially, the direct app dependencies (language runtime and libraries) should be bundled up without the need to "litter" the host server with them.
Lower-level static services, like the database, should still live on the host system, as should an entry router/load balancer (a simple nginx proxy).
Does it even make sense to use it this way? And if so, is there any "best practice" for doing something like this?
Update:
For the application I want to use it on, I'm already using GitLab CI.
Tests already run inside a Docker environment via GitLab CI, but deployment still happens the "old way" (syncing the git repo to the server, automatically restarting the app, etc.).
Containerizing the application itself is not an issue, and I've also used full Docker deployments via cloud services (mostly Heroku), but for this project something like that is overkill. There's no point in paying hundreds of dollars for a cloud server environment if I need pretty much none of its advantages.
I've found several "install your own Heroku" kinds of systems, but I don't need or want to manage the complexity of a dynamic system.
I suppose a couple of remote bash commands for updating and restarting a Docker container (after it's been pushed to a registry by the CI) could already do the job, though probably pretty unreliably compared to the current way.
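Those "couple of remote bash commands" might look roughly like this; the host, image name, container name, and port are made-up placeholders:

```shell
#!/bin/sh
# Sketch of a minimal CI deploy step: CI has already pushed the image
# to a registry; these commands run on the server over ssh.
set -e
IMAGE=registry.example.com/myapp:latest

ssh deploy@myserver.example.com <<EOF
  set -e
  docker pull $IMAGE
  docker stop myapp || true
  docker rm myapp || true
  docker run -d --name myapp --restart unless-stopped \
    -p 127.0.0.1:3000:3000 $IMAGE
EOF
```

Note this gives a brief downtime window between stop and run; that gap is exactly what the blue/green pattern from the first question removes.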
Unfortunately, the "best practice" is highly subjective, as it depends entirely on your setup and your organization.
It seems like you're looking for an extremely minimalist approach to Docker containers. You want to simply put source code and dependencies into a container and push that out to a system. This is definitely possible with Docker, but the manner of doing this is going to require research from you to see what fits best.
Here are the questions I think you should be asking to get started:
1) Is there a CI tool that will help me package together these containers, possibly something I'm already using? (Jenkins, GitLab CI, CircleCI, TravisCI, etc...)
2) Can I use the official Docker images available at Dockerhub (https://hub.docker.com/), or do I need to make my own?
3) How am I going to store Docker Images? Will I host a basic Docker registry (https://hub.docker.com/_/registry/), or do I want something with a bit more access control (Gitlab Container Registry, Harbor, etc...)
That really only focuses on the Continuous Integration part of your question. Once you figure this out, then you can start to think about how you want to deploy those images (Possibly even using one of the tools above).
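For question 3, the basic registry option is genuinely small; a sketch (unauthenticated, so put TLS and access control in front of it before real use; the image name is illustrative):

```shell
# Run a bare Docker registry on the host.
docker run -d -p 5000:5000 --name registry registry:2

# Tag and push an image into it.
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest
```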
Note: Also, Docker doesn't eliminate all developer headaches. Does it solve some of the problems? Absolutely. But what Docker, and the accompanying Container mindset, does best is shift many of those issues to the left. What this means is that you see many of the problems in your processes early, instead of those problems appearing when you're pushing to prod and you suddenly have a fire drill. Again, Docker should not be seen as a solve-all. If you go into Docker thinking it will be a solve-all, then you're setting yourself up for failure.

Docker Hub Update Notifications

Are there any good methods/tools to get notifications on updates to containers on Docker Hub? Just to clarify, I don't want to automatically update, just somehow be notified of updates.
I'm currently running a Kubernetes cluster so if I could just specify a list of containers (as opposed to it using the ones on my system) that would be great.
Have you tried docker-notify? It runs on Node (perhaps imperfect), but within its own container. You'll need a mail server or a webhook for it to trigger against.
I'm surprised this isn't supported by the Docker CLI or Docker Hub; base-image changes (e.g. Alpine) cause a lot of churn, with both Hub-dependent images (Nginx, Postgres) and locally built dependent containers possibly needing a rebuild.
I also found myself needing this, so I helped build image-watch.com, which is a subscription-based service that watches Docker images and sends notifications when they are updated. As a hosted service, it's not free, but we tried to make it as cheap as possible.
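As a rough DIY alternative, Docker Hub's public API exposes a tag's digest, so a cron job can diff it against the last value seen. The repository name and state file below are placeholders, and the exact response shape is an assumption worth verifying against the current API:

```shell
#!/bin/sh
# Poll Docker Hub for an image digest; print a line when it changes.
# Requires curl and jq; run from cron and hook up mail/webhook as needed.
REPO=library/nginx
TAG=latest
STATE="$HOME/.docker-watch-${TAG}.digest"

DIGEST=$(curl -fsS "https://hub.docker.com/v2/repositories/$REPO/tags/$TAG" \
  | jq -r '.digest')

if [ -f "$STATE" ] && [ "$(cat "$STATE")" != "$DIGEST" ]; then
  echo "Image $REPO:$TAG was updated"   # replace with a notification of your choice
fi
printf '%s' "$DIGEST" > "$STATE"
```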
