how to build and run a group of docker containers dynamically? - docker

I have an API server which writes some data to the DB and should then spin up other containers, according to the different parameters it receives.
How should I do that, both in development and in production?

You need to work on a Dockerfile generator. Like here.
That's a lot of work for your use case, but worth doing. Friendly advice: keep control over the number of containers you create by reusing them for similar functions.
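If the parameters map onto existing images rather than requiring generated Dockerfiles, a simpler route is to start containers programmatically from the API server. Below is a minimal sketch using the Docker SDK for Python; the image name, environment variable, and label are illustrative assumptions, not part of the question.

    import docker

    client = docker.from_env()  # talks to the local Docker daemon

    def launch_worker(params):
        # Assumed: one pre-built "worker" image, configured via environment variables.
        return client.containers.run(
            "myorg/worker:latest",                 # assumed image name
            detach=True,
            environment={"JOB_PARAMS": str(params)},
            labels={"created-by": "api-server"},   # makes reuse/cleanup queries easy
        )

In production the API server needs access to the Docker daemon (or to an orchestrator API such as Kubernetes) to do this, which is a security decision worth making explicitly.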

Related

Are there problems with using a big Docker container for multiple tasks?

I'm working on a scientific computing project. For this work I need many Python modules as well as C++ packages. The C++ packages require specific versions of other software, so setting up the environment must be done carefully, and after the setup the dependencies should not be updated. So I thought it would be good to make a Docker container and work inside it, in order to make the work reproducible in the future. However, I don't understand why people on the internet recommend using different Docker containers for different processes. To me it seems more natural to set up the environment, which is a pain, and then use it for the entire project. Can you please explain what I have to worry about in this case?
It's important that you differentiate between a Docker image and a Docker container.
People recommend using one process per container because this results in a more flexible, scalable environment: if you need to scale out your frontend web servers, or upgrade your database, you can do that without bringing down your entire stack. Running a single process per container also allows Docker to manage those processes in a sane fashion, e.g. by restarting things that have unexpectedly failed. If you're running multiple processes in a container, you end up having to hide this information from Docker by running some sort of process manager, which complicates your containers and can make it difficult to orchestrate a complex application.
On the other hand, it's quite common for someone to use a single image as the basis for a variety of containers all running different services. This is particularly true if you're building a project where a single source tree produces several commands; in that case, it makes sense to bundle all of that into a single image and then choose which command to run when you start the container.
The only time this becomes a problem is when someone decides to bundle, say, MySQL and Apache into a single image: that's a problem because well-maintained official images already exist for those projects, and by building your own you've taken on the burden of properly configuring those services and maintaining the images going forward.
To summarize:
One process/service per container tends to make your life easier
Bundling things together in a single image can be okay
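To illustrate the "one image, several containers" pattern, here is a rough sketch using the Docker SDK for Python; the image name and commands are assumptions, and the point is only that the same image backs several single-process containers.

    import docker

    client = docker.from_env()
    IMAGE = "myproject:latest"  # assumed: one image built from the project's source tree

    # Same image, different commands -- one process per container.
    web = client.containers.run(IMAGE, "python manage.py runserver 0.0.0.0:8000", detach=True)
    worker = client.containers.run(IMAGE, "python manage.py process_jobs", detach=True)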

Docker rolling updates on a single node

So I have been using docker with docker-compose for quite some time in a development environment, I love how easy it is.
Until now, I also used docker-compose to serve my apps on my own server, as I could afford short downtimes such as the one caused by docker-compose restart.
But in my current project, we need rolling updates.
We still have one node, and it will stay that way, as we don't expect scalability issues for quite some time.
I read that I need to use Docker Swarm, fine, but when I look for tutorials on how to set it up together with my docker-compose.yml files, I can't find any developer-oriented (as opposed to devops-oriented) resources that simply tell me the steps to achieve this. Even if I don't understand everything, that's okay; I will learn along the way.
Are there any tutorials to learn how to set this up out there? If not, shouldn't we build it here?
I'm fairly sure many of us have this issue, as Docker is now a must-have for devs, but we (devs) still don't want to dive too deep into the devops world.
Cheers, hope it gets attention instead of criticism.
After giving Docker Swarm multiple tries, I struggled a lot with concurrency and orchestration issues, so I decided to stick with docker-compose, which I'm much more comfortable with.
Here's how I achieved rolling updates: https://github.com/docker/compose/issues/1786#issuecomment-579794865
It actually works pretty nicely, though observers told me it's a similar strategy to what Swarm does by default.
So I guess most of my issues went away by removing replication of nodes.
When I get time, I'll give swarm another try. For now, compose does a great job for me.
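For readers who want a feel for what a single-node rolling update can look like without Swarm, here is a rough Python sketch using the Docker SDK. It is not necessarily the approach from the linked comment; the service label, network name, and the naive readiness wait are all assumptions.

    import time
    import docker

    client = docker.from_env()

    def rolling_update(service_name, image):
        # Assumed convention: running containers carry a "service=<name>" label.
        old = client.containers.list(filters={"label": f"service={service_name}"})
        new = client.containers.run(
            image,
            detach=True,
            labels={"service": service_name},
            network="app_net",  # assumed pre-existing network shared with the reverse proxy
        )
        time.sleep(5)  # naive wait; a real setup would poll a health endpoint
        new.reload()
        if new.status != "running":
            new.remove(force=True)
            raise RuntimeError("new container failed to start; keeping the old one")
        for container in old:
            container.stop()
            container.remove()

The reverse proxy (or whatever routes traffic to the containers) still needs to notice the new container and drop the old one, which is the part a tool like Swarm handles for you.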

How to use a scheduler (cron) container to execute commands in other containers

I've spent a fair amount of time researching and I've not found a solution to my problem that I'm comfortable with. My app is working in a dockerized environment:
one container for the database;
one or more containers for the APP itself. Each container holds a specific version of the APP.
It's a multi-tenant application, so each client (or tenant) may be related to only one version at a time (migrations should be handled per client, but that's not relevant here).
The problem is that I would like to have another container to handle scheduled jobs, like sending e-mails, processing some data, etc. The scheduler would then execute commands in the app containers. Projects like Ofelia look promising, but I would have to know ahead of time which container to execute the command in. That's not possible, because I need to go to the database to discover which version the client is on, in order to figure out which container the command should be executed in.
Is there a tool to help me here? Should I change the structure somehow? Any tips would be welcome.
Thanks.
So your question is that you want to get the app's version info from the database container before scheduling jobs, right?
I think this is related to the business logic, not to the dockerized environment. You have a few ways to solve the problem:
Check the network and make sure the containers can connect to each other.
The database should support remote access (RPC), so you can use it to get the version data.
You can use tools that support remote execution, like SSH, to run the command in the right container.
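Another option, staying inside the dockerized setup, is to give the scheduler container access to the Docker socket and resolve the tenant's version at job time. A rough sketch with the Docker SDK for Python follows; the table schema, the "app-<version>" container naming convention, and the sqlite3 stand-in for the real database client are all assumptions.

    import sqlite3  # stand-in; use the real app's database client here
    import docker

    client = docker.from_env()

    def run_job_for_tenant(tenant_id, command):
        # Assumed schema: a `tenants` table mapping tenant id -> app version.
        conn = sqlite3.connect("/data/app.db")
        row = conn.execute(
            "SELECT app_version FROM tenants WHERE id = ?", (tenant_id,)
        ).fetchone()
        conn.close()

        # Assumed naming convention: one app container per version, e.g. "app-1.4".
        container = client.containers.get(f"app-{row[0]}")
        exit_code, output = container.exec_run(command)
        return exit_code, output

The scheduler (Ofelia, cron, or anything else) then only needs to call a wrapper like this per tenant, so the container lookup happens at execution time rather than in the schedule definition.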

how to make two docker containers share a sqlite db on kubernetes?

I am trying to build an application which in essence consists of two parts.
Django based api
SQLite database.
The API interacts with the SQLite database, but only has read-only access. However, the SQLite database needs to be updated every N minutes. So my plan was to make two Docker containers: the first one for the API, and the second one for the script which is executed every N minutes using cron (Ubuntu) to update the database.
I am using Kubernetes to serve my applications. So I was wondering if there is a way to achieve what I want here?
I've researched about Persistent Volumes in Kubernetes, but still do not see how I can make it work.
EDIT:
So I have figured out that I can run the two containers in a single pod on Kubernetes and this way make use of an emptyDir volume. My question is then: how do I define the path to this directory in my Python files?
Thanks,
Lukas
Take into account that an emptyDir is erased every time the pod is stopped/killed (you do a deployment, a node crashes, etc.). See the docs for more details: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
Taking that into account, if that solves your problem, then you just need to set the mountPath to the directory you want, as the example in the link above shows.
Also note that the mounted directory starts out empty, so if you have other things at that path in the image they won't be visible once you mount an emptyDir there (just typical Unix mount semantics, nothing k8s specific here).
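As for referencing the directory from Python: the path is just whatever mountPath you give both containers in the pod spec. A minimal sketch, assuming the emptyDir is mounted at /shared-data in both containers (the path, environment variable, and file name are examples):

    import os
    import sqlite3

    # Both containers see the same emptyDir at the mountPath set in the pod spec.
    DB_PATH = os.environ.get("SQLITE_PATH", "/shared-data/app.db")

    def get_read_only_connection():
        # The API opens the file read-only; the updater container opens it normally.
        return sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)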

Dockerizing simple webapp: how to pick what goes in which container?

I have a very simple webapp. You can think of it as a webpage with an input box which sends the user's input to the backend; the backend returns a JSON and then the frontend plugs that JSON into a jinja2 template and serves some HTML. That's it. (Also, there's a MySQL db on the backend.)
I want to dockerize this. The reason is that this webapp happens to have gotten some traction, and I've had scares before where I push something, the website breaks, I try to roll it back and it's still broken, and I end up spending a couple of hours sweating to fix it as fast as possible. I'm hoping that Docker solves this.
Question: how should I split the whole thing into different containers? Given what I have planned for the future, the backend will have to be turned into an API which the frontend connects to, so they will be two independent containers. My question is really how to connect them. Should the API container expose an http:80 endpoint which the frontend container GETs from? I guess my confusion comes from the fact that I will then have TWO Python processes running: one for the API, obviously, and another one which does nothing but send input to the API and render the returned JSON into a jinja2 template (and then one container for the MySQL db).
OR should I keep both the renderer and the API in the same container, but have two pages, for example /search.html which the user knows about and /api.html which is "secret" but which I will need in the future?
Does this picture make sense, or am I over complicating it?
There are no hard and fast rules for this, but a good rule of thumb is one process per container. This will allow you to reuse these containers across different applications. Conversely, some people are finding it useful to create "fat containers" where they have a single image for their whole app that runs in one container.
You also have to think about things like "how will this affect my deploy process?" and "do I have a sufficient test feedback loop that allows me to make these changes easily?". This link seems useful: https://valdhaus.co/writings/docker-misconceptions/
If this really is a small application, and you're not operating in a SOA environment, one container will probably get you what you want.
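If you do go the two-container route, the connection is simply HTTP over the Docker network, addressing the API container by its service name. Here is a small sketch of the frontend process, assuming a compose service named "api" listening on port 80, a /search endpoint, and Flask with requests (all of these names are illustrative, not your actual code):

    import requests
    from flask import Flask, request, render_template

    app = Flask(__name__)

    @app.route("/search")
    def search():
        # The hostname "api" resolves to the API container on the shared Docker network.
        resp = requests.get("http://api:80/search", params={"q": request.args.get("q", "")})
        return render_template("results.html", data=resp.json())

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8000)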
