How to find a Docker image on Docker Hub?

I am new to Docker. Using Kitematic, how can I set up a Docker container containing the following?
Apache, Memcached, MySQL, Nginx, PHP FPM
Should I find one single image with all of these? If so, how do I find that on https://hub.docker.com? It doesn't seem possible to filter by the above requirements.
Or should I install these as separate containers?

Bart,
I don't know anything about Kitematic, but I can give you some general information to clear things up.
The general consensus is to run only a single process per container. There are lots of discussions about why this is good or bad; one such discussion, for example: https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container.
That said, these are the images I would choose for an environment with the software you described above:
Memcached: https://hub.docker.com/_/memcached
MySQL: https://hub.docker.com/_/mysql
Nginx: https://hub.docker.com/_/nginx
PHP FPM: https://hub.docker.com/_/php
How do I get these images? I go to hub.docker.com and search for the software I want. I start with the official images and check whether they suit my needs. If they do, great! Otherwise, I look for non-official images, and if I still don't find what I want, I extend the existing images by creating a custom image based on one from hub.docker.com.
Some more explanation about the last one, PHP. PHP is distributed with multiple tags. On the Docker Hub page (the 'Description' tab) you can see the supported tags. Clicking the tag you are interested in leads you to a GitHub repo where the Dockerfile is hosted. This file contains the commands used to construct the image you are researching. You can check all the tags to see which one installs the software you need. For example, there are PHP tags where Apache is installed (e.g. 7-apache) and tags where FPM is installed (e.g. 7-fpm).
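As a concrete illustration, here is a minimal sketch of extending the official php:7-fpm image with an extra extension; the choice of the memcached PECL extension (and its build dependencies) is just an assumption for the example:

    # Dockerfile: extend the official PHP-FPM image with an extension (illustrative)
    FROM php:7-fpm
    RUN apt-get update \
        && apt-get install -y --no-install-recommends libmemcached-dev zlib1g-dev \
        && pecl install memcached \
        && docker-php-ext-enable memcached \
        && rm -rf /var/lib/apt/lists/*

docker-php-ext-enable is a helper shipped in the official PHP images for activating extensions installed via PECL.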
Hope this helps you with the research about which images to use!

You need to run those images within the same docker network, through docker-compose (and its associated docker-compose.yml) such as this one.
The docker-compose support in the Kitematic UI, though, is still an open issue.

You can't find all of these containers as one image. What you can do is create a docker-compose file and add all those independent images to it.
This way you can handle all your containers as services in a single file, with their dependencies too.
For further info refer to https://docs.docker.com/compose/
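A minimal docker-compose.yml sketch for the stack from the original question; the image tags, the password, and the ./src mount path are illustrative assumptions:

    version: "3"
    services:
      mysql:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example    # assumption: use your own secret
      memcached:
        image: memcached:alpine
      php:
        image: php:7-fpm
        volumes:
          - ./src:/var/www/html           # assumption: your PHP code lives in ./src
      nginx:
        image: nginx:alpine
        ports:
          - "8080:80"
        volumes:
          - ./src:/var/www/html

Running docker-compose up -d starts all four containers on a shared network where each service can reach the others by service name (e.g. the PHP container can reach MySQL at the hostname mysql).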

Related

How to create a single project out of multiple docker images

I have been working on a project where I have had several docker containers:
Three OSRM routing servers
Nominatim server
A container with the webpage code and all the needed dependencies
So, now I want to prepare a version that a user could download and run. What is the best practice to do such a thing?
First, I thought about joining everything into one container, but I have read that it is not recommended to have several processes in one place. Second, I thought about wrapping everything up in a VM, but that is not really a "program" a user can launch. And my third idea was to write a script that would download each container from Docker Hub separately and launch the webpage. But I am not sure if that is best practice, or whether there are better ideas.
When you need to deploy a full project composed of several containers, you may use a specialized tool.
A well-known one for single-server usage is docker-compose:
Compose is a tool for defining and running multi-container Docker applications
https://docs.docker.com/compose/
You could provide your users with:
a docker-compose file
your application's docker images (e.g. through Docker Hub).
Regarding clusters/cloud, we talk rather about orchestrators like Docker Swarm, Kubernetes, or Nomad.
Kubernetes' documentation is here:
https://kubernetes.io/
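Under the docker-compose option above, the user-facing workflow could look like the following sketch; the download URL is a hypothetical placeholder:

    # Fetch the compose file you publish (hypothetical location),
    # then pull the images and start the stack
    curl -fsSLO https://example.com/yourproject/docker-compose.yml
    docker-compose pull     # pulls your published images from Docker Hub
    docker-compose up -d    # starts all containers on a shared network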

Should I create multiple Dockerfiles for parts of my webapp?

I cannot get the idea of connecting parts of a webapp via Dockerfiles.
Say, I need Postgres server, Golang compiler, nginx instance and something else.
I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
Is it correct that I can put everything in one Dockerfile or should I create a separate Dockerfile for each dependency?
If I need to create a Dockerfile for each dependency, what's the correct way to create a merged image from them all and make all the parts work inside one container?
The current best practice is to have a single container perform one function. This means you would have one container for nginx and another for your app. Each could be defined by its own Dockerfile. Then, to tie them all together, you would use docker-compose to define the dependencies between them.
A Dockerfile describes a single docker image: one Dockerfile for each image you build and push to a docker registry. There are no rules as to how many images you manage, but it does take effort to manage an image.
You shouldn't need to build your own docker images for things like Postgres, Nginx, Golang, etc., as there are many official images already published. They are configurable, easy to consume, and can often be run with just a CLI command.
Go to the page for a docker image and read the documentation. It often explains what mounts the image supports, what ports it exposes, and what you need to do to get it running.
Here's nginx:
https://hub.docker.com/_/nginx/
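For instance, the nginx page documents a pattern along these lines for serving static content; the host path and port here are assumptions:

    # Serve ./site read-only from the official nginx image on port 8080
    docker run --rm -p 8080:80 \
        -v "$PWD/site":/usr/share/nginx/html:ro \
        nginx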
You use docker-compose to connect together multiple docker images. It makes it easy to docker-compose up an entire server stack with one command.
Explaining how to use docker-compose is like trying to explain how to use docker. It's a big topic, but I'll address the key point of your question.
Say, I need Postgres server, Golang compiler, nginx instance and something else. I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
No, you don't describe those things with a Dockerfile. Here's the problem in trying to answer your question: you might not need a Dockerfile at all!
Without knowing the specific details of what you're trying to build we can't tell you if you need your own docker images or how many.
You can, for example, deploy a running LAMP server using nothing but published docker images from Docker Hub. You would just mount the folder with your PHP source code and you're done.
So the key here is that you need to learn how to use docker-compose. Only after learning what it can not do will you know what work is left for you to do to fill in the gaps.
It's better to come back to Stack Overflow with specific questions like "how do I run the Golang compiler on the command line via docker".
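For what it's worth, here is a sketch of that last example using the official golang image; the tag and the assumption that your code sits in the current directory are illustrative:

    # Compile the Go code in the current directory without installing Go locally
    docker run --rm \
        -v "$PWD":/usr/src/app -w /usr/src/app \
        golang:1.21 go build -v ./...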

How do I setup a docker image to dynamically pull app code from a repository?

I'm using Docker Cloud at the moment. I'm trying to figure out a development-to-production workflow using docker with docker-compose to pull application code for multiple applications of the same type, simply changing the repository each one pulls from. I understand the concept of mounting a volume, but all the examples show the source code in the same repo as the Dockerfile and docker-compose file (example). I want the app code from that example to come from a remote, dynamic repo. Would I set an environment variable in the docker image? If so, how?
Any example or link to a workflow example is appreciated.
If done right, the code "baked" into Docker images should be immutable and the only thing that should change at runtime is configurable parameters like environment variables (e.g. to set the port the app will listen on).
Ideally, you should bake your code into the image. Otherwise you're losing a lot of the benefit of using Docker in the first place.
The problem is...
...your use case does not match the best practice. You want an image without any code embedded in it, with the code fetched fresh at each update instead. If you browse Docker Hub you'll find many images named service:version. That's one of the benefits of Docker: offering different versions of the same service. If you always want the most up-to-date code, your workflow will have some downsides.
One solution could be...
...webhooks, especially if your code is versioned on GitHub. Or any continuous integration tool.
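If you do want a single Dockerfile reused across applications that differ only in their source repository, one compromise is a build argument, so the code is still baked in at build time. A sketch, where the base image, default repo URL, and target path are all assumptions:

    # Dockerfile: bake code from a configurable repository into the image
    FROM php:7-apache
    ARG APP_REPO=https://github.com/example/app.git   # hypothetical default repo
    RUN apt-get update \
        && apt-get install -y --no-install-recommends git \
        && git clone "$APP_REPO" /var/www/html \
        && rm -rf /var/lib/apt/lists/*

Each application then gets its own image build, e.g. docker build --build-arg APP_REPO=https://github.com/example/other-app.git -t other-app . This keeps the "immutable image" property described above.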

Extending an existing Docker Image on Docker Hub

I'm new to Docker and trying to get my head around extending existing Images.
I understand you can extend an existing Docker image using the FROM command in a Dockerfile (e.g. How to extend an existing docker image?), but my question is: in general, how can I install additional software/packages without knowing the base image's operating system or which package manager is available?
Or am I thinking about this the wrong way?
The best practice is to run the base image you want to start FROM interactively (e.g. docker run -it <image> sh, or docker exec into a running container) and see what package managers are available (if any). Then you can write your Dockerfile with the correct software installation procedure.
Think of it the same way you'd add software to any computer: you'd either log into it yourself and poke around, or write an installation program that can handle all of the expected variations.
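A short sketch of that poking-around step; redis:latest is just an example image, and on minimal bases the shell may be sh rather than bash:

    # Open an interactive shell in the base image
    docker run --rm -it redis:latest bash

    # Inside the container, identify the distribution and package manager
    cat /etc/os-release
    which apt-get apk yum dnf 2>/dev/null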
In most cases, the source Dockerfile is provided and you can walk the chain backwards and gain a better understanding as you do.
For example, if we look at the official Redis image we see the information tab says
Supported tags and respective Dockerfile links
2.6.17, 2.6 (2.6/Dockerfile)
2.8.19, 2.8, 2, latest (2.8/Dockerfile)
So if you're interested in building off redis:latest you'd follow the second link and see that it in turn is built off of debian:wheezy.
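Knowing the base is Debian tells you apt-get is the package manager, so an extension could look like this sketch (the added package is purely illustrative):

    FROM redis:latest
    # Debian base (debian:wheezy at the time), so apt-get applies
    RUN apt-get update \
        && apt-get install -y --no-install-recommends procps \
        && rm -rf /var/lib/apt/lists/*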
Most user-created images will either include their Dockerfile on the hub page or from a link there.

How to bootstrap a docker container?

I created a docker image with pre-installed packages in it (apache, mysql, memcached, solr, etc). Now I want to run a command in a container made from this image, and this command relies on all my packages. I want to have all of them started when I start a new container.
I tried to use /sbin/init, but it doesn't work in docker.
The general opinion is to use a process manager to do this. I won't go into the details here, since I wrote a blog post on that: http://blog.trifork.com/2014/03/11/using-supervisor-with-docker-to-manage-processes-supporting-image-inheritance/
Note that another rather general opinion is to split your containers. MySQL generally runs in a separate container, but you can try to get that working later on as well, of course :)
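A minimal sketch of the supervisor approach; the program names and command paths below are assumptions that depend on what your image has installed:

    ; supervisord.conf: keep several services running in one container
    [supervisord]
    nodaemon=true    ; stay in the foreground so the container keeps running

    [program:apache2]
    command=/usr/sbin/apache2ctl -D FOREGROUND

    [program:memcached]
    command=/usr/bin/memcached -u memcache

The image's Dockerfile would then end with something like CMD ["supervisord", "-c", "/etc/supervisord.conf"].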
I see that this is an old topic; however, for someone who just came across it: docker-compose can be used to connect multiple containers, so most of the processes can be split into different containers. Furthermore, as mentioned earlier, different process managers can be used to run processes simultaneously, and the one I would like to mention is Chaperone. I find it really easy to use and slightly better than supervisor!
docker-compose and docker-sync: you cannot go wrong applying this concept.
-Glynn
