I have a web application XY which consists of nginx, php-fpm and mariadb. I successfully split everything into its own container using docker-compose and it's running like a charm. For development purposes I just mounted a local directory that contains the actual source/PHP code. When deploying this to a staging or production environment, the Docker docs told me to bake the source code into the actual image. In this case I have to copy the source code into both the nginx and the php-fpm image when building them, because both of them need it.
When the application itself gets bigger (more assets and libraries), both the nginx and php-fpm images grow. In my opinion this somehow violates the "keep the image as small as possible" rule, and it seems deeply wrong to me. I've always learned not to repeat myself, to store logic in one place, to decouple things and so on.
Is this the right way to do it, or am I missing something?
In this case I'd probably create a new container which would contain the source code. This container can export the source code's directory as a volume that both the nginx and the php-fpm containers can mount.
There is an interesting write-up, "Dockerise your PHP application with Nginx and PHP7-FPM", which uses volumes to share code between PHP and Nginx.
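As a rough sketch of that approach (service names, the ./src path and ports are just placeholders, and the nginx site/fastcgi configuration is left out), a docker-compose.yml could look like this:

version: "3"
services:
  php:
    image: php:7-fpm              # stock php-fpm image; swap in your own build if needed
    volumes:
      - ./src:/var/www/html       # the same source tree is mounted into both containers
  nginx:
    image: nginx:latest
    volumes:
      - ./src:/var/www/html       # nginx serves the static assets from the same tree
    ports:
      - "8080:80"
    depends_on:
      - php

For staging or production you could swap the bind mount for a named volume populated by a container that only holds the source code, which is the "source code container" idea described above.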
Your point about not repeating yourself isn't a bad one, but consider that you may not always want the same number of Nginx containers and PHP containers. Maybe the PHP part of your application will be under more load than the part that serves static assets, and you'll want to scale that up independently. If you use something like Docker Swarm, you aren't even guaranteed all of your PHP containers will be on the same host.
Your images are deployment artifacts; there isn't anything wrong with having the same static content baked into multiple images.
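To make the scaling point concrete, here is what independent scaling can look like in a compose file used with swarm mode / docker stack deploy (the image names and replica counts below are made up):

version: "3"
services:
  php:
    image: registry.example.com/myapp-php:1.0     # hypothetical image with the code baked in
    deploy:
      replicas: 4                                 # the PHP side is under heavier load
  nginx:
    image: registry.example.com/myapp-nginx:1.0   # hypothetical image with the same code baked in
    deploy:
      replicas: 2
    ports:
      - "80:80"

With plain docker-compose you can get a similar effect with docker-compose up --scale php=4.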
Related
So, the idea is to dockerize an existing Meteor app from 2015. The app is divided into two parts (backend and frontend). I already wrote a huge bash script to handle all the older dependencies, software dependencies, etc. I just need to run the script and the app is running. But the idea now is to create a Docker image for that project. How should I achieve this? Should I create an empty Docker image and run my script there? Thanks, I'm new to Docker.
A bit more info about the stack, the script and the dependencies would be helpful.
Assuming that this app is not in active development, you can simply use e.g. an nginx image and give it the frontend files to serve.
For the backend there is a huge variety of options like php, node, etc.
The Dockerfile of your backend image should contain the installation and setup of its dependencies (except for other services like the database; there are separate images for those).
To keep things simple, try out docker-compose to configure your containers to act as one service as a whole (and to save yourself some configuration).
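Putting those suggestions together, a minimal compose sketch might look like the following (service names and paths are assumptions; the backend image is whatever your Dockerfile builds, and MongoDB is included only because Meteor apps typically use it):

version: "3"
services:
  frontend:
    image: nginx:latest
    volumes:
      - ./frontend/dist:/usr/share/nginx/html:ro   # static frontend files served by nginx
    ports:
      - "80:80"
  backend:
    build: ./backend                               # its Dockerfile installs the app's dependencies
    environment:
      - MONGO_URL=mongodb://db:27017/app           # illustrative connection string
    depends_on:
      - db
  db:
    image: mongo:4
    volumes:
      - dbdata:/data/db
volumes:
  dbdata: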
Later, to scale things up, you could look at orchestration tools such as Kubernetes. But based on your question, I assume this service is not there yet. :)
I can't get my head around the idea of connecting the parts of a web app via Dockerfiles.
Say I need a Postgres server, a Golang compiler, an nginx instance and something else.
I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
Is it correct that I can put everything in one Dockerfile or should I create a separate Dockerfile for each dependency?
If I need to create a Dockerfile for each dependency, what's the correct way to create a merged image from them all and make all the parts work inside one container?
The current best practice is to have a single container perform one function. This means that you would have one container for nginx and another for your app. Each could be defined by its own Dockerfile. Then, to tie them all together, you would use docker-compose to define the dependencies between them.
A Dockerfile defines a Docker image: one Dockerfile for each image you build and push to a Docker registry. There are no rules as to how many images you manage, but it does take effort to manage an image.
You shouldn't need to build your own Docker images for things like Postgres, Nginx, Golang, etc., as there are many official images already published. They are configurable, easy to consume and can often be run with just a CLI command.
Go to the page for a Docker image and read the documentation. It usually explains what mounts it supports, what ports it exposes and what you need to do to get it running.
Here's nginx:
https://hub.docker.com/_/nginx/
You use docker-compose to connect multiple Docker images together. It makes it easy to docker-compose up an entire server stack with one command.
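For the stack in your question, such a compose file could look roughly like this (only the Go app would need its own small Dockerfile; the database and nginx come straight from Docker Hub, and the credentials/URLs are illustrative):

version: "3"
services:
  db:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=example                  # example value only
    volumes:
      - pgdata:/var/lib/postgresql/data
  app:
    build: ./app                                   # your Go application, built by its own Dockerfile
    environment:
      - DATABASE_URL=postgres://postgres:example@db:5432/postgres
    depends_on:
      - db
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    depends_on:
      - app
volumes:
  pgdata:

A single docker-compose up -d then brings the whole stack up.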
Explaining how to use docker-compose is like trying to explain how to use Docker. It's a big topic, but I'll address the key point of your question.
"Say I need a Postgres server, a Golang compiler, an nginx instance and something else. I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it."
No, you don't describe those things with a Dockerfile. Here's the problem with trying to answer your question: you might not need a Dockerfile at all!
Without knowing the specific details of what you're trying to build, we can't tell you whether you need your own Docker images, or how many.
You can, for example, deploy a running LAMP server using nothing but published images from Docker Hub. You would just mount the folder with your PHP source code and you're done.
So the key here is that you need to learn how to use docker-compose. Only after learning what it cannot do will you know what work is left for you to fill in the gaps.
It's better to come back to Stack Overflow with specific questions like "how do I run the Golang compiler on the command line via Docker".
So here is the problem: I need to do some development, and for that I need the following packages:
MongoDB
Node.js
Nginx
RabbitMQ
Redis
One option is to take an Ubuntu image, create a container, install them one by one, and that's it: start my server and expose the ports.
But this could just as easily be done in a VirtualBox VM, and it would not use the power of Docker. So instead I have to start building my own image with these packages. Now here is the question: if I start writing my Dockerfile and place the commands to download Node.js (and the others) inside it, this again becomes the same thing as virtualization.
What I want is to start from Ubuntu and keep adding references to MongoDB, Node.js, RabbitMQ, Nginx and Redis inside the Dockerfile, and finally expose the respective ports.
Here are the queries I have:
Is this possible? Like adding references to other images inside the Dockerfile when you are starting FROM one base image.
If yes, then how?
Also, is this the correct practice or not?
How do you do this kind of thing in Docker?
Thanks in advance.
Keep images light. Run one service per container. Use the official images on Docker Hub for MongoDB, Node.js, RabbitMQ, Nginx, etc. Extend them if needed. If you want to run everything in one fat container you might as well just use a VM.
You can of course do crazy stuff in a dev setup, but why spend time setting up something that has zero value in a production environment? What if you need to scale up one of the services? How do you set memory and CPU constraints on each service? ... and the list goes on.
Don't make monolithic containers.
A good start is to use docker-compose to configure a set of services that can talk to each other. You can make a prod and dev version of your docker-compose.yml file.
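As a sketch of that layout (image tags, paths and ports are illustrative), the base docker-compose.yml lists each service from its official image, and a docker-compose.override.yml, which docker-compose picks up automatically, adds the dev-only bits such as bind mounts:

# docker-compose.yml
version: "3"
services:
  app:
    build: .                      # your Node.js application
    depends_on:
      - mongo
      - rabbitmq
      - redis
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
  mongo:
    image: mongo:4
  rabbitmq:
    image: rabbitmq:3-management
  redis:
    image: redis:6

# docker-compose.override.yml (dev only)
version: "3"
services:
  app:
    volumes:
      - ./src:/usr/src/app        # live-edit the code in development
    ports:
      - "3000:3000"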
Getting into the right frame of mind
In a perfect world you would run your containers in a clustered environment in production, to be able to scale your system and have concurrency, but that might be overkill depending on what you are running. It's at least good to have this in the back of your head, because it can help you make the right decisions.
Some points to think about if you want to be a purist:
How do you have persistent volume storage across multiple hosts?
Reverse proxy / load balancer should probably be the entry point into the system that talks to the containers using the internal network.
Is my service even able to run in a clustered environment (multiple instances of the container)?
You can of course do dirty things in dev such as mapping in host volumes for persistent storage (and many people who use docker standalone in prod do that as well).
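For example (image and path are just placeholders), a dev-only bind mount for persistence might look like:

services:
  mongo:
    image: mongo:4
    volumes:
      - ./data/mongo:/data/db     # host directory keeps the data between container restarts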
Ideally we should separate Docker in dev from Docker in prod. Docker is a fantastic tool during development, as you can have Redis, memcached, Postgres, MongoDB, RabbitMQ, Node or whatnot up and running in minutes, sharing that compose setup with the rest of the team. Docker in prod can be a completely different beast.
I would also like to add that I'm generally against the fanaticism that "everything should be running in docker" in prod. Run services in docker when it makes sense. It's also not uncommon for larger companies to make their own base images. This can be a lot of work and will require maintenance to keep up with security fixes etc. It's not necessarily the first thing you jump on when starting with docker.
I'm new to Docker. I wanted to create a Dockerfile to start services like RabbitMQ, an FTP server and Elasticsearch. But I'm not able to figure out where I should start.
I have asked a similar question here: How should I create a Dockerfile to run more than one services in one instance?
There I learned to create different containers: one for RabbitMQ, one for the FTP server and another for Elasticsearch, and to run them using a docker-compose file. You'll also find the Dockerfile code I created there.
It will be great if someone can help me out with this thing. Thanks!
They are correct. Each container, and by extension each image, should be responsible for one concern, and that typically maps to a single process. So if you need to run more than one thing (or, generally speaking, more than one process, though not strictly), then you most probably need to build separate images. One of the easiest and recommended ways of creating an image is writing a Dockerfile. This is meant to be an extremely simple process, and most of it will be a copy-paste of the same commands you would have used to install that component.
Once you write the Dockerfiles for each service, you build them using the docker build command, which results in the images.
When you run an image, you get what is known as a container. Think of it roughly like this: an ISO file is the image, and the actual VM or running machine is the container.
Now you can use docker-compose to orchestrate these various containers so they can communicate with (or be isolated from) each other. A docker-compose.yml file is a plain text file in the YAML format that describes the relationships between the different components within the app. Apps can be made up of several services, like a web server, app server, search engine, database server, cache engine and so on. Each of these is a service and runs as a container, but it is also not necessary to run everything as a container. Some can remain running in the traditional way, on VMs or on bare-metal servers.
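For the services you listed, a compose sketch could look something like this (there is no official FTP image, so that one is shown with a placeholder name; the Elasticsearch settings are the bare minimum for a single dev node and the version tag is just an example):

version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"               # AMQP
      - "15672:15672"             # management UI
  elasticsearch:
    image: elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  ftp:
    image: your-ftp-image         # placeholder: pick or build a suitable FTP server image
    ports:
      - "21:21"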
I'll check your other post and add if there is anything needed. But I hope this helps you get started at least.
Recently I was trying to figure out what a Docker workflow looks like.
What I thought was that devs should build images locally and push them, and in other environments servers should just pull that image directly and run it.
But I can see that a lot of public images allow people to put configuration outside the container.
For example, for the official elasticsearch image there is a command like this:
docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch
So what is the point of putting configuration outside the container instead of running local containers quickly?
My argument is:
If I put configuration inside a custom image, then in the testing or production environment the server just needs to pull that same image, which is already built.
If I put configuration outside the image, then in other environments there will be another process to get that configuration from somewhere. Sure, we could use git to version-control it, but isn't that a tedious and useless effort to manage? And installing third-party libraries is also required.
Further question:
Should I put the application file (for example, a WAR file) inside the web server container or outside it?
When you are doing development, configuration files may change often; so rather than keep rebuilding the containers, you may use a volume instead.
If you are in production and need dozens or hundreds of the same container, all with slightly different configuration files, it is easier to have one single image and keep the varying configuration files outside it (e.g. using Consul, etcd, ZooKeeper, ... or a VOLUME).
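As a small sketch of that idea (file layout is illustrative, mirroring the elasticsearch example above), the very same image can be reused across environments with only the mounted configuration differing:

# docker-compose.staging.yml
services:
  elasticsearch:
    image: elasticsearch
    volumes:
      - ./config/staging:/usr/share/elasticsearch/config

# docker-compose.prod.yml
services:
  elasticsearch:
    image: elasticsearch          # identical image; only the config directory changes
    volumes:
      - ./config/prod:/usr/share/elasticsearch/config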