Docker - build an environment

After hours of googling I just can't find what I am looking for.
I want to know if I can do the following with Docker: have separate Docker files that can be deployed and shared among a group of devs in my team.
For example:
Container 1 - an install of Lumen that can also have React installed alongside it via the command line. Most of our projects are Lumen, with some needing React/TypeScript installed too (and the ability to use npm install).
Container 2 - MySQL/MariaDB with or without data. I understand you could have the data in a separate container. I don't mind having MySQL etc. in container 1 if I can keep the data separate.
The idea is that container 1 can be swapped out, as each will represent a different project, but the DB data can live in a separate container so we can all have our own copy of container 2, and if for whatever reason it gets corrupted, we can just spin up a fresh one.

In Docker, you have different objects:
- Dockerfile, it's a file used for building an image
- Image, it's the result of a build using an image definition like a Dockerfile
- Container, it's basically a running image
- Registry, a place where you can store images and share them with other people
So, for your use case, there are different solutions, but if I understand your needs correctly, you want to share an image containing Lumen and some npm packages with your team? If that's the case, you could build it and push it to a registry.
Your team could use it as is, or build custom images based on this one.
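For instance, a rough sketch of what such a shared base image's Dockerfile could look like (the base images, versions and registry name below are only illustrative, not a prescription):

    # Hypothetical shared base image: PHP for Lumen plus Node/npm for React/TypeScript
    FROM php:8.2-fpm

    # Composer for installing Lumen and its dependencies
    COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

    # PHP extension commonly needed for MySQL/MariaDB access
    RUN docker-php-ext-install pdo_mysql

    # Node.js and npm for projects that also need React/TypeScript
    RUN apt-get update && apt-get install -y nodejs npm && rm -rf /var/lib/apt/lists/*

    WORKDIR /var/www/html

You would then build and push it with something like docker build -t registry.example.com/team/lumen-node:1.0 . followed by docker push registry.example.com/team/lumen-node:1.0, and each project can start its own Dockerfile with FROM that image.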
For your MySQL/MariaDB container, you could use the base image and add initialization scripts (.sql or .sh files) in /docker-entrypoint-initdb.d.
If you break the database, you can simply remove the container and its volume and start it again.
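As a rough example (the image tag, names and password are placeholders):

    # Create a named volume for the data, then start MariaDB with seed scripts
    docker volume create project_db_data

    docker run -d --name project-db \
      -e MARIADB_ROOT_PASSWORD=secret \
      -v project_db_data:/var/lib/mysql \
      -v "$PWD/initdb":/docker-entrypoint-initdb.d:ro \
      -p 3306:3306 \
      mariadb:10.11

    # The .sql/.sh files in ./initdb only run on first start (empty data volume).
    # If the data ever gets corrupted, throw it away and re-seed:
    docker rm -f project-db
    docker volume rm project_db_data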

Related

Should `docker-compose.yml` be in its own repository?

I'm building a small web app with Vue.js and an Express API, each with their own Dockerfile. I'm currently able to build those images and publish them to a private Docker repository, then pull them onto a virtual machine and run them. I want to add Docker Compose, and I've often seen it kept together with the code for the services, such as
|--..
|__api/
|__client/
|__docker-compose.yml
but then it seems like you can't publish the images to a repository, since Docker Compose builds the images and runs the containers, and so my VM would need to pull all the code, when to my mind it should just need the images and then know how to run them.
So am I thinking about Docker Compose wrong? I have very little experience with it; I'm just trying to figure out the best way to be able to run the containers and it seems like I should be able to do that on a VM without having to download all the source code to that VM.
You can use docker-compose and still publish the individual images.
I assume the API and the client each have their own Dockerfile.
So basically you have three options:

1. Let docker-compose build the images via the build option.
2. Just reference the images with the image option and make sure they are built beforehand.
3. Do both, so docker-compose will build those images and give them the name and the tag that you put under the image option.

They are all valid options as far as I am concerned. If you go with option two, I would write a little Makefile or script that makes sure the images are in place, for convenience.
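As a sketch of option three (the registry name and paths here are assumptions on my part), the docker-compose.yml can tag the locally built images with their registry names:

    version: "3.8"
    services:
      api:
        build: ./api
        image: registry.example.com/myapp/api:latest
        ports:
          - "3000:3000"
      client:
        build: ./client
        image: registry.example.com/myapp/client:latest
        ports:
          - "8080:80"

On your build machine you can then run docker-compose build and docker-compose push, while the VM only needs the compose file (not the source) and can run docker-compose pull followed by docker-compose up --no-build.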

Sharing bind volume in Docker swarm

We use the openjdk image to deploy our jars. Since we have multiple jars, we simply attach them using a bind mount and run them. I don't want to build separate images, since our deployments will be in air-gapped environments and I can't rebuild images each time, as only the jars will be changing.
Now we are trying to move towards swarm. Since it is a bind mount, I'm unable to spread the replicas to other nodes.
If I use volumes, how can I put these jars into a volume? One possibility is to run a dummy alpine image, mount the volume to the host, and then share it with the other containers. But is it possible to share that volume between nodes? And is it an optimal solution? Also, if I need to update the jars, how can that be done?
I can create an NFS drive, but I'm trying to figure out a way of implementing this without one. Since it is an isolated environment and may contain crucial data, I can't use 3rd-party plugins to finish the job either.
So how can Docker swarm be implemented in this scenario?
Use docker build. Really.
An image is supposed to be a static copy of your application and its runtime, and not the associated data. The statement "only the jars changed" means "we rebuilt the application". While you can use bind mounts to inject an application into a runtime-only container, I don't feel like it's really a best practice, and that's doubly true in a language where there's already a significant compile-time step.
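For reference, baking the jar in is usually only a few lines of Dockerfile; the base image tag and paths here are just examples:

    FROM openjdk:17-slim
    # Copy the built application jar into the image
    COPY app.jar /opt/app/app.jar
    ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]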
If you're in an air-gapped environment, you need to figure out how you're going to provide application updates (regardless of the deployment framework). The best solution, if you can manage it, is to set up a private Docker registry on the isolated network, docker save your images (with the jars embedded), then docker load, docker tag, and docker push them into the registry. Then you can use the registry-tagged image name everywhere and not need to worry about manually pushing the images and/or jar files across.
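Roughly, that flow could look like this (the registry host and tags are placeholders):

    # On the build machine (outside the air gap):
    docker build -t myapp:1.4.0 .
    docker save -o myapp-1.4.0.tar myapp:1.4.0

    # Carry the tar across, then on a machine inside the isolated network:
    docker load -i myapp-1.4.0.tar
    docker tag myapp:1.4.0 registry.internal:5000/myapp:1.4.0
    docker push registry.internal:5000/myapp:1.4.0

    # Every swarm node can now pull the registry-tagged name:
    docker service create --name myapp --replicas 3 registry.internal:5000/myapp:1.4.0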
Otherwise you need to manually distribute the image tar and docker load it, or manually push your updated jars on to each of the target systems. An automation system like Ansible works well for this; I'm partial to Ansible because it doesn't require a central server.

Should I create multiple Dockerfiles for parts of my webapp?

I cannot get my head around connecting the parts of a webapp via Dockerfiles.
Say, I need Postgres server, Golang compiler, nginx instance and something else.
I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
Is it correct that I can put everything in one Dockerfile or should I create a separate Dockerfile for each dependency?
If I need to create a Dockerfile for each dependency, what's the correct way to create a merged image from them all and make all the parts work inside one container?
The current best practice is to have a single container perform one function. This means that you would have one container for nginx and another for your app. Each could be defined by its own Dockerfile. Then, to tie them all together, you would use docker-compose to define the dependencies between them.
A Dockerfile defines a Docker image: one Dockerfile for each image you build and push to a Docker registry. There are no rules as to how many images you manage, but it does take effort to manage an image.
You shouldn't need to build your own Docker images for things like Postgres, Nginx, Golang, etc., as there are many official images already published. They are configurable, easy to consume and can often be run with just a CLI command.
Go to the page for a Docker image and read the documentation. It often explains what mounts it supports, what ports it exposes and what you need to do to get it running.
Here's nginx:
https://hub.docker.com/_/nginx/
You use docker-compose to connect together multiple docker images. It makes it easy to docker-compose up an entire server stack with one command.
Explaining how to use docker-compose is like trying to explain how to use Docker. It's a big topic, but I'll address the key point of your question.
Say, I need Postgres server, Golang compiler, nginx instance and something else. I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
No, you don't describe those things with a Dockerfile. Here's the problem in trying to answer your question: you might not need a Dockerfile at all!
Without knowing the specific details of what you're trying to build we can't tell you if you need your own docker images or how many.
You can, for example, deploy a running LAMP stack using nothing but published Docker images from Docker Hub. You would just mount the folder with your PHP source code and you're done.
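Something along these lines, assuming only images from Docker Hub and your PHP code bind-mounted in (all names, versions and the password are illustrative):

    version: "3.8"
    services:
      web:
        image: php:8.2-apache
        volumes:
          - ./src:/var/www/html   # your PHP source code, mounted in
        ports:
          - "8080:80"
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: example
        volumes:
          - db_data:/var/lib/mysql
    volumes:
      db_data: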
So the key here is that you need to learn how to use docker-compose. Only after learning what it cannot do will you know what work is left for you to fill in the gaps.
It's better to come back to Stack Overflow with specific questions like "how do I run the Golang compiler on the command line via Docker".
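For what it's worth, the official golang image already covers that last example; something like this compiles a project without installing Go on the host (the paths and version tag are illustrative):

    docker run --rm \
      -v "$PWD":/usr/src/myapp \
      -w /usr/src/myapp \
      golang:1.21 go build -v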

How should I create a Dockerfile to run multiple services through docker-compose?

I'm new to Docker. I wanted to create a Dockerfile to start services like RabbitMQ, an FTP server and Elasticsearch, but I'm not able to figure out where to start.
I have asked a similar question here : How should I create a Dockerfile to run more than one services in one instance?
There I got to know that I should create different containers: one for RabbitMQ, one for the FTP server and another for Elasticsearch, and run them using a docker-compose file. You'll find my Dockerfile code there.
It will be great if someone can help me out with this thing. Thanks!
They are correct. Each container & by extension, each image should be responsible for one concern & that is typically mapped to a single process. So if you need to run more than one thing (or more than one process, generally speaking, not strictly) then you most probably require to build separate images. One of the easiest & recommended ways of creating an image is writing a Dockerfile. This is expected to be an extremely simple process and most of it will be a copy paste of the same commands you would have used to install that component.
Once you write the Dockerfiles for each service, you build them using the docker build command, which results in the images.
When you run an image you get what is known as a container. Think of it roughly like this: an ISO file is the image, and the actual VM or running machine is the container.
Now you can use docker-compose to orchestrate these various containers so they can communicate with (or be isolated from) each other. A docker-compose.yml file is a plain-text file in YAML format that describes the relationship between the different components within the app. Apps can be made up of several services - a web server, app server, search engine, database server, cache engine and so on. Each of these is a service and runs as a container, but it is also not necessary to run everything as a container. Some can remain running in the traditional way, on VMs or on bare-metal servers.
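As a rough sketch of what such a docker-compose.yml could look like for your case (versions are illustrative, and the FTP image name is a placeholder for whichever FTP server image you pick):

    version: "3.8"
    services:
      rabbitmq:
        image: rabbitmq:3-management
        ports:
          - "5672:5672"    # AMQP
          - "15672:15672"  # management UI
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.10
        environment:
          - discovery.type=single-node
        ports:
          - "9200:9200"
      ftp:
        image: some-ftp-server-image   # placeholder, substitute a real FTP server image
        ports:
          - "21:21"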
I'll check your other post and add if there is anything needed. But I hope this helps you get started at least.

Can you share Docker containers?

I have been trying to figure out why one might choose to add every "step" of their setup to a Dockerfile, which will create their container in a certain state.
The alternative in my mind is to just create a container from a simple base image like ubuntu and then (via shell input) configure your container the way you'd like.
But can you share containers? If you can only share images with Docker then I'd understand why one would want every step of their container setup listed in a Dockerfile.
The reason I ask is because I imagine there is some amount of headache involved with porting shell commands, file changes for configs, etc. to correct Dockerfile syntax and have them work correctly? But as a novice with Docker I could be overestimating the difficulty of that task.
EDIT: I suppose another valid reason for having the Dockerfile with each setup step is for documentation as to the initial state of the container. As opposed to being given a container in a certain state, but not necessarily having a way to know what all was done from the container's image base state.
But can you share containers? If you can only share images with Docker then I'd understand why one would want every step of their container setup listed in a Dockerfile.
Strictly speaking, no. However, you can create a new image from an existing container using the docker commit command:
$ docker commit <container-name> <image-name>
This command will create a new image from the existing container that you can push and pull from/to registries, export and import and create new containers from.
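For example (the names are placeholders):

    # Snapshot a configured container as an image
    docker commit my-container my-image:v1

    # Share it via a registry...
    docker tag my-image:v1 registry.example.com/team/my-image:v1
    docker push registry.example.com/team/my-image:v1

    # ...or without a registry, as a tar archive
    docker save -o my-image.tar my-image:v1
    docker load -i my-image.tar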
The reason I ask is because I imagine there is some amount of headache involved with porting shell commands, file changes for configs, etc. to correct Dockerfile syntax and have them work correctly? But as a novice with Docker I could be overestimating the difficulty of that task.
If you're already using some other mechanism for automated configuration, you can simply integrate your existing automation into the Docker build. For instance, if you are already configuring your images using shell scripts, simply add a step in your Dockerfile that copies your install scripts into the image and executes them. In theory, this can also work with configuration management utilities like Puppet, Salt and others.
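A minimal sketch of that pattern, assuming an existing setup.sh provisioning script (the base image and script name are just examples):

    FROM ubuntu:22.04
    # Reuse the existing provisioning script as a build step
    COPY setup.sh /tmp/setup.sh
    RUN chmod +x /tmp/setup.sh && /tmp/setup.sh && rm /tmp/setup.sh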
EDIT: I suppose another valid reason for having the Dockerfile with each setup step is for documentation as to the initial state of the container. As opposed to being given a container in a certain state, but not necessarily having a way to know what all was done from the container's image base state.
True. As mentioned in comments, there are clear advantages to have an automated and reproducible build of your image. If you build your containers manually and then create an image with docker commit, you don't necessarily know how to re-build this image at a later point in time (which may become necessary when you want to release a new version of your application or re-build the image on top of an updated base image).
