Understanding docker principles

I have made a fairly simple Go server and I need to deploy it to a DigitalOcean droplet.
I know that there can be issues with cross-compiling Go apps when they use cgo, so to avoid thinking about it in the future I decided to use Docker, so my app will always be built and run in the same environment.
The first thing I don't get is about developing an app. When I create a Dockerfile I use commands to add files from my project directory into the newly created Docker image. Then I run the container created from this image. But what if I edit my code? As I understand it, I must stop the container, remove the image and then build it again. This is a bit tricky for such a common situation, or am I doing things wrong?
Second one: I have created a Docker droplet on DO. What's the way to deploy my app?
Do I have to push my image to some Docker registry and pull it onto the droplet?
Or can I upload it directly?
Or do I have to scp my source code to the droplet and run the same process as on my local machine, building the image and then running a container?

But what if I edit my code? As I understand it, I must stop the container, remove the image and then build it again. This is a bit tricky for such a common situation, or am I doing things wrong?
Don't delete the image, just rebuild it. It will be much faster than the initial build. Also, why is it tricky? It's just one or two commands, and you can create a bash or .bat script if it gets annoying.
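For example, a minimal rebuild-and-rerun sketch (the image name, container name, and port here are placeholders for your own):

docker build -t myapp .                            # rebuild; cached layers keep this fast
docker stop myapp_container && docker rm myapp_container
docker run -d --name myapp_container -p 8080:8080 myapp

Put those lines in a script and iterating on your code becomes a single command.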
I have created a Docker droplet on DO. What's the way to deploy my app?
All three options are possible. For the second one you would have to set up your own Docker registry on the VM, which might be more than you need. Using Docker Hub isn't bad. You could also just build the image on your server. I recommend using Docker Hub for its ease, and setting up Watchtower on your server to restart your web app on new image pushes.
Edit: the above advice was for a VM, not a Docker droplet. I'm not familiar with DO, but this article should help:
https://blog.machinebox.io/deploy-machine-box-in-digital-ocean-385265fbeafd
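If you go the registry route, the flow is roughly this (the user, image name, droplet address, and ports are placeholders):

# locally
docker build -t mydockerhubuser/myapp:latest .
docker push mydockerhubuser/myapp:latest
# on the droplet
docker pull mydockerhubuser/myapp:latest
docker run -d --name myapp -p 80:8080 mydockerhubuser/myapp:latest

If you would rather skip a registry entirely, you can also ship the image directly over SSH:

docker save myapp:latest | gzip | ssh root@your-droplet 'gunzip | docker load'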

Related

How to load and run offline docker image built using docker-compose build?

I'm new to docker and have been dabbling with it for the past few days. I've managed to successfully use docker-compose for a multi-container deployment involving an app server (flask + gunicorn) and web server (nginx).
Now, I'd like to recreate the deployment on an offline machine. After doing research, it seems that most people mention using docker save and docker load to transfer over the base images. However, I'm wondering whether it's possible to recreate the deployment from the image created by docker-compose build. The reason is that I would like to skip the entire process of wheeling my Python package dependencies for offline use, which I would have to do if I started from the base images.
I've tried to save that particular image (the output of docker-compose build) and load it on the offline machine, and then tried docker run and docker-compose up, but neither seems to work. I would like to check with the community whether this method is even possible, and if so, what's the right way to go about it?
Thanks!
To solve my issue, I ended up making an image for each individual container after pip install, then using docker-compose.yml simply to spin them up. As David mentioned, it doesn't seem possible to spin up the deployment from a single image output by docker-compose build.
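For anyone hitting the same thing, the workflow looked roughly like this (the image and file names are placeholders for whatever docker-compose build produced):

# on the machine with internet access
docker-compose build
docker save -o web.tar myproject_web:latest
docker save -o nginx.tar myproject_nginx:latest
# copy the .tar files over, then on the offline machine
docker load -i web.tar
docker load -i nginx.tar
docker-compose up -d        # compose file refers to the services with image:, not build: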

How to automatically deploy and run docker images on server?

I'm trying to set up the deployment of Docker images to a Linux server (Debian 10).
I looked around the internet to find an easy solution to automatically deploy images from a Docker registry onto a server.
I know that Docker Hub has webhooks.
Also, there is an option to use Kubernetes, but it seems to be a bit too much for a simple application running on one server.
What I am looking for is a way for the server to detect that a Docker image has been updated, so that it downloads and runs the newest version.
Currently, I have set up automatic builds of Docker images on Azure DevOps that are pushed to a private repository on Docker Hub (I will most likely move to a privately hosted Nexus repository).
I am looking for suggestions on how to do it with relatively low complexity (e.g. should I use docker-compose for it or some sort of bash script on a server).
The closest thing to what I am looking for is this solution: How to auto deploy Docker Image on own server with GitLab?
I would like to know if this is the recommended way to do it, or whether there are other, possibly easier ways to approach it.
I found this project that looks good as a solution for my case.
https://containrrr.github.io/watchtower/
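A minimal sketch of that setup, assuming one app image pulled from a registry (the names, ports, and poll interval are placeholders):

version: "3"
services:
  app:
    image: registry.example.com/myapp:latest
    ports:
      - "80:8080"
    restart: unless-stopped
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 300        # poll for new images every 5 minutes
    restart: unless-stopped

Watchtower watches the running containers, pulls newer versions of their images when they appear, and recreates the containers with the same options, which covers the "detect and run the newest version" requirement without Kubernetes.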

Should `docker-compose.yml` be in its own repository?

I'm building a small web app with Vue.js and an Express API, each with their own Dockerfile. I am currently able to build those images, publish them to a private Docker repository, then pull them onto a virtual machine and run them. I want to add Docker Compose, and I've often seen it kept together with the code for the services, such as
|--..
|__api/
|__client/
|__docker-compose.yml
but that seems like you then can't publish the images to a repository, since Docker Compose builds the images and runs the containers, and so my VM would need to pull all the code, when to my thinking it should just need the images and then know how to run them.
So am I thinking about Docker Compose wrong? I have very little experience with it; I'm just trying to figure out the best way to be able to run the containers and it seems like I should be able to do that on a VM without having to download all the source code to that VM.
You can use docker-compose and still publish the individual images.
I assume that the API and the client each have their own Dockerfile.
So basically you have three options:
1. Let docker-compose build the images via the build option.
2. Just reference the images with the image option and make sure they are built beforehand.
3. Do both, so docker-compose will build those images and give them the name and the tag that you put under the image option.
They are all valid options as far as I am concerned. If you go with option two, I would write a little Makefile or script that makes sure the images are in place, for convenience.
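A compose file covering the three options could look roughly like this (the paths, registry, and tags are placeholders):

version: "3"
services:
  api:
    build: ./api                              # option 1: compose builds the image
    image: registry.example.com/api:1.0       # combined with build, this is the name/tag it gets (option 3)
  client:
    image: registry.example.com/client:1.0    # option 2: reference an image built and pushed elsewhere

With the image option, the VM only needs this docker-compose.yml and access to the registry: docker-compose pull followed by docker-compose up -d runs the published images without any source code.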

Setting up a container from a user's GitHub source

Can be closed, not sure how to do it.
I am, to be quite frank, lost right now. The user who published his source on GitHub somehow failed to update the installation instructions when he released a new branch. Now, I am not dense, just uneducated when it comes to Docker. I would really appreciate a push in the right direction. If I am missing any information from this post, please allow me to provide it in the comments.
Current Setup
O/S - Debian 8 Minimal (Latest kernel)
Hardware - 1GB VPS (KVM)
Docker - Installed with Compose (# docker info)
I am attempting to set up this (https://github.com/pboehm/ddns/tree/docker_and_rework). First I should clone this repo to my working directory? Let's say /home for example. I will run the following command:
git clone -b docker_and_rework https://github.com/pboehm/ddns.git
Which has successfully cloned the source files into /home/ddns/... (working dir)
Now I believe I am supposed to go ahead and build something, so I go into the following directory:
/home/ddns/docker
Inside is a docker-compose.yml file. I am not sure what this does, but by looking at it, it appears to contain a bunch of instructions which I can only presume are to do with actually deploying or building the whole container/image. From here I go ahead and do the following:
docker-compose build
As we can see, I believe it's building the container or image or whatever it's called, you get my point. After a short while, that completes and we can see the images listed by docker images. Which is correct, I see all of the dependencies in there, but things like:
go version
It does not show as a command, so I presume I need to run it inside the container maybe? If so, I don't have a clue how. I need to run 'ddns.go', which is inside /home/ddns; the execution command is:
ddns --soa_fqdn=dns.stealthy.pro --domain=d.stealthy.pro backend
I am also curious why the front-end web page is not showing. There should be a page like this:
http://ddns.pboehm.org/
But again, I believe there is more to do, I just do not know what.
docker-compose build will only build the images.
You need to run this; it will build and run them:
docker-compose up -d
The -d option runs the containers in the background.
To check if it's running after docker-compose up:
docker-compose ps
It will show what is running and what ports are exposed from the container.
Usually you can access the services from your localhost.
If you want to have a look inside the container:
docker-compose exec SERVICE /bin/bash
Where SERVICE is the name of the service in docker-compose.yml
The instructions it runs that you probably care about are in the Dockerfile, which for that repo is in the docker/ddns/ directory. What you're missing is that the Dockerfile creates an image, which is a template used to create instances. Every time you docker run, you create a new instance from the image. docker run docker_ddns go version will create a new instance of the image, run go version, output it, then die. A long-running process, like the one the docker_ddns-web image probably starts, will keep running until something kills it. The reason you can't see the web page is probably because you haven't run docker-compose up yet, which will create linked instances of all of the Docker images specified in the docker-compose.yml file. Hope this helps.

Should I create multiple Dockerfiles for parts of my webapp?

I cannot grasp the idea of connecting parts of a webapp via Dockerfiles.
Say, I need Postgres server, Golang compiler, nginx instance and something else.
I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
Is it correct that I can put everything in one Dockerfile or should I create a separate Dockerfile for each dependency?
If I need to create a Dockerfile for each dependency, what's the correct way to create a merged image from them all and make all the parts work inside one container?
The current best practice is to have a single container perform one function. This means that you would have one container for nginx and another for your app. Each could be defined by its own Dockerfile. Then, to tie them all together, you would use docker-compose to define the dependencies between them.
A Dockerfile describes a Docker image: one Dockerfile for each image you build and push to a Docker registry. There are no rules as to how many images you manage, but it does take effort to manage an image.
You shouldn't need to build your own Docker images for things like Postgres, Nginx, Golang, etc., as there are many official images already published. They are configurable, easy to consume, and can often be run with just a CLI command.
Go to the page for a Docker image and read the documentation. It often shows examples of what mounts it supports, what ports it exposes, and what you need to do to get it running.
Here's nginx:
https://hub.docker.com/_/nginx/
You use docker-compose to connect multiple Docker images together. It makes it easy to docker-compose up an entire server stack with one command.
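As a rough illustration of the kind of stack you describe (the image tags, port, and password are placeholders):

version: "3"
services:
  db:
    image: postgres:16                  # official image, no Dockerfile needed
    environment:
      POSTGRES_PASSWORD: example
  app:
    build: .                            # your own app, built from its Dockerfile
    depends_on:
      - db
  nginx:
    image: nginx:latest                 # official image, no Dockerfile needed
    ports:
      - "80:80"
    depends_on:
      - app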
Explaining how to use docker-compose is like trying to explain how to use Docker. It's a big topic, but I'll address the key point of your question.
Say, I need Postgres server, Golang compiler, nginx instance and something else. I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
No, you don't describe those things with a Dockerfile. Here's the problem in trying to answer your question: you might not need a Dockerfile at all!
Without knowing the specific details of what you're trying to build we can't tell you if you need your own docker images or how many.
You can, for example, deploy a running LAMP server using nothing but published Docker images from Docker Hub. You would just mount the folder with your PHP source code and you're done.
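As a rough sketch of that idea (the host port, source path, and image tag are placeholders), the official PHP+Apache image alone gets you most of the way:

docker run -d --name web -p 8080:80 -v "$PWD/src":/var/www/html php:8.2-apache

Add an official mysql or mariadb container next to it and you have the stack without writing a single Dockerfile.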
So the key here is that you need to learn how to use docker-compose. Only after learning what it cannot do will you know what work is left for you to fill in the gaps.
It's better to come back to Stack Overflow with specific questions like "How do I run the Golang compiler on the command line via Docker?"
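For that particular example, the official golang image already covers it; a rough sketch, assuming your code is in the current directory (the image tag is a placeholder):

docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.22 go build -v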
