I am using Portainer to run the Docker images that I build in a GitHub Action.
I've set up Docker in swarm mode, so when a new image is built the container is recreated automatically.
Everything works just fine so far; my only problem is, how do I run the database migrations?
I want to run the database migration command after the new container has been created, and I cannot figure out how to do this.
I know I could create a script and use it as the entrypoint, but I read that it's not OK to run the migration command that way.
I am not using a docker-compose file, and I would like to avoid one; it's easier for me to run the containers like this.
Is there any solution I should look into?
Maybe RUN will help you?
"The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
Layering RUN instructions and generating commits conforms to the core concepts of Docker where commits are cheap and containers can be created from any point in an image’s history, much like source control."
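For illustration, here is a minimal Dockerfile sketch of that approach (the Node base image and the npm run migrate command are assumptions, not from the question). Note that a RUN step executes at docker build time, so the database would have to be reachable from the build environment:

FROM node:18
WORKDIR /app
COPY . .
RUN npm ci
# hypothetical migration command; this runs during docker build, not at container start
RUN npm run migrate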
I have made a quite simple Go server and I need to deploy it to a DigitalOcean droplet.
I know that there can be issues with cross-building Go apps if they use cgo, so to avoid thinking about it in the future I decided to use Docker, so my app will always be built and run in the same environment.
The first thing I don't get is the development workflow. When I create a Dockerfile, I use commands to add files from my project directory into the newly created Docker image. Then I run the container created from this image. But what if I edit my code? As I understand it, I must stop the container, remove the image and then build it again. This is a bit tricky for such a common situation - or am I doing things wrong?
Second one - I have created a Docker droplet on DO. What's the way to deploy my app?
Do I have to push my image to a Docker registry and pull it onto the droplet?
Or can I upload it directly?
Or do I have to scp my source code to the droplet and run the same process as on my local machine, building the image and then running a container?
But what if I edit my code? As I understand it, I must stop the container, remove the image and then build it again. This is a bit tricky for such a common situation - or am I doing things wrong?
Don't delete the image, just rebuild it. It will be much faster than the initial build. Also, why is it tricky? It's just one or two commands; you can create a bash or .bat script if it gets annoying.
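For example, a small rebuild-and-restart script might look like this (the image/container name myapp and the port are placeholders):

#!/bin/sh
# rebuild the image; cached layers make this much faster than the first build
docker build -t myapp .
# replace the running container with one from the new image
docker rm -f myapp 2>/dev/null
docker run -d --name myapp -p 8080:8080 myapp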
I have created a Docker droplet on DO. What's the way to deploy my app?
All three options are possibilities. For the second one you would have to set up your VM as your own Docker registry, which might be more than you need. Using Docker Hub isn't bad. You could also just build the image on your server. I recommend using Docker Hub for its ease, and having Watchtower set up on your server to restart your web app on new image pushes.
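A minimal sketch of running Watchtower (the image name is the one from its current docs; it needs access to the Docker socket to be able to restart your containers):

docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower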
Edit: the above advice was for a VM, not a Docker droplet. I'm not familiar with DO, but this article should help:
https://blog.machinebox.io/deploy-machine-box-in-digital-ocean-385265fbeafd
Can be closed, not sure how to do it.
To be quite frank, I am lost right now. The user who published his source on GitHub somehow failed to update the installation instructions when he released a new branch. Now, I am not dense, just uneducated when it comes to Docker. I would really appreciate a push in the right direction. If I am missing any information from this post, please allow me to provide it in the comments.
Current Setup
O/S - Debian 8 Minimal (Latest kernel)
Hardware - 1GB VPS (KVM)
Docker - Installed with Compose (# docker info)
I am attempting to set up this (https://github.com/pboehm/ddns/tree/docker_and_rework). First, I should clone this repo to my working directory? Let's say /home for example. I will run the following command:
git clone -b docker_and_rework https://github.com/pboehm/ddns.git
Which has successfully cloned the source files into /home/ddns/... (working dir)
Now I believe I am supposed to go ahead and build something, so I go into the following directory:
/home/ddns/docker
Inside is a docker-compose.yml file. I am not sure what this does, but by looking at it, it appears to contain a bunch of instructions which I can only presume have to do with actually deploying or building the whole container/image or magical thing, right? From here I go ahead and do the following:
docker-compose build
As we can see, I believe it's building the container or image or whatever it's called, you get my point (here). After a short while, that completes and we can see the following (docker images running). Which is correct, I see all of the dependencies in there, but things like:
go version
It does not show as a command, so I presume I need to run it inside the container maybe? If so, I don't have a clue how. I need to run 'ddns.go', which is inside /home/ddns; the execution command is:
ddns --soa_fqdn=dns.stealthy.pro --domain=d.stealthy.pro backend
I am also curious why the front-end web page is not showing. There should be a page like this:
http://ddns.pboehm.org/
But again, I believe there is more to do, I just do not know what.
docker-compose build will only build the images.
You need to run this. It will build and run them.
docker-compose up -d
The -d option runs containers in the background
To check if it's running after docker-compose up
docker-compose ps
It will show what is running and what ports are exposed from the container.
Usually you can access services from your localhost
If you want to have a look inside the container
docker-compose exec SERVICE /bin/bash
Where SERVICE is the name of the service in docker-compose.yml
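For example, with a compose file like this (the service name web is just an illustration):

version: '3'
services:
  web:
    build: .

you would get a shell inside the running container with:

docker-compose exec web /bin/bash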
The instructions it runs that you probably care about are in the Dockerfile, which for that repo is in the docker/ddns/ directory. What you're missing is that the Dockerfile creates an image, which is a template used to create instances. Every time you docker run, you create a new instance from the image. docker run docker_ddns go version will create a new instance of the image, run go version, print the output, then die. A long-running process, like the one the docker_ddns-web image probably starts, will run until something kills that process. The reason you can't see the web page is probably that you haven't run docker-compose up yet, which will create linked instances of all of the images specified in the docker-compose.yml file. Hope this helps
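For instance, the one-off check from the question would be (assuming the image is tagged docker_ddns, as above):

docker run --rm docker_ddns go version

The --rm flag removes the throwaway container as soon as the command exits.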
I have been trying to figure out why one might choose to add every "step" of their setup to a Dockerfile, which will create their container in a certain state.
The alternative in my mind is to just create a container from a simple base image like ubuntu and then (via shell input) configure your container the way you'd like.
But can you share containers? If you can only share images with Docker then I'd understand why one would want every step of their container setup listed in a Dockerfile.
The reason I ask is that I imagine there is some amount of headache involved with porting shell commands, file changes for configs, etc. to correct Dockerfile syntax and having them work correctly. But as a novice with Docker, I could be overestimating the difficulty of that task.
EDIT: I suppose another valid reason for having the Dockerfile list each setup step is documentation of the initial state of the container, as opposed to being given a container in a certain state without necessarily having a way to know everything that was done from the container's base image onward.
But can you share containers? If you can only share images with Docker then I'd understand why one would want every step of their container setup listed in a Dockerfile.
Strictly speaking, no. However, you can create a new image from an existing container using the docker commit command:
$ docker commit <container-name> <image-name>
This command will create a new image from the existing container that you can push and pull from/to registries, export and import and create new containers from.
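For instance (all names are placeholders; docker save / docker load is the image-level export pair):

docker commit my-container my-image     # snapshot the container as an image
docker save my-image > my-image.tar     # export the image to a tarball
docker load < my-image.tar              # import it again, e.g. on another host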
The reason I ask is that I imagine there is some amount of headache involved with porting shell commands, file changes for configs, etc. to correct Dockerfile syntax and having them work correctly. But as a novice with Docker, I could be overestimating the difficulty of that task.
If you're already using some other mechanism for automated configuration, you can simply integrate your existing automation into the Docker build. For instance, if you are already configuring your images using shell scripts, simply add a build step in your Dockerfile that adds your install script to the container and executes it. In theory, this can also work with configuration management utilities like Puppet, Salt and others.
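A minimal sketch of that pattern (the base image and install.sh are illustrative assumptions):

FROM ubuntu:22.04
# copy an existing provisioning script into the image
COPY install.sh /tmp/install.sh
# run it once at build time; the result is baked into the image
RUN chmod +x /tmp/install.sh && /tmp/install.sh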
EDIT: I suppose another valid reason for having the Dockerfile list each setup step is documentation of the initial state of the container, as opposed to being given a container in a certain state without necessarily having a way to know everything that was done from the container's base image onward.
True. As mentioned in the comments, there are clear advantages to having an automated and reproducible build of your image. If you build your containers manually and then create an image with docker commit, you don't necessarily know how to re-build this image at a later point in time (which may become necessary when you want to release a new version of your application or re-build the image on top of an updated base image).
Ideally everything will be sorted out with a Dockerfile and volumes, but sometimes that isn't practical or convenient.
For example, I found an image with Ghost already set up, and it seemed to work. So I added a few blog entries. Then I realized that I actually needed to modify the config.js to set up the mail.
So I stopped the container, committed, made some changes in bash, committed again, and then went to start the container again running Ghost. But I had trouble getting it to work because the new image didn't have the configuration with the working directory and environment.
How can I copy the Docker container's configuration when I commit an image? Maybe I need to write a script that runs docker inspect on the container, pulls the config out, and then includes that in the docker commit command line?
This is a known issue: https://github.com/dotcloud/docker/issues/1141
The way you describe is still the best way to achieve that, I think, but I'd try using docker insert and see if that yields better results.
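On newer Docker versions, a sketch of the inspect-then-commit approach might look like this (the container and image names are placeholders; docker commit --change accepts Dockerfile instructions such as WORKDIR and ENV):

# pull the working directory out of the container's config
WD=$(docker inspect --format '{{.Config.WorkingDir}}' ghost-container)
# bake it into the committed image
docker commit --change "WORKDIR $WD" ghost-container ghost-image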
I'm playing with Docker by creating a Dockerfile with some Node.js instructions. Right now, every time I make changes to the Dockerfile I recreate the image by running sudo docker build -t nodejstest . in my project folder. However, this creates a new image each time and will swallow my SSD pretty soon.
Is there a way I can update an existing image when I change the Dockerfile, or am I forced to create a new one each time I make changes to the file?
Sorry if it's a dumb question
Docker build supports caching as long as there is no ADD instruction. If you are actively developing and changing files, only what comes after the ADD will be rebuilt.
Since 0.6.2 (scheduled today), you can do docker build --rm . and it will remove the temporary containers. It will keep the images though.
In order to remove the orphan images, you can list them with docker images and perform a docker rmi <id> on one of them. As of now, there is no auto-prune; all untagged images (orphans, previous builds) have to be removed manually.
According to this best practices guide, if you keep the first lines of your Dockerfile the same, it'll also cache them and reuse the same intermediate images for future builds.
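For example, a sketch of a Node.js Dockerfile ordered so the slow dependency layers stay cached while the frequently changing source is copied last (file names are illustrative):

FROM node:18
WORKDIR /app
# these layers are reused as long as the dependency manifests don't change
COPY package.json package-lock.json ./
RUN npm install
# source changes only invalidate the cache from this point on
COPY . .
CMD ["node", "index.js"]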
During development, it makes less sense to re-build a whole container for every commit. Later, you can automate building a Docker container with your latest code as part of your QA/deployment process.
Basically, you can choose to make a minimal container that pulls in code (using git when starting the container, or using -v /home/myuser/mynode:/home/myuser/mynode with ENTRYPOINT to run node).
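A sketch of the bind-mount variant (paths and image tag are illustrative):

# mount the live source tree into the container; edits on the host are
# visible immediately, with no rebuild needed
docker run -d \
  -v /home/myuser/mynode:/home/myuser/mynode \
  -w /home/myuser/mynode \
  node:18 node app.js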
See my answer to this question:
Docker rails app and git