How to update my app when I deploy it using docker

I deployed a Node.js app using Docker, and I don't know how to update the deployment after the app changes.
Currently, I have to remove the old Docker container and image every time I update the Node.js app.
I was hoping I wouldn't need to remove the old image and container each time the app is updated.

You tagged this "production". The standard way I've done this is like so:
1. Develop locally without Docker. Make all of your unit tests pass. Build and run the container locally and run integration tests.
2. Build an "official" version of the container. Tag it with a time stamp, version number, or source control tag; but do not tag it with :latest or a branch name or anything else that would change over time.
3. docker push the built image to a registry.
4. On the production system, change your deployment configuration to reference the version tag you just built. In some order, docker run a container (or more) with the new image, and docker stop the container(s) with the old image.
5. When it all goes wrong, change your deployment configuration back to the previous version and redeploy. (...oops.) If the old versions of the images aren't still on the local system, they can be fetched from the registry.
6. As needed, docker rm old containers and docker rmi old images. (A command-level sketch of these steps follows below.)
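To make those steps concrete, here is a minimal command-level sketch. The registry name registry.example.com/myapp, the tags, the container names, and the port are all hypothetical placeholders:

# 2. Build an immutable, uniquely tagged image
docker build -t registry.example.com/myapp:20240101 .
# 3. Push it to the registry
docker push registry.example.com/myapp:20240101
# 4. On the production system: start the new version, then stop the old one
docker run -d --name myapp-20240101 -p 3000:3000 registry.example.com/myapp:20240101
docker stop myapp-20231201
# 6. Clean up once you're confident you won't roll back
docker rm myapp-20231201
docker rmi registry.example.com/myapp:20231201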
Typically much of this can be automated. A continuous integration system can build software, run tests, and push built artifacts to a registry; cluster managers like Kubernetes or Docker Swarm are good at keeping some number of copies of some version of a container running somewhere and managing the version upgrade process for you. (Kubernetes Deployments in particular will start a copy of the new image before starting to shut down old ones; Kubernetes Services provide load balancers to make this work.)
None of this is at all specific to Node. As far as the deployment system is concerned there aren't any .js files anywhere, only Docker images. (You don't copy your source files around separately from the images, or bind-mount a source tree over the image contents, and you definitely don't try to live-patch a running container.) After your unfortunate revert in step 5, you can run exactly the failing configuration in a non-production environment to see what went wrong.
But yes, fundamentally, you need to delete the old container with the old image and start a new container with the new image.

Copy the new version to your container with docker cp, then restart it with docker restart <name>
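For example (a sketch only; the container name and paths are hypothetical, and note that changes made this way are lost as soon as the container is recreated, which is why the registry-based workflow above is generally preferred):

docker cp ./dist/. mynodeapp:/usr/src/app
docker restart mynodeapp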

Related

How to load and run offline docker image built using docker-compose build?

I'm new to docker and have been dabbling with it for the past few days. I've managed to successfully use docker-compose for a multi-container deployment involving an app server (flask + gunicorn) and web server (nginx).
Now, I'd like to recreate the deployment on an offline machine. After doing research, it seems that most people mention using docker save and docker load to transfer over the base images. However, I'm wondering whether it's possible to recreate the deployment from the image created by docker-compose build? The reason is that I would like to skip the entire process of wheeling my Python package dependencies for offline use, which I would have to do if I started from the base images.
I've tried saving that particular image (the output of docker-compose build) and loading it on the offline machine, then tried docker run and docker-compose up, but neither seems to work. I'd like to check with the community whether this method is even possible, and if so, what's the right way to go about it?
Thanks!
To solve my issue, I ended up making an image of each individual container after pip install, then using docker-compose.yml simply to spin them up. As David mentioned, it doesn't seem possible to spin up the containers from the single image output by docker-compose build.
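A sketch of that workflow, with hypothetical image and service names (the key point being that the offline docker-compose.yml references prebuilt images with image: rather than build:):

# On the online machine: build, then export the built images
docker-compose build
docker save -o app.tar myproject_app:latest
docker save -o web.tar myproject_nginx:latest

# On the offline machine: import the images
docker load -i app.tar
docker load -i web.tar

# docker-compose.yml on the offline machine references the loaded images
services:
  app:
    image: myproject_app:latest
  web:
    image: myproject_nginx:latest
    ports:
      - "80:80"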

docker-compose up on an existing docker-compose setup

I have a simple docker-compose.yml which builds 4 containers. The containers run on EC2.
The docker-compose setup changes about twice a day on the master branch, and for each change we need to deploy the new containers to production.
This is what I'm doing:
docker-compose down --rmi all
git pull origin master
docker-compose up --build -d
I'm removing the images to avoid conflicts, so that when I start the service I have fresh images.
This process takes around 1 minute.
What is the best practice for spinning up docker-compose? Any suggestions to improve this?
You can do the set of commands you show natively in Docker, without using git or another source-control tool as part of the deployment process.
Whenever you have a change to your source tree, build a new Docker image and push it to a Docker repository. This can be Docker Hub, or if you're on AWS already, Amazon ECR. Each build should have a unique image tag, such as a source control commit ID or a time stamp. You can set up a continuous-integration tool to do all of this for you automatically.
Once you have this, your docker-compose.yml file needs to be updated with the version number to deploy. If you only have a single image you're deploying, you can straightforwardly use Compose variable substitution to fill it in:
image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:${TAG:-latest}
If you have multiple images you can set multiple environment variables or produce an updated docker-compose.yml file with the values filled in, but you will need to know all of the image versions together at deployment time.
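For example, a sketch with two hypothetical services, each pinned by its own variable (the registry name is reused from above):

# docker-compose.yml (hypothetical service and image names)
services:
  app:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:${APP_TAG:-latest}
  web:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myweb:${WEB_TAG:-latest}

# Deploy both versions together:
# APP_TAG=20200317.0412 WEB_TAG=20200317.0412 docker-compose up -d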
Now when you go to deploy it you only need to run
TAG=20200317.0412 docker-compose up -d
to set the environment variable and trigger Compose. Compose will see that the image you're trying to run for that container is different from what's already running, pull the updated image, and replace the container for you. You don't need to manually remove the old containers or stop the entire stack.
If git is part of your workflow now, it's probably because you're mounting application code into your container. You will also need to delete any volumes: that overwrite the content in the image. Also make sure you make this change in your CI system (so you're testing the actual image you're deploying to production) and in development (similarly).
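Concretely, the kind of entry to delete is a bind mount of your source tree over the image contents; the paths here are hypothetical:

services:
  app:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:${TAG:-latest}
    # Remove a bind mount like this in production, so the code baked into the image is what runs:
    # volumes:
    #   - ./src:/app/src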
This particular task becomes slightly easier with a cluster-management system like Kubernetes (or Amazon EKS), though it brings many other complexities elsewhere. In Kubernetes you need to send an updated Deployment spec to the Kubernetes API server, but you can do this without direct ssh access to the target system and only needing to know the specific version of the one image you're updating, and with multiple replicas you can get a zero-downtime upgrade. Both using a Docker repository and using a unique image tag per build are basically required in this setup: images are the only way code gets into the cluster, and changing the image tag string is what triggers code to be redeployed.

How can I save changes made inside containers?

If I have an ubuntu container, ssh into it, and create a file, then after the container is destroyed or rebooted the new file is gone, because Kubernetes loads the ubuntu image, which does not contain my changes.
My question is: what should I do to save my changes?
I know it can be done, because some cloud providers do that.
For example:
ssh ubuntu@POD_IP
mkdir new_file
ls
new_file
reboot
after reboot I have
ssh ubuntu@POD_IP
ls
ls shows nothing
But I want it to save my current state.
And I want to do it automatically.
If I use docker commit I can't manage my images, because it creates hundreds of them: I would have to make a new image for every change.
If I want to use storage I would have to mount /, but Kubernetes does not allow me to mount / and gives me this error:
Error: Error response from daemon: invalid volume specification: '/var/lib/kubelet/pods/26c39eeb-85d7-11e9-933c-7c8bca006fec/volumes/kubernetes.io~rbd/pvc-d66d9039-853d-11e9-8aa3-7c8bca006fec:/': invalid mount config for type "bind": invalid specification: destination can't be '/'
You can try to use docker commit but you will need to ensure that your Kubernetes cluster is picking up the latest image that you committed -
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
This is going to create a new image out of your container which you can feed to Kubernetes.
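For example, with hypothetical container and image names:

docker commit my-running-container registry.example.com/ubuntu-custom:snapshot-1
docker push registry.example.com/ubuntu-custom:snapshot-1

You would then point your pod or deployment spec at that new tag.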
Ref - https://docs.docker.com/engine/reference/commandline/commit/
Update 1 -
In case you want to do it automatically, you might need to store the changed state or the files at a centralized file system like NFS etc & then mount it to all running containers whenever required with the relevant permissions.
K8s ref - https://kubernetes.io/docs/concepts/storage/persistent-volumes/
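As a rough sketch of that volume-based approach (hypothetical names and sizes; sharing data across pods needs a ReadWriteMany-capable storage class, such as one backed by NFS):

# pvc.yaml (hypothetical)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi

# In the pod spec, mount the claim at a data directory (not at /):
#   volumeMounts:
#     - name: shared-data
#       mountPath: /data
#   volumes:
#     - name: shared-data
#       persistentVolumeClaim:
#         claimName: shared-data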
Docker and Kubernetes don't work this way. Never run docker commit. Usually you have very little need for an ssh daemon in a container/pod and you need to do special work to make both the sshd and the main process both run (and extra work to make the sshd actually be secure); your containers will be simpler and safer if you just remove these.
The usual process involves a technique known as immutable infrastructure. You never change code in an existing container; instead, you change a recipe to build a container, and tell the cluster manager that you want an update, and it will tear down and rebuild everything from scratch. To make changes in an application running in a Kubernetes pod, you typically:
1. Make and test your code change, locally, with no Docker or Kubernetes involved at all.
2. docker build a new image incorporating your code change. It should have a unique tag, often a date stamp or a source control commit ID.
3. (optional but recommended) docker run that image locally and run integration tests.
4. docker push the image to a registry.
5. Change the image tag in your Kubernetes deployment spec and kubectl apply (or helm upgrade) it. (A sketch of this step follows below.)
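A minimal sketch of that last step, with a hypothetical deployment name, container name, and image:

# deployment.yaml (fragment): bump the image tag to redeploy
#   containers:
#     - name: myapp
#       image: registry.example.com/myapp:20240101

kubectl apply -f deployment.yaml

# or imperatively, without editing the file:
kubectl set image deployment/myapp myapp=registry.example.com/myapp:20240101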
Often you'll have an automated continuous integration system do steps 2-4, and a continuous deployment system do the last step; you just need to commit and push your tested change.
Note that when you docker run the image locally in step 3, you are running the exact same image your production Kubernetes system will run. Resist the temptation to mount your local source tree into it and try to do development there! If a test fails at this point, reduce it to the simplest failing case, write a unit test for it, and fix it in your local tree. Rebuilding an image shouldn't be especially expensive.
Your question hints at the unmodified ubuntu image. Beyond some very early "hello world" type experimentation, there's pretty much no reason to use this anywhere other than the FROM line of a Dockerfile. If you haven't yet, you should work through the official Docker tutorial on building and running custom images, which will be applicable to any clustering system. (Skip all of the later tutorials that cover Docker Swarm, if you've already settled on Kubernetes as an orchestrator.)
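For instance, a minimal custom image built on top of ubuntu could look like this (a sketch only; the package, paths, and entry file are hypothetical):

# Dockerfile
FROM ubuntu:22.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends nodejs \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY . .
CMD ["node", "index.js"]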

Handling software updates in Docker images

Let's say I create a docker image called foo that contains the apt package foo. foo is a long running service inside the image, so the image isn't restarted very often. What's the best way to go about updating the package inside the container?
I could tag my images with the version of foo that they're running and install a specific version of the package inside the container (i.e. apt-get install foo=0.1.0 and tag my container foo:0.1.0) but this means keeping track of the version of the package and creating a new image/tag every time the package updates. I would be perfectly happy with this if there was some way to automate it but I haven't seen anything like this yet.
The alternative is to install (and update) the package on container startup; however, that means a varying delay on container startup depending on whether it's a new container from the image or we're starting up an existing container. I'm currently using this method, but the delay can be rather annoying for bigger packages.
What's the (objectively) best way to go about handling this? Having to wait for a container to start up and update itself is not really ideal.
If you need to update something in your container, you need to build a new container. Think of the container as a statically compiled binary, just like you would with C or Java. Everything inside your container is a dependency. If you have to update a dependency, you recompile and release a new version.
If you tamper with the contents of the container at startup time you lose all the benefits of Docker: That you have a traceable build process and each container is verifiably bit-for-bit identical everywhere and every time you copy it.
Now let's address why you need to update foo. The only reason you should have to update a dependency outside of the normal application delivery cycle is to patch a security vulnerability. If you have a CVE notice that ubuntu just released a security patch then, yep, you have to rebuild every container based on ubuntu.
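A hedged sketch of that rebuild, assuming a hypothetical Dockerfile that pins the package version through a build argument:

# Dockerfile (hypothetical)
FROM ubuntu:22.04
ARG FOO_VERSION=0.1.0
RUN apt-get update \
 && apt-get install -y foo=${FOO_VERSION} \
 && rm -rf /var/lib/apt/lists/*

# Rebuild with the patched version, pulling a fresh base image and skipping the cache:
docker build --pull --no-cache --build-arg FOO_VERSION=0.1.1 -t foo:0.1.1 .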
There are several services that scan and tell you when your containers are vulnerable to published CVEs. For example, Quay.io and Docker Hub scan containers in your registry. You can also do this yourself using Clair, which Quay uses under the hood.
For any other type of update, just don't do it. Docker is a 100% fossilization strategy for your application and the OS it runs on.
Because of this your Docker container will work even if you copy it to 1000 hosts with slightly different library versions installed, or run it alongside other containers with different library versions installed. Your container will continue to work 2 years from now, even if the dependencies can no longer be downloaded from the internet.
If for some reason you can't rebuild the container from scratch (e.g. it's already 2 years old and all the dependencies went missing) then yes, you can download the container, run it interactively, and update dependencies. Do this in a shell and then publish a new version of your container back into your registry and redeploy. Don't do this at startup time.

Docker updating image along when dockerfile changes

I'm playing with docker by creating a Dockerfile with some nodejs instructions. Right now, every time I make changes to the Dockerfile I recreate the image by running sudo docker build -t nodejstest . in my project folder. However, this creates a new image each time and eats up my SSD pretty quickly.
Is there a way I can update an existing image when I change the Dockerfile, or am I forced to create a new one each time I make changes to the file?
Sorry if it's a dumb question
Docker build supports caching as long as there is no ADD instruction. If you are actively developing and changing files, only what comes after the ADD will be rebuilt.
Since 0.6.2 (scheduled today), you can do docker build --rm . and it will remove the temporary containers. It will keep the images though.
In order to remove the orphan images, you can list them with docker images and perform a docker rmi <id> on each of them. Newer versions of Docker also provide docker image prune, which removes all untagged (dangling) images from orphans and previous builds in one step.
According to this best practices guide, if you keep the first lines of your Dockerfile the same, Docker will also cache those layers and reuse them for future builds.
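For a Node.js project, the usual way to exploit that cache is to copy the dependency manifests and install them before copying the rest of the source, so the install layer is reused when only application code changes (a sketch; the base image and entry file are hypothetical):

# Dockerfile (sketch)
FROM node:18
WORKDIR /usr/src/app
# Copy only the dependency manifests first so this layer stays cached
COPY package*.json ./
RUN npm ci
# Copy the rest of the source; only the layers from here down rebuild on code changes
COPY . .
CMD ["node", "index.js"]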
During development, it makes less sense to re-build a whole container for every commit. Later, you can automate building a Docker container with your latest code as part of your QA/deployment process.
Basically, you can choose to make a minimal container that pulls in code (using git when starting the container, or using -v /home/myuser/mynode:/home/myuser/mynode with ENTRYPOINT to run node).
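For example, a development-only sketch of the bind-mount approach, reusing the path above (the image, port, and entry file are hypothetical; don't carry this into production, for the reasons discussed in the earlier answers):

docker run --rm -it \
  -v /home/myuser/mynode:/home/myuser/mynode \
  -w /home/myuser/mynode \
  -p 3000:3000 \
  node:18 node index.js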
See my answer to this question:
Docker rails app and git
