In my Bluemix project, the container has been deleted after migration.
Luckily, the image is still there, so I made a new container from that image, connected my site to that container's IP address, set up a new DB, etc.
However, although my Bluemix site is up and running, it does not show the correct page. It only shows the initial WordPress site (I'm using WordPress on my site).
Is there something that I should be aware of in this situation?
Thank you
When you run a new container, an image is used as the starting point. If that image has already been pulled to your host it is used directly; otherwise it is pulled from Docker Hub automatically.
Then, if you start doing work in your container (downloading libraries, adding your application's source code, etc.), this affects only the container and not the image. If you delete the container, that state is lost.
You can save the state of your container as an image that you can later re-use with docker commit. You can also push this image to Docker Hub with docker push and docker pull it on other hosts.
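For example, a minimal sketch (the container and image names here are just placeholders):

docker commit my-container myuser/my-image:v2    # save the container's current state as a new image
docker push myuser/my-image:v2                   # publish it to Docker Hub
docker pull myuser/my-image:v2                   # on another host, pull it
docker run -d myuser/my-image:v2                 # and start a container from it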
Containers are by design ephemeral: they're meant to be created and destroyed. If you need a long-running service with persistent data, make sure you use a volume. The volume keeps your data across container restarts and replacements. Here's a blog post on running WordPress with a volume in Bluemix that might help - http://blog.ibmjstart.net/2016/01/26/wordpress-on-bluemix-containers-update/
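With plain Docker, the basic pattern looks roughly like this (the volume name is arbitrary; the official wordpress image keeps its files under /var/www/html):

docker volume create wp-data
docker run -d -p 80:80 -v wp-data:/var/www/html wordpress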
I am creating a core library for Blazor Server apps that creates a core DB automatically at runtime.
Until now, I have created the database in Environment.SpecialFolder.LocalApplicationData, which I got working on multiple platforms (OSX, Ubuntu and Windows).
As I just discovered Docker's simplicity for deploying images, I am trying to make my library compatible with it.
So I face two issues:
Determine if the app is hosted on a Docker image or not
Persist data on a different volume that is NOT on the host if running Docker.
Of course, if in Docker, I shall not use Environment.SpecialFolder.LocalApplicationData as this is not a persistent location on the image itself. I can mount a volume when starting the image as described here.
So my natural idea is to assume users will mount a volume with a specific path when starting the image, say
docker volume create MyAppDB
and run it with
docker run -dp 3000:3000 -v MyAppDB:/app/Data/MyAppDB myBuildDockerImage
Issue 1. can then be verified by testing the existence of the folder /app/Data/MyAppDB, and once 1. is verified, 2. becomes trivial.
If the folder does not exist, I am for sure in a non-Docker environment... well, am I? What if users forgot to mount the volume? Or misspelled it? Or maybe the folder does not exist simply because I am running in a non-Docker environment...!
Is there a way to tweak my Docker image when building it to force the volumes to be mounted - i.e. created by ME and not the end user? That seems safest... Alternatively, if that is not possible, can I add some specific element to the Docker image so I can tell for sure whether I am running on the Docker image I built or not?
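For illustration, something like this is what I have in mind (the base image, paths, marker variable and entry point below are just placeholders):

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY . .
# declare the data directory as a volume, so Docker creates one even if the user forgets -v
VOLUME /app/Data/MyAppDB
# a marker the library could check to know it is running in this particular image
ENV RUNNING_IN_MY_DOCKER_IMAGE=true
EXPOSE 3000
ENTRYPOINT ["dotnet", "MyApp.dll"]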
Trying to make sure I understand the proper usage of Docker volumes. If I have a container running MongoDB that I plan to start and stop, do I need a volume configured when I "docker run" the first time? My understanding is that if I use docker run once, then docker stop/start, my data is saved inside the container. The volume is more useful if multiple containers want access to the data. Is that accurate or am I misunderstanding something?
Starting and stopping a container will not delete the container-specific data. However, you upgrade containers by replacing them with new containers. Any changes to the container-specific read/write layer will be lost when that happens, and the new container will go back to its initial state. If there are files inside your container that you want to preserve when the container is replaced, then you need to store those files in a volume, and then mount that same volume in the new container.
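For example, a minimal sketch with the official mongo image (the volume and container names are arbitrary; the image stores its data under /data/db):

docker volume create mongo-data
docker run -d --name mongo -v mongo-data:/data/db mongo

# later, replace the container with a newer image while keeping the data
docker rm -f mongo
docker run -d --name mongo -v mongo-data:/data/db mongo:latest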
I have 2 machines (separate hosts) running Docker and I am using the same image on both machines. How do I keep both images in sync? For example, suppose I make changes to the image on one of the hosts and want the changes to be reflected on the other host as well. I can commit the image and copy it over to the other host. Is there any other, more efficient way of doing this?
Some ways I can think of (a rough command sketch for the first three follows the list):
1. with a Docker registry
the workflow here is:
HOST A: docker commit, docker push
HOST B: docker pull
2. by saving the image to a .tar file
the workflow here is:
HOST A: docker save
HOST B: docker load
3. with a Dockerfile and by building the image again
the workflow here is:
provide a Dockerfile together with your code / files required
every time your code changes and you want to make a release, use docker build to create a new image.
on the hosts that should take the update, you will have to get the updated source code (for example with a version control system like Git), and then docker build the image
4. CI/CD pipeline
you can see a video here: docker.com/use-cases/cicd
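A rough command sketch of options 1-3 (the image name, tag and registry are placeholders):

# 1. registry
docker commit my-container myuser/myimage:v2    # on HOST A
docker push myuser/myimage:v2                   # on HOST A
docker pull myuser/myimage:v2                   # on HOST B

# 2. .tar file
docker save -o myimage.tar myuser/myimage:v2    # on HOST A, then copy the file over (e.g. with scp)
docker load -i myimage.tar                      # on HOST B

# 3. rebuild from a Dockerfile
git pull                                        # on HOST B, get the updated source
docker build -t myuser/myimage:v2 .             # on HOST B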
Keep in mind that containers are considered to be ephemeral. This means that updating the image on another host will then require:
to stop and remove any old container (running with the outdated image)
to run a new one (with the updated image)
I quote from: Best practices for writing Dockerfiles
General guidelines and recommendations
Containers should be ephemeral
The container produced by the image your Dockerfile defines should be as ephemeral as possible. By “ephemeral,” we mean that it can be stopped and destroyed and a new one built and put in place with an absolute minimum of set-up and configuration.
You can perform a docker push to upload your image to a Docker registry and a docker pull to get the latest image on another host.
For more information, please look at this.
I'm confused about the common consensus that one shouldn't use data containers. I have a specific use case that I want to accomplish.
I want to have a Docker nginx container and, behind it, some other container with my application. To run the newest version of my app I want to download a ready container from my private Docker registry. The application is, for now, purely static HTML and JavaScript.
So my plan is to create a Docker image which will hold the files and will specify a named volume at some /webapp folder. The nginx container will serve this volume. I do not see any other way to move a bunch of files to a remote system the "Docker containerized" way. Am I not actually creating the cursed data container?
Anyway, what happens when the app containers are exchanged? When I stop the app container the volume remains accessible, as it is placed on the host. When I pull and start a new version of the app container, the volume will be created again and prefilled with the image files stored at the same location, replacing the content on the host, so the nginx container will from then on serve the new version of the application. Right? What happens when I reference a volume that does not exist yet from the nginx container?
It seems that named volumes are not automatically filled with the content of the image. Also, I'm not sure how to create a named volume in a Dockerfile, as this syntax taken from here doesn't work:
FROM training/webapp
VOLUME webapp:/webapp
I think you might want what I have described here: https://stackoverflow.com/a/41576040/3625317
The problem with volumes is that when a container is recreated (not docker-compose down, but rather docker-compose pull + up), the new container will not have your "new code stored in the volume" but rather, due to the recycled volume, still the old anonymous volume. The point is, you will need an anonymous volume for the code anyway, since you want it redeployable, and not a named volume, since you want the code to be exchangeable.
On re-create the anonymous volume is not removed. That said, let's say you have image:v1 right now, you pull image:v2 and then do a docker-compose up. It will recreate your container based on image:v2 - when this finishes you will have a new container, but the code is still from the old container, which was based on image:v1, since the anonymous volume has not been replaced, only re-assigned. docker-compose down && docker-compose up will resolve that for you - but you have to keep this in mind when dealing with your idea (down removes the anonymous volumes).
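So the redeploy sequence that actually picks up the code baked into the new image looks roughly like this (a sketch, assuming an existing compose project):

docker-compose pull     # fetch image:v2
docker-compose down     # remove the old containers, so the old anonymous volume is not re-attached
docker-compose up -d    # new containers start with fresh anonymous volumes based on image:v2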
In general, there are pros and cons; see my other post.
Data containers in general have a different meaning and have been replaced by so-called named volumes. Data containers were used to establish a volume mount which is "named" and not based on an anonymous volume.
In the past, you had to create a container with a volume and later use a container-name-based mount of this volume (the container would be the static / named part). Today, you just create a named volume and mount it by that volume name; there is no need for a container-name-based volume mount backed by a busybox container that is killed right after start.
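Roughly, the two approaches side by side (image, names and paths are just placeholders):

# old style: a "data container" whose volume other containers mount by container name
docker create -v /webapp --name webapp-data busybox
docker run -d --volumes-from webapp-data nginx

# today: a named volume, mounted by its own name
docker volume create webapp
docker run -d -v webapp:/webapp nginx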
I'm still a newbie and trying to learn the Docker concepts. I want to read a JSON file present in one Ubuntu container from another Ubuntu container. How do I do this in Docker? Note that I have to send the JSON from the first container over HTTP. Any idea on how to implement this? Any explanation or sample code would be really great.
If your first docker container declares a VOLUME, the other can be run with --volumes-from=<first_container>.
That would mount the declared path of the first container into the second one, effectively sharing a file or folder from the first container in the second.
Note that a container which is just created (not docker run, but docker create) is effectively a data volume container, there only to be mounted (--volumes-from) by other containers.
With HTTP, that means the second container must know about the first (and its EXPOSEd ports).
You will run the second container with --link=firstContainer:alias: that will allow you to contact alias:port, which is actually the URL + port of the first container.
See "Communication across links"