Due to circumstances beyond my control, I need to update a JavaScript file in a running container. The container runs a .NET Core site. I have successfully changed the script in the wwwroot folder, but the change is not visible to clients. I did docker restart cont_id and kill -HUP 1 inside the container, but it didn't help. Can I somehow update the script without stopping the container? Here is the Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2.4
COPY . /app
WORKDIR /app
ENV ASPNETCORE_URLS http://+:80
EXPOSE 80
CMD [ "dotnet", "ххх.WebUi.dll" ]
You need to docker build a new image, docker stop && docker rm the existing container, and docker run a new container on the new image. You can rebuild the image while the old container is still running, but there's no way to cause the existing container to somehow "switch" to a different image.
There are a number of options that can only be set when you initially create a container; not just the image to use (and the versions of language runtimes and support libraries embedded in that image) but also port mappings, environment variable settings, volume mounts, and others. Deleting and recreating a container in this way is extremely routine. Tools like Docker Compose let you write the container's settings into a file, so it's straightforward to recreate a container when needed.
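A minimal sketch of that workflow, assuming you tag the rebuilt image myapp and the container only needs the port mapping from the Dockerfile (adjust names and options to match your actual setup):
docker build -t myapp:latest .                       # rebuild the image with the updated script
docker stop cont_id && docker rm cont_id             # remove the old container
docker run -d -p 80:80 --name myapp myapp:latest     # start a new container from the new image
After this, clients receive the new script because the new container runs the rebuilt image.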
Related
I'm running containerized build agents using the linux-sudo image tag and, using a Dockerfile (here), I have successfully customized it to suit our needs. This is running very well and successfully running the builds I need it to.
I am running it in a Swarm cluster with a docker-compose file (here), and I have recently enabled the DOCKER_IN_DOCKER variable. I can successfully run docker run hello-world in this container and I'm happy with the results. However I am having an issue running a small utility container inside the agent with a bind mount volume.
I want to use this Dockerfile inside the build agent to run npm CLI commands against the files in a mounted directory. I'm using the following command to run the container with a custom command and a volume as a bind mount.
docker run -it -v $(pwd):/app {IMAGE_TAG} install
So, in theory, this runs npm install against the local directory that is mounted into the container (npm is the ENTRYPOINT command, so I can just pass install to the container for simplicity). I can run this in other environments (Ubuntu and WSL) and it works very well. However, when I run it in the linux-sudo build agent image, it doesn't seem to mount the directory properly. If I inspect the directory in the running utility container (the npm one), the /app folder is empty. Shouldn't I expect to see the contents of the bind mount here, as I do in other environments?
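The utility Dockerfile itself is only linked above, but a hypothetical version of such an npm wrapper image might look like this (the node:lts base and the /app working directory are assumptions for illustration):
FROM node:lts
WORKDIR /app            # commands operate on whatever is mounted here
ENTRYPOINT ["npm"]      # so "docker run ... {IMAGE_TAG} install" runs "npm install" in /app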
I have inspected the container, and it confirms there is a volume created of type bind and I can also see it when I list out the docker volumes.
Is there something fundamental I am doing wrong here? I would really appreciate some help.
I am trying to setup a windows nanoserver container as a sidecar container holding the certs that I use for SSL. Because the SSL cert that I need changes in each environment, I need to be able to change the sidecar container (i.e. dev-cert container, prod-cert container, etc) at startup time. I have worked out the configuration problems, but am having trouble using the same pattern that I use for Linux containers.
On linux containers, I simply copy my files into a container and use the VOLUMES step to export my volume. Then, on my main application container, I can use volumes_from to import the volume from the sidecar.
I have tried to follow that same pattern with nanoserver and cannot get it working. Here is my Dockerfile:
# Building stage
FROM microsoft/nanoserver
RUN mkdir c:\\certs
COPY . .
VOLUME c:/certs
The container builds just fine, but I get an error when I try to run it. The Dockerfile documentation says the following:
Volumes on Windows-based containers: When using Windows-based
containers, the destination of a volume inside the container must be
one of:
a non-existing or empty directory
a drive other than C:
so I thought, easy, I will just switch to the D drive (because I don't want to export an empty directory like #1 requires). I made the following changes:
# Building stage
FROM microsoft/windowsservercore as build
VOLUME ["d:"]
WORKDIR c:/certs
COPY . .
RUN copy c:\certs d:
and this container actually started properly. However, I missed the part of the docs where it says:
Changing the volume from within the Dockerfile: If any build steps
change the data within the volume after it has been declared, those
changes will be discarded.
so, when I checked, I didn't have any files in the d:\certs directory.
So how can you mount a drive for external use in a Windows container if, per #1, the directory must be empty to make a VOLUME on the C drive, and you must use VOLUME to create a D drive, which is pointless because anything put in there will not be in the final container?
Unfortunately you cannot use Windows container volumes in this way. This limitation is also the reason why using database containers (like microsoft/mssql-server-windows-developer) is a real pain: you cannot create a volume on a non-empty database folder, and as a result you cannot restore databases after a container is re-created.
As for your use case, I would suggest utilizing a reverse proxy (Nginx, for example).
You create another container with an Nginx server and the certificates inside. It handles all incoming HTTPS requests, terminates SSL/TLS, and passes requests to the inner application container over plain HTTP.
With such a deployment you don't have to copy and install HTTPS certificates into every application container. There is only one place where you store certificates, and you can change dev/test/etc. certificates just by using different Nginx image versions (or by binding the certificate folder as a volume).
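A rough docker-compose sketch of that layout (the image tags, certificate path, and service names below are illustrative assumptions, and the proxy shown here is a Linux-based nginx image):
services:
  proxy:
    image: nginx:alpine                        # terminates HTTPS
    ports:
      - "443:443"
    volumes:
      - ./certs:/etc/nginx/certs:ro            # swap this folder per environment (dev/test/prod)
      - ./nginx.conf:/etc/nginx/nginx.conf:ro  # server block proxies to http://webapp:80
  webapp:
    image: my-app-image                        # speaks plain HTTP inside the compose network
    expose:
      - "80"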
UPDATE:
Also, if you still want to use a sidecar container, you can try one small hack. Keep the
COPY . .
of the certificates at build time, but point it at a regular (non-volume) directory, and move the copy into the exported volume from build time to runtime (after the container starts).
Something like this:
FROM microsoft/nanoserver
RUN mkdir c:\\certs_in
RUN mkdir c:\\certs_out
COPY . c:/certs_in
VOLUME c:/certs_out
CMD copy C:\certs_in\*.* C:\certs_out
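When a container starts from that image, the CMD populates the c:\certs_out volume, and the application container can then attach it. A rough usage sketch (the image and container names here are made up for illustration):
docker build -t certs-sidecar .
docker run -d --name certs certs-sidecar           # the startup copy fills the exported volume
docker run -d --volumes-from certs my-app-image    # the app container now sees c:\certs_out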
I have a docker-compose dev stack. When I run docker-compose up --build, the container will be built and it will execute
Dockerfile:
RUN composer install --quiet
That command writes a bunch of files into the ./vendor/ directory, which is then only available inside the container, as expected. The vendor/ directory that also exists on the host is not touched and is therefore out of date.
Since I use that container for development and want my changes to be available, I mount the current directory inside the container as a volume:
docker-compose.yml:
my-app:
  volumes:
    - ./:/var/www/myapp/
This loads an outdated vendor directory into my container, forcing me to rerun composer install either on the host or inside the container in order to have the up-to-date version.
I wonder how I could manage my docker-compose stack differently, so that the changes during the docker build on the current folder are also persisted on the host directory and I don't have to run the command twice.
I do want to keep the vendor folder mounted, as some vendors are my own and I like being able to modify them in my current project. So only mounting the folders I need to run my application would not be the best solution.
I am looking for a way to tell docker-compose: Write all the stuff inside the container back to the host before adding the volume.
You can run a short side container after docker-compose build:
docker run --rm -v $(pwd)/vendor:/target my-app cp -a vendor/. /target/.
The cp could also be something more efficient like an rsync. Then, after that container exits, you do your docker-compose up, which mounts ./vendor (as part of the project directory) from the host.
Write all the stuff inside the container back to the host before adding the volume.
There isn't any way to do this directly, but there are a few options to do it as a second command.
as already suggested you can run a container and copy or rsync the files
use docker cp to copy the files out of a container, without using a volume (see the sketch after this list)
use a tool like dobi (disclaimer: dobi is my own project) to automate these tasks. You can use one image to update vendor, and another image to run the application. That way updates are done on the host, but can be built into the final image. dobi takes care of skipping unnecessary operations when the artifact is still fresh (based on modified time of files or resources), so you never run unnecessary operations.
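To illustrate the docker cp option, here is a minimal sketch; it assumes the image can be referenced as my-app and that the app lives at /var/www/myapp as in the compose file above:
docker build -t my-app .                            # or docker-compose build, then use the image it produces
id=$(docker create my-app)                          # create (but don't start) a container from the image
docker cp "$id":/var/www/myapp/vendor/. ./vendor/   # copy the baked-in vendor/ back to the host
docker rm "$id"                                     # remove the temporary container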
I'm running a docker container with a volume /var/my_folder. The data there is persistent: when I stop the container, it is still there.
But I also want to have the data available on my host, because I want to work on the code with an IDE, which is not installed in my container.
So how can I have a folder /var/my_folder on my host machine which is also available in my container?
I'm working on Linux Mint.
I appreciate your help.
Thanks. :)
Link : Manage data in containers
The basic run command you want is ...
docker run -dt --name containerName -v /path/on/host:/path/in/container your_image
The problem is that mounting the volume will, for your purposes, overwrite the volume in the container.
The best way to overcome this is to create the files you want to share (inside the container) AFTER mounting.
The ENTRYPOINT command is executed on docker run. Therefore, if your files are generated as part of your entrypoint script and not as part of your build, they will be available from the host machine once mounted.
The solution, therefore, is to run the commands that create the files in the ENTRYPOINT script.
Failing this, during the build copy the files to another directory and then copy them back (e.g. with cp) in your ENTRYPOINT script.
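A minimal sketch of that pattern, assuming the files are staged under /opt/seed at build time and /var/my_folder is the path being bind-mounted (all names here are illustrative):
Dockerfile:
FROM alpine
COPY my_folder/ /opt/seed/          # stage the files outside the mount point at build time
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["sh"]
entrypoint.sh:
#!/bin/sh
# Copy the staged files into the (possibly host-mounted) folder, then run the main command.
mkdir -p /var/my_folder
cp -r /opt/seed/. /var/my_folder/
exec "$@"
Run it with docker run -it -v /var/my_folder:/var/my_folder your_image and the seeded files appear on the host as well.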
I am learning Docker and I have doubts about when and where to use ADD and VOLUME. Here is what I think both of these do:
ADD
Copy files to the image at build time. The image has all the files so you can deploy very easily. On the other hand, needing to build every time doesn't look like a good idea in development because building requires the developer to run a command to rebuild the container; additionally, building the container can be time-consuming.
VOLUME
I understand that using docker run -v you can mount a host folder inside your container, this way you can easily modify files and watch the app in your container react to the changes. It looks great in development, but I am not sure how to deploy my files this way.
ADD
The fundamental difference between these two is that ADD makes whatever you're adding, be it a folder or just a file, actually part of your image. Anyone who uses the image you've built afterwards will have access to whatever you ADD. This is true even if you later remove it, because Docker works in layers and the ADD layer will still exist as part of the image. To be clear, you only ADD something at build time and cannot ever ADD at run-time.
A few examples of cases where you'd want to use ADD:
You have some requirements in a requirements.txt file that you want to reference and install in your Dockerfile. You can then do: ADD ./requirements.txt /requirements.txt followed by RUN pip install -r /requirements.txt
You want to use your app code as context in your Dockerfile, for example, if you want to set your app directory as the working dir in your image and to have the default command in a container run from your image actually run your app, you can do:
ADD ./ /usr/local/git/my_app
WORKDIR /usr/local/git/my_app
CMD python ./main.py
VOLUME
Volume, on the other hand, just gives a container run from your image access to some path on whatever local machine the container is being run on. You cannot use files from your VOLUME directory in your Dockerfile: anything in your volume directory will not be accessible at build time, but it will be accessible at run time.
A few examples of cases where you'd want to use VOLUME:
The app being run in your container makes logs in /var/log/my_app. You want those logs to be accessible on the host machine and not to be deleted when the container is removed. You can do this by creating a mount point at /var/log/my_app by adding VOLUME /var/log/my_app to your Dockerfile and then running your container with docker run -v /host/log/dir/my_app:/var/log/my_app some_repo/some_image:some_tag
You have some local settings files you want the app in the container to have access to. Perhaps those settings files are different on your local machine vs dev vs production. Especially so if those settings files are secret, in which case you definitely do not want them in your image. A good strategy in that case is to add VOLUME /etc/settings/my_app_settings to your Dockerfile, run your container with docker run -v /host/settings/dir:/etc/settings/my_app_settings some_repo/some_image:some_tag, and make sure /host/settings/dir exists in all environments where you expect your app to run.
The VOLUME instruction creates a data volume in your Docker container at runtime. The directory provided as an argument to VOLUME is a directory that bypasses the Union File System, and is primarily used for persistent and shared data.
If you run docker inspect <your-container>, you will see under the Mounts section there is a Source which represents the directory location on the host, and a Destination which represents the mounted directory location in the container. For example,
"Mounts": [
{
"Name": "fac362...80535",
"Source": "/var/lib/docker/volumes/fac362...80535/_data",
"Destination": "/webapp",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
Here are 3 use cases for docker run -v:
docker run -v /data: This is analogous to specifying the VOLUME instruction in your Dockerfile.
docker run -v $host_path:$container_path: This allows you to mount $host_path from your host to $container_path in your container during runtime. In development, this is useful for sharing source code on your host with the container. In production, this can be used to mount things like the host's DNS information (found in /etc/resolv.conf) or secrets into the container. Conversely, you can also use this technique to write the container's logs into specific folders on the host. Both $host_path and $container_path must be absolute paths.
docker run -v my_volume:$container_path: This creates a data volume in your container at $container_path and names it my_volume. It is essentially the same as creating and naming a volume using docker volume create my_volume. Naming a volume like this is useful for a container data volume and a shared-storage volume using a multi-host storage driver like Flocker.
Notice that the approach of mounting a host folder as a data volume is not available from a Dockerfile. To quote the Docker documentation,
Note: This is not available from a Dockerfile due to the portability and sharing purpose of it. As the host directory is, by its nature, host-dependent, a host directory specified in a Dockerfile probably wouldn't work on all hosts.
Now if you want to copy your files to containers in non-development environments, you can use the ADD or COPY instructions in your Dockerfile. These are what I usually use for non-development deployment.
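As a rough illustration of that split (the paths, image name, and tag are placeholders borrowed from the examples above): bake the code into the image with COPY for deployment, and shadow it with a bind mount only while developing:
Dockerfile:
FROM python:3
WORKDIR /usr/local/git/my_app
COPY . .                          # code is part of the image for deployment
CMD ["python", "./main.py"]
# development only: override the baked-in code with the working copy on the host
docker run -v "$(pwd)":/usr/local/git/my_app some_repo/some_image:some_tag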