Docker image mounting the existing data volume for nexus - docker

I have added new repositories on nexus 3.17. There is no data, just repos.
Created .bak files
Created a docker image for nexus (call it newimage).
I have data from the current version (3.3.1) on a volume and I want that data to show up in the new nexus. When I run the docker run command below and go to nexus, the new repositories do not show up, but the data for the old repos is there.
docker run -d -p 8081:8081 --name nexus -v <local-docker-volume>:/nexus-data newimage
The Dockerfile I use to create the image:
FROM sonatype/nexus3:3.17.0
COPY path-to-bak-files/*.bak /nexus-data/restore-from-backup/
Any idea what I am doing wrong?
p.s: let me know if I am not clear.

As per your Dockerfile, you are copying the contents to /nexus-data/restore-from-backup/, but while running the container you are mounting the existing volume on /nexus-data, which ends up masking the /nexus-data directory on the filesystem inside the container (where you added the data at image creation time).
The important thing to note about a mount operation is that if you mount another disk/share (in this case a volume) over an existing directory on your filesystem (FS), you can no longer access that directory from your FS. So when you built the Docker image you added some files to /nexus-data/restore-from-backup/, but when you mounted the volume on /nexus-data you mounted it over that directory, so you can't see those files anymore.
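You can see the masking directly (a sketch; ls is just used to inspect the paths, and it assumes the base image lets you override the start command):
docker run --rm newimage ls /nexus-data/restore-from-backup
docker run --rm -v <local-docker-volume>:/nexus-data newimage ls /nexus-data/restore-from-backup
The first command lists the .bak files you baked into the image; in the second, the mount hides them, so the directory is missing or empty.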
To address the issue, you can do the following:
add the data to a different location in the Dockerfile, say location1
create the container with the volume mount as you are currently doing
use ENTRYPOINT or CMD to copy the files from location1 to /nexus-data/restore-from-backup/ (see the sketch below)
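A minimal sketch of that approach (the /opt/bak-staging location, the restore-baks.sh helper name, and the Nexus start command are illustrative assumptions; check them against your base image):
FROM sonatype/nexus3:3.17.0
# Stage the .bak files OUTSIDE /nexus-data so the volume mount cannot hide them
COPY path-to-bak-files/*.bak /opt/bak-staging/
COPY restore-baks.sh /opt/restore-baks.sh
ENTRYPOINT ["/bin/bash", "/opt/restore-baks.sh"]
restore-baks.sh:
#!/bin/bash
# Copy the staged backups into the mounted volume, then start Nexus as usual
mkdir -p /nexus-data/restore-from-backup
cp /opt/bak-staging/*.bak /nexus-data/restore-from-backup/
# NOTE: assumed start script path; verify it against the base image's CMD
exec /opt/sonatype/start-nexus-repository-manager.sh
You can then run the container with the same -v <local-docker-volume>:/nexus-data mount as before.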

Related

How to copy from volume mapped opt to image opt folder in docker?

Assuming I have a docker image
FROM openjdk:8-jdk-slim
WORKDIR /opt
COPY localfile ../imagefile
I can create my docker image with docker build -t my-image . and have my local localfile in the image as ../imagefile.
I can also do this interactively by
Run docker run -it --name my-container --volume=$(pwd):/opt --workdir=/opt openjdk:8-jdk-slim
Then cp localfile ../imagefile
Then exit
Then create the image by running docker commit my-container my-image
Both produce the equivalent my-image.
However, if I change my Dockerfile to below
FROM openjdk:8-jdk-slim
WORKDIR /opt
COPY localfile imagefile
I can build the image using docker build -t my-image .. However, I cannot cp localfile imagefile, because cp will only copy the file to the original host folder mapped to /opt and not the image's actual /opt folder.
Is there still a way to copy my file to the image's /opt folder (and not the local one), when my /opt is mapped to the local folder?
Or, to ask it another way, is there an equivalent of the COPY command I can use when I'm running the container interactively to create the image?
There are two important details around this question:
If you mount something into a Docker container (or for that matter on your Linux host), it hides whatever was in that directory before; you can't directly see or access it.
In Docker in particular, you can't change mounts without deleting and recreating the container, and thereby removing the container filesystem.
So on the one hand, you can't copy from a mounted volume to the container filesystem at the same location; on the other, even if you could, there's (almost) no way to see the contents of the container filesystem hidden underneath the mount.
(As you note, docker commit will create an image of the container filesystem, ignoring volumes, so that is one way to get at this otherwise hidden content. Using docker commit isn't really a best practice, though; building images via Dockerfiles as you've shown and using docker build is almost always preferred.)
In general I've found volume mounts useful for three things, where hiding the container content is acceptable or even expected. You can use mounts to inject config files into containers (where you want to overwrite the image's copy). You can use mounts to read log files back out (where you don't care what the image started with). If you have a stateful workload like a database, you can use a mount to hold the data that needs to be persisted (so it outlives the container filesystem).
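For example, a minimal sketch of the first two uses with nginx (the image and file names here are illustrative, not from the question):
docker run -d --name web -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro -v $(pwd)/logs:/var/log/nginx nginx
The bind-mounted nginx.conf hides (and so replaces) the config baked into the image, and the logs directory lets you read the container's log files from the host.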

Docker add files to VOLUME

I have a Dockerfile which copies some files into the container and after that creates a VOLUME.
...
ADD src/ /var/www/html/
VOLUME /var/www/html/files
...
In the src folder there is a files folder, and in this files folder are some files I need to have copied to the VOLUME the first time the container is started.
I thought that the first time the container is created it would use the content of the original dir specified in the volume, but this is not the case.
So how can I get the files into this folder?
Do I need to create an extra folder and copy it with a runscript (I hope not)?
Whatever you put in your Dockerfile is just evaluated at build time (and not when you are creating a new container).
If you want to make files from the host available in your container, use a data volume:
docker run -v /host_dir:/container_dir ...
In case you just want to copy files from the host to a container as a one-off operation you can use:
docker cp /host_dir mycontainer:/container_dir
The issue is with your ADD statement. Also you might not understand how volumes are accessed. Compare your efforts with the demo below:
# alpine, or your favorite tiny image
FROM alpine
ADD src/files /var/www/html/files
VOLUME /var/www/html/files
Build an image called 'dataimg':
docker build -t dataimg .
Use the dataimg image to create a data container named 'datacon':
docker run --name datacon dataimg /bin/cat
Mount the volume from datacon in your nginx container:
docker run --volumes-from datacon nginx ls -la /var/www/html/files
And you'll see the listing of /var/www/html/files reflects the contents of src/files
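As a side note (not from the original answer): if you instead mount a new, empty named volume at that path, Docker copies the image's content at the mount point into the volume on first use, so the files also show up:
docker run --rm -v htmlfiles:/var/www/html/files dataimg ls -la /var/www/html/files
This auto-copy behavior applies to named and anonymous volumes only, not to host bind mounts.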

Docker volume content does not persist

I am trying to capture the state of a docker container as an image, in a way that includes files I have added to a volume within the container. So, if I run the original container in this way:
$ docker run -ti -v /cookbook ubuntu:14.04 /bin/bash
root@b78f3599d936:/# cd cookbook
root@b78f3599d936:/cookbook# touch foo.txt
Now, if I either export or commit the container as a new docker image and then run a container from the new image, the file foo.txt is never included in the /cookbook directory.
My question is whether there is a way to create an image from a container in a way that allows the image to include file content within its volumes.
whether there is a way to create an image from a container in a way that allows the image to include file content within its volumes?
No, because a volume is designed to manage data inside and between your Docker containers; it's used to persist and share data. What's in an image is usually your program (artifacts, executables, libs, etc.) with its whole environment; baking changing data into an image does not make much sense.
And the docs on volumes tell us:
Changes to a data volume will not be included when you update an image.
Also in the docs for docker commit:
The commit operation will not include any data contained in volumes mounted inside the container.
Well, by putting the changes in a volume, you're excluding them from the actual container. The documentation for docker export includes this:
The docker export command does not export the contents of volumes associated with the container. If a volume is mounted on top of an existing directory in the container, docker export will export the contents of the underlying directory, not the contents of the volume.
Refer to Backup, restore, or migrate data volumes in the user guide for examples on exporting data in a volume.
This refers to the Backup, restore, or migrate data volumes guide mentioned above. Please follow the steps there to export the information stored in the volume.
You're probably looking for something like this:
docker run --rm --volumes-from <containerId> -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /cookbook
This would create a file backup.tar with the contents of the container's /cookbook directory and store it in the current directory of the host. You could then use this tar file to import it in another container.
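For the import/restore step, something along these lines should work (a sketch following the same backup/restore pattern; the container ID is a placeholder):
docker run --rm --volumes-from <otherContainerId> -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/backup.tar"
Since the backup was created from the absolute path /cookbook (tar strips the leading slash), extracting from / puts the files back into the other container's /cookbook volume.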
Essentially, there are three ways to do persistence in Docker:
You can keep files in a volume, which is a filesystem managed by Docker. This is what happens in your example: because the /cookbook directory is part of a volume, your file does not get committed/exported with the image. It does however get stored in the volume, so if you remount the same volume in a different container, you will find your file there. You can list your volumes using docker volume ls. As you can see, you should probably give your volumes names if you plan to reuse them. You can mount an existing volume, or create a new one if the name does not exist, with
docker run -v name:/directory ubuntu
You can keep files as part of the image. If you commit the container, all changes to its file hierarchy are stored in the new image except those made to mounted volumes. So if you just get rid of the -v flag, your file shows up in the commit (see the sketch after this list).
You can bind mount a directory from the host machine to the container, by using the -v /hostdir:/targetdir syntax. The container then simply has access to a directory of the host machine.
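To illustrate the second option with the original example (a sketch; the container and image names are placeholders):
docker run --name cookbook-src ubuntu:14.04 bash -c "mkdir -p /cookbook && touch /cookbook/foo.txt"
docker commit cookbook-src cookbook-image
docker run --rm cookbook-image ls /cookbook
Because /cookbook is not mounted as a volume here, foo.txt is part of the container filesystem and survives the commit.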
Docker commit allows you to create an image from a container and its data (mounted volumes will be ignored)

Converting a mounted volume to a Docker Image

Eg:
I have a running container, with a volume mounted on it.
I want to convert the whole container along with the volume contents to a docker image.
I tried using
docker commit container-name
docker push repo/imagename:tag
but it only pushed the container; no data from the volume was preserved.
Is there any way to convert data on mounted docker volume to a docker image?
Use the following steps:
Use docker cp to copy the contents of the mount point to the docker host.
Create a new container using the same image.
Use docker cp to copy the content into the new container at desired location.
Commit the new container with content in it and push it to your repository.
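A sketch of those steps (names, tags, and paths are placeholders, not from the question):
docker cp container-name:/path/to/mount-point ./volume-content
docker create --name new-container repo/imagename:tag
docker cp ./volume-content/. new-container:/path/to/mount-point
docker commit new-container repo/imagename:newtag
docker push repo/imagename:newtag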
Another way to do this is to create a Dockerfile, use the FROM directive pointing to the desired base image, and use the COPY directive to copy the content to the desired location (from the Docker host into the image) at docker build time.
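A sketch of that Dockerfile variant (again with placeholder names):
FROM repo/imagename:tag
# Copy the previously exported volume content from the Docker host into the image
COPY ./volume-content/ /path/to/mount-point/
followed by the usual docker build -t repo/imagename:newtag . and docker push.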
Neither export nor commit will preserve your volume data. You have two options, though.
First: convert it to an image using commit, or export it, then move the volume data manually.
Second: copy the content of your volume to another location in your container, then commit it; now all of the data is inside your image. Then, after transferring the image, move the volume data back to its original location. For example:
cp -r /my-volume-dir /my-backup-dir
Then, after you transfer the image:
mv /my-backup-dir/* /my-volume-dir/
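Putting the second option together (a sketch; container, volume, and image names are placeholders, and it assumes the image's default command keeps the container running):
docker exec mycontainer cp -r /my-volume-dir /my-backup-dir
docker commit mycontainer repo/imagename:tag
docker push repo/imagename:tag
Then, on the target host, start a container from the image (mounting a fresh volume at /my-volume-dir) and move the data back:
docker run -d --name newcontainer -v newvolume:/my-volume-dir repo/imagename:tag
docker exec newcontainer sh -c 'mv /my-backup-dir/* /my-volume-dir/'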

Can I populate the content of the Volume I created in Bluemix Containers?

I uploaded an Oracle11g DB image to my Bluemix Container registry.
I created a volume called oradbdata in IBM Containers using the CLI:
cf ic volume create oradbdata
Now I need to copy some content into this volume before running the container.
Is there any way to access this volume and populate its content?
Lionel
When you start the container you can associate the volume with the desired container path; for example: volume oradbdata -> /var/lib/oradata. When the container starts, /var/lib/oradata is mapped to your volume and, at that point, you can put data on it, either at container start-up or by accessing the container via ssh.
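The association is made when you run the container, something along these lines (a sketch; the image name is a placeholder and the flag syntax, which mirrors docker run, should be checked against the cf ic CLI):
cf ic run --name oradb --volume oradbdata:/var/lib/oradata <your-registry>/<namespace>/oracle11g-image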
I suggest adding your files into the container during the container build (e.g. into a /src directory). Then use a startup script for your app. In the script you would check if the mounted directory has the file(s) you need. If not then copy things over. Something like this:
#!/bin/bash
# Test if the volume is empty
if [ ! -f /mountpoint/testfile ]; then
  # Copy the contents from the container image into the volume
  cp -R /src/* /mountpoint
fi
# Now start the app here
/usr/bin/myapp
