How to specify volume for docker container in CircleCI configuration?

I have not managed to find out how to mount a volume of a docker image in config.yml for integration with CircleCI.
The official documentation lists variables for container usage, entry point, command, and so on, but nothing about volume mounting.
The scenario is: building my project requires two docker containers, the main container and another container running service foo. To use service foo, I need to expose some artifacts generated in earlier steps to the foo container and then run some further steps.
Does anyone have an idea whether I can do that?

As taken from CircleCI documentation:
Mounting Folders
It's not possible to mount a folder from your job space into a container in Remote Docker (and vice versa). But you can use the docker cp command to transfer files between these two environments. For example, say you want to start a container in Remote Docker and you want to use a config file from your source code for it:
- run: |
    # creating dummy container which will hold a volume with config
    docker create -v /cfg --name configs alpine:3.4 /bin/true
    # copying config file into this volume
    docker cp path/in/your/source/code/app_config.yml configs:/cfg
    # starting application container using this volume
    docker run --volumes-from configs app-image:1.2.3
In the same way, if your application produces some artifacts that need to be stored, you can copy them from Remote Docker:
- run: |
    # starting container with our application
    # make sure you're not using the `--rm` option, otherwise the container will be removed after it finishes
    docker run --name app app-image:1.2.3
- run: |
    # once the application container finishes, we can copy artifacts directly from it
    docker cp app:/output /path/in/your/job/space
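For context, here is a minimal sketch of how these pieces might sit in a full .circleci/config.yml. The job layout and image tags are illustrative assumptions, not taken from the question; the key step is setup_remote_docker, which gives the job its own Remote Docker engine to run these commands against:

version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.6   # primary job container (assumed)
    steps:
      - checkout
      - setup_remote_docker          # docker commands now target Remote Docker
      - run: |
          # dummy container holding a volume with the config (paths assumed)
          docker create -v /cfg --name configs alpine:3.4 /bin/true
          docker cp path/in/your/source/code/app_config.yml configs:/cfg
          docker run --volumes-from configs app-image:1.2.3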

Related

Docker NodeRed committed container does not maintain flows and modules

I'm working on a project using Node-RED deployed with docker, and I would like to save the state of my deployment, including flows, settings and newly added modules, so that I can save the image and load it on another host, replicating exactly the same Node-RED instance.
I created the container using:
docker run -itd --name my-nodered node-red
After implementing the flows and installing some custom modules, with the container running I used these commands:
docker commit my-nodered my-project-nodered/my-nodered:version1
docker save my-project-nodered/my-nodered:version1 > tar-archive.tar.gz
On another machine I imported the image using:
docker load < tar-archive.tar.gz
And ran it using:
docker run -itd my-project-nodered/my-nodered:version1
But I obtain a vanilla Node-RED docker container with a default /data directory; only the default files in that directory are maintained.
What am I missing? Could it be that my /data directory is overwritten, as well as my settings.js file in the home directory? And in that case, what is the best practice to achieve my goal?
Thanks a lot in advance
commit will not work here, because there is a volume defined in the Dockerfile:
# User configuration directory volume
VOLUME ["/data"]
That makes it impossible to create a derived image with any different content in that directory tree. (This is the same reason you can't create a mysql or postgresql image with prepopulated data.)
docker commit doesn't consider volumes at all, so you'll get an unchanged image with nothing preloaded in it.
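You can see this for yourself with a quick experiment (a hedged sketch; the image tag and file names are made up, and --entrypoint is used so the committed image just runs ls):

# write a file into the declared /data volume of the running container
docker exec my-nodered sh -c 'echo test > /data/marker.txt'
# commit, then list /data in a fresh container from the committed image
docker commit my-nodered demo:committed
docker run --rm --entrypoint ls demo:committed /data
# marker.txt is absent: the volume contents were never captured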
See the official documentation:
Managing User Data
Once you have Node-RED running with Docker, we need to ensure any added nodes or flows are not lost if the container is destroyed. This user data can be persisted by mounting a data directory to a volume outside the container. This can either be done using a bind mount or a named data volume.
Node-RED uses the /data directory inside the container to store user configuration data.
nodered-user-data-in-docker
One way is to restore your config files on another machine, for example from a backup-config directory, then:
docker run -it -p 1880:1880 -v $PWD/backup-config/:/data --name mynodered nodered/node-red-docker
Or, if you want to pull a flow file from some repo, then you can try:
docker run -it --rm -v "$PWD/$(wget https://raw.githubusercontent.com/openenergymonitor/oem_node-red/master/flows_emonpi.json)":/data/ nodered/node-red-docker
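Putting that together, here is a hedged sketch of the backup-and-restore round trip using docker cp (the container name and backup directory are assumptions):

# on the original machine: copy the live /data directory out of the container
docker cp my-nodered:/data ./backup-config
# transfer ./backup-config to the other machine, then start Node-RED with it bind-mounted
docker run -itd -p 1880:1880 -v $PWD/backup-config:/data --name my-nodered nodered/node-red-docker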

How to configure cassandra.yaml which is inside the docker image of cassandra at /etc/cassandra/cassandra.yaml

I am trying to edit cassandra.yaml, which is inside the docker container at /etc/cassandra/cassandra.yaml. I can edit it by logging inside the container, but how can I do it from the host?
There are multiple ways to achieve this from host to container. You can simply use COPY or RUN in a Dockerfile, or basic linux commands such as sed and cat, to place your configuration into the container. Another way is to pass environment variables while running your cassandra image, which will be passed on to the spawned container. You can also use a docker volume to mount the configuration from host to container, mapping your own file over cassandra.yaml as shown below:
$ docker container run -v ~/home/MyWorkspace/cassandra.yaml:/etc/cassandra/cassandra.yaml your_cassandra_image_name
If you are using Docker Swarm, you can use Docker configs to store the configuration files externally (other external services such as etcd or consul can also be used). Hope this helps.
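For the Dockerfile route mentioned above, a minimal sketch (the base tag, and the assumption that your customised cassandra.yaml sits next to the Dockerfile, are mine):

FROM cassandra:3.11
# bake the customised config into the image
COPY cassandra.yaml /etc/cassandra/cassandra.yaml

Build and run it with docker build -t my-cassandra . followed by docker run my-cassandra.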
To edit cassandra.yaml :
1) Copy your file from your Docker container to your system
From command line :
docker ps
(To get your container id)
Then :
docker cp your_container_id:/etc/cassandra/cassandra.yaml C:\Users\your_destination
(note that paths inside the container always use forward slashes, even on a Windows host)
Once the file copied you should be able to see it in your_destination folder
2) Open it and make the changes you want
3) Copy your file back into your Docker container
docker cp C:\Users\your_destination\cassandra.yaml your_container_id:/etc/cassandra
4) Restart your container for the changes to be effective
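On a Linux or macOS host, the same round trip might look like this (the container id and local path are placeholders):

docker cp your_container_id:/etc/cassandra/cassandra.yaml ./cassandra.yaml
# edit ./cassandra.yaml locally, then push it back and restart
docker cp ./cassandra.yaml your_container_id:/etc/cassandra/cassandra.yaml
docker restart your_container_id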

Move docker bind-mount to volume

Currently, I run my containers like this, for example:
docker run -v /nexus-data:/nexus-data sonatype/nexus3
(the leading slash means /nexus-data is a host path, i.e. a bind mount)
After reading the documentation, I discovered volumes that are completely managed by docker. For some reasons, I want to change the way I run my containers, to something like this:
docker run -v nexus-data:/nexus-data sonatype/nexus3
(no leading slash: nexus-data is a named volume)
I want to transfer my existing bind-mount to volumes.
But I don't want to lose the data in the /nexus-data folder; is there a possibility to transfer this folder to the new volume without restarting everything? I also have Jenkins and Sonar containers, for example; I just want to change the way I keep persistent data. Is there a proper way to do this?
You can try the following steps so that you will not lose your current nexus-data:
#>docker run -v nexus-data:/nexus-data sonatype/nexus3
#>docker cp /nexus-data/. <container-name-or-id>:/nexus-data/
#>docker stop <container-name-or-id>
#>docker start <container-name-or-id>
docker cp will copy data from your host machine's /nexus-data folder to the container's /nexus-data folder, which is your mounted volume.
Let me know if you face any issue while performing the above steps.
Here's another way to do this, that I just used successfully with a Heimdall container. It's outlined in the documentation for the sonatype/nexus3 image:
Stop the running container (e.g. named nexus3)
Create a docker volume called nexus-data with the following command: docker volume create nexus-data
By default, Docker will store the volume's content at /var/lib/docker/volumes/nexus-data/_data/
Copy the directory you had previously been using as a bind mount into the aforementioned volume directory (you'll need super user privileges to do this, or the user must be part of the docker group): cp -R /path/to/nexus-data/* /var/lib/docker/volumes/nexus-data/_data/
Restart the nexus3 container with docker run --name=nexus3 -v nexus-data:/nexus-data sonatype/nexus3 (note that --name must come before the image name)
Your container will be back up and running, with the files previously persisted under /path/to/nexus-data/ now mirrored in the docker volume. Check that everything still works and, if so, you can delete the /path/to/nexus-data/ directory.
Q.E.D.
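Alternatively, you can avoid touching /var/lib/docker directly by letting a throwaway container do the copy. A hedged sketch (the choice of alpine and the cp flags are my assumptions):

docker volume create nexus-data
# mount both the old bind-mount directory and the new volume, then copy preserving ownership and permissions
docker run --rm -v /nexus-data:/from -v nexus-data:/to alpine cp -a /from/. /to/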

Docker compose best practices for a packaged or bundled asset container?

I am quite confused about how to achieve the following flow.
I've two containers: nginx, and asset (which will hold only bundled assets).
There can be 2-3 nginx instances and a few asset instances.
From my local machine or build server I'll build the assets using grunt, and the bundled assets will be inside the image.
So if I use volumes for the bundled assets, how will they be pushed along with the image?
And if I use no volumes, how will nginx get the mount path from the asset image or (stopped or running) container?
There are two main ways.
You add your assets to your nginx image. In that case, you simply create a Dockerfile and COPY your local assets to a location in the nginx image. E.g.:
FROM nginx
COPY myassets/dir/ /var/lib/html/
This is the simplest way to do it.
Mount a volume
If you need the same assets to be shared between containers, then you can create a volume and mount this volume in your nginx container. The volume needs to be created before you try to create your nginx containers.
docker volume create myassets
The next step is to copy your files to that newly created volume. If your docker host is local (e.g. VirtualBox, Docker for Mac or Windows, VMware Fusion, Parallels), then you can mount a local dir with your assets and copy them to the volume. E.g.:
docker run --rm -v /Users/myname/html:/html -v myassets:/temp alpine sh -c "cp -rp /html/* /temp"
If your docker host is hosted elsewhere (AWS, Azure, Remote Servers, ...) then you can't rely on mounted local drives. You'll need to remote copy the files.
docker run -d --name foo -v myassets:/temp alpine tail -f /dev/null
docker cp myassets/. foo:/temp
docker rm -f foo
This creates a container named foo which keeps running (tail -f) in the background (-d). We then docker cp the files into it at the location where the myassets volume is mounted (docker cp takes a single source path, so myassets/. is used to copy the directory's contents), and remove the container when done.
When you mount this volume on an nginx container, it shadows whatever the image has at that location:
docker run -d -v myassets:/var/lib/html nginx
You can create a multi-container docker environment for each app with docker-compose, where the asset docker images are built and then mounted as data volumes for the nginx container.
Refer to the docker-compose reference for data volume mounting, and to this stackoverflow question for mounting a directory from one container into another.
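For illustration, a hedged docker-compose sketch of the shared-volume approach (service names, images, and paths are assumptions; on first use, the empty named volume is populated from the asset image's content at that path):

version: "3"
services:
  asset:
    build: ./asset                        # image whose build bundles the grunt output
    volumes:
      - assets:/var/lib/html              # populates the named volume on first use
  nginx:
    image: nginx
    depends_on:
      - asset
    volumes:
      - assets:/usr/share/nginx/html:ro   # same volume, mounted read-only
volumes:
  assets: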

How do you mount sibling containers' volumes started from another container?

I am using docker for my dev environment: I have a dev image and I mount my source files as a volume.
But then I wanted to do the same on my continuous integration server (gitlab ci), and I carefully read docker doc's reference to https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/, but the solution of bind-mounting docker's unix socket into a docker client container makes it impossible to mount volumes from it.
So basically my question is how you would solve this (given I am in a docker ci server/runner): I need to run the following commands from a container (a gitlab runner).
$ git clone ... my-sources && cd my-sources
$ docker run -v $PWD:$PWD -w $PWD my-dev-image gcc main.c
Because obviously, the volume path is resolved on docker's "native" host, not in the current container.
The way I've solved this is to make sure that build paths are the SAME on the host and in the CI container, e.g. by starting the container with -v /home/jenkins:/home/jenkins. This way the volume is mounted from the host into the CI container at an identical path. You can change it to whatever directory you like; just make sure the jenkins user's home is set to that directory.
Note: I'm using jenkins as an example, but any CI will work on the same principle.
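A sketch of why matching paths make this work (the jenkins paths follow the answer's example; the image names are made up):

# on the host: run the CI container with the docker socket and a matching path
docker run -d --name ci \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/jenkins:/home/jenkins \
  my-ci-image
# inside the CI container, a build running in /home/jenkins/workspace can do:
#   docker run --rm -v $PWD:$PWD -w $PWD gcc:9 gcc main.c
# because $PWD resolves to the same path on the host, the bind mount finds the files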
Make sure that your CI server is started with a volume (e.g. docker run --name gitlabci -v /src gitlabci …), then, when you start the other containers, start them with docker run --volumes-from gitlabci …. That way, /src will also be available in those containers, and whatever you put in this directory (from the CI server) will be available in the other containers.
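A minimal demonstration of that --volumes-from pattern (alpine stands in for the CI server image; the file names are made up):

# a stand-in for the CI server container, declaring a volume at /src
docker run -d --name gitlabci -v /src alpine tail -f /dev/null
# put sources into the shared volume from inside the CI container
docker exec gitlabci sh -c 'echo "int main(){return 0;}" > /src/main.c'
# a sibling build container sees the same /src via --volumes-from
docker run --rm --volumes-from gitlabci -w /src gcc:9 gcc main.c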
