Exporting a container created with docker-compose - docker

I have a series of containers created with docker-compose. Some of these containers communicate with each other according to rules defined in the docker-compose.yml file.
I need to move those containers from serverA to serverB (same OS), but I'm having trouble understanding how this works.
I tried both the export and the save methods, following tutorials I've found on the web, but I was not able to keep the port configurations and networking rules after the export/import or save/load operations (there's a chance I didn't really get how they work...).
The only way I've found to successfully do this is to copy the whole docker-compose folder and run docker-compose up on serverB.
The question:
Is there a way to preserve the whole configuration of the containers and move them from one server to another using the export or save function?
Thank you for any help you can provide

2 scenarios:
Copy via ssh
$ sudo docker save myImage:tag | ssh user@IPhost "docker load"
Copy via scp
# Host A
$ docker save myImage:tag > myImage.tar
$ scp myImage.tar IPhostB:/tmp/myImage.tar
# Host B
$ docker load -i /tmp/myImage.tar
And then you need to copy the docker-compose.yml to host B too.
The saved images only keep the original build's own configuration; they don't include the environment that we define in the docker-compose.yml file.
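For example, a minimal sketch of that last step (assuming the compose project lives in /opt/myapp on both hosts and the images have already been loaded on host B):
# Host A
$ scp docker-compose.yml IPhostB:/opt/myapp/docker-compose.yml
# Host B
$ cd /opt/myapp && docker-compose up -d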
Bye

Related

I want to write a Dockerfile where my container can load a db file from a directory, copy it to the application directory, and write it back on exit

I am working with a Golang application that saves its information inside an SQLite file, which resides at data/sqlite.db in the same directory as the Dockerfile. My Dockerfile is something like this:
P.S.: it's my very first Dockerfile, please be kind to me :(
FROM golang:1.16.4
ENV GIN_MODE=release
ENV PORT=8081
ADD . /go/src/multisig-svc
WORKDIR /go/src/multisig-svc
RUN go mod download
RUN go build -o bin/multisig-svc cmd/main.go
EXPOSE $PORT
ENTRYPOINT ./bin/multisig-svc
I deployed this application to the Google Cloud Platform, but somehow the container gets restarted there and after that my db has vanished. So I researched and tried to use volumes.
I build the container using docker build -t svc . and then run it with docker run -p 8080:8081 -v data:/var/dump -it svc, but I cannot see the data folder getting copied to the /var/dump directory. My basic idea is: whenever the container starts, it loads the db file from dump and copies it to the data directory so the application can use it, and when it exits it copies the file back to the dump directory. I don't know if I am on the right track; any help would really be appreciated.
Edit:
The issue is that when no request arrives for 15 minutes, GCP shuts the container down and starts it again when a request comes in. So the goal is to somehow fetch the db file from the dump directory, update it, and write it back to the dump dir when the container goes down, for future use.
For a local run, or if you are running on a VM, you need to specify the absolute path of the directory you want to bind mount into your container. In this case something like this should work:
docker run -p 8080:8081 -v $(pwd)/data:/var/dump -it svc
When you don't specify an absolute path, the volume you're mounting into your running container is a named volume managed by the Docker daemon, and it is not located in a path related to your current working directory. You can find more information about how Docker volumes work here: https://docs.docker.com/storage/volumes/
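If you do want to stick with a named volume instead (which is a reasonable way to persist data across container restarts), a minimal sketch would look like this, where dumpdata is just an example name:
$ docker volume create dumpdata
$ docker run -p 8080:8081 -v dumpdata:/var/dump -it svc
$ docker volume inspect dumpdata
The last command shows where the daemon actually stores the volume on disk.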
However, there are multiple environments on GCP (App Engine, Kubernetes, VMs), so depending on yours you may need to adapt this solution.

How to work with the files from a docker container

I need to work with all the files from a Docker container; my approach is to copy the whole set of files from the container to my host.
I'm using the following Docker commands, for example with the postgres image:
docker create -ti --name dummy_1 postgres bash
docker cp dummy_1:/. Documents/docker/dockerOne
With this I have all the container's folders and files on my host.
The idea is then to traverse all the files with the Java API, work with them, and finally delete the files and folders locally. But I would like to know if there is a better approach, maybe accessing the container files directly from Java, instead of creating a local copy of the container files on my host.
Any ideas?
You can build a small server app inside your Docker container which serves the information you need on an exposed port. That's how I would have done it.
Maybe I don't understand the question, but you can mount a volume when you run the container, rather than when you create it:
docker run -v /host/path:/container/path your_container
Any code in the container (e.g. Java) that modifies files at /container/path will be reflected on the host, and doesn't need to be copied back in or out. Similarly, any modifications on the host filesystem will be seen in the container.
I don't think I can implement an API in the docker container
Yes, you can. You bind a TCP port using the -p flag:
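For example (a rough sketch, assuming the app inside the container listens on port 8080):
docker run -d -p 8080:8080 your_container
The Java code on the host could then query it at http://localhost:8080 instead of reading the files directly.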

docker-compose build and up

I am not an advance user so please bear with me.
I am building a docker image using docker-compose -f mydocker-compose-file.yml ... on my machine.
The image is then pushed to a remote Docker registry.
Then from a remote server I pull down this image.
To run this image, I have to copy mydocker-compose-file.yml from my machine to the remote server and then run docker-compose -f mydocker-compose-file.yml up -d.
I find this very inefficient: why do I need the same YAML file to run the Docker image (do I?).
Is there a way to just spin up the container without this file from remote machine?
As of compose 1.24 along with the 18.09 release of docker (you'll need at least that client version on the remote host), you can run docker commands to a remote host over SSH.
# all docker commands in this shell will now talk to the remote host
export DOCKER_HOST=ssh://user@host
# you can verify that with docker info to see which engine you're talking to
docker info
# and now run your docker-compose up command locally to start/stop containers
docker-compose up -d
With previous versions, you could configure TLS certificates to allow specific clients to connect to the docker API over a network connection. See these docs for more details.
Note, if you have host volumes, the variables and paths will be expanded to your laptop directories, but the host mounts will happen on the remote server where those directories may not exist. This is a good situation to switch to named volumes.
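For example, something along these lines (just a sketch, with made-up service, image, and volume names) keeps the data on whichever engine runs the container instead of depending on a laptop path:
version: '3'
services:
  web:
    image: myuser/myapp_web:latest
    volumes:
      - appdata:/app/data
volumes:
  appdata: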
Everything you can do with Docker Compose, you can do with plain docker commands.
Depending on how exactly you're interacting with the remote server, your tooling might have native ways to do this. One specific example I'm familiar with is the Ansible docker_container module. If you're already using a tool like Ansible, Chef, or Salt, you can probably use a tool like this to do the same thing your docker-compose.yml file does.
But otherwise there's more or less a direct translation between a docker-compose.yml file
version: '3'
services:
  foo:
    image: me/foo:20190510.01
    ports: ['8080:8080']
and a command line
docker run -d --name foo -p 8080:8080 me/foo:20190510.01
My experience has been that the docker run commands quickly become unwieldy and you want to record them in a file; and once they're in a file, you start to wish they were in a more structured format, even if you need an auxiliary tool to run them; which brings you back to copying around the docker-compose.yml file. I think that's pretty routine. (Something needs to tell the server what to run.)

Drupal folders within docker

I successfully installed Drupal 7 with Docker, using docker4drupal.
Now that I start editing my Drupal site, my question is: where are the folders containing Drupal?
Let's say I installed a new theme and want to swap the images for the banner. How do I access the Drupal folder containing the images? Or, to be more precise: where does Docker store them?
My docker-compose line is:
- codebase:/var/www/html
I know that installing it using:
./:/var/www/html
would install Drupal in the same directory my docker-compose.yml is in, but for some reason it doesn't work and still doesn't show me where the files are.
Any help is welcome!
If you are not using volumes to mount your existing code, the code resides inside the docker container. You can access it only by getting inside the container using docker exec. If you are using the default docker-compose.yml that came with the repo, then the name of the container will be "docker4drupal_nginx_1" (since nginx is the default).
Run this code to get inside the container:
docker exec -it docker4drupal_nginx_1 /bin/bash
exec allows you to execute commands inside the container.
-it allows you to start an interactive terminal
/bin/bash allows you to start the bash terminal inside the container
Once you are inside the container, run ls and you will see the Drupal files, including "web".
MORE USEFUL
However, this is not a useful way to work if you want to edit the files, probably in an editor. Instead, mount a directory from the host machine. First, make a new directory named "codebase" where your docker-compose.yml file is.
Then, update the docker-compose.yml so that:
- codebase:/var/www/html
becomes
- ./codebase:/var/www/html
Do this in both the php and nginx service definitions. Of course, you should do this after you run docker-compose down with your previous setup. Then restart the containers using docker-compose up -d.
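In other words, roughly:
docker-compose down
# edit docker-compose.yml as described above
docker-compose up -d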
Then, you will notice that the Drupal files are present in the codebase directory.
If you look at the bottom of the yml file, you will see that "codebase" is defined as a Docker volume. This means the storage is managed by Docker, and it will get stored somewhere under /var/lib/docker/ along with the container itself.
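If you're curious, you can see the exact location with the volume commands; the volume name is normally prefixed with the compose project name, so it may differ on your machine:
docker volume ls
docker volume inspect docker4drupal_codebase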
Hope this helps.

How to move a local volume onto a remote docker machine

I have my local docker machine and a remote docker machine on the cloud. My docker-compose app has a web container with this config:
web:
  container_name: web
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
    - ./data:/usr/src/app/data
  env_file: .env
  command: /usr/local/bin/gunicorn --workers 4 --timeout 120 --bind :8000 app:app
The important part is that second volume. I have this local folder called data with some 10GB of data in it. I made it a volume in the first place because otherwise building the container takes forever. Now that the app is production-ready, I'd like to deploy it. One problem: now my remote web container has an empty data folder mounted in it. So how do I move data from my local machine into a container on a remote docker machine? Where do I even move it to?
It seems like there are two tools for this:
docker cp which doesn't seem like it will work for remote docker machines
docker-machine scp which seems made for this, right?
I'm almost positive I need to use the second of these, but since I don't quite understand how docker machine works or where it keeps its data, I'm not sure what destination path to use:
$ dm scp -r /Users/alex/Documents/Project/data remote-machine:/usr/src/app/data
fails with error message:
scp: /usr/src/app/data: No such file or directory
Where should I be scp'ing this data in order to have it mount properly on my remote web container?
Local path vs. in-container path
Assuming you will use the same model remotely that you used locally, keep in mind that the path /usr/src/app/data is the path inside the container. When you are copying the files from one system to another, you just need to copy them from the current system to the remote system, then put them in a path where docker-compose knows how to find them, to mount into a new container.
So all you have to do is copy them from here to there, and use the same path relative to docker-compose.yml. It only knows your external volume as ./data, so if you put the directory in the same place (from docker-compose's perspective), everything should work the same.
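For example, if the project lives at /srv/myapp on the remote server (the path is just illustrative), you would end up with something like:
/srv/myapp/docker-compose.yml
/srv/myapp/.env
/srv/myapp/data/        <- copied from your local ./data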
How to copy the files
As for how to do the copy, these are just files, so it doesn't matter. scp -r should work, or make a zipfile, copy that, unzip into the correct place, etc. There are a ton of ways to copy files, so pick whatever is simplest for your case.
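For example, with scp (the hostname and path are placeholders):
$ scp -r ./data user@remote-host:/srv/myapp/data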
What exactly needs to be copied?
In the comments you expressed confusion about local vs. remote operations in docker-machine, and what else you needed to copy. Here's a somewhat fuller explanation:
On your local system (which I'm assuming is your own PC or laptop), you have docker-machine installed, and you've been using that for all of this development. Completely separate from that is your new cloud instance where you would like to deploy.
To run on your cloud instance what you already have locally, the cloud instance will need to have the following.
The docker-compose.yml file.
As long as you plan to use docker-compose to run this, that must be available.
Your .env file.
Since you are using an environment file in this setup, it must be available or docker-compose can't make use of it.
Your web image.
You have a build parameter for this container, but not an image parameter. So currently the only thing you can do is docker-compose build web which will locally generate an image, which docker-compose then knows how to run.
Another option is to add an image parameter, with a repository:tag, such as myuser/myapp_web:1.0, and push that up to Docker Hub. Then, on your cloud instance, the image can be retrieved from Docker Hub instead of building it locally.
In that case, you can add an image parameter to the web container in docker-compose.yml, then build it and push it up.
docker-compose build web
docker-compose push web
Then on the cloud instance, you can fetch it:
docker-compose pull web
docker-compose will know to use that image because of the image parameter in docker-compose.yml (which is also present on the cloud server).
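For reference, the web service with an image parameter added might look roughly like this (the repository name is just an example):
web:
  build: ./web
  image: myuser/myapp_web:1.0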
Ref: Creating a new repository on Docker Hub
Which of these options is preferable depends on how you want to manage things. Either one would work, but the "local build" option would require you to copy any required source files to your cloud instance too (anything that is used during the build process).
I don't see in your question where the postgres container comes from. If you are also custom-building this one, then the same goes as for web. If you are using a public image for this, then you shouldn't need to copy anything; docker-compose will know how to fetch it, i.e. you can do this:
docker-compose pull postgres
What about docker cp and docker-machine scp?
You mentioned docker cp and docker-machine scp in your question.
As you already determined, docker cp is not a solution here. That command is for copying files between a container and the host filesystem. It has nothing to do with copying over a network.
As far as I know, docker-machine scp is to copy files between your local host and a docker-machine-managed VM. To copy files to your cloud instance you can likely use a more generic tool like scp or sftp more easily.
Not sure as of which Docker version, but contrary to the statements in the question and the @Dan_Lowe answer, this works fine:
docker cp ./data container:/usr/src/app/
docker cp is a normal part of the API, so it works like any other command, even remotely.
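For example, with a recent client you can point at the remote engine over SSH and copy directly (the hostname is a placeholder):
export DOCKER_HOST=ssh://user@remote-host
docker cp ./data container:/usr/src/app/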
