Load a container - docker

Hello, I am new to Docker. I have installed my base image, which is WordPress, on my PC. Since I use multiple systems, I copied my current commits from the PC to openSUSE. Now I want to load my committed image on openSUSE. Is there any possible way to do it? I tried doing run and I cannot see any changes.
Base image: WordPress
docker run -dti -p 192.168.56.10:80:80 -p 192.168.56.10:2222:22 -h baseWordpress --name baseWordpress --restart unless-stopped mine/wordpress /usr/bin/supervisor
docker start baseWordpress
Commit files
What should I use to run and start the committed image on openSUSE?

You could try to have your own Dockerfile, where mine/wordpress is the base image, and you overwrite files with your committed files.
Or you could try to map a volume (the -v option) in the docker run command, and then copy your committed files to the mapped host folder. Next, attach to the container and move the files to the correct location.
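A minimal sketch of the Dockerfile route, assuming your committed files sit in a local my-changes/ folder and belong under /var/www/html (both paths are assumptions; adjust them to your layout):
Dockerfile:
FROM mine/wordpress
COPY my-changes/ /var/www/html/
Build it on the openSUSE machine with docker build -t mine/wordpress:custom . and then reuse your original docker run options with mine/wordpress:custom as the image.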

Related

Docker Oracle12c Enterprise image created from container symlink broken

We are trying to create a Docker image from a container based on the Oracle 12c Enterprise Edition image from the Docker Store (https://store.docker.com/images/oracle-database-enterprise-edition). We have the container working OK, and then, after stopping the container, we create an image based on it with the following command.
docker commit Oracle_12 oracle/oradb:1
Then, we try to run a container using the committed image with the following command:
docker run -d -it --name oradb_cont -p 1512:1521 -p 5500:5500 oracle/oradb:1
This container fails with the following error:
Start up Oracle Database
Wed Nov 15 10:31:29 UTC 2017
start database
start listener
The database is ready for use .
tail: cannot open '/u01/app/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/alert_ORCLCDB.log' for reading: No such file or directory
tail: no files remaining
The container is "Exited", despite the message "The database is ready for use".
We attached a bash to the container to inspect where the missing file is, and it seems that the "/diag" folder is a broken symlink.
Starting the original Oracle 12c container and attaching a bash, the folder is present. It seems the symlink is broken, or the file is missing, only in the image created from the container.
The problem is that /ORCL is a data volume. The commit operation does not include any files that are inside volumes. You can check the commit documentation for more info.
Thus, when starting the new instance, it appears that the log file is being referenced but has not yet been created. Your current container is in an inconsistent state, as the files under '/ORCL' that were present in the committed container are missing from the new instance.
If you are running the new instance on a new machine you need to migrate the old volume into the new machine. You can find the volume of the old container by running docker inspect -f '{{ .Mounts }}' <old-container-name>, and migrate as specified in How to port data-only volumes from one host to another?
If you are running the new instance on the same machine, just mount the old volume by adding -v <volume-name-or-id>:/ORCL to the docker run command.
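For example, a sketch reusing the names from your commands above (the <volume-name-or-id> placeholder is whatever the inspect output reports):
docker inspect -f '{{ .Mounts }}' Oracle_12
docker run -d -it --name oradb_cont -p 1512:1521 -p 5500:5500 -v <volume-name-or-id>:/ORCL oracle/oradb:1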
In general, as a best practice, you shouldn't rely on the commit command to get identical instances of a container. Rather, build a Dockerfile which extends the base image, and add customizations by copying only the necessary files into the new image.

Change volume configuration in docker-compose without losing the data

My docker-compose setup has a data container which isn't mapped to a local directory on the host machine, and I want to change it from:
volumes:
- /var/www/html
to
volumes:
- /html:/var/www/html
But when I restart the container, it will remove the current data container and replace it with a new one.
I know that the container is actually still there, but is there an easy way to do this without creating a new data container?
My docker-compose version is 1.7.1 (under boot2docker).
Thanks.
Try at your own risk:
create your host directory /html as you wish
docker inspect {container_name} | grep Source and grab your volume path on the host system. It'll be something like /var/lib/docker/volumes/abdb15a2eff[...]/_data
copy the content of that directory to your host directory
recreate the container as you wish.
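The same steps as a rough command sketch (the container name and the volume hash are placeholders for your own values):
mkdir -p /html
docker inspect my_data_container | grep Source
cp -a /var/lib/docker/volumes/<volume-hash>/_data/. /html/
Then change the volumes entry to /html:/var/www/html and run docker-compose up -d to recreate the container.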
One safe way to do this is to create a backup of the data from inside the Docker container. Then restore that backup to the directory on your host machine. The Docker Volumes Tutorial mentions a process like this near the bottom.
Here's how you'd do it:
First, mount a directory from your host machine into the container if you don't already have one mounted in. Maybe a volume like ./:/backup. Next, run a backup command like this:
docker-compose run service-name tar czvf /backup/html_data.tar.gz /var/www/html
Now you have html_data.tar.gz in your current directory. Extract it wherever you want and be on your way!
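If you want the data to end up in the new /html bind mount from your updated volumes entry, the restore could look like this (--strip-components=3 drops the var/www/html prefix that tar recorded):
mkdir -p /html
tar xzvf html_data.tar.gz -C /html --strip-components=3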
(I'm assuming, based on the way you indicated your volumes, that you're using docker-compose. The process is similar for vanilla Docker.)
Alternate approach, with --volumes-from
Get the name (or hash) of the container with the data you want to copy. You can do this with docker ps. For this example, let's call it container1. Now run this command to back up its data:
docker run --rm --volumes-from container1 -v $(pwd):/backup ubuntu:latest tar czvf /backup/html_data.tar.gz /var/www/html
Note that the image you use (ubuntu:latest) is not important as long as it can tar things.

Why do the changes I make in my working directory not show up in my Docker container?

I would like to run and test parse-dashboard via Docker, as documented in the readme.
I am getting the error message "Parse Dashboard can only be remotely accessed via HTTPS." Normally, you can bypass this by adding the line "allowInsecureHTTP": true to your parse-dashboard-config.json file. But even after adding this option to my config file, the same message is displayed.
I tried to edit the config file in the Docker container, whereupon I discovered that none of my local file changes were present in the container. It appeared as though my project was an unmodified version of the code from the GitHub repository.
Why do the changes that I make to the files in my working directory on the host machine not show up in the Docker container?
But what is uploaded to my Docker container is, in fact, the config file from my master branch.
It depends:
what that "docker" is: the official DockerHub or a private docker registry?
how it is uploaded: do you build an image and then use docker push, or do you simply do a git push back to your GitHub repo?
Basically, if you want to see the right files in the Docker container that you run, you must be sure to run an image you have built (docker build) from a Dockerfile which COPYs the files from your current workspace.
If you do a docker build from a folder where your Git repo is checked out at the right branch, you will get an image with the right files.
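A minimal sketch of that kind of Dockerfile, reusing the node base image and the commands that appear further down (the my-parse-dashboard tag is just a placeholder):
Dockerfile:
FROM node:4.7.2
COPY . /src
WORKDIR /src
RUN npm install && npm run build
CMD ["npm", "run", "dashboard"]
$ docker build -t my-parse-dashboard .
$ docker run -d -p 8080:4040 my-parse-dashboard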
The Dockerfile from the parse-dashboard repository you linked uses ADD . /src. This is a bad practice (because of the problems you're running into). Here are two different approaches you could take to work around it:
Rebuild the Image Each Time
Any time you change anything in the working directory (which the Dockerfile ADDs to /src), you need to rebuild for the change to take effect. The exception to this is src/Parse-Dashboard/parse-dashboard-config.json, which we'll mount in with a volume. The workflow would be nearly identical to the one in the readme:
$ docker build -t parse-dashboard .
$ docker run -d -p 8080:4040 -v $(pwd)/src/Parse-Dashboard/parse-dashboard-config.json:/src/Parse-Dashboard/parse-dashboard-config.json parse-dashboard
Use a Volume
If we're going to use a volume to do this, we don't even need the custom Dockerfile shipped with the project. We'll just use the official Node image, upon which the Dockerfile is based.
In this case, Docker will not run the build process for you, so you should do it yourself on the host machine before starting Docker:
$ npm install
$ npm run build
Now, we can start the generic Node Docker image and ask it to serve our project directory.
$ docker run -d -p 8080:4040 -v $(pwd):/src node:4.7.2 sh -c "cd /src && npm run dashboard"
Changes will take effect immediately because you mount ./ into the container as a volume. Because it's not done with ADD, you don't need to rebuild the image each time. We can use the generic node image because, if we're not ADDing a directory and running the build commands, there's nothing our image would do differently from the official one.

How to keep changes inside a container on the host after a docker build?

I have a docker-compose dev stack. When I run docker-compose up --build, the container is built and it executes
Dockerfile:
RUN composer install --quiet
That command writes a bunch of files inside the ./vendor/ directory, which is then only available inside the container, as expected. The vendor/ directory that also exists on the host is not touched and is, hence, out of date.
Since I use that container for development and want my changes to be available, I mount the current directory inside the container as a volume:
docker-compose.yml:
my-app:
volumes:
- ./:/var/www/myapp/
This loads an outdated vendor directory into my container, forcing me to rerun composer install either on the host or inside the container in order to have the up-to-date version.
I wonder how I could manage my docker-compose stack differently, so that the changes made to the current folder during the docker build are also persisted to the host directory and I don't have to run the command twice.
I do want to keep the vendor folder mounted, as some vendors are my own and I like being able to modify them in my current project. So only mounting the folders I need to run my application would not be the best solution.
I am looking for a way to tell docker-compose: Write all the stuff inside the container back to the host before adding the volume.
You can run a short side container after docker-compose build:
docker run --rm -v $(pwd)/vendor:/target my-app cp -a vendor/. /target/.
The cp could also be something more efficient like an rsync. Then, after that container exits, you do your docker-compose up, which mounts the now up-to-date ./vendor from the host through the existing ./ volume.
Write all the stuff inside the container back to the host before adding the volume.
There isn't any way to do this directly, but there are a few options to do it as a second command.
as already suggested you can run a container and copy or rsync the files
use docker cp to copy the files out of a container (without using a volume); see the sketch after this list
use a tool like dobi (disclaimer: dobi is my own project) to automate these tasks. You can use one image to update vendor, and another image to run the application. That way updates are done on the host, but can be built into the final image. dobi takes care of skipping unnecessary operations when the artifact is still fresh (based on modified time of files or resources), so you never run unnecessary operations.
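A sketch of the docker cp option (the my-app image name follows the earlier example, and /var/www/myapp/vendor comes from the compose mapping above; substitute whatever image your build actually produces):
mkdir -p vendor
docker create --name vendor-tmp my-app
docker cp vendor-tmp:/var/www/myapp/vendor/. ./vendor/
docker rm vendor-tmp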

How to save config file inside a running container?

I am new to docker. I want to run tinyproxy within docker. Here is the image I used to create a docker container: "https://hub.docker.com/r/dtgilles/tinyproxy/".
For some unknown reason, when I mount the log folder to the host machine, I can see the .conf file, but I can't see the log file, and the proxy server doesn't seem to work.
Here is the command I tried:
docker run -v $(pwd):/logs -p 8888:8888 -d --name tiny dtgilles/tinyproxy
If I don't mount the file, then every time I run a container I need to change its config file inside the container.
Does anyone have any ideas about saving the changes in the container?
Question
How to save a change committed by/into a container?
Answer
The command docker commit creates a new image from a container's changes (from the man page).
Best Practice
You actually should not do this to save a configuration file. A Docker image is supposed to be immutable. This makes images easier to share, and lets you customize a container through mounted volumes.
What you should do is create the configuration file on the host and share it with the container through parameters to docker run. This is done using the -v|--volume option. Check the man page; you'll then be able to share files (or directories) between the host and containers, allowing the data to persist across different runs.
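For this image, a sketch might look like the following (the /etc/tinyproxy/tinyproxy.conf path inside the container is an assumption; check the image's documentation for where it actually expects its configuration):
docker run -d --name tiny -p 8888:8888 -v $(pwd)/tinyproxy.conf:/etc/tinyproxy/tinyproxy.conf -v $(pwd):/logs dtgilles/tinyproxy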
