I am using docker datapower image for local development. I am using this image
https://hub.docker.com/layers/ibmcom/datapower/latest/images/sha256-35b1a3fcb57d7e036d60480a25e2709e517901f69fab5407d70ccd4b985c2725?context=explore
Datapower version: IDG.10.0.1.0
System: Docker for mac
Docker version 19.03.13
I am running the container with the following config
docker run -it \
-v $PWD/config:/drouter/config \
-v $PWD/local:/drouter/local \
-e DATAPOWER_ACCEPT_LICENSE=true \
-e DATAPOWER_INTERACTIVE=true \
-p 9090:9090 \
-p 9022:22 \
-p 5554:5554 \
-p 8000-8010:8000-8010 \
ibmcom/datapower
When I create files in File Management or save a DataPower object configuration, I do not see the changes reflected in the directories on my machine.
I would also expect to be able to create files in my host directories and see them reflected in /drouter/config and /drouter/local inside the container, as well as in the management GUI.
The volume mounts don't seem to be working correctly, or perhaps I misunderstand something about DataPower or Docker.
I have tried mounting volumes in other Docker containers under the same paths and that works fine, so I don't think it's an issue with the file sharing settings in Docker.
The file system structure changed in version 10.0. There is documentation in the IBM Knowledge Center showing the updated locations for config:, local:, etc., but the Docker Hub page has not been updated to reflect that yet.
Mounting the volumes like this fixed it for me:
-v $PWD/config:/opt/ibm/datapower/drouter/config \
-v $PWD/local:/opt/ibm/datapower/drouter/local \
It seems the container persists its configuration here instead. This is different from the instructions on Docker Hub.
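For reference, the full run command from the question with only the mount targets updated to the 10.0 paths (everything else unchanged) would be:
docker run -it \
-v $PWD/config:/opt/ibm/datapower/drouter/config \
-v $PWD/local:/opt/ibm/datapower/drouter/local \
-e DATAPOWER_ACCEPT_LICENSE=true \
-e DATAPOWER_INTERACTIVE=true \
-p 9090:9090 \
-p 9022:22 \
-p 5554:5554 \
-p 8000-8010:8000-8010 \
ibmcom/datapower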
Related
I have installed Appwrite, but a new directory containing the docker-compose.yml and .env has not been created. The terminal gives a success message, and Docker is working properly.
I installed Appwrite with the following command:
docker run -it --rm \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume "$(pwd)"/appwrite:/usr/src/code/appwrite:rw \
--entrypoint="install" \
appwrite/appwrite:0.13.4
Have you installed Appwrite on this machine before? If you have, then it isn't something to be concerned about. Notice how your docker-compose is pointing to /usr/src/code/appwrite/docker-compose.yml? That is probably from your past installation, and your local Docker volumes still point there.
Everything will still work correctly :)
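If you want to double-check, one way (a sketch; replace <volume-name> with an actual match from the first command) is to look for the named volumes a previous install would have left behind:
# List Docker volumes left over from a previous Appwrite install
docker volume ls --filter name=appwrite
# Inspect a match to see where Docker keeps its data on disk
docker volume inspect <volume-name>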
OS: Windows Server 2016
I have an app written in Go running in a Docker container. The app has to access "D:\test.db". How can I do that?
Use Docker volumes, by passing the -v or --mount flag when you start your container.
A modified example from the Docker docs:
$ docker run -d \
--mount source=myvol2,target=/app \
nginx:latest
You just need to replace nginx:latest with your image name and adapt source and target as needed.
Another example (also from the docs) using -v and mounting in read-only mode:
$ docker run -d \
-v nginx-vol:/usr/share/nginx/html:ro \
nginx:latest
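Since the goal here is to reach an existing file on the host (D:\test.db) rather than a named volume, a bind mount is probably closer to what you need. A rough sketch, assuming Docker Desktop has drive D: shared and with your-go-app as a placeholder image name (the exact path syntax can vary by Docker version and shell); the app would then open /data/test.db:
$ docker run -d \
  -v "d:/:/data" \
  your-go-app:latest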
I've been given a Docker container which is run via a bash script. The container should set up a PHP web app; it then goes on to call other scripts and containers. It seems to work fine for others, but for me it's throwing an error.
This is the code:
sudo docker run -d \
--name eluci \
-v ./config/eluci.settings:/mnt/eluci.settings \
-v ./config/elucid.log4j.settings.xml:/mnt/eluci.log4j.settings.xml \
--link eluci-database:eluci-database \
/opt/eluci/run_eluci.sh
This is the error
docker: Error response from daemon: create ./config/eluci.settings:
"./config/eluci.settings" includes invalid characters for a local
volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to
pass a host directory, use absolute path.
I'm running Docker on a CentOS VM using VirtualBox on a Windows 7 host.
From googling, it seems to be something to do with the mount. However, I don't want to change it in case the setting is relied upon by another Docker container. I still have a few more bash scripts to run, which should orchestrate the rest of the build process. As a complete newbie to Docker, this has got me stumped.
The -v option of docker run does not accept relative paths; you should provide an absolute path. The command can be rewritten as:
sudo docker run -d \
--name eluci \
-v "/$(pwd)/config/eluci.settings:/mnt/eluci.settings" \
-v "/$(pwd)/config/elucid.log4j.settings.xml:/mnt/eluci.log4j.settings.xml" \
--link eluci-database:eluci-database \
/opt/eluci/run_eluci.sh
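As a quick sanity check, you can confirm the files actually exist at those absolute paths before running (a small sketch):
ls -l "$(pwd)/config/eluci.settings" "$(pwd)/config/elucid.log4j.settings.xml"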
I would like to know if it is recommended to use the openshift/origin image in a production environment, or should I install OpenShift natively?
If I can use the Docker image in production, how should I upgrade it when a new version of the image is released? I know I lose all configuration and application definitions when starting a new Docker container. Is there a way to keep them? Mapping volumes? Which volumes should be mapped?
The command line I am using is:
$ sudo docker run -d --name "origin" \
--privileged --pid=host --net=host \
-v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
openshift/origin start
PS: There is a related question I asked yesterday, but it does not focus on the same problem.
Update on 20/01/2016
I tried @Clayton's suggestion of mapping the folder /var/lib/origin, which worked well before 17 Jan 2016. Then I started getting a "Failed to mount" issue when deploying the router and some other applications. When I changed it back to mapping /var/lib/origin/openshift.local.volumes, it has seemed OK until now.
If you have the /var/lib/origin directory mounted, when your container reboots you will still have all your application data. That would be the recommended way to run in a container.
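Based on the command in the question, that means replacing the openshift.local.volumes mount with the whole /var/lib/origin directory; a sketch, with all other flags kept as-is:
$ sudo docker run -d --name "origin" \
  --privileged --pid=host --net=host \
  -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw \
  -v /var/lib/origin:/var/lib/origin \
  openshift/origin start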
TL;DR My docker save/export isn't working and I don't know why.
I'm using boot2docker for Mac.
I've created a Wordpress installation proof of concept, and am using BusyBox for both the MySQL data container and the main file-system container. I created these containers using:
> docker run -v /var/lib/mysql --name=wp_datastore -d busybox
> docker run -v /var/www/html --name=http_root -d busybox
Running docker ps -a shows two containers, both based on busybox:latest. So far, so good. Then I create the Wordpress and MySQL containers, pointing to their respective data containers:
>docker run \
--name mysql_db \
-e MYSQL_ROOT_PASSWORD=somepassword \
--volumes-from wp_datastore \
-d mysql
>docker run \
--name=wp_site \
--link=mysql_db:mysql \
-p 80:80 \
--volumes-from http_root \
-d wordpress
I go to my URL (the boot2docker IP) and there's a brand-new Wordpress application. I go ahead and set up the Wordpress site by adding a theme and some images. I then docker inspect http_root and, sure enough, the filesystem changes are all there.
I then commit the changed containers:
>docker commit http_root evilnode/http_root:dev
>docker commit wp_datastore evilnode/wp_datastore:dev
I verify that my new images are there. Then I save the images:
> docker save -o ~/tmp/http_root.tar evilnode/http_root:dev
> docker save -o ~/tmp/wp_datastore.tar evilnode/wp_datastore:dev
I verify that the tar files are there as well. So far, so good.
Here is where I get a bit confused. I'm not entirely sure if I need to, but I also export the containers:
> docker export http_root > ~/tmp/http_root_snapshot.tar
> docker export wp_datastore > ~/tmp/wp_datastore_snapshot.tar
So I now have 4 tar files:
http_root.tar (saved image)
wp_datastore.tar (saved image)
http_root_snapshot.tar (exported container)
wp_datastore_snapshot.tar (exported container)
I SCP these tar files to another machine, then proceed to build as follows:
>docker load -i ~/tmp/wp_datastore.tar
>docker load -i ~/tmp/http_root.tar
The images evilnode/wp_datastore:dev and evilnode/http_root:dev are loaded.
>docker run -v /var/lib/mysql --name=wp_datastore -d evilnode/wp_datastore:dev
>docker run -v /var/www/html --name=http_root -d evilnode/http_root:dev
If I understand correctly, containers were just created based on my images.
Sure enough, the containers are there. However, if I docker inspect http_root, and go to the file location aliased by /var/www/html, the directory is completely empty. OK...
So then I think I need to import into the new containers since images don't contain file system changes. I do this:
>cat http_root_snapshot.tar | docker import - http_root
I understand this to mean that I am importing a file system delta from one container into another. However, when I go back to the location aliased by /var/www/html, I see the same empty directory.
How do I export the changes from these containers?
Volumes are not saved or exported with an image; they live outside the container's filesystem, which is why your tar files do not contain the data. The proper way to manage data in Docker is to use a data container and back up the volume with a command like
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
or use docker cp to copy the data out and transfer it around. See https://docs.docker.com/userguide/dockervolumes/#backup-restore-or-migrate-data-volumes
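Applied to this question, a sketch using the container names from above (the archive file name is my own) might look like:
# On the source machine: archive the MySQL volume into the current directory
> docker run --rm --volumes-from wp_datastore -v $(pwd):/backup busybox tar cvf /backup/wp_datastore_data.tar /var/lib/mysql
# On the target machine: recreate the data container, then restore into it
> docker run -v /var/lib/mysql --name=wp_datastore -d busybox
> docker run --rm --volumes-from wp_datastore -v $(pwd):/backup busybox tar xvf /backup/wp_datastore_data.tar -C /
The same pattern with /var/www/html would carry the http_root content across.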