Is the OpenShift Origin Docker image production ready?

I would like to know whether it is recommended to use that image in a production environment, or whether I should install OpenShift natively.
If I can use the Docker image in production, how should I upgrade it when a new version of the image is released? I know I lose all configuration and application definitions when starting a new Docker container. Is there a way to keep them? Mapping volumes? Which volumes should be mapped?
The command line I am using is:
$ sudo docker run -d --name "origin" \
--privileged --pid=host --net=host \
-v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
openshift/origin start
P.S. There is a related question I asked yesterday, but it does not focus on the same problem.
Update on 20/01/2016
I have tried @Clayton's suggestion of mapping the folder /var/lib/origin, which worked well before 17 Jan 2016. Then I started getting a Failed to mount error when deploying the router and some other applications. Since changing it back to mapping /var/lib/origin/openshift.local.volumes, it has been OK so far.

If you have the /var/lib/origin directory mounted, you will still have all your application data when your container restarts. That is the recommended way to run OpenShift in a container.
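A sketch of what that looks like, adapting the run command from the question (the broad /var/lib/origin mount replaces the narrower openshift.local.volumes one; the other flags are unchanged):
$ sudo docker run -d --name "origin" \
    --privileged --pid=host --net=host \
    -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw \
    -v /var/lib/origin:/var/lib/origin \
    openshift/origin start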

Related

Appwrite successfully installed but a new directory isn't created

I have installed Appwrite, but a new directory containing the docker-compose.yml and .env has not been created. The terminal is giving a success message. Docker is also working properly.
I installed appwrite through following commands:
docker run -it --rm \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume "$(pwd)"/appwrite:/usr/src/code/appwrite:rw \
--entrypoint="install" \
appwrite/appwrite:0.13.4
Have you installed Appwrite before on this machine? If you have, then it isn't something to be concerned about. Notice how your docker compose is pointing to /usr/src/code/appwrite/docker-compose.yml? This is probably your past installation, and your local Docker volumes still point there.
Everything will still work correctly :)
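If you want to confirm that this is a leftover from an earlier install, a couple of quick checks (the paths come from the install command above):
# Does the compose file from a previous run already exist on the host?
ls -la ./appwrite/docker-compose.yml ./appwrite/.env
# Are there Appwrite volumes left over from an earlier installation?
docker volume ls | grep appwrite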

Volumes not working with DataPower and Docker

I am using the IBM DataPower Docker image for local development. I am using this image:
https://hub.docker.com/layers/ibmcom/datapower/latest/images/sha256-35b1a3fcb57d7e036d60480a25e2709e517901f69fab5407d70ccd4b985c2725?context=explore
DataPower version: IDG.10.0.1.0
System: Docker for Mac
Docker version 19.03.13
I am running the container with the following config:
docker run -it \
-v $PWD/config:/drouter/config \
-v $PWD/local:/drouter/local \
-e DATAPOWER_ACCEPT_LICENSE=true \
-e DATAPOWER_INTERACTIVE=true \
-p 9090:9090 \
-p 9022:22 \
-p 5554:5554 \
-p 8000-8010:8000-8010 \
ibmcom/datapower
When I create files in File Management or save a DataPower object configuration, I do not see the changes reflected in the directory on my machine.
I would also expect to be able to create files in my host directory and see them reflected in /drouter/config and /drouter/local in the container, as well as in the management GUI.
The volume mounts don't seem to be working correctly, or perhaps I misunderstand something about DataPower or Docker.
I have tried mounting volumes in other Docker containers under the same paths and that works fine, so I don't think it's an issue with the file-sharing settings in Docker.
The file system structure changed in version 10.0. There is documentation in the IBM Knowledge Center showing the updated locations for config:, local:, etc., but the Docker Hub page has not been updated to reflect that yet.
Mounting the volumes like this fixed it for me:
-v $PWD/config:/opt/ibm/datapower/drouter/config \
-v $PWD/local:/opt/ibm/datapower/drouter/local \
It seems the container persists configuration here instead. This differs from the instructions on Docker Hub.
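Putting that together with the run command from the question, the full invocation would look something like this (same flags as before, only the mount targets changed to the 10.x paths):
docker run -it \
  -v $PWD/config:/opt/ibm/datapower/drouter/config \
  -v $PWD/local:/opt/ibm/datapower/drouter/local \
  -e DATAPOWER_ACCEPT_LICENSE=true \
  -e DATAPOWER_INTERACTIVE=true \
  -p 9090:9090 \
  -p 9022:22 \
  -p 5554:5554 \
  -p 8000-8010:8000-8010 \
  ibmcom/datapower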

Docker basics: how to keep installed packages and edited files?

Do I understand Docker correctly?
docker run -it --rm --name verdaccio -p 4873:4873 -d verdaccio/verdaccio
pulls verdaccio if it does not yet exist on my server and runs it on a specific port. -d detaches it so I can leave the terminal and keep it running, right?
docker exec -it --user root verdaccio /bin/sh
lets me open a shell inside the running container. However, whatever apk packages I add will be lost if I rm the container and then run the image again, as will any edited files. So what's the use of this? Can I keep the changes in the image?
As I need to edit the config.yaml at /verdaccio/conf/config.yaml (in the container), is my only option for keeping these changes to detach the data from the running instance? Is there another way?
V_PATH=/path/on/my/server/verdaccio; docker run -it --rm --name verdaccio \
  -p 4873:4873 \
  -v $V_PATH/conf:/verdaccio/conf \
  -v $V_PATH/storage:/verdaccio/storage \
  -v $V_PATH/plugins:/verdaccio/plugins \
  verdaccio/verdaccio
However, this command would throw:
fatal--- cannot open config file /verdaccio/conf/config.yaml: ENOENT: no such file or directory, open '/verdaccio/conf/config.yaml'
You can use docker commit to build a new image based on the container.
A better approach, however, is to use a Dockerfile that builds an image based on verdaccio/verdaccio with the necessary changes in it. This makes the process easily repeatable (for example, if a new version of the base image comes out).
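For instance, a minimal sketch of such a Dockerfile, assuming the config path from your question and an edited config.yaml sitting next to the Dockerfile:
# Build a custom image with your edited config baked in
FROM verdaccio/verdaccio
COPY config.yaml /verdaccio/conf/config.yaml
Build it with docker build -t my/verdaccio . and run that image in place of verdaccio/verdaccio.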
A further option is the use of volumes as you already mentioned.
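Regarding the ENOENT error: bind-mounting an empty host directory over /verdaccio/conf hides the config.yaml that ships inside the image, so there is nothing left for verdaccio to read. One workaround is to seed the host directory from the image before mounting it; a sketch, assuming the default config lives at the path shown in your error message:
V_PATH=/path/on/my/server/verdaccio
mkdir -p $V_PATH/conf $V_PATH/storage $V_PATH/plugins
# Copy the image's default config to the host, then run with the volume mounts as before
docker run --rm --entrypoint cat verdaccio/verdaccio /verdaccio/conf/config.yaml > $V_PATH/conf/config.yaml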

Docker volumes not mounting

I am using this Docker image (https://hub.docker.com/r/karai17/lapis-centos/~/dockerfile/) and the following docker run script, and it's not setting up my volumes. When I open the container with bash, both /var/www and /var/data are empty.
docker run -dti -p 8888:2808 ^
  -v "C:\Users\karai\Documents\GitHub\project\data:/var/data" ^
  -v "C:\Users\karai\Documents\GitHub\project\www:/var/www" ^
  --name project ^
  karai17/lapis-centos:latest
This was working just the other day; the only change I've made to the Docker image was to add a few more Lua rocks. All of the data is definitely there, so I am not sure what is going on.
Not sure what happened, but after doing a factory reset of Docker for Windows, the issue was resolved.
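If it happens again, it may be worth checking what the daemon actually mounted before resorting to a reset; this is a generic Docker check, not specific to this image:
# Show the mounts Docker recorded for the container
docker inspect --format "{{ json .Mounts }}" project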

Docker build project from Docker Hub

I'd like to set up OpenProject using Docker. There are several decent options on Docker Hub; so far this one looks like the best option. I'd like to clone it, change the default database password (because I find it unsafe), and then build and run it. How should I proceed?
I've tried docker build -t myrepo/openproject dockerfile_location. Then I get an error that git does not exist. I know I could add RUN apt-get install git, but afterwards I encounter the error checking for pg_config... no. To fix that, I would need to install Postgres, but that means putting the code and the data in the same container, which is exactly the situation I'm trying to avoid.
How can I solve the problem?
You don't have to put the Postgres binaries and data in the same container. pg_config is just a utility that reports how PostgreSQL was built; it is used to locate headers and libraries when compiling client code, so you only need the development package, not a running server.
pg_config is in postgresql-devel (libpq-dev on Debian/Ubuntu).
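In the image you are building, it should therefore be enough to install git plus the client headers; a sketch of the relevant Dockerfile lines, assuming a Debian/Ubuntu base image:
# Install git (needed by the build) and the PostgreSQL client headers
# that provide pg_config -- no database server required
RUN apt-get update && apt-get install -y git libpq-dev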
In essence:
# container where your data lives
docker run -d --name openproject-postgres-data -v /data busybox true
# container where postgres runs
docker run -d --name openproject-postgres --volumes-from openproject-postgres-data -e USER=super -e PASS=password paintedfox/postgresql
# container that actually runs your application and links to your db container
docker run -d --name openproject --link openproject-postgres:postgres -p 8080:80 abevoelker/openproject
