ownCloud Docker File Synchronization

I installed ownCloud with Docker as follows:
docker pull owncloud
docker run -v /var/www/owncloud:/var/www/html -d -p 80:80 owncloud
That works. I also set up a client with access to the server, which works as well.
The issue: when I copy a file to the volume from the command line, it is copied into the container (also good), but my clients are not synced. It looks like clients are only synced if the web interface is used.
Any idea how to fix this?
thanks ralfg
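For context on why this might happen: ownCloud sync clients only see files that ownCloud has registered in its database, so files copied straight into the volume from the shell stay invisible until a file scan runs. A minimal sketch, assuming the image keeps the occ tool at /var/www/html/occ and serves PHP as www-data (the container name is a placeholder):
# trigger a rescan so files added outside the web interface get picked up
docker exec -u www-data <owncloud_container> php /var/www/html/occ files:scan --all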

Related

Running Docker Tomcat in Google Cloud Compute instance

I am trying a basic Docker test on a GCP Compute instance. I pulled a Tomcat image from the official repo, then ran a command to start the container. The command is:
docker run -te --rm -d -p 80:8080 tomcat
It created a container for me with the ID below.
3f8ce49393c708f4be4d3d5c956436e000eee6ba7ba08cba48ddf37786104a37
If I do docker ps, the container shows up in the list.
However, the Tomcat admin console does not open. The reason is that the Tomcat image tries to create its config files under /usr/local, but that is a read-only file system, so the config files are not created.
Is there a way to ask Docker to create the files in a different location? Or, is there any other way to handle it?
Thanks in advance.
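One commonly suggested workaround, sketched here under the assumption that the official image keeps Tomcat under /usr/local/tomcat: put the directories Tomcat needs to write on named Docker volumes, so the writes land outside the read-only filesystem while the image's existing content is copied into the volumes on first use.
# named volumes for the writable parts of the Tomcat tree (paths assume the official image layout)
docker volume create tomcat-conf
docker volume create tomcat-logs
docker run -d --rm -p 80:8080 -v tomcat-conf:/usr/local/tomcat/conf -v tomcat-logs:/usr/local/tomcat/logs tomcat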

How to run sitespeed.io in an Apache/Nginx server?

I have recently heard about sitespeed.io and started using it to measure the performance of my site.
I am running it in a Docker container on my GCP cloud instance.
The problem is that every time I run the command, it stores the result in a particular directory, sitespeed-result, and then I need to copy the whole thing to my local Windows machine to view the index.html file.
Is it possible to serve this from a server like Apache? For example, I can run an Apache container on my Docker host, but how do I map the sitespeed.io result so that it is available at http://my-gcp-instance:80, where my Apache container is running on port 80?
sudo docker run -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:13.3.0 https://mywebsite.com
Sorry for posting the question, but I got it working myself.
sudo docker run -dit --name my-apache -p 8080:80 -v "$(pwd)":/usr/local/apache2/htdocs/ httpd:2.4
$(pwd) is where I am storing the sitespeed results.
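A slightly tighter variant of the same idea, with the sitespeed-result directory itself mounted as the Apache document root (paths are assumptions based on the commands above), so the reports are reachable directly under http://my-gcp-instance:8080/:
# serve only the results directory; each run appears as a dated sub-folder with its own index.html
sudo docker run -dit --name my-apache -p 8080:80 -v "$(pwd)/sitespeed-result":/usr/local/apache2/htdocs/ httpd:2.4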

How to scp files from local machine directly to a docker container on a remote machine (without having to repeatedly copy)?

I'm new to Docker and I want to copy files to/from my local machine directly to a docker container that's on a remote machine without having to scp files from my local to my remote and then using docker cp to copy those files to the container. My container does not have an SSH server installed on it nor do I want to rebuild my image to include it.
I tried following the solution given by the second answer here: How to SSH into Docker?. I ran the following command on my remote machine that hosts Docker:
docker run -d -p 2222:22 -v /var/run/docker.sock:/var/run/docker.sock -e CONTAINER=kind_tu -e AUTH_MECHANISM=noAuth jeroenpeeters/docker-ssh
Where kind_tu is the name of my running container.
On my local machine I then used ssh -L 2222:localhost:2222 remote_account_name@remote_ip and then scp -P 2222 test_file remote_account_name@remote_ip:/destination/path (I'm also not familiar with port forwarding, so I'm not sure if my notation is correct). When doing this, I get the following:
ssh: connect to host remote_ip port 2222: Connection refused
lost connection
Could this be an issue with the firewall since the remote machine is on my school's campus?
In all, I'm not sure if what I'm doing is even remotely correct.
Following up on your comment replying to David's answer, here is how to bind-mount the directory for your visualization files into your container:
On the host system, create a directory, e.g. mkdir /home/sarah/viz/. Then mount it into your Docker container, using e.g.
docker run -v /home/sarah/viz:/data/viz … kind_tu …
Your viz software inside the kind_tu container should place its files in the directory /data/viz, which then lands in /home/sarah/viz/ on the host system, from where you can download them to your local computer with scp, rsync, or however else you connect to the remote machine.
You can also use docker-compose to have a more persistent environment. Write a file docker-compose.yml with the bind-mount and all the other configuration of the kind_tu container:
version: '3'
services:
  kind_tu:
    image: your_viz_software_image:latest
    volumes:
      - /home/sarah/viz:/data/viz:rw
    …
Then, instead of docker run …, you can just run docker-compose up -d and everything acts according to the configuration in the compose file.
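To tie this back to the original scp question, the resulting workflow could look roughly like this (the hostname and paths are just the placeholders from the example above):
# copy from the local machine straight into the bind-mounted host directory
scp test_file remote_account_name@remote_ip:/home/sarah/viz/
# the file is then immediately visible inside the kind_tu container under /data/viz/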

How to debug persistent data volume mount for Docker Odoo container?

I followed the standard Odoo container instructions on Docker to start the required postgres and odoo servers, and tried to pass host directories as persistent data storage for both as indicated in those instructions:
sudo mkdir /tmp/postgres /tmp/odoo
sudo docker run -d -v /tmp/postgres:/var/lib/postgresql/data/pgdata -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name db postgres:10
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The Odoo container shows messages that it starts up fine, but when I point my web browser at http://localhost:8069 I get no response from the server. By contrast, if I omit the -v argument from the Odoo docker run command, my web browser connects to the Odoo server fine, and everything works great.
I searched and saw other people also struggling to get the details of persistent data volumes working, e.g. Odoo development on Docker, Encountered errors while bringing up the project.
This seems like a significant gap in Docker's standard use case that users need better information on how to debug:
How to debug why the host volume mounting doesn't work for the odoo container, whereas it clearly does work for the postgres container? I'm not getting any insight from the log messages.
In particular, how to debug whether the container requires the host data volume to be pre-configured in some specific way, in order to work? For example, the fact that I can get the container to work without the -v option seems like it ought to be helpful, but also rather opaque. How can I use that success to inspect what those requirements actually are?
Docker is supposed to help you get a useful service running without needing to know the guts of its internals, e.g. how to set up its internal data directory. Mounting a persistent data volume from the host is a key part of that, e.g. so that users can snapshot, backup and restore their data using tools they already know.
I figured out some good debugging methods that both solved this problem and seem generally useful for figuring out Docker persistent data volume issues.
Test 1: can the container work with an empty Docker volume?
This is a really easy test: just create a new Docker volume and pass that in your -v argument (instead of a host directory absolute path):
sudo docker volume create hello
sudo docker run -v hello:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The odoo container immediately worked this way (i.e. my web browser was able to connect to the Odoo server). This showed that it could work fine with an (initially) empty data directory. The obvious question then is why it didn't work with an empty host directory. I had read that Docker containers can be persnickety about UID/GID ownership, so my next question was how to figure out what the container expects.
Test 2: inspect the running container's file system
I used docker exec to get an interactive bash shell in the running container:
sudo docker exec -ti odoo bash
Inside this shell I then looked at the data directory ownership, to get numeric UID and GID values:
ls -dn /var/lib/odoo
This showed me the UID/GID values were 101:101. (You can exit from this shell by just typing Control-D)
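If you don't need an interactive shell, the same check can be done in one command (a sketch, assuming the container is still running and named odoo):
# print the numeric UID:GID of the data directory without opening a shell
sudo docker exec odoo stat -c '%u:%g' /var/lib/odoo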
Test 3: re-run container with matching host-directory UID:GID
I then changed the ownership of my host directory to 101:101 and re-ran the odoo container with my host-directory mount:
sudo chown 101:101 /tmp/odoo
sudo docker stop odoo
sudo docker rm odoo
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
Success! Finally the odoo container worked properly with a host-directory mount. While it's annoying the Odoo docker docs don't mention anything about this, it's easy to debug if you know how to use these basic tests.
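As a final cross-check, you can also ask the image itself which user it declares, without starting a container (a sketch; it prints whatever USER the image's Dockerfile set, which may be a name rather than a numeric ID, or nothing at all):
sudo docker image inspect -f '{{.Config.User}}' odoo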

Jenkins with publish over ssh - unable to migrate server configuration

I am using Jenkins (2.32.2) Docker container with the Publish over ssh plugin (1.17) and I have added a new server manually.
The newly added server is another Docker container (both running with docker-compose) and I am using a password to connect to it, and everything works just fine when doing it manually, but the problem is when I'm rebuilding the image.
I am already using a volume for the Jenkins home directory and it works just fine. The problem is only on the initial installation (e.g. image build, not a container restart).
It seems like the problem is with the secret key, and I found out that I also need to copy some keys when creating my image.
See the credentials section at Publish over ssh documentation
I tried to copy the whole "secrets" directory and the following files: secret.key, secret.key.not-so-secret, identity.key.enc, but I still can't connect after a fresh install.
What am I missing?
Edited:
I just tried to copy the whole jenkins_home directory in my Dockerfile and it works, so I guess the problem is with the first load or something? Maybe Jenkins changes the key/salt on the first load?
Thanks.
Try to push the Jenkins config out to the Docker host, or to the OS where the Docker host is installed:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
or
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v ./local/conf:/var/jenkins_home jenkins
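If the goal is instead to bake the working configuration into a rebuilt image, one possible sketch (the file names are assumptions about where Jenkins keeps its encryption keys and credential store) is to copy that material out of a known-good container and COPY it back into /var/jenkins_home in the Dockerfile used for the rebuild:
# secrets/ holds the master key material; credentials.xml holds the encrypted passwords
mkdir -p seed
docker cp myjenkins:/var/jenkins_home/secrets ./seed/secrets
docker cp myjenkins:/var/jenkins_home/credentials.xml ./seed/credentials.xml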
