I am trying to get InterActor Community Edition running on my Ubuntu 16.04 machine by following the instructions on this page:
http://docs.graphileon.com/interactor/Getting_started/Setup_InterActor/Installation.html
I am fine up to the point where I execute the docker run ... command.
Also, I have Neo4j CE 3.2.5 already running at that time.
When I open the start page http://localhost:8000, it shows a login page.
Contrary to the description, no settings page is shown.
I also opened the settings page directly by typing localhost:8000/settings in the browser, and I got it, but obviously with missing form inputs and no styles rendered (compared to the picture on the installation instructions page).
I thought files were missing, so I did a docker exec -it interactor /bin/bash to check the sources (especially the JS and CSS files) inside the running container under the directory /var/www/html/interactor/. They seemed to be OK, but I do not know how the permissions have to be set, so here they are in the running container:
root@ffb64b944023:/var/www/html/interactor# ll
total 168
drwxr-xr-x 21 www-data www-data 4096 Oct 2 18:32 ./
drwxr-xr-x 5 root root 4096 Oct 2 18:32 ../
-rw-r--r-- 1 www-data www-data 446 Sep 6 14:26 .htaccess
-rw-r----- 1 www-data www-data 90804 Sep 6 14:26 INTERACTOR_END_USER_LICENSE_AGREEMENT.pdf
drwxr-x--- 4 www-data www-data 4096 Sep 6 14:27 css/
drwxr-x--- 4 www-data www-data 4096 Sep 6 14:27 dashboard/
-rw-r----- 1 www-data www-data 5430 Jun 21 15:19 favicon.ico
-rw-r--r-- 1 www-data www-data 5738 Sep 6 14:26 favicon.png
drwxr-x--- 2 www-data www-data 4096 Sep 6 14:27 images/
-rw-r--r-- 1 www-data www-data 6929 Sep 6 14:26 index.php
drwxr-x--- 4 www-data www-data 4096 Sep 6 14:27 js/
drwxr-x--- 3 www-data www-data 4096 Oct 1 19:31 persistent/
drwxr-x--- 14 www-data www-data 4096 Sep 6 14:27 php/
drwxr-x--- 2 www-data www-data 4096 Sep 6 14:27 scripts/
drwxr-x--- 11 www-data www-data 4096 Oct 2 18:32 settings/
drwxr-x--- 2 www-data www-data 4096 Sep 6 14:27 templates/
-rw-r----- 1 www-data www-data 98 Sep 6 14:26 version.json
So I tried to open the sources linked in the page source in the browser; some of the files opened and some did not, due to a 403 error.
I wonder if the provided image is somehow misconfigured regarding permissions, or if I am doing something wrong.
The strange thing is that I already had it running once, after I had clicked all of the linked sources in the page source and reloaded the settings page.
When I finished playing around in InterActor I did a docker stop interactor. Neither a docker start interactor nor a docker restart interactor gave me a working InterActor instance back, and I cannot get it working anymore. What am I doing wrong?
I can reproduce this issue, but so far I cannot find a permanent solution.
The only thing that makes it work is to connect to the container:
sudo docker exec -i -t interactor /bin/bash
and then run:
chown www-data:www-data /var/www/html/interactor -R
even if all the files already have exactly that ownership.
You have to do this every time you start the container.
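As a convenience, assuming the container is named interactor as above, the two steps can be combined into a single command run from the host after each start (a sketch based on the commands above):
sudo docker exec interactor chown -R www-data:www-data /var/www/html/interactor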
We are working on a fix for this.
Disclaimer: I am a developer for Graphileon.
The relevant documentation
https://docs.docker.com/storage/bind-mounts/
says
For some development applications, the container needs to write into
the bind mount, so changes are propagated back to the Docker host. At
other times, the container only needs read access.
This example modifies the one above but mounts the directory as a
read-only bind mount, by adding ro to the (empty by default) list of
options, after the mount point within the container. Where multiple
options are present, separate them by commas.
I expect that to mean that there is no way to write to the folder that is mounted in that way from within the container. But if I minimally modify the example to give me a shell session and mount the root filesystem
~ $ docker run \
-it \
--name devtest2 \
--mount type=bind,source=/,target=/app,readonly \
ubuntu:latest
I see that I have write access as root to the entirety of the host filesystem from within the container.
root@bde1f19c1de2:/# cd /app/home/
# Creates directory in the host /home folder
root@bde1f19c1de2:/app/home# mkdir patata
What does it mean, then, that the mount is "readonly"?
How do I make it actually read-only?
I observe this behavior with docker 17.05 as it comes with Ubuntu trusty:
$ docker --version
Docker version 17.05.0-ce, build 89658be
I don't know how you can use the --mount option, as it is only available for standalone containers from Docker 17.06 onwards, and yours is 17.05.
Reference:
Originally, the -v or --volume flag was used for standalone containers
and the --mount flag was used for swarm services. However, starting
with Docker 17.06, you can also use --mount with standalone
containers. In general, --mount is more explicit and verbose. The
biggest difference is that the -v syntax combines all the options
together in one field, while the --mount syntax separates them. Here
is a comparison of the syntax for each flag.
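For reference, on Docker versions before 17.06 the same read-only bind mount from the question can be expressed with the older -v syntax (a sketch):
docker run -it --name devtest2 -v /:/app:ro ubuntu:latest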
That said, I tried it out on Docker 17.09 and still saw the same result as you described, only to realise that the "readonly" option is working, but your Linux permissions are such that anyone is allowed to write there!
Since you are mounting / and writing to the home directory, which has 0755 permissions by default:
0755 means public (anyone) read and execute, and the execute permission is what allows your mkdir command to run there.
If you mount a path or folder which does not have public access, you will see that the readonly option works, irrespective of whether you are the root user inside the container or not, i.e. you won't be allowed to write!
As an example, I am mounting a home directory which has 0770 permissions, i.e. no public access at all:
[root@jakku-admin-1 ~]# pwd
/root
[root@jakku-admin-1 ~]# ll
total 8
drwxr-xr-x. 2 root root 4096 Feb 7 21:19 archive
drwxrwx---. 2 root root 4096 Feb 7 20:38 home
[root@jakku-admin-1 ~]# docker run -it --name devtest --mount type=bind,source=`pwd`/home,target=/app,readonly ubuntu:latest
root@3ce55bba8904:/# ll
total 16
drwxr-xr-x. 22 root root 253 Feb 7 21:20 ./
drwxr-xr-x. 22 root root 253 Feb 7 21:20 ../
-rwxr-xr-x. 1 root root 0 Feb 7 21:20 .dockerenv*
drwxrwx---. 2 root root 4096 Feb 7 20:38 app/
drwxr-xr-x. 2 root root 4096 Jan 12 21:10 bin/
drwxr-xr-x. 2 root root 6 Apr 24 2018 boot/
drwxr-xr-x. 5 root root 360 Feb 7 21:20 dev/
drwxr-xr-x. 29 root root 4096 Feb 7 21:20 etc/
drwxr-xr-x. 2 root root 6 Apr 24 2018 home/
drwxr-xr-x. 8 root root 96 May 23 2017 lib/
drwxr-xr-x. 2 root root 34 Jan 12 21:10 lib64/
drwxr-xr-x. 2 root root 6 Jan 12 21:09 media/
drwxr-xr-x. 2 root root 6 Jan 12 21:09 mnt/
drwxr-xr-x. 2 root root 6 Jan 12 21:09 opt/
dr-xr-xr-x. 592 root root 0 Feb 7 21:20 proc/
drwx------. 2 root root 37 Jan 12 21:10 root/
drwxr-xr-x. 5 root root 58 Jan 16 01:20 run/
drwxr-xr-x. 2 root root 4096 Jan 16 01:20 sbin/
drwxr-xr-x. 2 root root 6 Jan 12 21:09 srv/
dr-xr-xr-x. 13 root root 0 Jan 29 22:42 sys/
drwxrwxrwt. 2 root root 6 Jan 12 21:10 tmp/
drwxr-xr-x. 10 root root 105 Jan 12 21:09 usr/
drwxr-xr-x. 11 root root 139 Jan 12 21:10 var/
root@3ce55bba8904:/# cd app/
root@3ce55bba8904:/app# ll
total 4
drwxrwx---. 2 root root 4096 Feb 7 20:38 ./
drwxr-xr-x. 22 root root 253 Feb 7 21:20 ../
root@3ce55bba8904:/app# mkdir test
mkdir: cannot create directory 'test': Read-only file system
root@3ce55bba8904:/app# touch test
touch: cannot touch 'test': Read-only file system
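Whether Docker actually applied the mount as read-only can also be checked from the host with docker inspect (a sketch, assuming the container name devtest from above; the RW field of the mount entry will be false):
docker inspect --format '{{json .Mounts}}' devtest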
I am trying to learn Docker, so I am trying to create an Arch Linux image. For now I am not worried about size, but I am stuck on how to go further and use this as my development environment for a project. My goal is to create and use different Arch Linux images for my different projects separately.
1) switch to root in a terminal
2) mkdir archlinux
3) pacstrap -i -c -d ./archlinux base
4) echo 'en_US.UTF-8 UTF-8' > ./archlinux/etc/locale.gen
5) arch-chroot ./archlinux locale-gen
6) echo 'LANG=en_US.UTF-8' > ./archlinux/etc/locale.conf
Now the total size of the folder archlinux is 899 MB.
Now I am trying to import it as a Docker image:
cd archlinux
tar -c . | docker import - example_archlinux
tar: ./etc/pacman.d/gnupg/S.gpg-agent: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent.extra: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent.ssh: socket ignored
tar: ./etc/pacman.d/gnupg/S.scdaemon: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent.browser: socket ignored
sha256:2b3ed6536389a1184f402ff5a9d20380a3f4aa2c49bdee31df9c7c10186eb889
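As a side note, docker import can also assign an explicit tag instead of the default latest (a sketch; the tag name base is arbitrary):
tar -c . | docker import - example_archlinux:base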
Now I list the Docker images:
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
example_archlinux latest 2b3ed6536389 About a minute ago 881MB
Now I try to run the image:
# docker run -ti example_archlinux:latest /bin/bash
[root@3863ba31186b /]#
# docker run -ti example_archlinux:latest ls -al
total 52
drwxr-xr-x 1 root root 4096 Oct 16 08:32 .
drwxr-xr-x 1 root root 4096 Oct 16 08:32 ..
-rwxr-xr-x 1 root root 0 Oct 16 08:32 .dockerenv
lrwxrwxrwx 1 root root 7 Jan 5 2018 bin -> usr/bin
drwxr-xr-x 2 root root 4096 Oct 16 08:01 boot
drwxr-xr-x 5 root root 360 Oct 16 08:32 dev
drwxr-xr-x 1 root root 4096 Oct 16 08:32 etc
drwxr-xr-x 2 root root 4096 Jan 5 2018 home
lrwxrwxrwx 1 root root 7 Jan 5 2018 lib -> usr/lib
lrwxrwxrwx 1 root root 7 Jan 5 2018 lib64 -> usr/lib
drwxr-xr-x 2 root root 4096 Jan 5 2018 mnt
drwxr-xr-x 2 root root 4096 Jan 5 2018 opt
dr-xr-xr-x 275 root root 0 Oct 16 08:32 proc
drwxr-x--- 3 root root 4096 Oct 16 08:01 root
drwxr-xr-x 2 root root 4096 Oct 16 08:01 run
lrwxrwxrwx 1 root root 7 Jan 5 2018 sbin -> usr/bin
drwxr-xr-x 4 root root 4096 Oct 16 08:01 srv
dr-xr-xr-x 13 root root 0 Oct 16 08:32 sys
drwxrwxrwt 2 root root 4096 Oct 16 08:01 tmp
drwxr-xr-x 8 root root 4096 Oct 16 08:10 usr
drwxr-xr-x 12 root root 4096 Oct 16 08:01 var
Great, it's working.
Q1: Will Docker not ask for the root login and password, assuming I have set a root password?
I want to create my Django + nginx + PostgreSQL + Redis + git setup. I will install and set up the required packages.
So I am testing whether the run command will save the folders created:
# docker run -ti example_archlinux:latest /bin/bash
[root@9f4e56ce38c5 /]# mkdir hare
[root@9f4e56ce38c5 /]# exit
# docker run -ti example_archlinux:latest ls /hare
ls: cannot access '/hare': No such file or directory
Now for my main question:
Q2: Since I created a folder, why is it not there anymore after I exit?
What is the best way to use a Docker image for my development?
I can't afford for my files not to be there after I exit.
So is there any way that the container is created permanently, so that I can work in it for my development?
OR
Where should I create my source code, on the host or in Docker? I want everything in one place.
Q1: I never tried setting the root password, but usually, when running the container, you'll be logged in as root unless you use the USER Dockerfile instruction, which is the more secure approach. More about it here.
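A minimal sketch of such a Dockerfile, building on the image from the question (the user name appuser is hypothetical):
FROM example_archlinux:latest
# create an unprivileged user; subsequent commands and the container process run as that user
RUN useradd -m appuser
USER appuser
CMD ["/bin/bash"]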
Q2: Every time you remove your container, everything inside of it will be destroyed, so you'll lose the files you've created unless you bound a volume to your host. Volumes are the standard way to go. You can define a volume, for instance, on your docker run command:
docker run -ti -v /host/source/folder:/desired/guest/folder example_archlinux:latest ls -al
Now you can add/remove/change files both from the container and from the host, and they will be persisted. There won't be duplicate files; both sides simply have access to the same data.
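Alternatively, a named volume managed by Docker itself can be used instead of a host path (a sketch; the volume name project_src is hypothetical):
docker volume create project_src
docker run -ti -v project_src:/desired/guest/folder example_archlinux:latest /bin/bash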
more details here
I am trying to copy a file from a Docker container to the host using the command below,
docker cp <container_name>:<file FQN> ./
but I am getting the error below:
Error response from daemon: not a directory
As verified, the file name and container name are valid.
Note: Using Docker on Mac
Thanks for all the answers. After a bit of struggle I found out that the error message was not actually directly related to the docker cp command.
The scenario was: I ran the container with a bind mount to a local file. While the container was running, I deleted that file. The file then somehow got recreated as a folder (probably when I restarted the container).
And whenever I executed some command, Docker gave me that error. Once I recreated the file, the error disappeared.
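A quick way to check whether the bind-mounted path has turned into a directory inside the container is to inspect it with docker exec (a sketch, reusing the placeholders from the question):
docker exec <container_name> ls -ld <file FQN>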
Your command seems correct. Please try it as shown below from your local machine, not from inside the container; unfortunately, running this command from within the container can produce this kind of error.
docker cp [container_name]:[docker dir abs path] [host dir path]
I hope this helps.
Here is a full example of how to copy a file:
$ docker run -it ubuntu /bin/bash
root@9fc8a1af7f23:/#
root@9fc8a1af7f23:/# ll
total 72
drwxr-xr-x 34 root root 4096 Jul 13 21:51 ./
drwxr-xr-x 34 root root 4096 Jul 13 21:51 ../
-rwxr-xr-x 1 root root 0 Jul 13 21:51 .dockerenv*
drwxr-xr-x 2 root root 4096 Feb 14 23:29 bin/
drwxr-xr-x 2 root root 4096 Apr 12 2016 boot/
drwxr-xr-x 5 root root 360 Jul 13 21:51 dev/
drwxr-xr-x 45 root root 4096 Jul 13 21:51 etc/
drwxr-xr-x 2 root root 4096 Apr 12 2016 home/
drwxr-xr-x 8 root root 4096 Sep 13 2015 lib/
drwxr-xr-x 2 root root 4096 Feb 14 23:29 lib64/
drwxr-xr-x 2 root root 4096 Feb 14 23:28 media/
drwxr-xr-x 2 root root 4096 Feb 14 23:28 mnt/
drwxr-xr-x 2 root root 4096 Feb 14 23:28 opt/
dr-xr-xr-x 288 root root 0 Jul 13 21:51 proc/
drwx------ 2 root root 4096 Feb 14 23:29 root/
drwxr-xr-x 6 root root 4096 Feb 27 19:41 run/
drwxr-xr-x 2 root root 4096 Feb 27 19:41 sbin/
drwxr-xr-x 2 root root 4096 Feb 14 23:28 srv/
dr-xr-xr-x 13 root root 0 Jul 13 21:51 sys/
drwxrwxrwt 2 root root 4096 Feb 14 23:29 tmp/
drwxr-xr-x 11 root root 4096 Feb 27 19:41 usr/
drwxr-xr-x 13 root root 4096 Feb 27 19:41 var/
root@9fc8a1af7f23:/# cd tmp/
root@9fc8a1af7f23:/tmp# ls
root@9fc8a1af7f23:/tmp# echo "hello docker" > docker_test.txt
root@9fc8a1af7f23:/tmp# cat docker_test.txt
hello docker
root@9fc8a1af7f23:/tmp#
Then, in another terminal
dali@dali-X550JK:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9fc8a1af7f23 ubuntu "/bin/bash" 2 minutes ago Up 2 minutes fervent_hodgkin
dali@dali-X550JK:~$ docker cp fervent_hodgkin:/tmp/docker_test.txt /tmp/
dali@dali-X550JK:~$ cat /tmp/docker_test.txt
hello docker
dali@dali-X550JK:~$
Please follow these instructions and make sure you don't have a typo in the file paths; otherwise, share a reproducible error.
This error also appears when trying to copy a file that is actually a volume in the container, but the file has been deleted on the host.
This is simply an error in the path you want to copy.
You may not believe it, but that's all it is.
I have a Docker image (https://github.com/carnellj/spmia-chapter1) whose CMD executable ./run.sh is not found, although it is there in the file system.
I was able to run /bin/sh in the container, and I can ls -l:
D:\Dokumente\ws\spring-microservices\spmia-chapter1 (master)
λ docker run -i -t johncarnell/tmx-simple-service:chapter1 /bin/sh
/ # ls -l
total 56
drwxr-xr-x 2 root root 4096 Mar 3 11:20 bin
drwxr-xr-x 5 root root 360 Apr 22 07:10 dev
drwxr-xr-x 1 root root 4096 Apr 22 07:10 etc
drwxr-xr-x 2 root root 4096 Mar 3 11:20 home
drwxr-xr-x 1 root root 4096 Apr 22 06:01 lib
drwxr-xr-x 5 root root 4096 Mar 3 11:20 media
drwxr-xr-x 2 root root 4096 Mar 3 11:20 mnt
dr-xr-xr-x 123 root root 0 Apr 22 07:10 proc
drwx------ 1 root root 4096 Apr 22 07:10 root
drwxr-xr-x 2 root root 4096 Mar 3 11:20 run
-rwxr-xr-x 1 root root 245 Apr 22 06:50 run.sh
drwxr-xr-x 2 root root 4096 Mar 3 11:20 sbin
drwxr-xr-x 2 root root 4096 Mar 3 11:20 srv
dr-xr-xr-x 13 root root 0 Apr 22 07:10 sys
drwxrwxrwt 2 root root 4096 Mar 3 11:20 tmp
drwxr-xr-x 1 root root 4096 Mar 7 01:04 usr
drwxr-xr-x 1 root root 4096 Mar 7 01:04 var
/ # ./run.sh
/bin/sh: ./run.sh: not found
/ # ls run.sh
run.sh
/bin/sh does not find ./run.sh although it is there in the file system, as proven by ls run.sh. Also, cat shows the content of run.sh:
/ # cat run.sh
#!/bin/sh
echo "********************************************************"
echo "Starting simple-service "
echo "********************************************************"
java -jar /usr/local/simple-service/simple-service-0.0.1-SNAPSHOT.jar
When I run vi from sh, copy the content of run.sh into a new file myrun.sh and make myrun.sh executable, I can execute ./myrun.sh and the Spring service starts.
What is going on here? Why would sh not see an executable which is there in the filesystem? Executables from PATH or executables which I add manually run fine.
I am running Docker on Windows 10.
OK, the reason is that run.sh ends up with Windows line endings in the Docker image if you check out the repository with automatic LF->CRLF conversion. The shell then reports "not found" because the shebang line effectively points to the interpreter /bin/sh followed by a carriage return, which does not exist. One possible solution is to tell git not to convert line endings.
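A sketch of how to do that, either per repository via .gitattributes or globally, plus one way to strip the carriage returns from an already checked-out script before rebuilding the image:
# .gitattributes in the repository root
*.sh text eol=lf
# or globally:
git config --global core.autocrlf input
# fix an already converted file:
sed -i 's/\r$//' run.sh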
I have a strange problem in Jenkins: I cannot copy files in a job; however, with the user jenkins on the command line, I can do that without a problem.
I am using Jenkins on Debian, running under the user "jenkins".
I added the user "jenkins" to the group "www-data", so that I can copy files to the www folder of Apache.
The folder rights of the target folder look like this:
drwxrwxr-x 9 www-data www-data 4096 Jun 23 16:19 .
drwxrwxr-x 4 www-data www-data 4096 Jun 23 12:45 ..
-rw-rw-r-- 1 volker www-data 368 Jun 23 17:10 about.php
-rw-rw-r-- 1 volker www-data 366 Jun 23 17:10 bio.php
-rw-rw-r-- 1 volker www-data 370 Jun 23 17:10 contact.php
drwxrwxr-x 3 volker www-data 4096 Jun 23 16:19 content
drwxrwxr-x 3 volker www-data 4096 Jun 23 16:19 css
drwxrwxr-x 8 volker www-data 4096 Jun 23 16:19 default
drwxrwxr-x 3 volker www-data 4096 Jun 23 16:19 fonts
drwxrwxr-x 2 volker www-data 4096 Jun 23 13:40 image
drwxrwxr-x 3 volker www-data 4096 Jun 23 16:19 images
-rw-rw-r-- 1 volker www-data 372 Jun 23 17:10 impressum.php
-rw-rw-r-- 1 volker www-data 367 Jun 23 17:10 index.php
-rw-rw-r-- 1 volker www-data 296 Jun 23 13:52 kontakt.php
drwxrwxr-x 3 volker www-data 4096 Jun 23 16:19 layout
-rw-rw-r-- 1 volker www-data 367 Jun 23 17:10 news.php
-rw-rw-r-- 1 volker www-data 370 Jun 23 17:10 termine.php
-rw-rw-r-- 1 volker www-data 369 Jun 23 17:10 videos.php
So everything is writable for group www-data.
If I use the jenkins user to copy the files in a shell, I get no error:
jenkins@rootserver:~/jobs/deploy_notundellende/workspace$ whoami
jenkins
jenkins@rootserver:~/jobs/deploy_notundellende/workspace$ cp -R * /var/www/nue
jenkins@rootserver:~/jobs/deploy_notundellende/workspace$
But if I use the same command in a Jenkins job itself, it fails with a permission error:
pwd
/var/lib/jenkins/jobs/deploy_notundellende/workspace
whoami
jenkins
cp -R about.php bio.php contact.php content css fonts images impressum.php index.php layout news.php termine.php videos.php /var/www/nue
cp: cannot create regular file `/var/www/nue/about.php': Permission denied
cp: cannot create regular file `/var/www/nue/bio.php': Permission denied
cp: cannot create regular file `/var/www/nue/contact.php': Permission denied
cp: cannot create regular file `/var/www/nue/content/videos.php': Permission denied
How is that possible? Does anyone have an idea?
OK, I got it to work: I restarted the Jenkins server and then it worked. I assume it did not work before because the Jenkins process was already running when I added it to the www-data group, so the new group membership had not been picked up yet. Makes sense to me now, come to think of it :) Anyway, thanks to anybody reading and thinking about this!
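One way to verify that the restarted Jenkins process actually sees the new group is to run id from a job's shell build step and compare it with the host side (a sketch):
# inside a Jenkins "Execute shell" build step
id
# on the host
id jenkins
groups jenkins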
Solution 1: Restart Jenkins
(jenkins_url)/safeRestart - Allows all running jobs to complete. New jobs will remain in the queue to run after the restart is complete.
(jenkins_url)/restart - Forces a restart without waiting for builds to complete.
Solution 2: Check the user and the permissions for that user
Check the user: whoami
Change permissions: sudo chmod -R 777 /var/www/html/* or sudo chmod a+rwx /var/szDirectoryName
Solution 3:
If you get an error/warning like "Linux: 'Username' is not in the sudoers file. This incident will be reported", refer to the linked reference.
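A narrower alternative to the broad chmod 777, in line with what the question ultimately did, is to grant group write access on the target directory and make sure the jenkins user is in that group, then restart Jenkins so the new membership takes effect (a sketch; the path /var/www/nue is taken from the question):
sudo chgrp -R www-data /var/www/nue
sudo chmod -R g+w /var/www/nue
sudo usermod -aG www-data jenkins
sudo systemctl restart jenkins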