Can't save file on remote Jupyter server running in Docker container

I'm trying to work in Jupyter Lab run via Docker on a remote machine, but can't save any of the files I open.
I'm working with a Jupyter Docker Stack. I've installed docker on my remote machine and successfully pulled the image.
I set up port forwarding in my ~/.ssh/config file:
Host mytunnel
HostName <remote ip>
User root
ForwardAgent yes
LocalForward 8888 localhost:8888
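With that entry in place, the tunnel can be opened and verified from the local machine before starting Jupyter. A minimal sketch (assuming a Linux client with ss from iproute2; mytunnel is the host alias defined above):

```shell
# Open the tunnel in the background: -N forwards ports without running a
# remote command, -f backgrounds ssh once the forward is established.
ssh -N -f mytunnel 2>/dev/null || echo "tunnel not established (host unreachable?)"

# Check whether anything is listening on the forwarded local port.
if ss -ltn 2>/dev/null | grep -q ':8888 '; then
  status="listening"
else
  status="not listening"
fi
echo "local port 8888 is ${status}"
```

If the port is not listening, the problem is the tunnel itself rather than Jupyter or Docker.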
When I fire up the container, I use the following script:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
The container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8fc3c720af1 jupyter/tensorflow-notebook "tini -g -- start-no…" 8 minutes ago Up 8 minutes 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp adoring_khorana
I get the regular Jupyter url back:
http://127.0.0.1:8888/lab?token=<token>
But when I access the server in my browser, the Save option is disabled.
I've tried some of the solutions proposed elsewhere on SO, but no luck.
Is this something about connecting over SSH? Does the Jupyter server think it's not a secure connection?

It is possible that the problem is related to the SSH configuration, but it is more likely a permissions issue with your volume mount.
Try reviewing your container logs for permission-related errors:
docker container logs <container id>
Also check the output printed by your docker run command.
In addition, try opening a shell in the container:
docker exec -it <container id> /bin/bash
and see if you are able to create a file in the default work directory:
touch /home/jovyan/work/test_file
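The same check can be rehearsed outside the container on any directory. A small sketch, using /tmp here as a stand-in for the mounted /home/jovyan/work:

```shell
# Try to create (then remove) a file; report whether the directory is writable.
dir=/tmp
if touch "$dir/test_file" 2>/dev/null; then
  result="writable"
  rm -f "$dir/test_file"
else
  result="not writable"
fi
echo "$dir is $result"
```

If the touch inside the container fails while the same host directory is writable by your user, the UID/GID mapping between host and container is the likely culprit.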
Finally, the Jupyter Docker Stacks repository has a troubleshooting page almost entirely devoted to permissions issues.
Consider especially the solutions provided under Additional tips and troubleshooting commands for permission-related errors and, as suggested there, try launching the container as your OS user:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
--user "$(id -u)" --group-add users \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
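For reference, here is what those substitutions expand to on the host. The detail that the Jupyter images make /home/jovyan owned by the users group (GID 100) is an assumption based on how these stacks are commonly built:

```shell
# Numeric user ID that docker run --user "$(id -u)" passes to the container.
uid=$(id -u)
echo "the container process will run as UID ${uid}"

# --group-add users adds the container process to the 'users' group, which
# is assumed to be the group owning /home/jovyan inside the Jupyter images.
getent group users >/dev/null 2>&1 \
  && echo "'users' group exists on this host" \
  || echo "'users' group not present here (only matters inside the container)"
```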
After that, as also suggested in the documentation mentioned above, check that the volume is properly mounted using the following command:
docker inspect <container_id>
In the output, note the value of the RW field, which indicates whether the volume is writable (true) or not (false).
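Since the full docker inspect output is long, a --format Go template can print just the mounts and their RW flags. A guarded sketch (it only calls docker where installed; adoring_khorana is the container name from the docker ps output above):

```shell
# Print "<destination> RW=<true|false>" for each mount of the container.
if command -v docker >/dev/null 2>&1; then
  msg=$(docker inspect \
          --format '{{range .Mounts}}{{.Destination}} RW={{.RW}}{{"\n"}}{{end}}' \
          adoring_khorana 2>/dev/null) \
    || msg="no container named adoring_khorana on this host"
else
  msg="docker not available on this machine"
fi
echo "$msg"
```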

Related

Docker Image on Google Cloud Exiting (No detached mode)

I have a docker image in Google's container registry. The issue I'm facing is that I do not see an option to add docker run-type arguments like:
--detach
I would run my container by calling
docker run -t -d -p 3333:3333 -p 3000:3000 --name <name> <image_ID>
I'm using a VM instance on GCloud, and the container option does not seem to have this detach argument (the lack of which is causing my Ubuntu-based container to stop when not in use). Both the Compute Engine OS option and the Google Cloud Run service option eventually result in an error.
Your question lacks detail. Questions benefit from specifics: the steps that were taken, the errors that resulted, and what was done to diagnose them.
I assume from your question that you're using Cloud Console to create a Compute Engine instance and that you're selecting "Container" to deploy a container image to it.
The default configuration is to run the container detached i.e. equivalent to docker run --detach.
You can prove this to yourself by SSH'ing in to the instance and running e.g. docker container ls to see the running containers or docker container ls --all to see all containers (stopped too).
You can also run the container directly from here as you would elsewhere, although you may prefer docker run --interactive --tty or docker container logs ... to determine why it's not starting correctly:
docker run \
--detach \
--publish=3333:3333 \
--publish=3000:3000 \
--name=<name> \
<image_ID>
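For completeness, creating such an instance from the CLI looks roughly like the sketch below. The instance and image names are placeholders, and the flag comes from gcloud's create-with-container command; the container restart policy defaults to always, which is what keeps the container running detached:

```shell
# Guarded: only attempts the call where the gcloud CLI is installed.
if command -v gcloud >/dev/null 2>&1; then
  gcloud compute instances create-with-container my-vm \
      --container-image=gcr.io/my-project/my-image \
    && msg="instance requested" \
    || msg="gcloud call failed (check project and image names)"
else
  msg="gcloud not available on this machine"
fi
echo "$msg"
```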

Why can I not run an X11 application?

So, as the title states, I'm a docker newbie.
I downloaded and installed the archlinux/base container, which seems to work great so far. I've set up a few things and installed some packages (including xeyes), and I now would like to launch xeyes. For that I found the CONTAINER ID by running docker ps and then used that ID in my exec command, which now looks like:
$ docker exec -it -e DISPLAY=$DISPLAY 4cae1ff56eb1 xeyes
Error: Can't open display: :0
Why does it still not work, though? Also, how can I stop my running instance without losing its configured state? Previously I exited the container, and all my configuration and software installations were gone when I restarted it. That was not desired. How do I handle this correctly?
Concerning the X display: you need to share the X server socket (note: docker can't bind-mount a volume during an exec) and set $DISPLAY (example Dockerfile):
FROM archlinux/base
RUN pacman -Syyu --noconfirm xorg-xeyes
ENTRYPOINT ["xeyes"]
Build the docker image: docker build --rm --network host -t so:57733715 .
Run the docker container: docker run --rm -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY so:57733715
Note: in case of No protocol specified errors, you could disable host access control with xhost +, but there is a caveat to that (see man xhost for additional information).
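Before pointing a containerized client at the display, it can help to confirm on the host that there is actually a display to share. A quick sketch of the two checks:

```shell
# 1. Is DISPLAY set at all?
echo "DISPLAY=${DISPLAY:-<unset>}"

# 2. Is the X server socket directory there to bind-mount?
if [ -d /tmp/.X11-unix ]; then
  x11="present"
  ls /tmp/.X11-unix
else
  x11="absent"
fi
echo "/tmp/.X11-unix is ${x11}"
```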

My container is not running when I attach a volume with --mount

I've created a volume with docker volume create my-vol on my machine. But when I run my container as follows:
docker run -d \
--name=ppshein-test \
--mount source=my-vol,destination=/var/www/ -p 3000:3000 \
ppshein:latest
I found that my container is not working, so I checked its logs:
> sample-docker#1.0.0 start /var/www
> node index.js
and found the above. So I tried to run the same image without attaching the volume, as follows:
docker run -d --restart=always -p 3001:3000 ppshein:latest
and found it working smoothly. Its container logs show the following:
> sample-docker#1.0.0 start /var/www
> node index.js
Example app listening on port 3000!
Oddly, the message Example app listening on port 3000! shows up for this last container but was never printed by the previous one.
Please let me know why. Thanks much.
I think this may be what you are looking for
(from the docker docs):
If you use --mount to bind-mount a file or directory that does not yet exist on the Docker host, Docker does not automatically create it for you, but generates an error.
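That quoted behavior concerns bind mounts (type=bind) and can be reproduced directly. A guarded sketch (needs docker and the alpine image to actually exercise the error):

```shell
# Bind-mounting a host path that does not exist should make docker run fail.
if command -v docker >/dev/null 2>&1; then
  docker run --rm --mount type=bind,source=/no/such/dir,target=/data alpine true 2>/dev/null \
    && msg="unexpectedly succeeded" \
    || msg="docker rejected the missing bind source, as documented"
else
  msg="docker not available on this machine"
fi
echo "$msg"
```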

How do you block console, root or other users, access to a docker container?

I tried installing puppet and changing the root user's shell to '/sbin/nologin', but I can still get right into the console.
It is a centOS 7 container.
Is Docker using a socket for the connection? Could I use SELinux to block the socket? If I do, I fear that I will also prevent Docker from being able to communicate with the container at all. I have been reading Docker security articles but have not found a good solution.
My end goal is for the container to be an ephemeral 'black box' when it comes up. My particular use case is a local web app, so no console access will be required.
You could try to remove all terminal commands (bash, sh, and so on) from the container:
docker exec -it [container-id] /bin/rm -R /bin/*
At that point you will not be able to use docker exec [container-id] -it bash to get a console to the container.
If you want to be more gentle about it, you can remove only the shells you have and leave all the other commands (like rm) available:
docker exec -it [container-id] /bin/rm -R /bin/bash
docker exec -it [container-id] /bin/rm -R /bin/sh
... and so on

Restarting Docker Container after reboot

I start a docker container like this:
docker run -ti --restart="always" --name "lizmap" -p 80:80 -d -t \
-v /home/lizmap_project:/home \
-v /home/lizmap_var:/var/www/websig/lizmap/var \
-v /home/lizmap_tmp:/tmp \
jancelin/docker-lizmap
which makes the GIS server work like a charm.
After the first two reboots the container comes up by itself as expected. After several more reboots, however, it keeps telling me that it is restarting, over and over.
Logs are
apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
httpd (pid 7) already running
The workaround is to docker stop lizmap and docker rm lizmap and then start the container again with the command above.
Does anyone have an idea how to avoid this workaround and make the container's restart work beyond the first two times?
The Docker files come from this GitHub repository.
