The JupyterLab application features a nice in-browser Terminal that supports colours, navigation keys, and pretty much all standard features of a terminal application. In this question I mean the /lab app, not the classic Notebook (/tree) app.
If I launch the Jupyter server using this Docker image, it works great. However, I need to build my own image, preferably not based on that one. I do it simply, as documented:
docker run -it --rm -p 8888:8888 -v "$PWD":/jupyter python:3.8 bash
# pip install jupyterlab
# jupyter lab --config=/jupyter/jupyter_notebook_config.py
The above assumes I have a jupyter_notebook_config.py in the current directory:
c.NotebookApp.ip = '0.0.0.0'
c.NotebookApp.port = 8888
c.NotebookApp.password = 'sha1:<salt>:<hash>'
c.NotebookApp.allow_password_change = False
c.NotebookApp.allow_root = True
c.NotebookApp.open_browser = False
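As an aside, the 'sha1:<salt>:<hash>' value can be generated with the passwd() helper from the notebook package of that era (an illustration, not part of the original setup):

python -c "from notebook.auth import passwd; print(passwd())"

It prompts for a password and prints the salted hash to paste into the config.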
Everything works, but the Terminal performs very poorly: it does not support colours and sends escape codes (like ^[[A, ^[[B) instead of handling arrow keys. Line-by-line investigation of the Dockerfile is not an exciting endeavour; maybe somebody can point me to what I am missing?
EDIT: I was a little bit wrong about the colours (I was confused by the default green prompt in the jupyter/base-notebook image) and about the overall issue description. The root cause was that the shell started in my image is sh, while in the official image it is bash. Nevertheless, the Terminal is still not fully functional: for example, if I launch nano, it occupies only an 80x25 character area and does not stretch to the actual size of the terminal. Arrow keys do work in nano, though.
So I figured out the reason. Apparently the Terminal web app just replicates the behaviour of the default shell of the user under which Jupyter is run. In this image they enable colouring in the .bashrc template and then create a new user, specifying a shell for them (lines 52 and 59).
EDIT: also SHELL=/bin/bash must be set in the environment.
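Putting both findings together, here is a minimal Dockerfile sketch; the user name jupyter is a placeholder for illustration:

FROM python:3.8
RUN pip install jupyterlab
# create a user whose login shell is bash, so the Terminal replicates it
RUN useradd -m -s /bin/bash jupyter
# the Terminal web app also consults SHELL when spawning shells
ENV SHELL=/bin/bash
USER jupyter
CMD ["jupyter", "lab", "--config=/jupyter/jupyter_notebook_config.py"]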
Related
I'm new to Docker, and I'm not sure quite how to deal with this situation.
So I'm trying to run a docker container in order to replicate some results from a research paper, specifically from here: https://github.com/danhper/bigcode-tools/blob/master/doc/tutorial.md
(image link: https://hub.docker.com/r/tuvistavie/bigcode-tools/).
I'm using a Windows machine, and every time I try to run the Docker image (via docker run -p 80:80 tuvistavie/bigcode-tools), it instantly closes. I've tried running other images, such as getting-started, and that image doesn't close instantly.
I've looked at some other potential workarounds, like using -dit, but since the instructions require setting an alias/doskey for the docker run command, using that alias repeatedly and chaining it with other commands just queues up containers, because the port is tied to the alias.
Following the instructions from the GitHub link, I'm trying to set an alias/doskey to make API calls to pull data, but I get neither data nor errors when performing the calls in the command prompt.
Sorry for the long question, and thank you for your time!
Going in order of the instructions:
0. I can run this; it added the image to my Docker Desktop.
1.
Since I'm using a Windows machine, I had to use 'set' instead of 'export'.
I'm not exactly sure what the $ means in UNIX and whether it has significant meaning, but from my understanding the whole purpose here is to create a directory named 'bigcode-workspace'.
Instead of 'alias,' I needed to use doskey.
Since -dit prevented my image from instantly closing, I added that as well, though I'm not 100% sure what it does. Running plain docker run (...) resulted in the image instantly closing.
When it came to using the doskey alias + another command, I've tried:
(doskey macro) (another command)
(doskey macro) ^& (another command)
(doskey macro) $T (another command)
This also seemed to be using a GitHub API call, so I added --token=(github_token) as well, but that didn't change anything either.
Because the later steps depend on the data pulled here, I am unable to progress any further.
It looks like this image is designed to be used as a command-line utility, so it is not supposed to run continuously; instead you invoke it via the docker-bigcode alias for individual tasks.
$BIGCODE_WORKSPACE is an environment variable expansion here, so on a Windows machine it's %BIGCODE_WORKSPACE%. You might want to set this variable in Settings->System->About->Advanced System Settings, because variables set with the SET command apply only to the current command prompt session. Or you can specify the path directly, without an environment variable.
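Alternatively, the setx command persists a variable to the registry for future sessions (an illustrative command, adjust the path to your setup; note that setx does not affect the already-open prompt):

setx BIGCODE_WORKSPACE C:\bigcode-workspace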
As for the alias, I would just create a batch file with the following content:
docker run -p 6006:6006 -v %BIGCODE_WORKSPACE%:/bigcode-tools/workspace tuvistavie/bigcode-tools %*
This will run the specified command, appending the batch file's parameters at the end. You might need to add double quotes if the BIGCODE_WORKSPACE path contains spaces.
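For example, a quoted variant saved as docker-bigcode.bat somewhere on your PATH (the file name is just a suggestion):

@echo off
docker run -p 6006:6006 -v "%BIGCODE_WORKSPACE%:/bigcode-tools/workspace" tuvistavie/bigcode-tools %*

After that, docker-bigcode <arguments> behaves like the docker-bigcode alias from the tutorial, without the doskey chaining issues.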
I have been trying this tutorial: http://predictionio.apache.org/install/install-docker/. I have successfully built the Docker image; however, when I try docker run, I get the Can't open /etc/predictionio/pio-env.sh error.
docker build -t predictionio/pio pio
docker run -ti predictionio/pio
PS: If I comment out the last line, CMD ["sh", "/usr/bin/pio_run"], I can build and run the Docker image successfully, and I can open the file from the container's bash too.
I think you need to grant permission to execute this file. Add the following line at the end of your Dockerfile:
RUN chmod +x /usr/bin/pio_run
Also, you might need to change CMD to ENTRYPOINT, like the following:
ENTRYPOINT ["sh", "/usr/bin/pio_run"]
Your output states you are running Windows. Did you use the default command prompt or the Docker terminal? I had the same error messages on Windows in the past, but mysteriously they disappeared after trying the tutorial again. I am not sure what I did differently, except that I might have used the Docker terminal instead of the default command prompt...
Could you also try using docker-compose instead of plain docker commands as described in the tutorial?
Ensure your storage (PostgreSQL, MySQL, or Elasticsearch) is running before starting PIO as well.
Just resolved it on my machine.
When you cloned the repository on Windows, git converted the end-of-line symbols from Unix-style (\n) to Windows-style (\r\n).
You need to open the file C:\wherever-you-cloned-pio-repository\predictionio\docker\pio\pio_run and change the line endings back (e.g. using Visual Studio Code or Notepad++). Then rebuild the image and it should work.
Also, for the future, you may want to disable the automatic conversion: Disable git EOL Conversions
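For example, from Git Bash (a sketch; the dos2unix utility may need to be installed separately):

# convert the file back to Unix line endings
dos2unix predictionio/docker/pio/pio_run
# stop git from converting LF to CRLF on checkout
git config --global core.autocrlf input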
I am unable to run repo non-interactively inside a container as part of a freestyle job.
It prompts for the username and email. I got around that by doing a git config --global inside the job.
But then it does the color test, and that hangs indefinitely.
Looking at the source code for repo, I see this:
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
  if opt.config_name or self._ShouldConfigureUser():
    self._ConfigureUser()
  self._ConfigureColor()
So, I ran the following inside the container:
python -c "import os; print os.isatty(0), os.isatty(1)"
and, sure enough, it printed out True True
Looking at the Jenkins log, it launches the container with --tty specified, and there seems no way to configure that option.
I can't find a bash option that forces a script to run in a non-interactive shell. If I put the above Python line in a file and execute it with almost any combination of commands and options, it still prints True True.
The only way I see something different is if I use I/O redirection:
bash <a.sh
which prints out False True - i.e. stdin is not a tty, and
bash <a.sh >a.log
which prints False False.
For a complex script, are there any problems with using the bash < script approach?
Does anyone know any jenkins magic to prevent docker being launched using --tty?
I know that --tty is the culprit. I built the container locally and ran the following:
$ docker run repotest python -c "import os;print os.isatty(0), os.isatty(1)"
False False
$ docker run --tty repotest python -c "import os;print os.isatty(0), os.isatty(1)"
True True
Running Versions:
repo: 1.12.37 (per Ubuntu 16.04 apt-get)
Jenkins: 2.149
Cloudbees Docker Plugin: 1.7.3
Container base is ubuntu:xenial
I'm using the "Build inside a docker container" option.
To run the bash script repo_script.sh "non-interactively", or more precisely without terminals associated with the standard streams, you could run it simply as
repo_script.sh < /dev/null 2>&1 | cat
assuming you want to see the output the way you would if you simply ran repo_script.sh. By piping standard output and error into another process, the file descriptors appear to repo_script.sh as pipes, not TTYs. You could also direct the output to a file, or even to /dev/null if you do not care about it:
log_file=/dev/null
repo_script.sh < /dev/null > "${log_file}" 2>&1
Running the script as
bash < repo_script.sh | cat
might work too, though it is a very unorthodox and, to my mind, hackish way of running a script just to break the TTY association with standard input. From the script engine's point of view, reading a program from a file is different from reading it from standard input (which, if it is a terminal, is typically not seekable), so there might be subtle differences that could bite you in unexpected ways. It also does not clearly communicate your intention to the next person who has to understand your code, and may cause partial hair loss in that person due to extraneous head scratching.
There is no need for any bash options; using the output redirections within the interpreting shell, as described above, is an easy-to-comprehend, multi-platform standard convention for changing the standard stream associations.
P.S. I think it should be enough for your repo script to test only whether standard input is a TTY. It looks to me like the author of that script did not think this through. There is simply no point in waiting for input when no terminal device is associated with standard input; from that alone the script could determine that everything must run without user interaction, or stop with an error if that is not possible.
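For reference, a shell script can perform the same check itself with the test builtin; [ -t 0 ] is the shell counterpart of Python's os.isatty(0). A minimal sketch:

# skip interactive configuration when stdin is not a terminal
if [ ! -t 0 ]; then
    echo "no TTY on stdin, running non-interactively" >&2
    exit 0
fi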
According to the documentation at bazelbuild/rules_docker, it should be possible to work with these container images on OSX, and it also claims that it's possible to do so without docker.
These rules do not require / use Docker for pulling, building, or pushing images. This means:
They can be used to develop Docker containers on Windows / OSX without boot2docker or docker-machine installed.
They do not require root access on your workstation.
How do I do that? Here's a simple rule:
go_image(
name = "helloworld_image",
importpath = "github.com/nictuku/helloworld",
library = ":go_default_library",
visibility = ["//visibility:public"],
)
I can build the image with bazel build :helloworld_image. It produces a tarball in bazel-bin, but it won't run:
INFO: Running command line: bazel-bin/helloworld_image
Loaded image ID: sha256:08d312b529d30431c68741fd3a31468a02533f27a8c2c29eedc969dae5a39852
Tagging 08d312b529d30431c68741fd3a31468a02533f27a8c2c29eedc969dae5a39852 as bazel:helloworld_image
standard_init_linux.go:185: exec user process caused "exec format error"
ERROR: Non-zero return code '1' from command: Process exited with status 1.
It's trying to run the Linux binary, but this is OSX, which is silly.
I also tried doing a docker load on the .tar contents, but it doesn't seem to like that format:
$ docker load -i bazel-bin/helloworld_image-layer.tar
open /var/lib/docker/tmp/docker-import-330829602/app/json: no such file or directory
Help? Thanks!
You are building for your host platform by default, so you need to build for the container's platform if you want to run the image.
Since you are using a Go binary, you can cross-compile by specifying --cpu=k8 on the command line. Ideally we would be able to just declare that the Docker image needs a Linux binary (removing the need for the --cpu flag), but this is still a work in progress in Bazel.
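For example, using the target from the question:

bazel build --cpu=k8 :helloworld_image

This produces a Linux x86-64 binary inside the image, which Docker on OSX can then run in its Linux VM.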
So I'm trying to run OpenAI Gym in a Docker container, but it looks like this:
Notice the Pong window has a weird render issue where it's repeating things and the colors are off. Here is Space Invaders:
NOTE FOR "NOT A PROGRAMMING ISSUE" PEOPLE: The solution involves the correct bash script code to call the right API methods to render the arrays of pixels correctly. Also only a graphics programmer is likely to "recognize the render glitch".
My setup is very simple.
- I'm on a local Ubuntu 16.04 install with an Nvidia GTX 1060 and a Core i7
- I installed the Nvidia runfile driver with --no-opengl-files (as per instructions from Nvidia and many places)
- Specifically, I'm running the floydhub/pytorch Docker image
Does anyone recognize this particular render glitch and what it could mean? It almost looks like a stack overflow of a frame buffer! What can I do to track down the bug?
EDIT: I have eliminated all the extra dependencies I had been installing and am just doing simple x-forwarding according to the ROS GUI guide.
You can easily reproduce this as follows:
docker run -it --user=$(id -u) --env="DISPLAY" --workdir="/home/$USER" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" floydhub/pytorch:0.1.11-gpu-py3.6 bash
Now, inside the image, type python and then run the following:
import gym
gym.make('Pong-v0').render()
That should open an X-forwarded window on your machine, but the display is corrupt (at least for me).
Above I actually used SpaceInvaders-v0.