Since I prefer using bash (and use git anyway), I tried running docker run -it ubuntu bash (after a successful hello-world), which unfortunately resulted in an invalid handle error. Using cmd.exe instead, it works fine.
It turns out the problem is that I'm using ConEmu to host mintty.exe. Running mingw64.exe (or mintty.exe) directly instead, the error reads
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
and helpfully suggests prefixing the command with winpty, which then also works from within ConEmu. Note, however, that winpty also mangles your command-line parameters, e.g. winpty echo yes /no yields yes C:/yourmsyspath/no...
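For reference, the invocation that works from mintty (and, with this prefix, from ConEmu-hosted mintty as well) is:
winpty docker run -it ubuntu bash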
I'm new to Docker, and I'm not quite sure how to deal with this situation.
So I'm trying to run a docker container in order to replicate some results from a research paper, specifically from here: https://github.com/danhper/bigcode-tools/blob/master/doc/tutorial.md
(image link: https://hub.docker.com/r/tuvistavie/bigcode-tools/).
I'm using a Windows machine, and every time I try to run the Docker image (via docker run -p 80:80 tuvistavie/bigcode-tools), it instantly closes. I've tried running other images, such as getting-started, but that image doesn't close instantly.
I've looked at some other potential workarounds, like using -dit, but since the instructions require setting an alias/doskey for the docker run command, invoking the alias repeatedly and chaining it with other commands just queues up containers, because the port mapping is baked into the alias.
Following the instructions from the GitHub link, I'm trying to set an alias/doskey to make API calls to pull data, but I get neither data nor errors when performing the calls in the command prompt.
Sorry for the long question, and thank you for your time!
Going in order of the instructions:
0. I can run this; it added the image to my Docker Desktop.
1.
Since I'm using a Windows machine, I had to use 'set' instead of 'export'.
I'm not exactly sure what the $ means in UNIX, or whether it has significant meaning, but from my understanding the whole purpose is to create a directory named 'bigcode-workspace'.
Instead of 'alias,' I needed to use doskey.
Since -dit prevented my image from instantly closing, I added that in as well, but I'm not 100% sure what it means. Running docker run (...) resulted in the docker image instantly closing.
When it came to using the doskey alias + another command, I've tried:
(doskey macro) (another command)
(doskey macro) ^& (another command)
(doskey macro) $T (another command)
This also seemed to be using a GitHub API call, so I added --token=(github_token) as well, but that didn't change anything either.
Because the later steps require expected data pulled from here, I am unable to progress any further.
Looks like this image is designed to be used as a command-line utility, so it should not run continuously; instead, you run it via the docker-bigcode alias for your tasks.
$BIGCODE_WORKSPACE is an environment variable expansion here, so on a Windows machine it's %BIGCODE_WORKSPACE%. You might want to set this variable in Settings -> System -> About -> Advanced System Settings, because variables set with the SET command apply to the current command prompt session only. Or you can specify the path directly, without an environment variable.
As for alias then I would just create a batch file with the following content:
docker run -p 6006:6006 -v %BIGCODE_WORKSPACE%:/bigcode-tools/workspace tuvistavie/bigcode-tools %*
This will run the specified command, appending the batch file's parameters at the end. You might need to add double quotes if the BIGCODE_WORKSPACE path contains spaces.
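A slightly fuller sketch of such a batch file (the docker-bigcode.bat name and the --rm flag are my additions, not from the tutorial; save it somewhere on your PATH):
@echo off
REM Forward all arguments to the containerized CLI; --rm removes the container once it exits
docker run --rm -p 6006:6006 -v "%BIGCODE_WORKSPACE%:/bigcode-tools/workspace" tuvistavie/bigcode-tools %*
Calling docker-bigcode (arguments) then behaves like the alias from the tutorial, with everything after the batch name forwarded via %*.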
When I try to run a shell command in ipython or the julia repl it just says
shell> ls
zsh:1: command not found: ls
Not sure if it matters, but I have my path set in zshenv instead of zshrc so that emacs shell works.
Any ideas?
Edit:
I'm on macOS 10.14.6
For Julia, the shell> REPL prompt does in fact use a shell to execute its commands (on non-Windows systems). It effectively does something like run(`$shell -c ls`), and for most shells (including zsh) this means "non-interactive" mode, which limits the number of init files that get loaded. You want to make sure your shell works in this mode; I'd guess that if you type zsh -c ls at your terminal it'll be similarly broken.
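A quick way to test this (the failure output is my assumption, mirroring the REPL error above):
zsh -c 'ls'
If that also prints command not found: ls, PATH is not being set up for non-interactive shells.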
Alternatively, you can customize which shell Julia uses through an environment variable. Setting JULIA_SHELL=/bin/sh is probably a safe bet — Julia uses that environment variable if it is set, otherwise it uses SHELL, and finally it falls back to /bin/sh if neither is set.
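For example, a one-line sketch (~/.zshenv is just one reasonable place for it):
export JULIA_SHELL=/bin/sh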
I'm not as familiar with ipython, but I'd wager it's doing something similar.
I have been following this tutorial: http://predictionio.apache.org/install/install-docker/. I have successfully built the Docker image; however, when I try to run docker run I get the Can't open /etc/predictionio/pio-env.sh error.
docker build -t predictionio/pio pio
docker run -ti predictionio/pio
PS: If I comment out the last line, CMD ["sh", "/usr/bin/pio_run"], I can build and run the Docker image successfully. I can also open the file from the container's bash.
I think you need to grant execute permission on this file. Add the following line at the end of your Dockerfile:
RUN chmod +x pio_run.sh
Also, you might need to change CMD to ENTRYPOINT, like the following:
ENTRYPOINT ["sh","/usr/bin/pio_run.sh"]
Your output states you are running Windows. Did you use the default command prompt or the Docker terminal? I had the same error messages in the past on Windows, but they mysteriously disappeared after I tried the tutorial again. I am not sure what I did differently, except that I may have used the Docker terminal instead of the default command prompt...
Could you also try using docker-compose instead of plain docker commands as described in the tutorial?
Ensure your storage (Postgres, MySQL or Elasticsearch) is running before starting PIO as well.
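A quick sanity check before starting PIO (the name filter is an assumption; match it to whatever your compose file calls the storage service):
docker ps --filter "name=postgres"
The storage container should appear in the output with a status of Up.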
Just resolved it on my machine.
When you cloned the repository on Windows, git converted the end-of-line symbols from Unix-style (\n) to Windows-style (\r\n).
You need to open the file C:\wherever-you-cloned-pio-repository\predictionio\docker\pio\pio_run and change the line endings back (e.g. using Visual Studio Code or Notepad++). Then you need to rebuild the image, and it should work.
Also, for the future, you may want to disable the automatic conversion (see "Disable git EOL Conversions").
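From the command line, the same fix might look like this (a sketch; dos2unix ships with Git Bash on Windows, but verify it is available on your setup):
dos2unix predictionio/docker/pio/pio_run
git config --global core.autocrlf false
docker build -t predictionio/pio pio
The first command converts \r\n back to \n, the second stops git from converting line endings on future checkouts, and the third rebuilds the image.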
I am unable to run repo non-interactively inside a container as part of a freestyle job.
It prompts for the username and email. I got round that by doing a git config --global inside the job.
But then it does the color test, and that hangs indefinitely.
Looking at the source code for repo I see this
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
  if opt.config_name or self._ShouldConfigureUser():
    self._ConfigureUser()
  self._ConfigureColor()
So, I ran the following inside the container:
python -c "import os; print os.isatty(0), os.isatty(1)"
and, sure enough, it printed out True True
Looking at the Jenkins log, it launches the container with --tty specified, and there seems no way to configure that option.
I can't find a bash option to force a script to be run in a non-interactive shell. If I put the above python line in a file and execute it with almost any combination of commands and options, it still prints out True True
The only way I see something different is if I use I/O redirection
bash <a.sh
which prints out False True - i.e. stdin is not a tty, and
bash <a.sh >a.log
which prints False False.
For a complex script, are there any problems using the bash <script approach?
Does anyone know any Jenkins magic to prevent Docker being launched using --tty?
I know that the --tty is the culprit. I built the container locally and ran the following
$ docker run repotest python -c "import os;print os.isatty(0), os.isatty(1)"
False False
$ docker run --tty repotest python -c "import os;print os.isatty(0), os.isatty(1)"
True True
Running Versions:
repo: 1.12.37 (per Ubuntu 16.04 apt-get)
Jenkins: 2.149
Cloudbees Docker Plugin: 1.7.3
Container base is ubuntu:xenial
I'm using the "Build inside a docker container" option.
To run the bash script repo_script.sh "non-interactively", or more precisely without terminals associated with its standard streams, you could simply run your script as
repo_script.sh < /dev/null 2>&1 | cat
assuming you want to see the output the way you would when running simply repo_script.sh. By piping standard output and error to a different process, the file descriptors appear to repo_script.sh as pipes, not TTYs. You could also direct the output to a file, or even to /dev/null if you do not care about it:
log_file=/dev/null
repo_script.sh < /dev/null > "${log_file}" 2>&1
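You can verify the effect with the same probe used earlier in this thread (the printed values follow directly from the isatty semantics):
python -c "import os; print os.isatty(0), os.isatty(1)" < /dev/null | cat
This prints False False, which is exactly the state that makes repo skip its interactive configuration.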
Running the script as
bash < repo_script.sh | cat
might work too, though it is a very unorthodox and, to my mind, hackish way of running a script just to break the association between the TTY and standard input. From the script engine's point of view, reading a script program from a file is different from reading it from standard input (which typically, if it is a terminal, is not seekable), so there might be some subtle differences that could bite you in unexpected ways. This approach also does not clearly communicate your intention to the next person who needs to understand your code, and may lead to partial hair loss in that person due to extraneous head scratching.
There is no need for any bash options; using the output redirections from within the interpreting shell, as described above, is an easy-to-comprehend, multi-platform standard convention for changing the standard stream associations.
P.S. I think it should be enough for your repo script to just test whether standard input is a TTY. It looks to me like the author of that script did not think that through. There is simply no point waiting for input if no terminal device is associated with standard input; from that alone the script could determine that everything needs to run without user interaction, or stop with an error if that is not possible.
I use the Java S2I image for a container running in OpenShift (on premise). My problem is that the image's output is page-buffered, and oc logs ... does not show me the latest logs.
I could probably spin up my docker image that would do stdbuf -oL -e0 java ... but I would prefer to stick to the 'official' image (just adding the jar to /deployments). Is there any way to reduce buffering (use line-buffering instead of page-buffering), or flush the output on demand?
EDIT: It seems that I could update the deployment config and pass stdbuf in there, but that means I'd have to compose all the args myself. The ideal solution would be passing --tty to Docker, but I can't see how custom arguments could be passed that way in OpenShift.
In your repo, try creating the file .s2i/bin/run. In it add:
#!/bin/bash
exec stdbuf -oL -e0 /usr/local/s2i/run
I always forget where the S2I assemble and run scripts are in the Java S2I image, so you may need to replace /usr/local/s2i with the correct path.
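One way to locate it without guessing (a sketch; <your-image> is a placeholder, find must be present in the image, and you may need --entrypoint if the image defines one):
docker run --rm <your-image> find / -name run -path "*s2i*" 2>/dev/null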
Adding this file means it will be run as the startup command instead of the original run script. You can then run the original script with stdbuf. Ensure you use exec so that the subprocess replaces the current one; otherwise signals will not be propagated properly.
Even though this might work, I am surprised logging isn't already unbuffered. I would expect there to be a better way of controlling it through some Java config instead.