I'm running a gremlin server using the official docker container:
docker run --rm -it -p 8182:8182 --name gremlin tinkerpop/gremlin-server
I then try to run the following script from the host machine:
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
if __name__ == "__main__":
    g = traversal().withRemote(DriverRemoteConnection('ws://localhost:8182', 'g'))
    g.V().drop()
    g.V().addV('person')
    l = g.V().hasLabel('person')
    print(l.toList())
The connection seems to work (no errors), but the queries don't seem to be actually executed (the gremlin server statistics show no calls whatsoever).
The even more bizarre part is that the toList() call blocks execution and returns nothing. If I then stop the docker container, the connection on the Python side drops.
I'm using the default settings for the gremlin server.
Could someone help me understand what's going on?
EDIT: I also tried changing the gremlin configuration host to 0.0.0.0.
EDIT: the reason it appears that only toList() waits for an answer is that the other queries are never executed at all: a traversal such as g.V().drop() is lazy and does nothing until you apply a terminal step like .next() (or .iterate()).
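The lazy behaviour can be illustrated without Gremlin at all, using a plain Python generator as a stand-in for a traversal: building the pipeline runs nothing until a terminal step consumes it.

```python
# Stand-in for a Gremlin traversal: a generator's body does not run
# until something consumes it (the analogue of .next() / .toList()).
log = []

def lazy_drop():
    log.append("drop executed")  # stands in for the server-side work
    yield "ok"

t = lazy_drop()          # like g.V().drop(): traversal built, nothing sent
nothing_yet = list(log)  # snapshot before consuming: still empty
results = list(t)        # like .toList(): terminal step triggers execution
```

The same applies to g.V().addV('person') in the snippet above: without a terminal step, no request ever reaches the server.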
It turns out there were two errors:
the address must end with /gremlin, so in my case 'ws://localhost:8182/gremlin'
when trying this, an exception appears which looks like a connection error at first:
RuntimeError: Event loop is closed
Exception ignored in: <function ClientResponse.__del__ at 0x7fb532031af0>
Traceback (most recent call last):
[..]
File "/usr/lib/python3.8/asyncio/selector_events.py", line 692, in close
File "/usr/lib/python3.8/asyncio/base_events.py", line 719, in call_soon
File "/usr/lib/python3.8/asyncio/base_events.py", line 508, in _check_closed
RuntimeError: Event loop is closed
this is actually not a connection error, but a warning that the connection was not properly closed. If you investigate, you will notice that the queries were in fact executed. The correct way to handle this is to write something along the lines of:
conn = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')
g = traversal().withRemote(conn)
[do your graph operations]
conn.close()
and with this, no exceptions; life is good. I am quite surprised this appears in no documentation.
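A minimal stdlib sketch of why guarding the close matters (FakeConnection is a hypothetical stand-in, since the real DriverRemoteConnection needs a running server): close() should run even if a query raises, which try/finally guarantees.

```python
# Hypothetical stand-in for DriverRemoteConnection, used only to show
# the cleanup pattern; the real class talks to a live Gremlin server.
class FakeConnection:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

conn = FakeConnection()
try:
    pass  # ... your graph operations here ...
finally:
    # Without an explicit close(), asyncio tears down its event loop while
    # the websocket is still open, producing the "Event loop is closed" noise.
    conn.close()
```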
Related
I have a web app in a Docker container that works perfectly if I run it locally, but when I deploy it to Azure it just won't work, whatever I try. I keep getting 502/503 errors. In the log file it says:
Traceback (most recent call last):
[ERROR] File "app.py", line 22, in <module>
[ERROR]   app.run(use_reloader=use_reloader, debug=use_reloader, host=host, port=port)
[ERROR] File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 922, in run
[ERROR]   run_simple(t.cast(str, host), port, self, **options)
[ERROR] File "/usr/local/lib/python3.6/site-packages/werkzeug/serving.py", line 982, in run_simple
[ERROR]   s.bind(server_address)
[ERROR] socket.gaierror: [Errno -2] Name or service not known
The configuration I have:
Dockerfile: EXPOSE 80, application settings: see picture, app runs with this python code (this is just a snippet to show how environment variables are called):
Dockerfile: EXPOSE 80; application settings: see picture; the app runs with this Python code (just a snippet to show how the environment variables are read):
if __name__ == '__main__':
    # Get environment variables
    use_reloader = os.environ.get('use_reloader')
    debug = os.environ.get('debug')
    port = os.environ.get('port')
    # Run the app
    app.run(use_reloader=use_reloader, debug=use_reloader, host=host, port=port)
What am I missing here? I looked at other answers related to this on here but that didn't help me with this. Anyone any suggestions? Thanks!
EDIT:
I made another attempt, this time with EXPOSE 8000 in the Dockerfile, and in application settings: port 80 (see the app.py snippet above) and WEBSITES_PORT 8000. But now I get: Waiting for response to warmup request for container. After many of these messages it times out and restarts... I think I still don't quite understand how the port settings work. Would someone be able to explain this to me? What I need to know: how do the environment variable 'port' in app.py, the EXPOSE in the Dockerfile, and the settings 'port' and 'WEBSITES_PORT' in the web app's application settings need to be aligned/configured? I just can't find clear information about this.
I resolved the issue myself: the errors were caused by a huge image (with a BERT model) running on a basic App Service plan. I upgraded to P1V3 and now it runs like a charm, with WEBSITES_PORT=8000 and WEBSITES_CONTAINER_START_LIMIT=1200. Allow about 2 minutes for warm-up.
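On the port-alignment question: the usual rule of thumb (a sketch, not official Azure guidance) is to pick one port, say 8000, and use that same number everywhere: app.run must listen on it on 0.0.0.0, the Dockerfile EXPOSEs it, and WEBSITES_PORT tells Azure to route external traffic to it. In Python, remember that environment variables come back as strings:

```python
import os

# One port everywhere (8000 is an assumption, any free port works):
#   Dockerfile:          EXPOSE 8000
#   Azure app settings:  WEBSITES_PORT = 8000
#   app.run(port=...):   8000
host = os.environ.get('host', '0.0.0.0')     # 'localhost' won't work inside a container
port = int(os.environ.get('port', '8000'))   # env vars are strings; app.run needs an int
use_reloader = os.environ.get('use_reloader', 'false').lower() == 'true'

print(host, port, use_reloader)
# then, in app.py:
# app.run(host=host, port=port, use_reloader=use_reloader, debug=use_reloader)
```

The original snippet passes the raw string from os.environ to app.run and never sets host, which is consistent with the gaierror above.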
I'm trying to run a function in my lisp program. It is a bot that is connected to an IRC channel and with a special command you can query the bot to evaluate a simple lisp command. Because it is extremely dangerous to execute arbitrary code from people on the internet I want the actual evaluation to happen in a VM that is running a docker for every evaluation query the bot gets.
My function looks like this:
(defun run-command (cmd)
  (uiop:run-program
   (list "docker" "run" "--rm" "-it" "my/docker" "sbcl"
         "--noinform" "--no-sysinit" "--no-userinit"
         "--noprint" "--disable-debugger"
         "--eval" (string-trim '(#\^M) (format nil "~A" cmd))
         "--eval" "(quit)")
   :output '(:string :stripped t)))
The idea behind this function is to start a docker that contains SBCL, runs the command via SBCL --eval and prints it to the docker std-out. And this printed string should be the result of run-command.
If I call
docker run --rm -it my/docker sbcl --noinform --no-sysinit --no-userinit --noprint --disable-debugger --eval "(print (+ 2 3))" --eval "(quit)"
on my command line I just get 5 as an result, what is exactly what I want.
But if I run the same command within lisp, with the uiop:run-program function I get
Subprocess #<UIOP/LAUNCH-PROGRAM::PROCESS-INFO {1004FC3923}>
with command ("docker" "run" "--rm" "-it"
"my/docker" "sbcl" "--noinform"
"--no-sysinit" "--no-userinit" "--noprint"
"--disable-debugger" "--eval" "(+ 2 3)")
as an error message, which means the process failed somehow. But I don't know what exactly is wrong here. If I execute, for example, just "ls", I get the output, so the function itself seems to work properly.
Is there some special knowledge I need about uiop:run-program or am I doing something completely wrong?
Thanks in advance.
Edit: It turns out that the -it flag caused issues. After removing the flag a new error emerges: now the bot doesn't have permission to execute docker. Is there a way to give it permission without granting it sudo rights?
There's probably something wrong with the way docker (or SBCL) is invoked here. To get the error message, invoke uiop:run-program with the :error-output :string argument, and then choose the continue restart to actually terminate execution and get the error output printed (if you're running from SLIME or some other REPL). If you call this in a non-interactive environment, you can wrap the call in a handler-bind:
(handler-bind ((uiop/run-program:subprocess-error
(lambda (e) (invoke-restart (find-restart 'continue)))))
(run-command ...))
It turned out the -it flag did indeed cause trouble. After removing it and granting the bot the correct permissions, everything worked out fine.
We have a Dask pipeline in which we basically use a LocalCluster as a process pool. i.e. we start the cluster with LocalCluster(processes=True, threads_per_worker=1). Like so:
dask_cluster = LocalCluster(processes=True, threads_per_worker=1)
with Client(dask_cluster) as dask_client:
    exit_code = run_processing(input_file, dask_client, db_state).value
Our workflow and task parallelization work great when run locally. However, when we copy the code into a Docker container (CentOS-based), the processing completes but we sometimes get the following error as the container exits:
Traceback (most recent call last):
  File "/opt/rh/rh-python36/root/usr/lib64/python3.6/multiprocessing/queues.py", line 240, in _feed
    send_bytes(obj)
  File "/opt/rh/rh-python36/root/usr/lib64/python3.6/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/opt/rh/rh-python36/root/usr/lib64/python3.6/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/opt/rh/rh-python36/root/usr/lib64/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
Furthermore, we get multiple instances of this error which makes me think that the error is coming from abandoned worker processes. Our current working theory is that this is related somehow to the "Docker zombie reaping problem" but we don't know how to fix it without starting from a completely different docker image and we don't want to do that.
Is there a way to fix this using only Dask cluster/client cleanup methods?
You should create the cluster as a context manager as well; it is actually the thing that launches the worker processes, not the Client:
with LocalCluster(processes=True, threads_per_worker=1) as dask_cluster:
    with Client(dask_cluster) as dask_client:
        ...
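Why the with-block matters can be sketched without Dask (the names below are stand-ins; in the real case it is LocalCluster.close() that stops the workers): a context manager guarantees that cleanup runs before interpreter teardown, so worker processes are not abandoned mid-write, which is what produces those BrokenPipeError traces.

```python
from contextlib import contextmanager

events = []

# Hypothetical stand-in for LocalCluster: __exit__ always runs,
# even if the body raises, so workers are stopped deterministically.
@contextmanager
def local_cluster():
    events.append("workers started")
    try:
        yield "cluster"
    finally:
        events.append("workers stopped")  # like LocalCluster.close()

with local_cluster() as cluster:
    events.append("work with " + cluster)
```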
My goal is to import data from CSV-files into OrientDB.
I use the OrientDB 2.2.22 Docker image.
When I try to execute the /orientdb/bin/oetl.sh config.json script within Docker, I get the error: "Can not open storage it is acquired by other process".
I guess this is because the OrientDB service is still running. But if I try to stop it I get the next error:
./orientdb.sh stop
./orientdb.sh: return: line 70: Illegal number: root
or
./orientdb.sh status
./orientdb.sh: return: line 89: Illegal number: root
The only way to use the ./oetl.sh script is to stop the Docker instance and restart it in interactive mode running a shell, but this is awkward: to use the "OrientDB Studio" I then have to stop Docker again and start it in normal mode.
As Roberto Franchini mentioned above, setting the dbURL parameter in the Loader to a remote URL fixed the first issue, "Can not open storage it is acquired by other process".
The issue with ./orientdb.sh still exists, but with the remote-URL approach I don't need to shut down and restart the service anymore.
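For reference, the remote URL goes in the loader section of the ETL config.json; a minimal sketch (the database name mydb is an assumption):

```json
{
  "loader": {
    "orientdb": {
      "dbURL": "remote:localhost/mydb",
      "dbType": "graph"
    }
  }
}
```

With remote: the ETL tool connects through the running server instead of opening the storage files directly, so the two processes no longer fight over the same storage.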
I have googled for three hours but to no avail.
I have an ejabberd installation which is not installed using apt. It is installed from source and there is no program called ejabberd in it. Start and Stop and everything is through ejabberdctl.
It was running perfectly for a month and all of a sudden one day it stopped with the infamous
"Kernel pid terminated" error
Any time I do
sudo ejabberdctl start --node ejabberd#MasterService
An erl_crash.dump file gets generated, and when I try
ejabberdctl
I get
Failed to connect to RPC at node ejabberd#MasterService
Now, what have I tried:
Tried killing all running processes of ejabberd, beam and epmd and starting fresh - DID NOT WORK
Checked /etc/hosts and hostname and all is well. The hostname is listed in the hosts file with its IP
Checked the ejabberdctl.conf file to ensure the host name is indeed right and the node name is right
Checked that the .erlang.cookie file is being created with content in it
One way or another, every web search led me to one of the above.
I have nowhere else to go and don't know where else to look. Any help would be much appreciated.
You'll have to analyze the crash dump to try to guess why it failed.
To carry out this task, Erlang has a special webtool (called, uh, webtool) from which a special application — Crash Dump Viewer — might be used to load a dump and inspect the stack traces of the Erlang processes at the time of the crash.
You have to
Install the necessary packages:
# apt-get install erlang-webtool erlang-observer
Start an Erlang interpreter:
$ erl
(Further actions are taken there.)
Run the webtool. In the simplest case, it will listen on the local host:
webtool:start().
(Notice the period.) It will print back a URL to navigate to in your browser to reach the running tool.
If this happens on a server, and you'd rather have the webtool listen on some non-localhost interface, the call incantation is trickier:
webtool:start(standard_path, [{port, 8888}, {bind_address, {0, 0, 0, 0}}, {server_name, "server.example.com"}]).
The {0, 0, 0, 0} IP spec makes it listen on all interfaces, and you may as well specify more specific octets, like {192, 168, 0, 1}. The server_name clause may use an arbitrary name; this is what will be printed as the server's hostname in the generated URL.
Now connect to the tool with your browser, navigate to the "Start tools" menu entry, start crash dump viewer and make a link to it appear in the tool's top menu. Proceed there and find a link to load the crash dump.
After loading a crash dump, poke around the tool's interface and look at the stack traces of the active Erlang processes. At least one of them should contain something fishy, including an error message; that's what you're looking for to refine your question (or to ask another one on the ejabberd mailing list).
To quit the tool, run
webtool:stop().
in the running Erlang interpreter. Then quit the interpreter either by running
q().
and waiting a bit, or by pressing Ctrl-G and then entering the letter q followed by the Return key.
The relevant links are: the crash dump viewer manual and the webtool manual.