Cannot start dask cluster over SSH - dask

I'm trying to start a dask cluster over SSH, but I am encountering strange errors like this:
Exception in thread Thread-6:
Traceback (most recent call last):
File "/home/localuser/miniconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/home/localuser/miniconda3/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/localuser/miniconda3/lib/python3.6/site-packages/distributed/deploy/ssh.py", line 57, in async_ssh
banner_timeout=20) # Helps prevent timeouts when many concurrent ssh connections are opened.
File "/home/localuser/miniconda3/lib/python3.6/site-packages/paramiko/client.py", line 329, in connect
to_try = list(self._families_and_addresses(hostname, port))
File "/home/localuser/miniconda3/lib/python3.6/site-packages/paramiko/client.py", line 200, in _families_and_addresses
hostname, port, socket.AF_UNSPEC, socket.SOCK_STREAM)
File "/home/localuser/miniconda3/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
I'm starting the cluster like this:
$ dask-ssh --ssh-private-key ~/.ssh/cluster_id_rsa \
--hostfile ~/dask-hosts.txt \
--remote-python "~/miniconda3/bin/python3.6"
My dask-hosts.txt looks like this:
localuser@127.0.0.1
remoteuser@10.10.4.200
...
remoteuser@10.10.4.207
I get the same error with and without the localhost line.
I have checked the SSH setup: I can log in to all the nodes using a public-key setup (the key is unencrypted, to avoid decryption prompts). What am I missing?

The error indicates that name resolution is the culprit. Most likely this is happening because of the inclusion of usernames in your dask-hosts.txt. According to its documentation, the host file should contain only hostnames/IP addresses:
--hostfile PATH    Textfile with hostnames/IP addresses
You can use --ssh-username to set a username (although only a single one).
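For example, a corrected setup might look like this (a sketch based on the hosts in the question; since --ssh-username takes a single value, it assumes the same user account exists on every node):

$ cat ~/dask-hosts.txt
10.10.4.200
...
10.10.4.207

$ dask-ssh --ssh-username remoteuser \
    --ssh-private-key ~/.ssh/cluster_id_rsa \
    --hostfile ~/dask-hosts.txt \
    --remote-python "~/miniconda3/bin/python3.6"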

Related

Permission error accessing USB from homeassistant docker

I am running homeassistant in a docker container on a RPi 4 with Raspbian. I am using tribut's script to avoid the need to run the docker image as root. This all works dandy. But now I am trying to add the dsmr integration and I am not succeeding. The integration needs to connect to the "Slimme meter" via USB. However, I get a permission error. My knowledge of both docker and linux privileges is too limited to know where to start debugging this. Does anyone have some pointers for me?
This is the error message homeassistant is throwing at me:
Logger: homeassistant.components.dsmr
Source: components/dsmr/config_flow.py:93
Integration: dsmr (documentation, issues)
First occurred: 21:18:17 (1 occurrences)
Last logged: 21:18:17
Error connecting to DSMR
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/serial/serialposix.py", line 322, in open
self.fd = os.open(self.portstr, os.O_RDWR | os.O_NOCTTY | os.O_NONBLOCK)
PermissionError: [Errno 13] Permission denied: '/dev/ttyUSB0'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/dsmr/config_flow.py", line 93, in validate_connect
transport, protocol = await asyncio.create_task(reader_factory())
File "/usr/local/lib/python3.9/site-packages/serial_asyncio/__init__.py", line 445, in create_serial_connection
serial_instance = serial.serial_for_url(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/serial/__init__.py", line 90, in serial_for_url
instance.open()
File "/usr/local/lib/python3.9/site-packages/serial/serialposix.py", line 325, in open
raise SerialException(msg.errno, "could not open port {}: {}".format(self._port, msg))
serial.serialutil.SerialException: [Errno 13] could not open port /dev/ttyUSB0: [Errno 13] Permission denied: '/dev/ttyUSB0'
After some research I figured out that I needed to add the user to an extra group named dialout, because only members of that group are allowed to access the USB ports (as well as some other devices).
First I figured out the group id of the dialout group on the host machine (the machine running the docker container) by running
cat /etc/group | grep dialout
It returned 20 in my case. Luckily, tribut's script can add the user to an extra group via the environment variable EXTRA_GID. So the relevant lines in the docker-compose file for accessing USB (when using tribut's script) are:
devices:
  - /dev/ttyUSB0:/dev/ttyUSB0
environment:
  - PUID=1000
  - PGID=1000
  - EXTRA_GID=20  # this is the ID of the group 'dialout'
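To verify that the fix took effect, you can check from the host that the user inside the container actually picked up the extra group and can see the device (a sketch; replace homeassistant with your container's name):

$ docker exec -it homeassistant id                 # output should include GID 20 (dialout)
$ docker exec -it homeassistant ls -l /dev/ttyUSB0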

Connecting to external networks from inside minikube VM behind proxy in docker container

I have an active kubernetes cluster inside a Minikube VM (using VirtualBox as the driver), so when deploying new containers I am able to download the images, as this connection is already laid out using an istio service. Now, if I ssh into my minikube VM, first of all I am not able to wget https content, though http content works after setting proxies and no_proxies. But if I want to access any link from inside my containers, say a simple pod with a python image and the urllib library, and connect from inside this pod and print the contents of some link (e.g. http://python.org), I am not able to do so; all I get is a "no route to host" error in the logs, which points to some problem with the connection due to the proxies.
def basic():
    import urllib.request
    print("inside basic funtion")
    with urllib.request.urlopen('http://python.org/') as response:
        html = response.read()
        print(html)
This is the python code I am running from inside my container as a pipeline component.
The most recent error I got:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/urllib/request.py", line 1317, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/usr/local/lib/python3.7/http/client.py", line 1229, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1275, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1224, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1016, in _send_output
self.send(msg)
File "/usr/local/lib/python3.7/http/client.py", line 956, in send
self.connect()
File "/usr/local/lib/python3.7/http/client.py", line 928, in connect
(self.host,self.port), self.timeout, self.source_address)
File "/usr/local/lib/python3.7/socket.py", line 727, in create_connection
raise err
File "/usr/local/lib/python3.7/socket.py", line 716, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 11, in <module>
File "<string>", line 3, in basic
File "/usr/local/lib/python3.7/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/local/lib/python3.7/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/usr/local/lib/python3.7/urllib/request.py", line 543, in _open
'_open', req)
File "/usr/local/lib/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/usr/local/lib/python3.7/urllib/request.py", line 1345, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/usr/local/lib/python3.7/urllib/request.py", line 1319, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 110] Operation timed out>
I have started minikube as:
minikube start --cpus 6 --memory 12288 --disk-size=80g --extra-config=apiserver.service-account-issuer=api --extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/apiserver.key --extra-config=apiserver.service-account-api-audiences=api --kubernetes-version v1.14.0
after setting the env variables as well.
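For reference, the env variables in question are the standard proxy ones, exported before minikube start so the VM inherits them; a sketch with placeholder proxy values (192.168.99.0/24 is the default VirtualBox subnet used by minikube, adjust as needed):

export HTTP_PROXY=http://<proxy-server>:<proxy-port>
export HTTPS_PROXY=http://<proxy-server>:<proxy-port>
export NO_PROXY=localhost,127.0.0.1,192.168.99.0/24,10.96.0.0/12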
Update:
I created a different container just to check curl from inside the component, as follows (I am using the kfp library for creating containers):
def curl_op(text):
    return dsl.ContainerOp(
        name='curl',
        image='tutum/curl',
        command=['sh', '-c'],
        arguments=['curl -x http://<proxy-server>:<proxy-port> "$0"', text]
    )
Using the above proxy argument I am able to connect to external links, which again makes it clear that I need to create the containers with the proxies set.
So, for running the Python code I mentioned above as a pipeline component, I added the environment variables using the os library, and this individual piece was then able to connect to external networks.
Updated python code:
def basic():
    import urllib.request
    import os
    proxy = 'http://proxy-path:port'
    os.environ['http_proxy'] = proxy
    os.environ['HTTP_PROXY'] = proxy
    os.environ['https_proxy'] = proxy
    os.environ['HTTPS_PROXY'] = proxy
    print("inside basic funtion")
    with urllib.request.urlopen('http://python.org/') as response:
        html = response.read()
        print(html)
And if the docker image is created from scratch, without the help of the pipeline library function, then we just need to add the env details to our dockerfile in the usual way, after the base image call:
ENV HTTP_PROXY http://proxy-path:port
ENV HTTPS_PROXY http://proxy-path:port

Celery with redis: instance state changed (master -> replica?)

I am using celery for scheduled tasks and a redis server for data backup within docker containers. My jobs run correctly sometimes, but I randomly get the following error and the celery beat task can no longer make progress.
[2020-09-16 21:01:07,863: CRITICAL/MainProcess] Unrecoverable error: ResponseError('UNBLOCKED force unblock from blocking operation, instance state changed (master -> replica?)',)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/celery/worker/worker.py", line 205, in start
self.blueprint.start(self)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 369, in start
return self.obj.start()
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 318, in start
blueprint.start(self)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 599, in start
c.loop(*c.loop_args())
File "/usr/local/lib/python3.6/site-packages/celery/worker/loops.py", line 83, in asynloop
next(loop)
File "/usr/local/lib/python3.6/site-packages/kombu/asynchronous/hub.py", line 364, in create_loop
cb(*cbargs)
File "/usr/local/lib/python3.6/site-packages/kombu/transport/redis.py", line 1088, in on_readable
self.cycle.on_readable(fileno)
File "/usr/local/lib/python3.6/site-packages/kombu/transport/redis.py", line 359, in on_readable
chan.handlers[type]()
File "/usr/local/lib/python3.6/site-packages/kombu/transport/redis.py", line 739, in _brpop_read
**options)
File "/usr/local/lib/python3.6/site-packages/redis/client.py", line 892, in parse_response
response = connection.read_response()
File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 752, in read_response
raise response
redis.exceptions.ResponseError: UNBLOCKED force unblock from blocking operation, instance state changed (master -> replica?)
Any help will be appreciated. Let me know in case you need more details.
As I stated above, the issue was happening randomly and disturbing our app in production, so I decided to spend time on a solution. I came across many propositions, such as hardware issues (memory or CPU), but this is what definitively solved the issue: I was not using authentication on the redis server. Those interested in setting a redis password easily in docker can refer to this Docker Tip. After setting a password on redis, the url looks like REDIS_URL=redis://user:myPass@localhost:6379
You can try this answer: https://stackoverflow.com/a/74141982/1635525
TL;DR: Adding restart: unless-stopped to your docker-compose helps to recover from celery crashes, including the ones caused by redis downtime/maintenance.
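For example, the relevant pieces of a docker-compose file might look like this (a sketch with placeholder service names, app module and password, not the original poster's exact setup):

services:
  redis:
    image: redis:6
    command: redis-server --requirepass myPass
    restart: unless-stopped
  worker:
    build: .
    command: celery -A proj worker -B --loglevel=info
    environment:
      - REDIS_URL=redis://:myPass@redis:6379/0
    restart: unless-stopped
    depends_on:
      - redis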

What is the root cause of distributed.scheduler.KilledWorker exception?

I'm trying to run a Dask job on a YARN cluster. This job reads from and writes to HDFS using the hdfs3 library.
When I run it on a cluster without a Kerberos security layer, it runs fine.
But on a cluster with a Kerberos security layer, I had to implement the solution described here to avoid Kerberos-related errors. Running the same job led to the following error:
File "/fsstreamdevl/f6_development/acoustics/acoustics_analysis_dask/acoustics_analytics/task_runner/task_runner.py", line 123, in run
dask.compute(tasks)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/dask/base.py", line 446, in compute
results = schedule(dsk, keys, **kwargs)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 2568, in get
results = self.gather(packed, asynchronous=asynchronous, direct=direct)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 1822, in gather
asynchronous=asynchronous,
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 753, in sync
return sync(self.loop, func, *args, **kwargs)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/utils.py", line 331, in sync
six.reraise(*error[0])
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/utils.py", line 316, in f
result[0] = yield future
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/tornado/gen.py", line 735, in run
value = future.result()
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/tornado/gen.py", line 742, in run
yielded = self.gen.throw(*exc_info) # type: ignore
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 1653, in _gather
six.reraise(type(exception), exception, traceback)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
distributed.scheduler.KilledWorker: ('__call__-6af7aa29-2a09-45f3-a5e2-207c06562672', <Worker 'tcp://10.194.211.132:11927', memory: 0, processing: 1>)
Strangely enough, running the same solution on the former cluster, without a Kerberos security layer, gives the same error.
Looking at the YARN application logs, I see the following traceback, but cannot tell what it means.
distributed.nanny - INFO - Closing Nanny at 'tcp://10.194.211.133:17659'
Traceback (most recent call last):
File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/queues.py", line 242, in _feed
send_bytes(obj)
File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
End of LogType:dask.worker.log
I do not see any explicit messages in the logs about low memory. Would anyone know how to diagnose this issue?
hdfs3 is not actively maintained any more. You have two main choices for interacting with HDFS:
pyarrow's hdfs driver (via the libhdfs JNI library), which requires you to have the java and hadoop requirements correctly set up and available to the session calling it
webhdfs, such as in fsspec, which does not need java libraries and can interact with kerberos if HTTP authentication is allowed on your system
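A minimal sketch of the webhdfs route via fsspec, assuming WebHDFS is reachable on the namenode (port 9870 on Hadoop 3.x, 50070 on 2.x), SPNEGO/Kerberos HTTP authentication is enabled, and a valid ticket exists from kinit; the host and path below are placeholders:

import fsspec

# kerberos=True makes fsspec authenticate over SPNEGO (needs the requests-kerberos package)
fs = fsspec.filesystem("webhdfs", host="namenode.example.com", port=9870, kerberos=True)

with fs.open("/user/someuser/some/file.parquet", "rb") as f:
    data = f.read()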

Broken Pipe - Cannot connect to openERP 6.0.4 server using port 8070

I have an issue whereby all clients cannot connect to the openERP 6.0.4 server on port 8070.
It happens once in a while (every 4-6 months). I wonder what the problem is; I checked the network traffic, processor and memory of the server and nothing was wrong at all, yet it has happened a few times.
When I check the server logs, the errors are the same each time I hit this issue, as below:
[2013-04-23 12:33:53,258][Server] ERROR:web-services:netrpc: cannot deliver exception message to client
Traceback (most recent call last):
File "/opt/openerp/server/bin/service/netrpc_server.py", line 89, in run
ts.mysend(e, exception=True, traceback=tb_s)
File "/opt/openerp/server/bin/tiny_socket.py", line 64, in mysend
self.sock.sendall('%8d%s%s' % (len(msg), exception and "1" or "0", msg))
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
error: [Errno 32] Broken pipe
[2013-04-23 13:45:56,273][Server] ERROR:http:Could not run do_POST
Traceback (most recent call last):
File "/opt/openerp/server/bin/service/websrv_lib.py", line 299, in _handle_one_foreign
method()
File "/usr/lib/python2.7/SimpleXMLRPCServer.py", line 519, in do_POST
self.send_response(200)
File "/usr/lib/python2.7/BaseHTTPServer.py", line 396, in send_response
(self.protocol_version, code, message))
File "/usr/lib/python2.7/socket.py", line 324, in write
self.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 104] Connection reset by peer
[2013-04-23 13:45:56,647][Server] ERROR:http:code 500, message Internal error
[2013-04-23 13:45:56,650][Server] ERROR:init:Server error in request from ('192.168.0.132', 1880):
Traceback (most recent call last):
File "/opt/openerp/server/bin/service/websrv_lib.py", line 528, in _handle_request2
self.process_request(request, client_address)
File "/usr/lib/python2.7/SocketServer.py", line 310, in process_request
self.finish_request(request, client_address)
File "/usr/lib/python2.7/SocketServer.py", line 323, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/opt/openerp/server/bin/service/websrv_lib.py", line 246, in init
SocketServer.StreamRequestHandler.init(self,request,client_address,server)
File "/usr/lib/python2.7/SocketServer.py", line 641, in init
self.finish()
File "/usr/lib/python2.7/SocketServer.py", line 694, in finish
self.wfile.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
Can anyone help me with this?
A broken-pipe error is a typical socket-related error. It may happen if the connection from the internet to the server is too slow.
I suggest using an apache proxy to make the local server available to the internet, mapping the local server localhost:8069 to www.wxample.net:9000 with a VirtualHost setting in apache. It may work for you.
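Something along these lines in the Apache configuration would do the mapping (a sketch; it assumes mod_proxy and mod_proxy_http are enabled and reuses the example hostname and ports above):

<VirtualHost *:9000>
    ServerName www.wxample.net
    ProxyPreserveHost On
    ProxyPass / http://localhost:8069/
    ProxyPassReverse / http://localhost:8069/
</VirtualHost>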
For more information, have a look at this link:
https://bugs.launchpad.net/openerp-web/+bug/927793
It may be helpful for you.
