gRPC Name Resolution Failure - docker

While trying to run tensorflow-serving with Docker, I get the following error when issuing a client request over gRPC with the following command:
`python client.py --server=172.17.0.2/16:9000 --image=./test_images/image2.jpg`
`debug_error_string = "{"created":"@1551888435.208113000","description":"Failed to create subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":2267,"referenced_errors":[{"created":"@1551888435.208109000","description":"Name resolution failure","file":"src/core/ext/filters/client_channel/request_routing.cc","file_line":165,"grpc_status":14}]}"`
Information about my environment:
OS: macOS; virtual env.: Anaconda 3; Python 3.6; gRPC/tools 1.19
Would you please help me in resolving the issue?

This happens when the channel is in TRANSIENT_FAILURE and the load balancing policy can't find any ready backend to send the request.
Please file an issue on https://github.com/grpc/grpc/ detailing what you did, hopefully with more log/tracing context, so that we can better help you.

The server address 172.17.0.2/16 includes a CIDR suffix (/16), so it is not a valid host address and the resolver cannot handle it; that appears to be what causes the problem. You can use localhost instead.
So the command for running client.py can be:
python client.py --server=localhost:9000 --image=./test_images/image2.jpg
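If you want to sanity-check connectivity independently of TensorFlow Serving, a minimal sketch using only the plain grpc package could look like this (the host/port and the 5-second timeout are assumptions based on the question):

import grpc

# Open a channel to the serving container published on localhost:9000 (assumed).
channel = grpc.insecure_channel("localhost:9000")
try:
    # Wait until the channel leaves TRANSIENT_FAILURE/CONNECTING and becomes READY.
    grpc.channel_ready_future(channel).result(timeout=5)
    print("channel is ready")
except grpc.FutureTimeoutError:
    print("could not connect: the channel never became ready")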

Related

DTLSv1_listen unable to accept second client in a docker container

I'm experiencing an issue with OpenSSL/DTLS server.
Environment: Docker container based on CentOS 7
OpenSSL version: OpenSSL-1.1.1d
A non-blocking DTLS server using DTLSv1_listen on a UDP socket with SO_REUSEADDR is unable to accept a second
client connection while it has already accepted a client connection and is serving it.
When the first client has finished, the second client connection is accepted.
I have used the dtls_udp_echo.c (taken from http://web.archive.org/web/20150617012520/http://sctp.fh-muenster.de/dtls-samples.html ) to carry out the test and reproduce the issue.
The test application was compiled and executed within a Docker container with CentOS 7 as the base image, but the behaviour has been observed with other base images too (e.g. Red Hat, Ubuntu, Debian, SLES).
The same application running on bare metal works without any issue.
Is there any known compatibility issue between Docker and OpenSSL/DTLS?
Is there any specific configuration to be done to overcome this issue?
Best Regards
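For reference, the accept pattern that sample server implements (a listening UDP socket with SO_REUSEADDR, plus a second connected socket created on the same local port for each accepted client) can be sketched in plain Python, leaving the actual DTLS handshake aside; the port and addresses below are illustrative assumptions:

import socket

# Listening socket, analogous to the one handed to DTLSv1_listen().
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 4433))  # illustrative port

# Block until the first datagram of a new client arrives.
data, client_addr = listener.recvfrom(2048)

# Per-client socket: bound to the same local port and connect()ed to the peer,
# so later datagrams from that client no longer hit the listener.
conn = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
conn.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
conn.bind(("0.0.0.0", 4433))
conn.connect(client_addr)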

Connect Grafana to Kapua's Elasticsearch

I have Kapua (as a Docker container on my PC) and Kura on a Raspberry Pi.
I managed to connect them, run the example publisher, and correctly receive the data on Kapua.
Now I would like to view the data via Grafana (Docker container) by linking it to Kapua's Elasticsearch (Docker container).
I tried to link them by pointing Grafana at the Elasticsearch address localhost:9200 and entering the Kapua credentials, but it keeps returning a 502 Bad Gateway error.
Could anyone help me?
Thanks in advance.
By default Elasticsearch in Kapua has no credentials.
The capability to configure them has been introduced with https://github.com/eclipse/kapua/pull/2685 but is not yet released; it will ship in Kapua 1.1.0.
Have you tried without credentials?
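As a quick check, you could first query Elasticsearch directly with no credentials at all before wiring Grafana in; the sketch below assumes Kapua's Elasticsearch container is reachable on localhost:9200 as in the question and that the requests package is available:

import requests

# No authentication: by default Kapua's Elasticsearch has no credentials.
resp = requests.get("http://localhost:9200")
print(resp.status_code)
print(resp.json().get("cluster_name"))

If that returns 200, configuring the Grafana data source without a user/password would be the next thing to try.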

Boot-clj connection refused

When attempting to run Boot inside Docker, using the adzerk/boot-clj image, I receive connection refused errors.
Specifically, when the container starts up, boot is started, and then a stack trace is output. The trace (which is not easy to copy and paste between computers with no connectivity) essentially relates to downloading https://github.com/boot-clj/boot/releases/download/2.7.2/boot.jar and receiving "Connection refused" errors.
I'm asking, and answering, this question in the hope that it might help someone else.
Where to start?
My main problem was with a Docker + Clojure + Boot setup, specifically when running “boot” from inside the container. Doing this spewed out a stack trace. This is where my journey begins.
I’m using the adzerk/boot-clj image. I’ve used it locally (OSX) without issue, the problem I experienced was in using a VM (CentOS 7) hosted within a corporate data center.
docker run -ti adzerk/boot-clj
Issuing this starts up the container, the entry point is Boot, and it starts pulling down some jars, specifically boot.jar from Github. The resulting stack trace details several problems, but the crux of it was
“java.net.ConnectException: Connection refused” (connecting to Clojars.org:443)
Hmmm…
So instead of running Boot straight away in the container, I specified the container entry point as "--entrypoint bash" so I could prod around a little.
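Something along these lines (flag placement is from memory):
docker run -ti --entrypoint bash adzerk/boot-clj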
So, wget - connection refused.
What about without Docker in the way. Same thing. Connection refused.
After a little wrangling with the network team, I found that the "https_proxy" env variable needs to be set on CentOS to route traffic out to the internet. A very specific issue to my situation.
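For example (the proxy host below is a placeholder):
export https_proxy=http://proxy.example.com:80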
However….
wget is now fine, both on the host and inside the adzerk/boot-clj container. Boot, however, was not.
In an effort to simplify things even more, I took Docker out of the equation entirely, and used boot locally.
Installed java-1.8.0-openjdk.x86_64, installed Boot. Same problem.
So I dug around a little, and found this: https://github.com/boot-clj/boot-bin/issues/2
This was a start. It mentions setting BOOT_JVM_OPTIONS, specifically https.proxyHost and https.proxyPort.
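i.e. something like this, with placeholder proxy host/port:
export BOOT_JVM_OPTIONS="-Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=80"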
It still didn’t work… Arrrg.
OK, let’s take Boot out of the equation.
I wrote a very simple test harness in Java that connects to https://clojars.org and attempts to read the index page, copied from https://docs.oracle.com/javase/tutorial/networking/urls/readingWriting.html, and set the JVM_OPTS.
It still fails. “Connection refused”
…. Weird beard.
I finally stumbled on this SO question - https://stackoverflow.com/questions/43695299/java-httpurlconnection-works-on-windows-and-fails-on-linux - specifically the answer from Stephen C
“Java doesn't necessarily respect your system's default proxy settings. Since you are able to "curl" the URL on the Linux machine, the most likely explanation is that Java is not using the proxy that you have configured. The following links explains various ways to configure the proxies for Java:”
So taking the first link - https://stackoverflow.com/questions/120797/how-do-i-set-the-proxy-to-be-used-by-the-jvm - and the answer from Leonel
I issued “java -Dhttps.proxyHost=xxx -Dhttps.proxyPort=80 HelloWorld”
I get an error, but a different one. This is progress. “Unable to tunnel through proxy”
A quick Google of this led me here: http://www.oracle.com/technetwork/java/javase/8u111-relnotes-3124969.html - "Disable Basic authentication for HTTPS tunneling"
So I updated it to: java -Dhttps.proxyHost=xxx -Dhttps.proxyPort=80 -Djdk.http.auth.tunneling.disabledSchemes="" HelloWorld
Profit.
Info:
java -version
openjdk version "1.8.0_144"
OpenJDK Runtime Environment (build 1.8.0_144-b01)
OpenJDK 64-Bit Server VM (build 25.144-b01, mixed mode)
Sorry for all my profanity Boot.

Wildfly error: Could not start http listener

I'm new to Wildfly and I hope you guys can help me with this problem:
I'm following this tutorial on how to install WildFly 8, and when I try to execute step 4 I get the following errors:
I've been googling for a while now and I can't find an answer. I've tried with JDK 7 and 8, no changes; I'm using admin permissions; I've even tried to download WildFly again and still no changes.
More experienced co-workers have seen this and don't have a clue about what's going on.
Can you help me? Thanks
The tutorial you linked to has WildFly configured to use the default port 8080. Most likely, you have another process or service running which is already using port 8080. Try to find out what process it is and stop it, or try configuring WildFly to use a different port.
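For example, on Linux you can see what is already bound to the port, and, if you would rather move WildFly instead, start it with a port offset (both commands are standard; the offset value is just an example):
sudo lsof -i :8080
./bin/standalone.sh -Djboss.socket.binding.port-offset=100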
Try restarting the machine or enabling IPv6 on the machine; this should resolve the error.
Those having the same problem should check what else is using port 9990 on your Windows system. TCPView is a good tool to find the guilty party. One possible common cause in this case is NVIDIA Network Service (NvNetworkService.exe).
If that's the case, just find it in your Windows services list and stop/disable it. The service itself is responsible for checking for Nvidia driver updates, so any time you want it back just turn it on manually.
In my case, I inadvertently added an AJP socket binding while using the standalone jboss-cli utility:
[standalone#localhost:9990 /] /subsystem=undertow/server=default-server/ajp-listener=ajp:add(socket-binding=ajp)
This led to an 'already in use' error that prevented any app from starting and surfaced as a 503 error through an Apache web server.
I deleted the binding:
/subsystem=undertow/server=default-server/ajp-listener=ajp:remove
And then everything worked normally.
I too had the same issue. After analysis it was found that the SSL port (443 in my case) was causing it. I just terminated the processes that were running on 443, restarted WildFly, and everything worked fine after that.
I had faced the same issue with WildFly 8.2.1.
Port 8080 was free, so that solution didn't work for me.
Try the procedure below, as it helped resolve my issue.
Add the lines below to your server's /etc/sysctl.conf file:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
No restart is required for this solution.
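To load the new values from /etc/sysctl.conf without rebooting, you can run, for example:
sudo sysctl -p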

cqlsh connection error: Could not connect to localhost:9160

I am totally new to Cassandra and ran into the following error when using cqlsh:
cqlsh
Connection error: Could not connect to localhost:9160
I read the solutions from the following link and tried them all. But none of them works for me.
How to connect Cassandra to localhost using cqlsh?
I am working on CentOS 6.5 and installed Cassandra 2.0 using yum install dsc20.
I ran into the same issue running the same OS and same install method. While the cassandra service claims that it starts OK, running service cassandra status told me that the process was dead. Here are the steps I took to fix it:
The log file at /var/log/cassandra/cassandra.log told me that my heap size was too small. Manually set the heap size in /etc/cassandra/conf/cassandra-env.sh:
MAX_HEAP_SIZE="1G"
HEAP_NEWSIZE="256M"
Tips on setting the heap size for your system can be found here
Next, the error log claimed the stack size was too small. Once again in /etc/cassandra/conf/cassandra-env.sh find a line that looks like JVM_OPTS="$JVM_OPTS -Xss128k" and raise that number to JVM_OPTS="$JVM_OPTS -Xss256k"
Lastly, the log complained that the local URL was malformed and threw a Java exception. I found the answer to the last part here. Basically, you want to manually bind your server's hostname in your /etc/hosts file:
127.0.0.1 localhost localhost.localdomain server1.example.com
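After making these changes, it's worth restarting the service and confirming it stays up before retrying cqlsh; on this kind of CentOS install that would be something like:
sudo service cassandra restart
service cassandra status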
Hope this helps~
Change, in /etc/cassandra/cassandra.yaml:
# Whether to start the thrift rpc server.
start_rpc: false
to
start_rpc: true
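Then restart Cassandra so the Thrift RPC server on port 9160 actually starts, and retry cqlsh, for example:
sudo service cassandra restart
cqlsh localhost 9160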
