gRPC endpoint with non-root path - url

Maybe (hopefully) I'm missing something very simple, but I can't seem to figure this out.
I have a set of gRPC services that I would like to put behind an nghttpx proxy. For this I need to be able to configure my client with a channel on a non-root URL, e.g.:
channel = grpc.insecure_channel('localhost:50051/myapp')
stub = MyAppStub(channel)
This didn't work through the proxy out of the box (it just hangs), so I tested with a server listening on the sub-path directly.
server = grpc.server(executor)
service_pb2.add_MyAppServicer_to_server(MyAppService(), server)
server.add_insecure_port('{}:{}/myapp'.format(hostname, port))
server.start()
I get the following:
E1103 21:00:13.880474000 140735277326336 server_chttp2.c:159]
{"created":"#1478203213.880457000","description":"OS Error",
"errno":8,"file":"src/core/lib/iomgr/resolve_address_posix.c",
"file_line":115,"os_error":"nodename nor servname provided, or not known",
"syscall":"getaddrinfo","target_address":"[::]:50051/myapp"}
So the question is: is it possible to create gRPC channels on non-root URLs?

As confirmed here, this is not possible. I will route traffic via subdomains in nghttpx.
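For reference, the subdomain setup ends up looking something like this on the client side (a minimal sketch; the hostname is made up and assumes nghttpx routes each subdomain to the right backend):
import grpc
# Hypothetical subdomain routed by nghttpx to the myapp backend;
# the channel target is a plain host:port with no path component.
channel = grpc.insecure_channel('myapp.example.com:50051')
stub = MyAppStub(channel)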

Related

Blazor running in Docker: how to get the client IP?

string loginip = Request.Headers["X-Forwarded-For"].FirstOrDefault(); // returns nothing
string loginip = HttpContext.Connection.RemoteIpAddress?.ToString(); // only returns the Docker network IP
Is there any other way?
You're on the right track using the X-Forwarded-For header.
It's the responsibility of the process that's forwarding the HTTP Request to the container to add the value(s) to that header.
This normally involves using a reverse proxy such as nginx.
https://www.thepolyglotdeveloper.com/2017/03/nginx-reverse-proxy-containerized-docker-applications/

How to capture gatling.io HTTPS / WSS calls through Fiddler?

I'm running gatling.io to load test my server, but I would like to be able to view the calls so I can debug portions of the script. I know I can have it write all the logs to the console, but viewing them through Fiddler is nicer.
I searched for a few hours until I found a solution, and this is by far the easiest: modify your Gatling Scala script's HTTP configuration object to use Fiddler's proxy.
Just like this:
val httpConf = http
  .proxy(
    Proxy("127.0.0.1", 8888)
      .httpsPort(8888)
  )

Graphstory and Neo4jphp

I have successfully used the neo4jphp library with GrapheneDB following these simple steps from the documentation (GrapheneDB does not require HTTPS):
require('vendor/autoload.php'); // or your custom autoloader
// Connecting to a different port or host
$client = new Everyman\Neo4j\Client(url, port);
// Connecting using HTTP and Basic Auth
$client->getTransport()
->setAuth('username', 'password');
// Test connection to server
print_r($client->getServerInfo());
However, when I try to connect to a Graph Story instance, which requires HTTPS, as follows (both instances respond fine if I call the REST API from the browser, the Neo4j console works, etc.):
require('vendor/autoload.php'); // or your custom autoloader
// Connecting to a different port or host
$client = new Everyman\Neo4j\Client(url, port);
// Connecting using HTTPS and Basic Auth
$client->getTransport()
->useHttps()
->setAuth('username', 'password');
// Test connection to server
print_r($client->getServerInfo());
I get the following error. The two setups should behave identically, and I can't see why this one fails.
Fatal error: Uncaught exception 'Everyman\Neo4j\Exception' with message 'Can't open connection to https://neo-54f500bf2cc7e-364459c455.do-stories.graphstory.com:7473/db/data/' in /Applications/XAMPP/xamppfiles/htdocs/graphene/vendor/everyman/neo4jphp/lib/Everyman/Neo4j/Transport/Curl.php:91
Stack trace:
#0 /Applications/XAMPP/xamppfiles/htdocs/graphene/vendor/everyman/neo4jphp/lib/Everyman/Neo4j/Transport.php(95): Everyman\Neo4j\Transport\Curl->makeRequest('GET', '/', NULL)
#1 /Applications/XAMPP/xamppfiles/htdocs/graphene/vendor/everyman/neo4jphp/lib/Everyman/Neo4j/Command.php(64): Everyman\Neo4j\Transport->get('/', NULL)
#2 /Applications/XAMPP/xamppfiles/htdocs/graphene/vendor/everyman/neo4jphp/lib/Everyman/Neo4j/Client.php(828): Everyman\Neo4j\Command->execute()
#3 /Applications/XAMPP/xamppfiles/htdocs/graphene/vendor/everyman/neo4jphp/lib/Everyman/Neo4j/Client.php(464): Everyman\Neo4j\Client->runCommand(Object(Everyman\Neo4j\Command\GetServerInfo))
#4 /Applications/XAMPP/xamppfiles/htdocs/graphene/story.php(20): Every in /Applications/XAMPP/xamppfiles/htdocs/graphene/vendor/everyman/neo4jphp/lib/Everyman/Neo4j/Transport/Curl.php on line 91
It seems to me that neo4jphp is not configuring the TLS part in the cURL request.
I fixed it by downloading the certificate bundle from http://curl.haxx.se/docs/caextract.html (ca_bundle.crt) and adding the following line to Everyman\Neo4j\Transport\Curl.php, function makeRequest:
$options[CURLOPT_CAINFO] = "your/path/to/ca-bundle.crt";
I've created an issue on GitHub for this: https://github.com/jadell/neo4jphp/issues/171
I'm the CTO/Lead Dev at Graph Story. Sorry to hear you're having trouble. I've actually just taken a look at your instance and things seem OK from the server side.
Without additional info it's hard to say if there's an issue with your sample connection code. Considering that you've used that same library to connect to GrapheneDB in the past, I think the chance of an error in the sample code is low.
Based on the current state of your instance and on the exception thrown by Neo4jPHP, my guess is that port 7473 may be blocked on your network. You can confirm that with local tech support or by switching to a network where you know port 7473 is open and trying to connect again.
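If you want to rule out the blocked-port theory yourself, a quick connectivity check from the affected machine looks something like this (a Python sketch, independent of the PHP client; it only tests TCP reachability of port 7473, not the Neo4j API itself):
import socket

host = 'neo-54f500bf2cc7e-364459c455.do-stories.graphstory.com'
port = 7473

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
try:
    s.connect((host, port))
    print('port %d is reachable' % port)
except (socket.timeout, socket.error) as e:
    print('port %d is NOT reachable: %s' % (port, e))
finally:
    s.close()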

Connect to a password-protected FTP through a proxy in Ruby

I'm trying to upload to my server (on Heroku) a file stored on a password-protected FTP server.
The problem is that this FTP server doesn't have my production IP address on its whitelist (and I can't add it), so I have to use a proxy to connect my Rails app to it.
I tried this code:
proxy_uri = URI(ENV['QUOTAGUARDSTATIC_URL'] || 'http://login:password@myproxy.com:9293')
Net::HTTP::Proxy(proxy_uri.host, proxy_uri.port, "login", "password").start('ftp://login:password@ftp.website.com') do |http|
  http.get('/path/to/myfile.gz').body
end
But my http.get returns lookup ftp: no such host.
I also have this code for a plain FTP download, but I don't know how to make it work with a proxy:
ftp = Net::FTP.new('ftp.myftp.com', 'login', 'password')
ftp.chdir('path/to')
ftp.getbinaryfile('myfile.gz', 'public/myfile.gz', 1024)
ftp.close
Thanks in advance.
I realise that you asked this question over 6 months ago, but I recently had a similar issue and found that this (unanswered) question is the top Google result, so I thought I would share my findings.
mudasobwa's comment below your original post has a link to the net/ftp documentation which explains how to use a SOCKS proxy...
Although you don't mention a specific requirement for an HTTP proxy in your original post, it seems obvious to me that an HTTP proxy is what you were trying to use. As I'm sure you're aware, this makes the SOCKS documentation irrelevant here.
The following code has been tested on ruby-1.8.7-p357 using an HTTP proxy that does not require authentication:
require 'net/http'

file = File.open('myfile.gz', 'w')
http = Net::HTTP.start('myproxy.com', '9293')
# Ask the HTTP proxy to fetch the FTP URL on our behalf
resp, data = http.get('ftp://login:password@ftp.website.com')
file.write(data) if resp.code == "200"
file.close unless file.nil?
Source
This should give you a good starting point to figure the rest out for yourself.
To get you going, I would guess that you could use user:pass@myproxy.com for basic auth, or perhaps send a Proxy-Authorization header in your GET request.

How to check server connection

I want to check my server connection to know whether it is available or not, so I can inform the user.
How can I send a packet or message to the server? (It's not a SQL server; it's a server that hosts some services.)
Thanks in advance.
With all the possibilities for firewalls blocking ICMP packets or specific ports, the only way to guarantee that a service is running is to do something that uses that service.
For instance, if it were a JDBC server, you could execute a non-destructive SQL query, such as select * from sysibm.sysdummy1 for DB2. If it's an HTTP server, you could issue a GET request for index.htm.
If you actually have control over the service, it's a simple matter to create a special sub-service to handle these requests (such that you send a CHECK packet and get back an OKAY response).
That way, you avoid all the possible firewall issues and the test is a true end-to-end one. PINGs and traceroutes will be able to tell if you can get to the machine (firewalls permitting) but they won't tell you if your service is functioning.
Take this from someone who's had to battle the network gods in a corporate environment where machines are locked up as tight as the proverbial fishes ...
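To make that concrete, if the service happens to speak HTTP, the check can be as small as this (a Python sketch; the URL is a placeholder for whatever endpoint your service exposes):
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2

def server_is_up(url, timeout=5):
    # The service counts as available only if it answers the request itself.
    try:
        urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

print(server_is_up('http://myserver.example.com/index.htm'))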
If you can open a port but don't want to use ping (I don't know why, but hey), you could use something like this:
import socket

host = ''        # listen on all interfaces
port = 55555

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host, port))
s.listen(1)

while 1:
    try:
        # Answer every connection with a short "alive" reply
        clientsock, clientaddr = s.accept()
        clientsock.sendall('alive')
        clientsock.close()
    except:
        pass
which is nothing more than a simple Python socket server listening on port 55555 and returning alive.
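On the client side, checking availability then just means connecting and reading the reply, something like this (the hostname is a placeholder):
import socket

def server_alive(host, port=55555, timeout=5):
    # Returns True only if the check service above accepts the connection
    # and sends back its 'alive' reply.
    try:
        s = socket.create_connection((host, port), timeout)
        data = s.recv(16)
        s.close()
        return data.decode('ascii', 'ignore') == 'alive'
    except socket.error:
        return False

print(server_alive('myserver.example.com'))  # hostname is a placeholder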
