Postman “Could not get any response” for http://127.0.0.1:5000/get_chain

I am using Postman to send requests to my localhost server, but I get no response even after turning off SSL certificate verification in Postman's settings. Any ideas?

Restart your localhost server and try again.

Related

Add a remote schema on Hasura from localhost rails server fails

I am testing Hasura with docker on my localhost, and I would like to add a remote schema, from a graphql endpoint on my local environment (rails app, http://localhost:3000/graphql)
When I try to add the remote schema URL on Hasura (via http://host.docker.internal:3000/graphql), it fails with the following message:
Adding remote schema failed
Error in $: Failed reading: not a valid json value at '<!DOCTYPEhtml>'
And, I have no log on my rails server.
I tried to use ngrok to get an https endpoint (https://6e12fa99336b.ngrok.io forwarding to localhost:3000), but I got the same message. My ngrok console did show a POST to /graphql with a 403 Forbidden, and still no log on the rails server.
However, it works with a public external API (https://countries.trevorblades.com/graphql, for instance).
Is there something I am doing wrong, or some headers missing?
OK, finally got it! The issue was with Rails not having host.docker.internal as a registered host, so it rendered an HTML error page.
After adding host.docker.internal to config.hosts, everything worked.
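For reference, the Rails host-authorization allow list lives in the environment config. A minimal sketch of the fix described above (the file path is the standard development config; adjust for your environment):

```ruby
# config/environments/development.rb
Rails.application.configure do
  # Allow requests whose Host header is host.docker.internal,
  # so Hasura running in Docker can reach the Rails GraphQL endpoint.
  config.hosts << "host.docker.internal"
end
```

Without this, Rails (6+) answers with an HTML "Blocked host" page, which is the `<!DOCTYPE html>` content Hasura failed to parse as JSON.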

HTTP Request working in postman but not with axios in docker containers

So I set up 2 containers (node and react) to talk to each other in Docker.
When I send an HTTP request using Postman everything works perfectly, but when I try with axios on the client side it spits out a CORS error. I have enabled CORS in my node project, so I don't know what the issue is. I should also mention that GET requests work perfectly fine using axios.
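Two details usually explain this pattern: Postman is unaffected because CORS is enforced by the browser, not the server, and a plain GET counts as a "simple request" while a POST with a JSON body triggers a preflight OPTIONS request that the server must answer. A minimal sketch of the headers that preflight response needs (the function name and origins below are illustrative, not from the original post):

```javascript
// Sketch of the CORS preflight logic a browser relies on.
// GET with no custom headers is a "simple request" and skips this;
// POST with Content-Type: application/json triggers an OPTIONS
// preflight that must be answered with these headers.
function preflightHeaders(requestOrigin, allowedOrigins) {
  // Refuse origins that are not on the allow list.
  if (!allowedOrigins.includes(requestOrigin)) return null;
  return {
    "Access-Control-Allow-Origin": requestOrigin,
    "Access-Control-Allow-Methods": "GET,POST,PUT,DELETE,OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type,Authorization",
  };
}

// In a plain Node http server the handler would short-circuit OPTIONS:
//   if (req.method === "OPTIONS") {
//     res.writeHead(204, preflightHeaders(req.headers.origin, allowed));
//     return res.end();
//   }
```

If you use the `cors` middleware in Express, `app.use(cors())` installs an equivalent handler; make sure it is registered before your routes, and check that the origin the browser sends (the react container's published host and port) matches what the server allows.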

Testing Twilio with ngrok tunnel to localhost results in bad host name error

In the past I've used ngrok to test Twilio webhooks on my local machine, and it has always worked. I'm working on a new app that uses Co-Pilot (not sure if it has anything to do with Co-Pilot) and I'm getting the 11210 error: HTTP bad host name.
I initialize my tunnel with /Applications/ngrok http -host-header=rewrite local.whicheversiteimworkingon.com:80
The URL listed in the Message Text is http://fcd0ed57.ngrok.io/sms/twilio/incoming but the body shows
Twilio was unable to fetch content from: https://local.thesiteimworkingon.com/sms/twilio/incoming
Error: Unknown host local.thesiteimworkingon.com
Account SID: AC5a22f090b458f6942da879d347451dfd
SID: SM9c45741b5b70967df6a7e196e3bee552
Request ID: 9fde222c-14e1-448e-ad79-4a392d212ffd
Remote Host: local.thesiteimworkingon.com
Request Method: POST
Request URI: https://local.thesiteimworkingon.com/sms/twilio/incoming
SSL Version: TLSv1.2
URL Fragment: true
Unfortunately I don't have an example of this from when it was working - it's been months (maybe 12+) since I've had to do this.
[Update] I've confirmed this happens with co-pilot and regular numbers, starting to think it's environment related.
Have I misconfigured something in order to test this locally?
Can you try the https ngrok URL instead of the http one?
Twilio developer evangelist here.
It may be due to the application server you are using expecting a different host name. When you start ngrok, you can pass the --host-header flag to rewrite the host header for your application.
ngrok http 3000 --host-header=rewrite local.domain.com
Let me know if that helps at all.

Rails app not responding to Postman requests

My locally running rails app (on localhost:3000) responds to requests in the browser or from curl, but is not responding to requests from the desktop Postman client, which immediately gives the generic "Could not get any response". Any idea what could be causing this?
For this you can use ngrok. It provides a tunnel that can easily be used with Postman or any other such service. Download ngrok and run the tunnel as
./ngrok http 3000
or you can use lvh.me:3000 if your request is from the same machine.

Faye: Can't send GET messages in browser even though socket is open, working and responding with data via curl

So my websocket to Faye opens properly; I'm using the nginx_tcp_proxy module. When I run a curl request it looks good:
$ curl http://now.2u.fm:9200/faye\?message\=%5B%7B%22channel%22%3A%22%2Fmeta%2Fhandshake%22%2C%22version%22%3A%221.0%22%2C%22supportedConnectionTypes%22%3A%5B%22callback-polling%22%5D%2C%22id%22%3A%221%22%7D%5D\&jsonp\=__jsonp6__
__jsonp6__([{"id":"1","channel":"/meta/handshake","successful":true,"version":"1.0","supportedConnectionTypes":["long-polling","cross-origin-long-polling","callback-polling","websocket","eventsource","in-process"],"clientId":"jls0srprht51xb368yrojft3h4drgu0","advice":{"reconnect":"retry","interval":0,"timeout":45000}}]);#
And curl with the -I flag
HTTP/1.1 200 OK
...
Connection: close
But GET requests fail (with no error code) when my website tries to hit this endpoint:
edit: I noticed it says "switching protocols" just now!
Same as when I try to hit the URL directly in my browser.
My gut says, "hey, that's because you have a TCP connection open, not an HTTP one!" But then is private_pub using GET? For all I know a GET request is fine over TCP and I'm doing something wrong.
I found out the issue was that Faye needs to be able to both open a socket and use HTTP. Because private_pub assumes you do both on the same domain/port, this was impossible with the TCP module without modifying private_pub and the Faye extension it loads to use the right ports.
In the end I used HAProxy instead, as it was much simpler to set up and required no modification of private_pub.
