"getaddrinfo: Temporary failure in name resolution" in RoR application - ruby-on-rails

I'm trying to retrieve emails from Gmail over POP3 in my Rails application. I get the error "getaddrinfo: Temporary failure in name resolution" when I try to retrieve the email.
The weird thing is, it works when I try it at home but not at my university. I'm guessing it has something to do with the internet connection.
Please help!

I had the same problem: it started out of the blue in an RoR application that connects to an API using RestClient, running on a local Vagrant virtual machine that I use as a development environment.
The only thing that fixed the issue was simply restarting the virtual machine: I halted it and brought it back up with Vagrant (vagrant halt && vagrant up), then ran rackup and was back in the game.

This generally means you aren't getting a response from DNS. Your university connection is probably behind a proxy preventing you from directly accessing the Internet. If so, this proxy must be specified in your code. Check your POP3 library documentation, or failing that, you may be able to use a library like socksify that redirects TCP connections through your SOCKS proxy.
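For example, a minimal sketch of that socksify approach in Ruby, assuming the university exposes a SOCKS proxy (the proxy host/port and the Gmail credentials below are placeholders; if the proxy is HTTP-only rather than SOCKS, this won't apply):

require 'socksify'   # gem install socksify
require 'net/pop'

# Route all TCPSocket connections through the SOCKS proxy.
TCPSocket::socks_server = 'proxy.example.edu'
TCPSocket::socks_port   = 1080

pop = Net::POP3.new('pop.gmail.com', 995)
pop.enable_ssl
pop.start('you@gmail.com', 'your-password') do |inbox|
  inbox.each_mail { |mail| puts mail.pop }
end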

Simple: you may be going through a proxy server. Set up a new connection with your college's proxy settings, restart your server, and it should work.

SSH into your server and check whether the machine is able to resolve the domain.
ping <your_site> should resolve the domain name to an IP.
If it's not resolving correctly, then there is some problem with your hosting service's DNS.
Quick fix: you can manually map the domain to an IP in your server's /etc/hosts file.
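For example (the IP below is a placeholder; use whatever the name resolves to from a machine where DNS does work):

# /etc/hosts
203.0.113.25    pop.gmail.com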

Related

WebSocket connection failure. Due to security constraints in your web browser

Today I downloaded neo4j-community-3.2.0 on Windows. When I start the server, I hit a problem in the browser. I had the same problem in neo4j-community-3.1.2 and solved it by ticking the "Do not use Bolt" option in settings, but in neo4j-community-3.2.0 I can't see a "Do not use Bolt" option and I don't know what to do.
N/A: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver. Please use your browsers development console to determine the root cause of the failure. Common reasons include the database being unavailable, using the wrong connection URL or temporary network problems. If you have enabled encryption, ensure your browser is configured to trust the certificate Neo4j is configured to use. WebSocket readyState is: 3
This happens because the browser is trying (under the hood) to also access the bolt port, which uses an unsigned certificate.
You probably already allowed the browser to access the SSL port 7474 by accepting the unsigned certificate as an exception in your browser (and if you didn't, you should, in order to make it work).
The url was:
https://[neo4j_host]:7474
Do the same for the bolt certificate, allow it as an exception for url:
https://[neo4j_host]:7687
I ran into the same problem trying to use Neo4j Community Edition on an AWS Ubuntu 16.04 instance. The key thing that solved it was to open port 7687 (the bolt port) in the AWS security group settings.
Found this based on https://stackoverflow.com/a/45234105/1529646
Thus, the full answer is:
Make sure to configure Neo4j correctly, i.e. uncomment the line dbms.connectors.default_listen_address=0.0.0.0 AND the line dbms.connector.bolt.listen_address=:7687
Open ports 7474 AND 7687 in the AWS security group settings (see the CLI sketch after this list).
In the lower left corner of the Neo4j browser, open the gear (settings) menu and select "Do not use Bolt".
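If you prefer the AWS CLI to the console for the security-group step, something like this should do it (the security group ID is a placeholder):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 7474 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 7687 --cidr 0.0.0.0/0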
Open your ${NEO4J_HOME}/conf/neo4j.conf file and edit the Bolt settings. It is just a matter of uncommenting this line: dbms.connector.bolt.address=0.0.0.0:7687
Change the version of Neo4j
Check your JDK version, use JDK1.8
Adding another option which worked for me: if your Bolt connector's tls_level is set to REQUIRED and you are not using an SSL certificate, you need to change it to OPTIONAL to get this working.
If you are using Neo4J Community Edition (ver 3.5.1 - in my case) from AWS Marketplace, you need to change the configuration in:
/etc/neo4j/pre-neo4j.sh
Change this line:
echo "dbms_connector_bolt_tls_level" "${dbms_connector_bolt_tls_level:=REQUIRED}"
to
echo "dbms_connector_bolt_tls_level" "${dbms_connector_bolt_tls_level:=OPTIONAL}"
You can find more about the Neo4j connector configuration options here. Ideally, per the docs, bolt.tls_level should default to OPTIONAL. But I'm not really sure what happened in my case to get it changed to REQUIRED, or whether it came that way from the AWS Marketplace image.
Assuming you have valid certs and placed them under the correct certificates directory:
dbms.ssl.policy.bolt.client_auth=NONE
Version 4.0. Took it from this article.
I shared my full ssl config on this other answer.
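For reference, a minimal sketch of the surrounding Bolt SSL block in neo4j.conf for 4.0 (the directory and file names are assumptions; point them at wherever your certs actually live):

dbms.connector.bolt.tls_level=OPTIONAL
dbms.ssl.policy.bolt.enabled=true
dbms.ssl.policy.bolt.base_directory=certificates/bolt
dbms.ssl.policy.bolt.private_key=private.key
dbms.ssl.policy.bolt.public_certificate=public.crt
dbms.ssl.policy.bolt.client_auth=NONE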
I had the same error. I'm new to Neo4j, so take this with a grain of salt, but my solution didn't match the ideas above. Thanks anyway, as they did lead me to the right "water". So:
I went into the conf file and noticed that the same port number was listed for all three of the Bolt port settings. (Neo4j Desktop had previously kept telling me, again and again, that it needed to update the port numbers; I never checked to verify, but they should have been #, #+1 and #+2.) I tried keeping the three ports the same, and that didn't fix it on its own either, but maybe it was important for what did:
In the folder where the specific database is housed, named "...neo4jdatabases/[GUID value]", there were two directories, "installation-3.4.0" and "...1". I removed the ".0" one, restarted things and IT WORKED.
So, either there should NOT be two installation versions under the same database collection, OR that's true AND you also need the three ports to be the same.
A final note for any Neo4j experts who actually know what they're doing: I have three databases running, two without issue. This occurred AFTER I was messing around trying to see how PowerShell might be useful. Not sure if that is related, but the other databases have worked fine. This db is the original playground/sandbox I've had since the beginning, and I'm not 100% sure whether I made the version update before or after creating the other two databases. HTH.
Using a Windows trial version on a Windows 10 machine. Current Neo4j version is 3.4.1.
Do love what I see so far with Neo BTW!!!
Please mention the correct Bolt port in the Connect URL textbox. If you are using a service port, then put the service port in place of the Bolt port.
In my case I finally resolved it by replacing the Bolt port with the service's node port inside k8s.
user: neo4j
password: neo4j
I resolved this error by replacing the port 7687 with the node port 30033 in the Neo4j connect URL,
and then it worked fine.
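For context, this kind of setup is typically a NodePort Service in front of Bolt, roughly like the sketch below (the names and the 30033 node port are illustrative); the browser Connect URL then becomes bolt://<node-ip>:30033 instead of bolt://<node-ip>:7687:

apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt
spec:
  type: NodePort
  selector:
    app: neo4j
  ports:
    - name: bolt
      port: 7687        # service port inside the cluster
      targetPort: 7687  # Bolt port on the Neo4j pod
      nodePort: 30033   # externally reachable node port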
I was facing the same issue with Neo4j version 4 installed on an Ubuntu 18 EC2 instance. The workaround that did the trick for me was to replace the 0.0.0.0 entries in /etc/neo4j/neo4j.conf with the actual private IP of my instance.
Following are the lines where the replace happened:
dbms.default_listen_address=172.X.X.232
dbms.connector.bolt.address=172.X.X.232:7687
After restarting the DB, the Connect URL used when accessing from the browser should also use the private IP instead of localhost.

Connect to rails server remotely from raspberry pi

I have SSH'd into my Raspberry Pi and built a Rails application.
Now how do I load the rails app from another machine?
I have tried IP:port in a web browser, but this fails.
Can I use ssh from a web browser to load the rails server process?
Are there gems I need to install to do this?
Is there any good documentation that I have missed?
SOLUTION
Use ngrok to tunnel: https://medium.com/@karimbutt/using-ngrok-to-create-a-publicly-accessible-web-facing-raspberry-pi-server-35deef8c816a#.sraso7zar
Maybe the problem is with the IP address you're trying to use. Servers don't necessarily forward their public IP traffic to localhost automatically.
Perhaps you could configure the IP address somehow, I don't know (others might?). Alternatively, you can use a "local tunnel" service like ngrok or localtunnel. What these do is create a public URL for your localhost (i.e. your "loopback" address), so anyone can access it.
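For example, assuming the Rails app is running on the Pi's port 3000 (adjust if yours differs):

# on the Pi
rails server -p 3000
# in another shell on the Pi
ngrok http 3000    # prints a public forwarding URL (e.g. https://<random>.ngrok.io) that tunnels to localhost:3000

Anyone can then open the printed ngrok URL from anywhere, with no router configuration needed.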
I spoke with an ngrok author via email. He assured me that I shouldn't expect any downtime from the service or have to manually restart it, although keep in mind that if you're on the free plan, you'll get a different URL whenever you restart ngrok. He also described it as kind of like a "souped-up ssh -R".

Jenkins Server - Issues with setting URL

I am trying to set up an internal Jenkins server for our QA team and am facing some issues with the server URL. This is inside a corporate network and all sorts of firewall and proxy settings are in place; however, we need to access the server only within our internal network. The server runs on a Mac Mini. I was able to install and access the server without any issues using localhost:8080.
I tried to set a custom URL (something like testjenkins.local:8080) under the Manage Jenkins options and was never able to access the server. The only option that worked for me was the IP address (IP:8080); I was able to access the server from other machines on the network using that URL.
The real problem with the above setup is that the machine's IP changes (I am not able to make it static), and hence I won't have an always-working URL.
I'd highly appreciate it if anyone could guide me in the right direction.
Given you have a dynamic IP on your server, a good alternative would be using ngrok. ngrok can expose port 8080 of that server to the internet via secure tunnels, and you access it via a URL, so changes in the IP won't affect it.
However, ngrok exposes the server to the whole Internet. To make it accessible only to your team, you can add authentication on both the ngrok tunnel and the Jenkins server (would that work for you?).
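For instance, ngrok 2.x lets you put HTTP basic auth on the tunnel itself (newer ngrok releases use a --basic-auth flag instead), on top of whatever login Jenkins enforces; the credentials below are placeholders:

ngrok http -auth="qateam:use-a-strong-password" 8080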

How do I make my ruby on rails app respond to external requests (visible to the public on the internet)?

Problem:
My rails app (on my local machine) only responds to requests sent from the same machine to localhost, 127.0.0.1, or my internal ip address. When I try to hit it using my internet ip or from any other machine, inside or outside of my network, it just times out. I'm on Mac OS 10.9.1, ruby 1.9.3, rails 4.0.0.
I've done a lot of searching but all I can find is problems where people didn't forward their ports or bind the right ip.
Here are the areas I've investigated:
Ports -
I've tried several different ports. I configured my router to forward every port I tried but got the same result. I thought maybe there was a problem with the router so I built a simple server in Java and bound all the same ports I was binding with my rails app. Sure enough, when I hit the Java app using my internet ip it worked just fine so the router/firewall/port forwarding isn't the problem. Also, I run an apache server on port 80 and that has never had any problems. I turned apache off and tried port 80 for my rails app but that didn't fix the problem.
Rails Server -
I started with WEBrick and I thought that perhaps there was some setting inside that blocked external requests. I searched google extensively and found nothing on that matter. Just to be safe I installed Thin and got the exact same result I did with WEBrick. One interesting thing is that when the rails server is started, the external request takes a long time to time-out, but the server console displays no output at all. However if I try to send the same request w/out starting the server at all it fails immediately.
User Permissions -
I started the server with root (i'm starting to just shoot in the dark here) and it had no effect.
Environment -
I was starting in development environment originally because I'm developing but just for fun I tried starting in production and it also made no difference.
PLEASE HELP ME SMART PEOPLE
Update:
I installed the app on my Ubuntu machine and it doesn't have this problem! So that suggests the problem may have something to do with Mac OS.
SOLVED:
It turns out that in the System Preferences -> Security & Privacy -> Firewall in Mac OS, it was somehow set to block incoming connections to Ruby 1.9.3. I must have accidentally set that some time ago.
The problem is you are probably trying to request the page from your local machine (or another computer on your local network, behind your firewall) using your public IP and expecting a result. That won't work unless you set up routing for it through your firewall, which is not usually available on a consumer-level router (Linksys, D-Link, etc.).
So forward port 80 to your local machine if you are using something like Pow, or port 3000 for WEBrick's default, and make sure the server is listening on all interfaces, as in the command below.
Then have someone outside your local LAN request your external (public) IP.
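Concretely, a minimal check (assuming port 3000; substitute whatever port you forwarded):

# make sure the app listens on all interfaces, not just localhost
rails server -b 0.0.0.0 -p 3000

Then forward external port 3000 on the router to this machine's LAN IP and have the outside tester hit http://<your-public-ip>:3000.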
This may be related: Rails 3.1 on Ubuntu 11.10 under VirtualBox very slow
Your mention of slowness combined with the use of webrick makes me think you've got some reverse-DNS lookup awfulness going on. A quick first step is hacking /etc/hosts to bypass this lookup.
The situation I dealt with on Ubuntu was solved in the short-term by hacking /etc/hosts. You could do this quick hack in order to see if it is indeed just webrick's reverse-DNS lookup. Edit /etc/hosts and add a line for the external user's IP address, something like this:
156.123.48.55 TestPerson
Replace the IP address with the tester's IP address. Since you said you can get the external request to hit an Apache server on port 80, you can grab their IP address from the Apache access logs if necessary, otherwise just ask the person testing.
You could also try a different web server, such as Unicorn, which may help. Add gem 'unicorn-rails' to your Gemfile, run bundle install, and then (according to their docs) rails server will just use Unicorn directly.
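That is, roughly:

# Gemfile
gem 'unicorn-rails'

# shell
bundle install
rails server    # per the unicorn-rails docs, this now serves the app with Unicorn instead of WEBrick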
With any local server, you'll need to correctly configure port forwarding on your firewall. As CaptChrisD said, tests must be done from an external IP/browser (if you own a server, ssh to it, then use w3m to test).
I already had the same symptom (server started => timeout, server stopped => immediate failure) and the origin was an issue with the firewall configuration. I think that is your problem too.
With macOS, Pow is really awesome: installation is easy and no configuration is required (no /etc/hosts…). Moreover, it gives you a hook for external access to your virtual hosts (but you still need port forwarding on your firewall).
Otherwise, there are other solutions like Forward to do it without firewall configuration (30-day free trial).
Hope this helps!

acts_as_ferret with multiple hosts

I've got everything working with ferret and acts_as_ferret for development (or localhost DRb), but I can't get my multiple host deployment working. All of the remote systems get ECONNREFUSED when accessing the port. On the ferret server, the daemon is listening on localhost only despite the configuration listing the FQDN as the host.
I also tried switching to a UNIX socket to share data between the ferret DRb daemon and the app code but it too gets ECONNREFUSED. (The socket is available to all of the machines via an NFS mount).
Is there a better way to do this or should I be looking for another search indexer? Thanks.
I did figure out that if the address is changed to druby://0.0.0.0:port, the DRb server will listen on all IPs; however, that doesn't provide any protection against injection of bad code into the DRb process.
Basically, don't use Ferret. I've moved on to Xapian with acts_as_xapian for RoR. It supports multiple processes reading but only one writing, so it's an offline index. However, I will be able to share the same index between multiple servers via the shared file system (NFS).
Check out Pitfalls of acts_as_ferret, with DrbServer to the rescue
http://www.subelsky.com/2007/03/pitfalls-of-actsasferret-with-drbserver.html
It worked pretty well for me. The only thing I'd add is: be sure to set the host value to wherever your ferret DRb server is actually running.
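If memory serves, with acts_as_ferret that lives in config/ferret_server.yml, along these lines (treat the exact keys as an assumption and check the template your acts_as_ferret version generates):

production:
  host: ferret.internal.example.com   # the machine actually running the ferret DRb daemon
  port: 9009
  pid_file: log/ferret.pid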
