Not able to check if file exists on S3 (Failed to open TCP connection to 169.254.169.254:80) - ruby-on-rails

We are using the paperclip gem for S3 functionality (upload, fetch, check if a file is present).
We are not providing AWS keys in code; we are using a role-based mechanism.
Things work fine on ECS on EC2 but break on ECS Fargate, even though both have the same role and the same attached policies.
On Fargate we are getting:
Failed to open TCP connection to 169.254.169.254:80 (Invalid argument - connect(2) for "169.254.169.254" port 80)
Any ideas?
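For context, 169.254.169.254 is the EC2 instance metadata endpoint, which does not exist on Fargate; Fargate serves task-role credentials from the container credentials endpoint instead (advertised via AWS_CONTAINER_CREDENTIALS_RELATIVE_URI). Recent aws-sdk gems pick this up automatically, so upgrading the gem is often enough; with an older gem you could point the SDK at it explicitly. A minimal sketch, assuming a Rails initializer and a placeholder region:
# config/initializers/aws.rb -- hypothetical sketch, not the poster's code
require 'aws-sdk-core'

Aws.config.update(
  region:      ENV.fetch('AWS_REGION', 'us-east-1'),  # assumed region
  credentials: Aws::ECSCredentials.new(retries: 3)    # container credentials endpoint, not 169.254.169.254
)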

Related

Is there a way to allow Cloud Build steps to access Cloud SQL in GCP

I'm setting up a Cloud Build trigger in order to deploy a PHP/Symfony application. When the Dockerfile runs the php app/console assetic:dump command to create the assets, I get the following error:
SQLSTATE[HY000] [2002] Connection timed out
[PDOException]
SQLSTATE[HY000] [2002] Connection timed out
[Doctrine\DBAL\Driver\PDOException]
An exception occurred in driver: SQLSTATE[HY000] [2002]
Connection timed out
[Doctrine\DBAL\Exception\ConnectionException]
I have resorted to trying to get the Docker container to connect to the database instead of trying to fix the Symfony application, because I don't know enough about the framework or PHP.
Is it possible to set this up so that I can whitelist some kind of IP on the Cloud SQL side to allow these connections?
A solution that sets up the proxy in the same step is described in the answer here:
Run node.js database migrations on Google Cloud SQL during Google Cloud Build
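For reference, the approach in that linked answer amounts to starting the Cloud SQL Proxy in the background inside the same build step that needs the database (and granting the Cloud Build service account the Cloud SQL Client role). A rough cloudbuild.yaml sketch, with the image, instance connection name and port as placeholders:
steps:
  - name: gcr.io/google-appengine/php   # placeholder: any image with bash, wget and php
    entrypoint: bash
    args:
      - -c
      - |
        wget -q https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O /workspace/cloud_sql_proxy
        chmod +x /workspace/cloud_sql_proxy
        /workspace/cloud_sql_proxy -instances=MY_PROJECT:MY_REGION:MY_INSTANCE=tcp:3306 &
        sleep 5                          # give the proxy a moment to start listening
        php app/console assetic:dump     # the command that needs the database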

SignalR connection forcefully closed when sending request to AWS Elastic Beanstalk

At the beginning of the project I used HTTP to connect to EC2 directly by IP address, not domain name, and it connected and worked fine for both my C# client and the web client that connected to EC2 through the IP address.
Recently I added HTTPS to my load balancer and configured all EC2 instances with HTTPS security groups, and that is where the trouble started.
The SignalR web client with HTTPS and the IP address connects fine to EC2, but the C# client with HTTPS and the IP address does not connect; it fires the connection-closed method continuously.
To solve that I changed the connection URL in the C# client from the IP to the Elastic Beanstalk domain name, and SignalR connected, but the following things happen:
1) The first time I connect with the Beanstalk domain name, it responds with a 400 error header on connection establishment, yet the server also replies with data from the database, so the first connection is established.
2) After the server's reply I invoke another server method; at that point an error occurs stating that the connection is disconnected and that I should start the connection before making a request to the server.
3) SignalR has a connection-closed method that is invoked if the connection has been closed, and it is not being invoked.
4) I searched for my issue on the internet and found that you have to configure the WebSocket connection on Beanstalk, as others had the same issue with nginx (a typical nginx snippet is sketched below). I am using IIS and there is no specific answer for that.
5) I tried to connect directly to the EC2 instance domain name, but SignalR did not establish a connection and immediately fired the connection-closed method without any error or warning.
6) In my network configuration I have enabled inbound ports 443 and 80. If I make a request from my browser to that Beanstalk or EC2 domain URL, it works fine.
If you have any idea how to configure WebSockets on AWS EC2 or Elastic Beanstalk, it might help to solve this problem.
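For what it's worth, the nginx-based fixes mentioned in point 4 usually boil down to forwarding the WebSocket upgrade headers to the backend; a hypothetical snippet (the path and port are placeholders, and on Elastic Beanstalk it would live in an .ebextensions/.platform nginx config file) looks roughly like this. It does not apply directly to an IIS environment:
location /signalr {
    proxy_pass         http://127.0.0.1:80;
    proxy_http_version 1.1;
    proxy_set_header   Upgrade $http_upgrade;
    proxy_set_header   Connection "upgrade";
    proxy_set_header   Host $host;
}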

Using Data Flow with HTTPS on Cloud Foundry

I am trying to deploy a Data Flow server on Cloud Foundry and create a simple app.
Only an HTTPS endpoint can be exposed. I cannot enable HTTPS using this:
http://docs.spring.io/spring-cloud-dataflow/docs/current-SNAPSHOT/reference/htmlsingle/#configuration-security-enabling-https
as SSL is managed by CF. How do I make the Data Flow server use HTTPS?
I get this error:
dataflow:>app list
Command failed org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://dataflow-server.run.aws-usw02-pr.ice.predix.io/apps": Connect to dataflow-server.run.aws-usw02-pr.ice.predix.io:80 [dataflow-server.run.aws-usw02-pr.ice.predix.io/54.201.89.124, dataflow-server.run.aws-usw02-pr.ice.predix.io/52.88.128.224] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to dataflow-server.run.aws-usw02-pr.ice.predix.io:80 [dataflow-server.run.aws-usw02-pr.ice.predix.io/54.201.89.124, dataflow-server.run.aws-usw02-pr.ice.predix.io/52.88.128.224] failed: Connection refused (Connection refused)
Thanks in advance.
Best Regards
As you already mentioned, you cannot enable HTTPS at the container level inside Cloud Foundry today. The traffic between the router and the Diego cell is not encrypted (unless you are using IPsec).
So your Data Flow server would not be configured with HTTPS; just deploy the server as it is. You should rely on your Cloud Foundry install having port 443 open on its load balancer and forwarding that traffic to the router. Later CF incarnations support certificate placement at the router level.
Now, at the client (the Data Flow shell), if you are using a valid certificate you don't need to do anything, but if you have a self-signed certificate you need to tell it to accept self-signed certificates, or skip validation altogether (see the sketch below).
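In practice that means pointing the shell at the https URL when you start it. The exact flag names vary by Data Flow version, but a sketch along these lines (the skip-ssl-validation flag is only needed for self-signed certificates):
java -jar spring-cloud-dataflow-shell-<version>.jar \
  --dataflow.uri=https://dataflow-server.run.aws-usw02-pr.ice.predix.io \
  --dataflow.skip-ssl-validation=true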

Using a remote PostgreSQL with AWS for a Rails app

I am using a remote PostgreSQL database on another server and want to deploy a Rails app to AWS. I want the AWS instance to communicate with that remote PostgreSQL database server.
I'm getting the error:
FATAL: Peer authentication failed for user "postgres"
Although I've whitelisted the IP in pg_hba.conf.
How did I whitelist it?
I looked up the public IP in the AWS Console and added that. I also pinged my AWS site and added that IP.
"Peer authentication" in the error means you're not trying to connect remotely, but locally. You must review the settings in database.yml (a hypothetical sketch follows). See
PG Peer authentication failed for a related question.
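A minimal database.yml sketch that forces a TCP connection to the remote host instead of the local Unix socket (host name and credentials are placeholders):
production:
  adapter:  postgresql
  host:     db.example.com                 # remote PostgreSQL host, not localhost
  port:     5432
  database: myapp_production
  username: myapp
  password: <%= ENV['MYAPP_DATABASE_PASSWORD'] %>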
Once you're ready to connect to the real remote server, it probably still won't work with the pg_hba.conf linked to in the comments, because of:
host all all * md5
host all all [AWS-PINGED-IP] md5
host all all [AWS-SPECIFIED-PUBLIC-IP] md5
* is not accepted as an IP address mask; shell wildcard syntax is not welcome here. Use 0.0.0.0/0 in CIDR notation to mean "any IPv4 address".
Or remove this line entirely if you didn't mean to accept connections from any address, which seems to be the case given the two lines after it.
Note that rule interpretation stops at the first match in order of declaration, so it doesn't make sense to have an "accept-all" rule followed by a much more restrictive rule, as the latter will always be ignored.
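Put together, a corrected pg_hba.conf sketch (the addresses are placeholders) would look like:
# most specific rules first; CIDR masks instead of wildcards
host  all  all  203.0.113.10/32  md5    # the single AWS public IP you whitelisted
# host  all  all  0.0.0.0/0      md5    # only if you really mean "any IPv4 address"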

Remote connection to Neo4j server

I believe the way to create a remote connection is by changing this line in conf/neo4j-server.properties, specifically by uncommenting it and restarting the server.
org.neo4j.server.webserver.address=0.0.0.0
My URL is https://0.0.0.0:7473/browser/ and works on the local machine, but when I test the URL in Safari on iPhone over 3G, it cannot connect.
What do I set the address to in the properties file?
I thought it was the IP address of my computer, but after trying the remote address I got from Googling “ip address mac”, that didn’t work, nor (obviously) did the local IP address of my machine, 192.168.0.14.
I should point out that setting it to the IP address from Google throws an error and the log reads:
2015-01-29 17:10:08.888+0000 INFO [API] Failed to start Neo Server on port [7474], reason [MultiException[java.net.BindException: Can't assign requested address, java.net.BindException: Can't assign requested address]]
With the default configuration, Neo4j only accepts local connections.
In neo4j-community-3.1.0, edit the conf/neo4j.conf file and uncomment the following to accept non-local connections:
dbms.connectors.default_listen_address=0.0.0.0
Setting
org.neo4j.server.webserver.address=0.0.0.0
enables Neo4j to listen on all network interfaces.
The remainder of that reply is not Neo4j related at all - it's regular networking. Double-check that port 7473 (and/or 7474) is not blocked, either by a locally running firewall or by your router. Your local IP 192.168.0.14 indicates you're behind a router doing NAT, so you have to set up port forwarding in your router for the ports mentioned above.
Please be aware that this is potentially dangerous, since anyone who knows your external IP can access your Neo4j instance. Consider using either https://github.com/neo4j-contrib/authentication-extension or a VPN instead of port forwarding.
In 3.0:
##### To have HTTP accept non-local connections, uncomment this line
dbms.connector.http.address=0.0.0.0:7474
I confused myself with the setting. For anyone who has the same problem: 0.0.0.0 just means “this server isn’t local any more”, so to access it you use the public IP address of the computer that’s hosting the Neo4j server.
Just make sure that the ports you set in the server properties (the defaults are 7474 and 7473) are open for incoming connections on your router, firewall, etc.
I think there's some confusion here. That configuration property org.neo4j.server.webserver.address is about which IP address the server you're starting listens on for external connections. Relevant documentation is here.
It seems you're asking how to configure your database to talk to a remote database. I don't think you can do that. Rather, by editing that file you're planning on running a database on the host where that file is. Your local database on that host will write files to wherever the org.neo4j.server.database.location configuration parameter points.
A remote connection is something that the neo4j shell might establish, or that you browser might make to a foreign server running neo4j; but you don't establish that sort of remote connection by editing that file. Hopefully this helps.
Also, if you have SSH access to the remote server running Neo4j, you can set up an SSH tunnel to access it via localhost:
ssh -NfL localhost:7474:localhost:7474 -L localhost:7687:localhost:7687 yourname@yourhost
then type in the browser:
localhost:7474
It depends on the version.
Look for the phrase 'non-local connections' in the conf file (in my case, $NEO4J_HOME/conf/neo4j.conf).
Then follow the instructions in the comments.
In my case:
# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
server.default_listen_address=0.0.0.0
