I am using http://freegeoip.net for geolocation on my website. For higher reliability, I would like to run a local copy of the service on a separate server.
I have set up Docker Cloud with Amazon AWS and installed this repository: https://hub.docker.com/r/fiorix/freegeoip/.
If I enter e.g. "curl localhost:8080/json/1.2.3.4" in the Docker terminal, it correctly answers with the location of that IP address.
I now want to integrate this into my website. So far my website source code references the address "//freegeoip.net/json/". With which address do I have to replace this to reach my copy on Docker Cloud? Thank you!
You have to replace it with the public address of your Amazon AWS instance or load balancer.
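For example, since the service answers on port 8080 inside the container, a quick check against the new endpoint (once port 8080 is published and reachable; the hostname below is a hypothetical placeholder for your instance's or load balancer's public address) could look like this:

import requests

# Hypothetical: replace with your AWS instance's or load balancer's public address.
GEOIP_BASE = "http://ec2-203-0-113-10.compute-1.amazonaws.com:8080"

# The same call the website makes against freegeoip.net, pointed at your own copy.
r = requests.get(GEOIP_BASE + "/json/1.2.3.4", timeout=5)
print(r.json())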
I am a beginner with ArangoDB and I am trying to deploy my first project using it. The website is PHP based. What I did is create an ArangoDB Docker container on DigitalOcean so that I can access it from the browser via the provided IPv4 address. Public access to port 8529 is enabled. Locally, I am able to modify the .config file to point to the corresponding IP, and I can painlessly retrieve data.
As a hosting provider I am using one.com. When I upload the same files that I am able to run locally to my own domain, I get the following error:
["_database":"ArangoDBClient\Connection":private]=> string(7) "_system" } ArangoDBClient\ConnectException: cannot connect to endpoint 'tcp://xxx.xx.xxx.xxx:8529/': Connection timed out
I want to mention that I have also tried ArangoDB Oasis; no luck with it - I get the same error. I have been at this for quite a few weeks and could very much use some guidance, even just on what to try next, as I am out of ideas and documentation to read.
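Since the error is a plain TCP connection timeout rather than an authentication failure, a first thing to check is whether port 8529 is reachable at all from the environment where the code fails. A minimal connectivity probe (the IP below is a placeholder) could look like this:

import socket

HOST = "203.0.113.25"  # placeholder for the ArangoDB server's public IP
PORT = 8529

try:
    # If this also times out, the problem is the network path (for example the hosting
    # provider blocking outbound connections on this port), not the ArangoDB driver.
    with socket.create_connection((HOST, PORT), timeout=5):
        print("TCP connection to the ArangoDB endpoint succeeded")
except OSError as exc:
    print(f"Cannot reach {HOST}:{PORT}: {exc}")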
I have set up a node.js server and run it in a Docker container.
I am hosting a MySQL database on my computer and have the server connected to it.
I have deployed my website on Netlify and it uses a REST API to send data to and retrieve data from the server.
Currently, the API URL uses 'http://localhost:4941/api/v1...'. When I access the website on the same computer that hosts the database, I am able to see the data retrieved from the server.
However, when I access the website on another computer (and from a different network), I am not able to see the data obtained from the server.
I have tried using 127.0.0.1 as well as the Docker container's IP address of 172.17.0.2, but neither solved the problem.
I expect to be able to use my website from any computer in the world with Internet access and to send and retrieve data from the server.
So, does the problem lie in the API URL using localhost? If so, what address should I use instead?
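Note that localhost always refers to the machine the request is made from, so that URL can only work on the computer that also hosts the server. A sketch of pointing at a public address instead (the address and endpoint below are hypothetical, and port 4941 must be published and reachable from the Internet):

import requests

# Hypothetical: the public address of the machine running the Docker container,
# with port 4941 published (e.g. via -p 4941:4941) and allowed through the firewall.
API_BASE = "http://203.0.113.42:4941/api/v1"

# Unlike http://localhost:4941/..., which resolves to the caller's own machine,
# this address can be reached from any computer on the Internet.
r = requests.get(API_BASE + "/items", timeout=5)  # '/items' is a made-up endpoint
print(r.status_code)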
I'm developing an app using GCP managed Cloud Run and MongoDB Atlas. If I allow connections from anywhere in the IP whitelist of Atlas, Cloud Run works perfectly with MongoDB Atlas. However, I want to restrict connections to only the necessary IPs, but I couldn't find the outbound IPs of Cloud Run. Is there any way to know the outbound IPs?
Update (October 2020): Cloud Run has now launched a VPC egress feature that lets you configure a static IP for outbound requests through Cloud NAT. You can follow this step-by-step guide in the documentation to configure a static IP to whitelist at MongoDB Atlas.
Until Cloud Run starts supporting Cloud NAT or Serverless VPC Access, unfortunately this is not supported.
As @Steren has mentioned, you can create a SOCKS proxy by running an SSH client that routes the traffic through a GCE VM instance that has a static external IP address.
I have blogged about it here: https://ahmet.im/blog/cloud-run-static-ip/, and you can find step-by-step instructions with a working example at: https://github.com/ahmetb/cloud-run-static-outbound-ip
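As an illustration of what the application side looks like once such a tunnel is up (the VM address and local port below are hypothetical, and SOCKS support in requests needs the PySocks extra, e.g. pip install requests[socks]):

import requests

# Hypothetical: an SSH-based SOCKS tunnel started beforehand, for example
#   ssh -N -D 1080 user@203.0.113.10   (a GCE VM with a static external IP)
proxies = {
    "http": "socks5h://localhost:1080",
    "https": "socks5h://localhost:1080",
}

# The remote service now sees the VM's static IP as the origin of the request.
r = requests.get("http://ifconfig.me/ip", proxies=proxies, timeout=10)
print("Outbound IP seen by the remote service:", r.text)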
Cloud Run (like all scalable serverless products) does not give you dedicated IP addresses that are known to be the origination of outgoing traffic. See also: Possible to get static IP address for Google Cloud Functions?
Cloud Run services do not get static IPs.
A solution is to send your outbound requests through a proxy that has a static IP.
For example in Python:
import os

import requests
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Route both HTTP and HTTPS requests through the proxy configured in $PROXY.
    proxy = os.environ.get('PROXY')
    proxyDict = {
        "http": proxy,
        "https": proxy,
    }
    # ifconfig.me echoes back the IP address the request appears to come from.
    r = requests.get('http://ifconfig.me/ip', proxies=proxyDict)
    return 'You connected from IP address: ' + r.text
With the PROXY environment variable containing the IP or URL of your proxy (see here for how to set an environment variable).
For this proxy, you can either:
create it yourself, for example using a Compute Engine VM with a static public IP address running Squid; this likely fits in the Compute Engine free tier.
use a service that offers a proxy with a static IP, for example https://www.quotaguard.com/static-ip/, which starts at $19/month.
I personally used this second solution. The service gives me a URL that includes a username and password, which I then use as the proxy in the code above.
This feature is now released in beta by the Cloud Run team:
https://cloud.google.com/run/docs/configuring/static-outbound-ip
I have an API that exists on an external machine (let's call it exmachine). The API itself has its own address (let's call it http://10.10.10.10:8000).
I need to use a local Docker container to connect to this external API. I think the best way to do this would be to create an nginx container and plug the API address and machine name into the nginx.conf file, but I am unsure how that would work remotely.
Would anyone know the best method to connect to an external API via Docker? Much appreciated.
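Whether or not an nginx container is put in front, a container can usually reach the external address directly, as long as the Docker host itself can route to that network. A minimal sketch from inside the local container (the endpoint path below is made up for illustration):

import requests

# The external API from the question; '/status' is a hypothetical endpoint.
API_BASE = "http://10.10.10.10:8000"

r = requests.get(API_BASE + "/status", timeout=5)
print(r.status_code, r.text)

An nginx container in between is mainly useful as a reverse proxy, for example to give the API a stable local name or to terminate TLS in front of it.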
We are using a Python-based solution which loads and stores files in S3. For development and local testing we use a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions - one for the assisting backend services (mongo, restheart, redis and s3) and one for the Python-based solution that exposes a REST API and uses those backend services.
When our "front-end" docker-compose group interacts with restheart this works fine (using the name of the restheart container as server host in http calls). When we are doing the same with scality/s3 server this does not work.
The interesting part is, that we have created a test suite for using the scality/s3 server from a python test suite running on the host (windows10) over the forwarded ports through vagrant to the docker container of scality/s3 server within the docker-compose group. We used the endpoint_url localhost and it works perfect.
In the error case (when frontend web service wants to write to S3) the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with http 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling the Scality server with this boto3 code:
import boto3

# The containers reach the Scality server through its docker-compose service name 's3server'.
s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')

s3.create_bucket(Bucket='raw-data')  # here the exception is raised
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which I assume you mount in your Docker container, there is a restEndpoints section where you must associate a domain name with a default region. That means the hostname your frontend uses should be listed there, mapped to a default region.
Note that the default region does not prevent you from using other regions: it is just where your buckets will be created if you don't specify otherwise.
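As an illustration only (the exact layout of config.json can differ between versions), adding the hostname used in the boto3 endpoint_url, here s3server, to restEndpoints next to the default entries would look roughly like this:

"restEndpoints": {
    "localhost": "us-east-1",
    "127.0.0.1": "us-east-1",
    "s3server": "us-east-1"
}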
In the future, I'd recommend opening an issue directly on the Zenko Forum, as that is where most of the community and core developers are.
Cheers,
Laure