I have a Google Cloud project with VPN connectivity and a Google Cloud SQL (PostgreSQL) database instance on the same VPN, with SSL enabled. The Cloud SQL instance has both a public and a private IP address: I use the public IP for external connections (e.g. the pgAdmin client tool) and the private IP for internal connectivity such as Dataflow. Now I want to connect to this Cloud SQL instance from Cloud Composer. I used the PostgresOperator to connect to the Cloud SQL PostgreSQL database and created a separate connection with the public IP and port under Airflow -> Connections. Since this Cloud SQL instance has SSL enabled, I pushed the certificates to the DAGs' GCS location. In the connection's extra properties section I passed the SSL certificate paths as shown below:
{
"sslmode": "verify-ca",
"sslcert": "/home/airflow/gcs/dags/certificates/client-cert.pem",
"sslca": "/home/airflow/gcs/dags/certificates/server-ca.pem",
"sslkey": "/home/airflow/gcs/dags/certificates/client-key.pem"
}
I got the error message below:
psycopg2.OperationalError: private key file
"/home/airflow/gcs/dags/certificates/client-key.pem" has group or
world access; permissions should be u=rw (0600) or less
It would be great if someone could help me fix this issue. Here is the operator definition:
postgresoperator = PostgresOperator(
    task_id='create_field_reports',
    sql=create_field_reports_query,
    postgres_conn_id='pgconnection_google_private',
    dag=dag
)
Cloud Composer uses GCSFUSE to mount certain directories (DAGs/plugins) from Cloud Storage into the Airflow worker pods running in GKE. It mounts these with default permissions that cannot be overridden, because that metadata is not tracked by GCS.
A workaround is to use a BashOperator that runs at the beginning of your DAG to copy the files to a new directory, and then run chmod on all of them.
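A minimal sketch of that workaround, assuming the certificate paths from the question and an illustrative target directory (/home/airflow/certs):
from airflow.operators.bash_operator import BashOperator

# Copy the certificates out of the GCSFUSE mount and tighten their
# permissions so that libpq accepts the client key (0600).
fix_cert_permissions = BashOperator(
    task_id='fix_cert_permissions',
    bash_command=(
        'mkdir -p /home/airflow/certs && '
        'cp /home/airflow/gcs/dags/certificates/*.pem /home/airflow/certs/ && '
        'chmod 600 /home/airflow/certs/*.pem'
    ),
    dag=dag
)

fix_cert_permissions >> postgresoperator
The connection's extra field then needs to point sslcert, sslkey and sslca at the copied files under /home/airflow/certs/.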
You may want to use the gcp_sql_operator instead, as it takes care of the Cloud SQL proxy. You can see an example in my answer to a related question:
Google Cloud Composer and Google Cloud SQL
It requires several steps, all sparsely documented on the web. It does not use SSL, but I think it can be refactored to use SSL:
Define a Cloud SQL connection factory with proxy:
import os
from urllib.parse import quote_plus

from airflow.models import Variable


def create_cloudsql_conn(name, user, password, instance, database, port='3308'):
    """
    MySQL: connect via proxy over TCP (specific proxy version).
    It uses the AIRFLOW_CONN_* format to create a connection named PROXY_ODS_VAT.
    https://airflow.readthedocs.io/en/latest/howto/connection/gcp_sql.html
    """
    os.environ[f'AIRFLOW_CONN_{name.upper()}'] = \
        "gcpcloudsql://{user}:{password}@{public_ip}:{public_port}/{database}?" \
        "database_type=mysql&" \
        "project_id={project_id}&" \
        "location={location}&" \
        "instance={instance}&" \
        "use_proxy=True&" \
        "sql_proxy_version=v1.13&" \
        "sql_proxy_use_tcp=True".format(
            user=quote_plus(user),
            password=quote_plus(password),
            public_ip='0.0.0.0',
            public_port=port,
            database=quote_plus(database),
            project_id=quote_plus(Variable.get('gcp_project')),
            location=quote_plus(Variable.get('gce_region')),
            instance=quote_plus(instance),
        )
In your DAG file, create the connection:
create_cloudsql_conn(
    'proxy_ods_vat',
    Variable.get('gcsql_ods_user'),
    Variable.get('gcsql_ods_password'),
    Variable.get('gcsql_ods_instance'),
    Variable.get('gcsql_vat_database')
)
Create a CloudSqlQueryOperator:
cloudsql_prep = CloudSqlQueryOperator(
    task_id="cloudsql-load-prep",
    gcp_cloudsql_conn_id='proxy_ods_vat',
    sql='templates/ingestion_prep.sql',
    params={
        'database': Variable.get('gcsql_vat_database')
    },
)
Use your operator.
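For reference, in the Airflow 1.10 line this operator comes from the contrib package; the exact import path may vary with your Composer/Airflow version, so treat it as an assumption:
from airflow.contrib.operators.gcp_sql_operator import CloudSqlQueryOperator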
Related
I'm running MinIO under Docker. I've been using a version that was released before the integration of the MinIO Console (circa July 2021). It was set up with an SSL certificate purchased from a third party, bound to my external web address (https://minio.example.com, for instance).
After moving to the new version of MinIO, RELEASE.2021-09-24T00-24-24Z, via Docker, I needed to update my config (the env variables for MINIO_ACCESS_KEY / MINIO_SECRET_KEY changed, for example). I've also added --console-address=":9001" to my config; MinIO is running on port 9000 for the main service.
The service runs fine for storing data, but accessing the web address gives the error:
x509: cannot validate certificate for 172.19.0.2 because it doesn't contain any IP SANs
I believe this is to do with MinIO looking at the internal Docker IP addresses and not finding them in the SSL certificate (there are no IPs in the certificate at all). I'm unable to find documentation explaining how to resolve this. Ideally, I don't want to get a new certificate that contains the IP address (external or internal!).
Can I change some of the Docker config such that MinIO will not try to check the IP addresses in the SSL?
To answer my own question, I re-read the quickstart guide more carefully (https://docs.min.io/docs/minio-quickstart-guide.html), noting the following:
Similarly, if your TLS certificates do not have the IP SAN for the MinIO server host, the MinIO Console may fail to validate the connection to the server. Use the MINIO_SERVER_URL environment variable and specify the proxy-accessible hostname of the MinIO server to allow the Console to use the MinIO server API using the TLS certificate.
For example: export MINIO_SERVER_URL="https://minio.example.net"
For me, this meant I needed to update my docker-compose.yml file, adding the MINIO_SERVER_URL env variable. It had to point to the data URL for MinIO, not the console URL (otherwise you get an error about "Expected element type <AssumeRoleResponse> but have <html>").
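For illustration, the relevant part of such a compose file might look roughly like this (image tag, credentials and hostname are placeholders for your own values):
services:
  minio:
    image: minio/minio:RELEASE.2021-09-24T00-24-24Z
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: your-access-key
      MINIO_ROOT_PASSWORD: your-secret-key
      MINIO_SERVER_URL: "https://minio.example.com"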
It now works fine.
I'm developing an app using GCP managed Cloud Run and MongoDB Atlas. If I allow connections from anywhere in the Atlas IP whitelist, Cloud Run works perfectly well with MongoDB Atlas. However, I want to restrict connections to only the necessary IPs, but I couldn't find the outbound IPs of Cloud Run. Is there any way to know the outbound IPs?
Update (October 2020): Cloud Run has now launched the VPC egress feature, which lets you configure a static IP for outbound requests through Cloud NAT. You can follow the step-by-step guide in the documentation to configure a static IP to whitelist at MongoDB Atlas.
Until Cloud Run starts supporting Cloud NAT or Serverless VPC Access, unfortunately this is not supported.
As @Steren has mentioned, you can create a SOCKS proxy by running an ssh client that routes the traffic through a GCE VM instance that has a static external IP address.
I have blogged about it here: https://ahmet.im/blog/cloud-run-static-ip/, and you can find step-by-step instructions with a working example at: https://github.com/ahmetb/cloud-run-static-outbound-ip
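The core of that approach is ssh dynamic port forwarding; a minimal sketch, assuming a GCE VM with a reserved static external IP (user, address and port are placeholders):
# Open a local SOCKS5 proxy on port 1080 that tunnels traffic through the VM
ssh -N -D 1080 user@STATIC_VM_IP
Outbound requests routed through socks5://localhost:1080 then originate from the VM's static IP.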
Cloud Run (like all scalable serverless products) does not give you dedicated IP addresses that are known to be the origination of outgoing traffic. See also: Possible to get static IP address for Google Cloud Functions?
Cloud Run services do not get static IPs.
A solution is to send your outbound requests through a proxy that has a static IP.
For example in Python:
import os

import requests
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    proxy = os.environ.get('PROXY')
    proxyDict = {
        "http": proxy,
        "https": proxy
    }
    r = requests.get('http://ifconfig.me/ip', proxies=proxyDict)
    return 'You connected from IP address: ' + r.text
With the PROXY environment variable containing the IP or URL of your proxy (see the documentation on setting environment variables for Cloud Run).
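For example, on Cloud Run the variable can be set at deploy time (service name, image and proxy URL below are placeholders):
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --set-env-vars "PROXY=http://user:password@203.0.113.10:3128"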
For this proxy, you can either:
create it yourself, for example using a Compute Engine VM with a static public IP address running Squid; this likely fits within the Compute Engine free tier.
use a service that offers a proxy with a static IP, for example https://www.quotaguard.com/static-ip/, which starts at $19/month.
I personally used this second solution. The service gives me a URL that includes a username and password, which I then use as the proxy in the code above.
This feature is now released in beta by the Cloud Run team:
https://cloud.google.com/run/docs/configuring/static-outbound-ip
We are using a Python-based solution that loads and stores files from S3. For development and local testing we use a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions: one for the supporting backend services (mongo, restheart, redis and s3) and one containing the Python-based solution that exposes the REST API and uses those backend services.
When our "front-end" docker-compose group interacts with restheart, this works fine (using the name of the restheart container as the server host in HTTP calls). When we do the same with the scality/s3 server, it does not work.
The interesting part is that we have created a test suite that uses the scality/s3 server from Python running on the host (Windows 10), going through the ports forwarded by Vagrant to the scality/s3 server container within the docker-compose group. There we used localhost as the endpoint_url and it works perfectly.
In the error case (when the frontend web service wants to write to S3), the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with http 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We call the scality/s3 server with this boto3 code:
import boto3

s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')

s3.create_bucket(Bucket='raw-data')  # here the exception is raised
bucket = s3.Bucket('raw-data')
This issue is quite common. In the config.json file, which I assume you mount in your Docker container, there is a restEndpoints section where you must associate a domain name with a default region. That means the hostname your frontend uses should be listed there, mapped to a default region.
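As a sketch, assuming your frontend reaches the server at the Docker hostname s3server (as in the boto3 code above), the restEndpoints section of config.json could contain something like:
"restEndpoints": {
    "localhost": "us-east-1",
    "s3server": "us-east-1"
}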
Do note that the default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
In the future, I'd recommend you open an issue directly on the Zenko Forum, as this is where most of the community and core developers are.
Cheers,
Laure
I am using http://freegeoip.net for geolocation on my website. For higher reliability, I would like to run a local copy of the service on a separate server.
I have set up Docker cloud with Amazon AWS and installed this repository: https://hub.docker.com/r/fiorix/freegeoip/.
If I enter e.g. "curl localhost:8080/json/1.2.3.4" in the Docker terminal, it correctly answers with the location of that IP address.
I now want to integrate this into my website. So far my website's source code references the address "//freegeoip.net/json/". What do I have to replace this with to reach my copy on Docker Cloud? Thank you!
You have to replace it with the address of your Amazon AWS instance or load balancer.
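For example, if your instance were reachable at 203.0.113.10 with the container's port 8080 exposed (a placeholder address), the reference in your website's source code would become:
//203.0.113.10:8080/json/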
Can I use the Secure Gateway between my Cloud Foundry apps on Bluemix and my Bluemix Docker container database (MongoDB)? It does not work for me.
Here are the steps I have followed:
upload the Secure Gateway client Docker image to Bluemix:
docker push registry.ng.bluemix.net/NAMESPACE/secure-gateway-client:latest
run the image with the token as a parameter:
cf ic run registry.ng.bluemix.net/edevregille/secure-gateway-client:latest GW-ID
When I look at the logs of the secure-gateway container, I get the following:
[INFO] (Client PID 1) Setting log level to INFO
[INFO] (Client PID 1) There are no Access Control List entries, the ACL Deny All flag is set to: true
[INFO] (Client PID 1) The Secure Gateway tunnel is connected
and the secure-gateway dashboard interface shows that it is connected too.
But then, when I try to add the MongoDB database (also running on my Bluemix account at 134.168.18.50:27017->27017/tcp) as a destination from the Secure Gateway service dashboard, nothing happens: the destination is not created (it does not appear).
Am I doing something wrong? Or is this just not a supported use case?
1) The Secure Gateway is a service used to integrate resources from a remote (company) data center into Bluemix. Why do you want to use the SG to access your Docker container on Bluemix?
2) From a technical point of view, the scenario described in the question should work. However, you need to add a rule to the access control list (ACL) to allow access to the Docker container with your MongoDB. When the SG client is running, it has a console where you can type commands. You could use something like allow 134.168.18.50:27017 as the command to add the rule.
BTW: There is a demo using the Secure Gateway to connect to a MySQL database running in a VM on Bluemix. It shows how to install the SG and add an ACL rule.
Added: If you are looking into how to secure traffic to your Bluemix app, then just use https instead of http. It is turned on automatically.