Cloud Composer - How to enable private IP environment? - google-cloud-composer

I'm trying to enable the private IP environment for Cloud Composer.
However, I'm running into IP range issues that I don't know how to resolve in GCP:
Http error status code: 400
Http error message: BAD REQUEST
Additional errors:
{"ResourceType":"qba864c4814984c15-tp/compute-address-v1-api:globalAddresses","ResourceErrorCode":"UNSUPPORTED_OPERATION","ResourceErrorMessage":"Requested range conflicts with other resources: The provided IP range overlaps with reserved range for auto subnetwork."}
It used to work in a default environment, where I created a Cloud NAT with an IP address and all of the requests went through the Cloud NAT.
Now, when I tried to re-create the environment, it started saying that the IP ranges overlap, even though nothing has changed in my GCP project.
Any ideas?
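For what it's worth, the "reserved range for auto subnetwork" in that error refers to 10.128.0.0/9, the block that auto-mode VPC networks reserve for their subnets, so the private environment's ranges have to be chosen outside it. A minimal sketch of what that could look like with gcloud (Composer 1 flags; the environment name, location, network, and CIDRs are all placeholders):

# A sketch, not a verified fix: pick private ranges outside 10.128.0.0/9
# so they cannot collide with auto-mode subnets. All names and CIDRs
# below are placeholders.
gcloud composer environments create example-private-env \
  --location us-central1 \
  --enable-ip-alias \
  --enable-private-environment \
  --network default \
  --subnetwork default \
  --master-ipv4-cidr 172.16.0.0/28 \
  --web-server-ipv4-cidr 172.16.0.16/29 \
  --cloud-sql-ipv4-cidr 172.16.1.0/24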

Related

MinIO + Docker - cannot use SSL certificate with new version (x509 doesn't contain any IP SANs)

I'm running MinIO under Docker. I've been using a version that was released before the integration of the MinIO Console (circa July 2021). This was set up with an SSL certificate purchased from a third party, bound to my external web address (https://minio.example.com for instance).
After running the new version of MinIO, RELEASE.2021-09-24T00-24-24Z, via Docker, I needed to update my config (for example, the environment variables for MINIO_ACCESS_KEY / MINIO_SECRET_KEY changed). I've also added --console-address=":9001" to my config; MinIO is running on port 9000 for the main service.
The service runs fine for storing data, but accessing the web address gives the error:
x509: cannot validate certificate for 172.19.0.2 because it doesn't contain any IP SANs
I believe this has to do with MinIO looking at the internal Docker IP addresses and not finding them in the SSL certificate (there are no IPs in the certificate at all). I'm unable to find documentation explaining how to resolve this. Ideally, I don't want to get a new certificate that contains the IP addresses (external or internal!).
Can I change some of the Docker config such that MinIO will not try to check the IP addresses in the SSL certificate?
To answer my own question, I re-read the quickstart guide more carefully (https://docs.min.io/docs/minio-quickstart-guide.html), noting the following:
Similarly, if your TLS certificates do not have the IP SAN for the MinIO server host, the MinIO Console may fail to validate the connection to the server. Use the MINIO_SERVER_URL environment variable and specify the proxy-accessible hostname of the MinIO server to allow the Console to use the MinIO server API using the TLS certificate.
For example: export MINIO_SERVER_URL="https://minio.example.net"
For me, this meant I needed to update my docker-compose.yml file, adding the MINIO_SERVER_URL env variable. It had to point to the data URL for MinIO, not the console URL (otherwise you get an error about "Expected element type <AssumeRoleResponse> but have <html>").
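For reference, the same change in a plain docker run setup might look roughly like this (a sketch with placeholder credentials, paths, and domain, assuming certificates are mounted in the image's default location):

# A sketch only; credentials, paths, and the domain are placeholders.
# MINIO_SERVER_URL points at the data endpoint (port 9000), not the
# console endpoint (port 9001).
docker run -d \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=example-user \
  -e MINIO_ROOT_PASSWORD=example-password \
  -e MINIO_SERVER_URL="https://minio.example.com" \
  -v /srv/minio/data:/data \
  -v /srv/minio/certs:/root/.minio/certs \
  minio/minio:RELEASE.2021-09-24T00-24-24Z \
  server /data --console-address ":9001"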
It now works fine.

Getting Neo4J running on OpenShift

I am trying to get the Bitnami Neo4j image running on OpenShift (testing on my local Minishift), but I am unable to connect. I am following the steps outlined in this issue (now closed); however, I now cannot access the external IP for the load balancer.
Here are the steps I have taken:
1. Deploy the image (bitnami/neo4j)
2. Create a service for the load balancer, using the YAML supplied in the issue mentioned
3. Get the external IP address for the LB (oc get services)
The command in step 3 lists the same IP address twice, and when I attempt to go to this IP in my browser it times out.
I can create a route that points to port 7374 on the IP of the LB, but then I get the same error as reported in the aforementioned issue. (ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver. Please use your browser's development console to determine the root cause of the failure. Common)
Configure neo4j to accept non-local connections. E.g.:
dbms.connector.bolt.address=0.0.0.0:7687
Source: https://neo4j.com/developer/kb/explanation-of-error-websocket-connection-failure/
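On OpenShift specifically, one way to apply such settings without baking a custom neo4j.conf into the image is through environment variables. A sketch, assuming the deployment is named neo4j; note that the variable-name mapping below is the official neo4j image's convention (dots become underscores, underscores are doubled), and the Bitnami image may expect different names:

# A sketch, not a verified fix. On the official neo4j image,
# NEO4J_dbms_connector_bolt_listen__address maps to the config key
# dbms.connector.bolt.listen_address; the deployment name is a placeholder.
oc set env deployment/neo4j \
  NEO4J_dbms_connector_bolt_listen__address=0.0.0.0:7687 \
  NEO4J_dbms_connector_http_listen__address=0.0.0.0:7474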

I cannot create a webhook in GitLab to integrate Jenkins

I'm preparing the environment in Jenkins to integrate SonarQube and GitLab. With SonarQube I have no problem, but when I try to create a webhook, GitLab does not let me enter a localhost URL.
Could someone help me allow access to my URL?
This was reported in gitlab-ce issue 49315 and is linked to the documentation "Webhooks and insecure internal web services":
Because Webhook requests are made by the GitLab server itself, these have complete access to everything running on the server (http://localhost:123) or within the server’s local network (http://192.168.1.12:345), even if these services are otherwise protected and inaccessible from the outside world.
If a web service does not require authentication, Webhooks can be used to trigger destructive commands by getting the GitLab server to make POST requests to endpoints like http://localhost:123/some-resource/delete.
To prevent this type of exploitation from happening, starting with GitLab 10.6, all Webhook requests to the current GitLab instance server address and/or in a private network will be forbidden by default.
That means that all requests made to 127.0.0.1, ::1 and 0.0.0.0, as well as IPv4 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 and IPv6 site-local (ffc0::/10) addresses won’t be allowed.
If you really need this:
This behavior can be overridden by enabling the option “Allow requests to the local network from hooks and services” in the “Outbound requests” section inside the Admin area under Settings (/admin/application_settings/network):
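The same toggle can also be flipped through the application settings API instead of the UI. A sketch, assuming an admin personal access token; note that the attribute name varies by GitLab version (allow_local_requests_from_hooks_and_services on older releases, allow_local_requests_from_web_hooks_and_services on newer ones):

# A sketch only; the token and hostname are placeholders, and the
# attribute name depends on your GitLab version.
curl --request PUT \
  --header "PRIVATE-TOKEN: <your_admin_token>" \
  "https://gitlab.example.com/api/v4/application_settings?allow_local_requests_from_hooks_and_services=true"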

Unable to find the server's IP address even though an IP is defined

First of all, I should warn you that I started using Jelastic a few hours ago, so this might be a newbie question.
I'm using the "free trial" version of Jelastic and ran a few tests, trying a custom Docker image and a NodeJS environment.
I chose a MySQL image, a load balancer (so that I have SSL), and a NodeJS Docker image.
It worked only once, the first time: I could reach the NodeJS image from outside, where a drawing game was available. After that, I only get the following error:
This website is unavailable
Unable to find the IP address of [the auto-generated domain name thingy]
DNS_PROBE_FINISHED_NXDOMAIN
According to Jelastic, since I'm in free trial mode, I can't have more than one IP address, and it must be an IPv6 address. And according to this screenshot, it is enabled.
So... why can't I reach the server from anywhere?
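One way to narrow this down from any machine, as a purely hypothetical diagnostic (the hostname is a placeholder for the auto-generated domain): check whether the name resolves at all, and whether it only has an AAAA record, since the trial provides IPv6 only.

# Hypothetical diagnostic; replace the hostname with your environment's
# auto-generated domain. NXDOMAIN on both suggests the DNS record is
# missing; an AAAA-only answer means the client must be IPv6-capable.
dig +short AAAA env-1234567.jelastic.example.com
dig +short A env-1234567.jelastic.example.com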
Edit: here are a few screenshots (sorry for the time it took)
So after asking the question here yesterday, I moved the IP address from Nginx to NodeJS (just to test), and the error message changed, but didn't get any better:
It seems that somehow, even if I remove the IP from NodeJS and put it back on Nginx, I get the same error. No more DNS_PROBE_FINISHED_NXDOMAIN, though I can't tell why.
Here is how the IP address looks on both nodes:
Thank you in advance

AWS Load Balancer EC2 health check request timed out failure

I'm trying to get down and dirty with DevOps, and I'm running into a health check request timed out failure. The problem is that my Elastic Load Balancer sends a health check to my EC2 instance and gets a network timeout. I'm not sure what I did wrong. I am following this tutorial and have completed all the steps up to and including "Using an Elastic Load Balancer". My EC2 instance seems to be working fine, and I am able to successfully curl localhost on port 9292 from within the instance.
EC2 instance security group setup:
Elastic Load Balancer setup:
My target group for the ELB routing has port 9292 open via HTTP; here's a screenshot of the unhealthy target in my target group.
Health check config:
I have a VPC that my EC2 instance is a part of and my ELB is connected to the same VPC. I do not have Apache installed and I do not have nginx installed. To my understanding, I do not need these. I have a Rails Puma server running and I can send successful curl requests to the server.
My hunch is that my ELB is not allowed to reach my EC2 instance, resulting in a network timeout and a failed health check. I'm unable to find the cause for this. Any ideas? This SO post didn't help much. Are my security groups misconfigured? What else could potentially block a routing request from ELB to my EC2 instance?
Also, is there a way to view network requests / logs for my EC2 instance? I keep seeing VPC flow logging but I feel like there are simpler alternatives.
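As a lighter-weight check than flow logs for this particular failure, the target group itself reports a per-target reason code. A hypothetical AWS CLI sketch (the ARN is a placeholder):

# Hypothetical sketch; the target group ARN is a placeholder. The
# TargetHealth.Reason field (e.g. Target.Timeout) helps distinguish a
# security group / routing problem from an application-level failure.
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example/0123456789abcdef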
Here's something I posted in the AWS forums but to no avail.
UPDATE: I can curl the private IP of the target just fine from within an EC2 instance. I don't think it's the target instance; I think it's something to do with the security group setup. I am unable to identify why, though, because I have basically allowed all traffic from the load balancer to the EC2 instance.
I made my mistake during the "Setup your VPC" step. I had just finished creating a subnet for an RDS instance. When I proceeded to launch an instance, the default subnet that AWS chose once I switched to my VPC was the subnet I had made for RDS, which was NOT a public subnet. Therefore, any attempt to reach it, from any EC2 instance or my load balancer, would fail, because I had only set up my public subnet to take requests.
The solution was to create a new instance and, this time, pick the correct public subnet. My original EC2 instance was associated with a private subnet while the load balancer was pointing to the public subnet.
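To confirm this kind of mix-up from the command line, something like the following might help (a hypothetical AWS CLI sketch; the instance and subnet IDs are placeholders):

# Hypothetical sketch; IDs are placeholders.
# First, find which subnet the instance actually landed in...
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].SubnetId'
# ...then check whether that subnet's route table has a 0.0.0.0/0 route
# to an internet gateway (igw-...), i.e. whether the subnet is public.
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0 \
  --query 'RouteTables[].Routes[]'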
Here's a link to a hand-drawn image that helped me pinpoint my problem; hopefully it can help anyone else who's having trouble setting up. I didn't put the image here directly because it's bigger than 2MB.
Glad to answer any further questions too!
