Secure gateway between Bluemix CF apps and containers - docker

Can I use Secure Gateway between my Cloud Foundry apps on Bluemix and my Bluemix Docker container database (MongoDB)? It does not work for me.
Here are the steps I have followed:
Upload the Secure Gateway client Docker image to Bluemix:
docker push registry.ng.bluemix.net/NAMESPACE/secure-gateway-client:latest
Run the image with the token as a parameter:
cf ic run registry.ng.bluemix.net/edevregille/secure-gateway-client:latest GW-ID
When I look at the logs of the secure-gateway container, I get the following:
[INFO] (Client PID 1) Setting log level to INFO
[INFO] (Client PID 1) There are no Access Control List entries, the ACL Deny All flag is set to: true
[INFO] (Client PID 1) The Secure Gateway tunnel is connected
and the secure-gateway dashboard interface shows that it is connected too.
But then, when I try to add the MongoDB database (also running on Bluemix at 134.168.18.50:27017->27017/tcp) as a destination from the Secure Gateway service dashboard, nothing happens: the destination is not created (it does not appear).
Am I doing something wrong? Or is it just that this is not a supported use case?

1) The Secure Gateway is a service used to integrate resources from a remote (company) data center into Bluemix. Why do you want to use the SG to access your Docker container on Bluemix?
2) From a technical point of view, the scenario described in the question should work. However, you need to add a rule to the access control list (ACL) to allow access to the Docker container running your MongoDB. When the SG client is running, it provides a console where you can type in commands. You could use something like allow 134.168.18.50:27017 as the command to add the rule.
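For example, once the client's interactive console is available, a single command along the following lines should add the entry (a sketch; the exact syntax can vary between client versions, so check the console's help output):
allow 134.168.18.50:27017
After the rule is accepted, the destination pointing at 134.168.18.50:27017 should become creatable from the Secure Gateway dashboard.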
BTW: There is a demo using the Secure Gateway to connect to a MySQL database running in a VM on Bluemix. It shows how to install the SG and add an ACL rule.
Added: If you are looking into how to secure traffic to your Bluemix app, then just use HTTPS instead of HTTP. It is turned on automatically.

Related

DDEV - create SFTP user

I have created two containers (ddev-website-web and ddev-api-web) via DDEV.
Now I want to access the website container from the api container via SFTP.
How can I create an SFTP user in DDEV for the website container? Is this possible at all?
The containers are already connected via a router.
I think installing sshd using this technique from ddev-contrib will work for you, or at least get you started with having an SSH server.
Then add vsftpd and openssh-server via webimage_extra_packages in your .ddev/config.yaml: webimage_extra_packages: [vsftpd, openssh-server]
From there, you may have some extra configuration to do based on https://linuxopsys.com/topics/install-vsftpd-ftp-server-on-debian
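A minimal sketch of that change, assuming webimage_extra_packages is not already set in the project's .ddev/config.yaml (edit the file by hand if it is):
# Append the extra packages to the project config, then rebuild the web container
cat >> .ddev/config.yaml <<'EOF'
webimage_extra_packages: [vsftpd, openssh-server]
EOF
ddev restart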

Programmatically check if a Cloud Run domain mapping has finished

I'm developing a service which will have a subdomain for each customer. So far I've set a DNS rule on Google Domains as
* | CNAME | 3600 | ghs.googlehosted.com.
and then I add the mapping for each subdomain in the Cloud Run console. I want to do all this programmatically every time a new user registers.
The DNS rule will handle automatically any new subdomain, and to map it to the service I'll use the gcloud command:
gcloud beta run domain-mappings create --service frontend --domain sub.domain.com
Now, how can I check when the Cloud Run provisioning is done so that I can notify the customer that the platform is ready to use? I could run gcloud beta run domain-mappings describe --domain sub.domain.com from a cron job every minute, parse the JSON output, and check whether the status is done. It's expensive, but it should work.
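A minimal sketch of that polling idea, assuming the describe output exposes a Ready entry under status.conditions (the field names are an assumption worth verifying against your gcloud version):
#!/bin/bash
# Keep checking the mapping's Ready condition (assumed layout) until it reports True,
# then notify the customer. Region/platform are taken from the active gcloud config.
DOMAIN="sub.domain.com"
until gcloud beta run domain-mappings describe --domain "$DOMAIN" --format=json \
      | jq -e 'any(.status.conditions[]?; .type == "Ready" and .status == "True")' > /dev/null
do
  sleep 60
done
echo "Domain mapping for $DOMAIN reports Ready"
# Note: as described below, the domain may still need a few extra minutes before it actually serves traffic.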
The problem is that even if the gcloud CLI or the web console marks the provisioning as done, the platform isn't reachable for another 5-10 minutes, resulting in an ERR_CONNECTION_REFUSED error. The service logs show that a request to the subdomain is being made, but somehow it won't serve it.
I ended up using a load balancer as suggested. I followed the doc "Setting up a load balancer with Cloud Run, App Engine, or Cloud Functions"; the only difference is that I provided my own wildcard certificate (thanks to Let's Encrypt and certbot).
Now I can just use the Google Domains' API to instantly create a subdomain.
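For reference, a wildcard certificate like that can be requested with certbot's manual DNS-01 challenge, roughly as follows (domain.com is a placeholder; certbot prompts you to create a DNS TXT record to prove ownership):
# Request a certificate covering the apex domain and all subdomains
sudo certbot certonly --manual --preferred-challenges dns \
  -d "domain.com" -d "*.domain.com"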

Is it possible to create an FTP server in Azure Web App service

I can create an FTP server using Docker according to this.
I wonder whether it works in Azure Web App for Containers.
If the answer is yes, how can I make it work?
By the way, I've tried it. According to the steps from the link, I have to create users, but I don't know how to connect to the container's Linux OS.
Generally, an FTP server needs to open multiple ports, as the document you provided shows. But the Azure Web App service can only open ports 80 and 443, and you do not have full control over it. So if you want to deploy an FTP server, the Azure Web App service is not a good choice, even if it can run the FTP image. A VM is recommended instead.
By the way, if you want to connect to the Linux OS of the Web App's container, you need to enable the SSH feature in the image before you deploy it to the Web App. You can follow the steps in How to enable SSH in the Web App.
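A minimal sketch of what those steps typically boil down to for a custom image (treat the specifics as assumptions and follow the linked guide): App Service expects sshd to listen on port 2222 with the fixed root password Docker!, which the platform uses internally.
# Dockerfile excerpt (sketch)
RUN apt-get update && apt-get install -y --no-install-recommends openssh-server \
    && echo "root:Docker!" | chpasswd
# sshd_config must set "Port 2222"
COPY sshd_config /etc/ssh/
EXPOSE 80 2222
# The container's startup script must also start the SSH service, e.g. "service ssh start"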

I cannot create a webhook in GitLab to integrate Jenkins

I am preparing the environment in Jenkins to integrate SonarQube and GitLab. With SonarQube I have no problem, but when I try to create a webhook, GitLab does not let me enter a localhost URL.
Can someone help me get access to my URL?
This was reported in gitlab-ce issue 49315, and linked to the documentation "Webhooks and insecure internal web services"
Because Webhook requests are made by the GitLab server itself, these have complete access to everything running on the server (http://localhost:123) or within the server’s local network (http://192.168.1.12:345), even if these services are otherwise protected and inaccessible from the outside world.
If a web service does not require authentication, Webhooks can be used to trigger destructive commands by getting the GitLab server to make POST requests to endpoints like http://localhost:123/some-resource/delete.
To prevent this type of exploitation from happening, starting with GitLab 10.6, all Webhook requests to the current GitLab instance server address and/or in a private network will be forbidden by default.
That means that all requests made to 127.0.0.1, ::1 and 0.0.0.0, as well as IPv4 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 and IPv6 site-local (ffc0::/10) addresses won’t be allowed.
If you really need this:
This behavior can be overridden by enabling the option “Allow requests to the local network from hooks and services” in the “Outbound requests” section inside the Admin area under Settings (/admin/application_settings/network).
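The same toggle can also be flipped through the application settings API; a sketch follows (hostname and token are placeholders, and the setting key is an assumption: it is allow_local_requests_from_web_hooks_and_services on recent versions and was allow_local_requests_from_hooks_and_services on older ones):
# Requires an administrator's personal access token
curl --request PUT --header "PRIVATE-TOKEN: <admin-token>" \
  "https://gitlab.example.com/api/v4/application/settings?allow_local_requests_from_web_hooks_and_services=true"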

Docker cannot access registry from OpenShift

Here is my whole scenario.
I have a RHEL 7.1 VMware image with the corporate proxy properly configured; accessing resources over HTTP or HTTPS works properly.
I installed docker-engine and added the HTTP_PROXY setting to /etc/systemd/system/docker.service.d/http-proxy.conf. I can verify the proxy setting is picked up by executing:
sudo systemctl show docker --property Environment
which will print:
Environment=HTTP_PROXY=http://proxy.mycompany.com:myport/
(with real values, of course).
Pulling and running Docker images works correctly this way.
The goal is to work with the binary distribution of openshift-origin. I downloaded the binaries, and started setting up things as per the walkthrough page on github:
https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
Starting openshift seems to work as I can:
* login via the openshift cli
* create a new project
* even access the web console
But when I try to create an app in the project (also via the CLI):
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git
It fails:
error: can't look up Docker image "centos/ruby-22-centos7": Internal error occurred: Get https://registry-1.docker.io/v2/: dial tcp 52.71.246.213:443: connection refused
I can access this endpoint (without authentication, though) via the browser on the VM or via wget.
Hence I believe Docker fails to pick up the proxy settings. After some searching, I also suspect there are iptables settings missing. Referring to:
https://docs.docker.com/v1.7/articles/networking/
But I don't know if I should fiddle with the iptables settings; shouldn't Docker figure that out itself?
Check your HTTPS_PROXY environment property.
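Since the failing request goes to https://registry-1.docker.io, it is likely that only HTTP_PROXY is set in the drop-in. A minimal sketch of adding HTTPS_PROXY alongside it (proxy host and port are placeholders) and reloading Docker:
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf > /dev/null <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.mycompany.com:myport/"
Environment="HTTPS_PROXY=http://proxy.mycompany.com:myport/"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl show docker --property Environment   # should now list both variables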
