I have a question about JHipster working combined with Docker and localhost. I started the registry and the UAA apps using docker-compose, and everything is fine. Then I started one microservice and the gateway locally. Both of them show up successfully in the registry's instances view. The problem is that when the gateway tries to connect to the UAA (uaa/oauth/token) it fails (I/O error on POST request for http://uaa/oauth/token). I tried mapping uaa to localhost in /etc/hosts, but it did not help. Does anybody have an idea how to deal with this issue? Thanks in advance.
The UAA server has a host name and a port, and both need to be specified. The /etc/hosts entry only fixes the host part; to point the gateway at the right port you will need to change your application.properties.
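For example, a minimal sketch of the idea, assuming the UAA listens on port 9999 on your machine and that your gateway honours the standard Spring Security OAuth properties (the exact property names depend on the JHipster version and the generated configuration, so treat them as placeholders):

    # /etc/hosts, so the service name registered in the registry resolves locally
    127.0.0.1   uaa

    # application.yml (or application.properties) on the gateway:
    # point the token endpoint at the host AND port
    security:
      oauth2:
        client:
          access-token-uri: http://uaa:9999/oauth/token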
I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian Docker image which successfully connects to a VPN (specifically, Kerio Control VPN) when deployed. Whenever I make a network request from this container, it goes through the VPN connection, as expected.
I have another image which runs a .NET Core program which makes necessary HTTP requests.
From this guide I know it is possible to run a container's traffic through another using pure docker. Specifically using the --net=container:something option (trimmed the example):
docker run \
  --name=jackett \
  --net=container:vpncontainer \
  linuxserver/jackett
However, I have to use Kubernetes for this deployment so I think it would be good to use a 2-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
Containers in a pod share network resources. If you run the VPN client in one container, all containers in that pod will have network access via the VPN.
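A minimal sketch of such a two-container pod; the image names and the NET_ADMIN capability are assumptions (tunnel devices usually require it, and your Kerio client image may also need /dev/net/tun):

    apiVersion: v1
    kind: Pod
    metadata:
      name: vpn-app
    spec:
      containers:
        - name: vpn-client                  # sidecar that establishes the VPN tunnel
          image: example/kerio-vpn-client   # hypothetical image
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
        - name: app                         # the .NET Core program
          image: example/dotnet-app         # hypothetical image
          # no --net=container needed: both containers share the pod's network namespace

Traffic from the app container then leaves through whatever routes the VPN client sets up, since the pod has a single network namespace.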
Based on your comment, I think I can suggest two approaches.
Private GKE Cluster with CloudNAT
In this setup, you would use a private GKE cluster with Cloud NAT for external communication, and you would need to use a manually assigned external IP.
This scenario uses a specific external IP for the VPN connection, but your customer would need to whitelist access for that IP.
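A rough sketch of the Cloud NAT side, with region, network, and resource names as placeholders:

    # reserve the static IP the customer will whitelist
    gcloud compute addresses create nat-ip --region=europe-west1

    # attach a Cloud NAT configuration that uses exactly that IP
    gcloud compute routers create nat-router --network=default --region=europe-west1
    gcloud compute routers nats create nat-config \
      --router=nat-router --region=europe-west1 \
      --nat-external-ip-pool=nat-ip --nat-all-subnet-ip-ranges

With a private GKE cluster in the same network, egress from the nodes then leaves through that single IP.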
Site-to-site VPN using Cloud VPN
You can configure your VPN to forward packets to your cluster. For details, check these other Stack Overflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files need to be served by nginx. I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the django container are in the same pod. My limited understanding is that it would be enough to run VPN in the background in the nginx container and it should successfully route requests to the backend using localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 Time-Out in the browser and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?
I've configured NGINX on a cloud instance as a reverse proxy to a Docker container. The app sends emails using nodemailer with Gmail SMTP, but this isn't working inside the Docker container.
My guess:
Missing port configuration
A mail proxy or something is needed...
I tried exposing ports 587 and 465 in the Dockerfile with no success (not sure if that's correct or if I need something else).
Other considerations:
The container runs its own server using Koa.
The cloud instance will host more containers that may send mail too. Each with their own domain and reverse proxy configurations.
Your help is really appreciated!
UPDATE
Running the app in the container, Gmail gives a 534 response code (invalid login error).
It still works fine when running the app outside the container.
Gmail authentication was giving login errors running the app in the container.
The correct way is to configure it through OAuth2 and it works flawlessly.
Here's the tutorial I found that helped me out: https://alexb72.medium.com/how-to-send-emails-using-a-nodemailer-gmail-and-oauth2-fe19d66451f9
Thanks timsmelik for your help.
I have a React service running as a docker-compose service on a network defined in that compose file. For the React service I use http-proxy-middleware so that I can use relative endpoints (/api/... instead of localhost:xxxx/api/...) both in development and in production, and also because one of the libraries I depend on requires it (for the same reason).
I also have a Python Flask backend that I want to run directly on localhost, to avoid restarting the entire docker-compose setup on every change.
Currently the proxy gives an "ECONNREFUSED" error when used (as expected, I suppose), since it cannot connect to the backend.
Does anyone have an idea of how I could get the proxy to be able to access the backend without having to run the backend in the docker-compose?
Thanks in advance, Vidar
I finally got it working, with help from #Hikash, by pointing my frontend proxy at the host through the IP I get from ip -4 addr show docker0 | grep -Po 'inet \K[\d.]+'.
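On newer Docker versions (20.10+), an alternative to hard-coding the docker0 IP is the host-gateway alias in docker-compose; a sketch, assuming the Flask backend listens on port 5000 on the host and the service is called react:

    services:
      react:
        build: ./frontend                        # hypothetical build context
        extra_hosts:
          - "host.docker.internal:host-gateway"  # resolves to the host from inside the container

The http-proxy-middleware target can then be http://host.docker.internal:5000 instead of a hard-coded bridge IP.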
I have some JHipster Spring Microservice and gateway projects. I deployed all of them on a host using docker except the gateway. I started the gateway on another host.
I use Keycloak for OAuth authentication.
Everything works fine when I deploy all of the microservices, databases, and gateways as Docker containers on a Docker network using docker-compose.
But it doesn't work when I deploy everything on Docker except the gateway, i.e. when the gateway resides outside the Docker-created network. The motivation is that I just want my UI programmer to be able to run the gateway on his own PC and use the microservices deployed on the server host. For ease of UI development, I need to be able to run this single gateway with gradle bootRun -Pprod.
I used a technique called Docker macvlan networking to assign a separate IP to each container on my Docker network, so that every container on the host has its own IP address on the physical network and each container is visible to other hosts on the network.
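For reference, creating such a macvlan network typically looks like this (subnet, gateway, and parent interface here are placeholders for my actual values):

    docker network create -d macvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=eth0 pub_net

Containers attached to pub_net then get their own addresses in 192.168.1.0/24 and are reachable from other hosts on the LAN.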
The problem is that in a normal Docker deployment (when the gateway is deployed on a Docker network on the same host) everything works fine, but in my scenario every microservice returns a 401 error after a successful login.
The microservice logs this error:
o.s.s.oauth2.client.OAuth2RestTemplate : Setting request Accept header to [application/json, application/x-jackson-smile, application/cbor, application/*+json]
o.s.s.oauth2.client.OAuth2RestTemplate : GET request for "http://keycloak:9080/auth/realms/jhipster/protocol/openid-connect/userinfo" resulted in 401 (Unauthorized); invoking error handler
n.m.b.s.o.CachedUserInfoTokenServices : Could not fetch user details: class org.springframework.security.oauth2.client.resource.OAuth2AccessDeniedException, Unable to obtain a new access token for resource 'null'. The provider manager is not configured to support it.
p.a.OAuth2AuthenticationProcessingFilter : Authentication request failed: error="invalid_token", error_description="token string here"
It says that the token is invalid. The same mechanism works when everything is deployed on the same host in Docker. Is it Keycloak that prevents the token from being validated for external hosts? I personally doubt that, because it didn't prevent me from logging into the gateway successfully, and I just checked Keycloak: it is up with -b 0.0.0.0.
Please help me get the gateway up and running with just gradle bootRun -Pprod.
In summary, I could rephrase my question as: I just want the UI developer to be able to test his Angular/Spring gateway project on his own PC while the other services are deployed on a powerful server using Docker (with authentication through Keycloak), since it is not possible to deploy those other services on the UI developer's own PC. How can this be done in JHipster?
Add server.use-forward-headers=true to your config when using the gateway.
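For example, in the gateway's application-prod.yml (a sketch; on newer Spring Boot versions the equivalent setting is server.forward-headers-strategy):

    server:
      use-forward-headers: true

This makes the embedded server honour the X-Forwarded-* headers when building absolute URLs, so the externally visible host is used instead of the container-internal one.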
I have successfully deployed an OpenShift all-in-one cluster using the client tools provided on GitHub.
./oc cluster up
I also built a WordPress web site and a MySQL database for it. Both are working fine, and now I want to access the web site via a local IP address on my network so that others can also reach my site on OpenShift. I don't know how to do this. I have tried as much as I can; I cannot edit the master-config file because it resides in a Docker container, and when the container restarts my changes are gone. Please help.
thank you
You can bring up the cluster bound to your local IP address, something like:
oc cluster up --public-hostname=192.168.122.154
Check
oc status
once the cluster is up, and use the URL it reports to access your deployment.
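If the goal is to let others open the WordPress site itself, you would normally also expose its service as a route; a sketch, assuming the service is named wordpress and using a nip.io hostname that resolves to the same address:

    oc expose service wordpress --hostname=wordpress.192.168.122.154.nip.io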