I have a demo application running perfectly in my local environment. However, I would like to run the same application remotely by giving it an HTTP endpoint. My goal is to test the performance of the application.
How do I give an HTTP endpoint to any multi-container Docker application?
The following is the GitHub repository link for the demo application:
https://github.com/LonareAman/BankCQRS.git
Use docker-compose and manage the containers based on what you need.
One of your containers should be a web server like nginx. Bind a host port to nginx, e.g. 80:80.
Then configure nginx to proxy requests to your other containers, as sketched below.
You can find some samples at https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/
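For example, a minimal sketch of that layout could look like this (the service names, ports, and nginx.conf path below are placeholders, not taken from the linked repository):

version: "3.8"
services:
  app:
    build: .                  # your application container
    expose:
      - "8000"                # reachable only inside the compose network
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"               # publishes the HTTP endpoint on the host
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - app

with an nginx.conf along the lines of:

server {
    listen 80;
    location / {
        proxy_pass http://app:8000;   # "app" resolves to the app service on the compose network
    }
}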
I have a domain called example.com which is routed to the server 123.12.123.12.
This is a Docker host, so the containers are exposed like:
123.12.123.12:1201
123.12.123.12:1202
123.12.123.12:1203
I am accessing these containers like:
http://example.com:1201
http://example.com:1202
http://example.com:1203
But my project is a web app that uses the microphone, so I need all my Docker containers secured with SSL. The projects are developed with Node.js.
Is there any solution? Thanks!
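One common approach (not the only one) is to put a single nginx reverse proxy with TLS termination in front of the containers and stop exposing their ports directly. A rough sketch, where the paths, certificate locations, and upstream addresses are assumptions to adapt to your setup:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    # route by path to the individual containers
    location /app1/ {
        proxy_pass http://127.0.0.1:1201/;
    }
    location /app2/ {
        proxy_pass http://127.0.0.1:1202/;
    }
    location /app3/ {
        proxy_pass http://127.0.0.1:1203/;
    }
}

The browser then only ever talks to https://example.com, which satisfies the secure-context requirement for microphone access, while the upstream containers can stay on plain HTTP behind the proxy.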
I've set up an app where a Node.js backend has to communicate with a Rasa chatbot backend through a React frontend. All services are running through the same docker-compose. Being a Docker beginner, there are some things I'm not sure about:
communication between host and container is done using the container's IP:
browser opening the local React server running on localhost:3000 or 172.22.0.1:3000
browser sending a request to the Express backend on localhost:4000 or 172.22.0.2:4000
however, communication between two Docker containers is done using the container's name:
Rasa server communicating with the Rasa custom action server through http://action_server:5055/webhooks
Rasa custom action server communicating with the Express backend through http://backend_name:4000/users/
My problem is that when I need to contact the Rasa backend from my React frontend, I need to put in the Rasa Docker container's IP, which (sometimes) changes when docker-compose is reinitialized. To work around this I run docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app_rasa_1 to get the IP and manually change it in the React frontend.
Is there a way to avoid changing the IP altogether and use the container name (or an alias/link) instead, or what would be a way to automate the change of the container's IP in the React frontend (are environment variables updated via a script an option)?
Completely ignore the container-private IP addresses. They're implementation details that have several practical problems, including (as you note) them changing when a container is recreated. (They're also unreachable on non-Linux hosts, or if the browser isn't on the same host as the containers.)
You show the correct patterns in your question. For calls between containers, use the container names as host names (this setup is described in more detail in Networking in Compose). For calls from outside containers, including from browser-based applications, use the host's DNS name or IP address and the first number from the ports: you publish.
If the browser application needs to contact a back-end server, it needs a path to do this. This could be via published ports:, or one of your other components could proxy the request to the service (maybe using the express-http-proxy middleware).
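For instance, a minimal express-http-proxy sketch could look like this (the service name rasa and port 5005 are assumptions standing in for your actual compose service names):

const express = require('express');
const proxy = require('express-http-proxy');

const app = express();

// Forward browser requests under /rasa to the Rasa container,
// addressed by its compose service name on the shared network.
app.use('/rasa', proxy('http://rasa:5005'));

app.listen(4000);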
A dedicated container that only proxies to other backend services is also a useful pattern (Docker Nginx Proxy: how to route traffic to different container using path and not hostname includes some examples), particularly since this will let you use path-only URLs like /api or /rasa in your browser application. If the React application is served from http://localhost:8080/, and the main backend is http://localhost:8080/api, then you can just make HTTP requests to /api and they will be interpreted relative to the page's URL. This avoids the hostname problem completely, so long as your reverse proxy has path-based routes to every container you need to directly contact.
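A path-routing reverse proxy along those lines might look roughly like this (the service names and ports are assumptions matching the setup described in the question):

server {
    listen 80;

    # everything else: the React app
    location / {
        proxy_pass http://frontend:3000;
    }

    # path-based routes to the backends, addressed by compose service name
    location /api/ {
        proxy_pass http://backend:4000/;
    }
    location /rasa/ {
        proxy_pass http://rasa:5005/;
    }
}

With this in front, the React code can simply call fetch('/api/users') or fetch('/rasa/...') without knowing any container IPs.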
I have some JHipster Spring microservice and gateway projects. I deployed all of them on one host using Docker, except the gateway, which I started on another host.
I use Keycloak for OAuth authentication.
Everything works fine when I deploy all of the microservices, databases, and gateways as Docker containers on a Docker network using docker-compose.
But it doesn't work when I deploy everything on Docker except the gateway, i.e. when the gateway resides outside of the Docker-created network. The motivation for this is that I just want my UI programmer to be able to run the gateway on his own PC and use the microservices that are deployed on the server host. For ease of UI development, I need to run this single gateway using gradle bootRun -Pprod.
I used a technique called Docker macvlan networking to assign a separate IP to each container on my Docker network, so that every container on the host has its own IP address on the physical network and each of these containers is visible to other hosts on the network.
The problem is that in a normal Docker deployment (when the gateway is deployed on a Docker network on the same host) everything works fine, but in my scenario, after a successful login, every microservice returns error 401.
The microservice logs this error:
o.s.s.oauth2.client.OAuth2RestTemplate : Setting request Accept header to [application/json, application/x-jackson-smile, application/cbor, application/*+json]
o.s.s.oauth2.client.OAuth2RestTemplate : GET request for "http://keycloak:9080/auth/realms/jhipster/protocol/openid-connect/userinfo" resulted in 401 (Unauthorized); invoking error handler
n.m.b.s.o.CachedUserInfoTokenServices : Could not fetch user details: class org.springframework.security.oauth2.client.resource.OAuth2AccessDeniedException, Unable to obtain a new access token for resource 'null'. The provider manager is not configured to support it.
p.a.OAuth2AuthenticationProcessingFilter : Authentication request failed: error="invalid_token", error_description="token string here"
It says that the token is invalid. The same mechanism works when everything is deployed on the same host in Docker. Is it Keycloak that prevents the token from being validated for external hosts? I personally doubt that, because it didn't prevent me from logging into the gateway successfully, and I checked that Keycloak is up with the -b 0.0.0.0 option.
Please help me run a gateway just with gradle bootRun -Pprod.
In summary, I could rephrase my question as: I just want the UI developer to be able to test his Angular/Spring gateway project on his own PC while the other services are deployed on a powerful server using Docker (authentication using Keycloak), and it is not possible to deploy those other services on the UI developer's own PC. How do I do this in JHipster?
Add server.use-forward-headers=true to your configuration when running the gateway this way.
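For example, in the gateway's prod configuration this is a one-liner (shown here as a properties entry; the YAML form works the same way, and on newer Spring Boot versions the equivalent setting is server.forward-headers-strategy=native):

server.use-forward-headers=true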
I'm building an application with a microservices architecture.
So basically, my app looks like this:
API GATEWAY (port 3000) => USERS-SERVICE (port 9090), AUTH-SERVICE (port 8080), SEND-SMS-SERVICE (port 7070).
All of this works fine so far.
Now I am trying to implement Docker in my project. I built an image for each service and ran a container instance for each on my local machine.
Now I want to develop a new service, Customer-Service, and this service runs on http://localhost:3030.
question:
1) How can I request http://localhost:3030 from the API gateway if, in development, I run the api-gateway from a container?
You have to understand the networking concept: when you start independent Docker containers and you don't define a network, they are unreachable from each other.
There is another thing: you CAN'T reach a microservice hosted in one Docker container from microservices hosted in other containers using localhost. localhost is 127.0.0.1, a call to the local machine. The concept of Docker is like "different machines running on the same machine"; it is like a virtual machine, except that Docker shares the host machine's kernel.
You can reach another Docker container in two ways:
Configure it on the host network, which I do not recommend.
Create a network, add every container instance to this network, and call the other microservices using the container name, e.g. http://my-service-1:3400/api/v1/post.
I recommend you use docker-compose.
This is one of my repositories; I created it to share a Node app using JWT, and it uses Docker and docker-compose:
https://github.com/camiloperezv/jwt-template
As you can see, I define a networks attribute in the docker-compose.yml and use this network in all of my services.
In the services section you put all your microservices, and in the code you make HTTP requests using the container name instead of localhost or an IP address.
In my services I use build: .; this is for development purposes. In production you should use a pre-built Docker image instead of building it on the production server. A stripped-down example of this layout is sketched below.
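As a stripped-down illustration of that pattern (the service names and ports are placeholders, not copied from the repository):

version: "3"
services:
  api-gateway:
    build: .
    ports:
      - "3000:3000"        # only the gateway is published to the host
    networks:
      - app-network
  customer-service:
    build: ./customer-service
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

With this, the gateway reaches the new service at http://customer-service:3030 instead of localhost:3030.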
Feel free to use my github code.
Regards
As far as I understand from the question, the new service Customer-Service runs on http://localhost:3030 on the host machine.
If so, the api-gateway Docker container should be started on the host network:
docker run --network host -d <api-gateway_image_name>
After this, Customer-Service will be reachable on localhost:3030 from the api-gateway container.
My app integrates with a web service that supports a proxy server, so I need integration tests that prove this works.
I wanted to use Docker to create a local proxy server against which I can run real integration tests to verify that my web service can be called through the proxy interface without errors.
So I tried https://github.com/jwilder/nginx-proxy
I started up the container with:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
When I use it I get a 503 error: 503 Service Temporarily Unavailable.
Am I misunderstanding what this proxy does?
Although this has been resolved in the comments, I'll try to answer the following question:
Am I misunderstanding what this proxy does?
Yes. What your project requires is a forward proxy, and what you are trying to use is a reverse proxy. This will become clearer once you go through the top-rated answers to Difference between proxy server and reverse proxy server.
For a TL;DR: a forward proxy sits in front of clients and forwards their outbound requests to other servers, while a reverse proxy sits in front of servers and routes incoming requests to them.
There is plenty of forward-proxy software available; you could choose any of it for your project, and a minimal test setup is sketched after the list. Some options are:
Squid
Polipo
Apache Traffic Server
Privoxy
TinyProxy
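As a rough sketch of the test setup (assuming the ubuntu/squid image from Docker Hub; any of the proxies above works similarly, just with a different image and port):

# start a forward proxy for the integration tests
docker run -d --name test-forward-proxy -p 3128:3128 ubuntu/squid

# sanity check: route a request through the proxy (Squid listens on 3128 by default)
curl -x http://localhost:3128 https://example.com/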