Docker compose a Server API and Nuxt Client? - docker

I've been trying for two weeks to write a docker-compose file that links an API server (Strapi) and a Nuxt app (the client).
The Nuxt app makes requests to the API server to fetch data, but I don't know how to make the two containers communicate so they can share data and make requests.
I've tried a lot of configurations, but nothing works.
My docker-compose.yml:
version: "3.9"
services:
client:
image: nuxt-client
networks:
- my-network
environment:
STRAPI_URL: server:4040
depends_on:
- server
server:
image: strapi-server
networks:
- my-network
volumes:
- ./data:/app/.tmp/
networks:
my-network:
driver: bridge
My Nuxt app tries to make requests to the server on port 4040.
Through its environment, the Nuxt app receives STRAPI_URL, which is the URL of the API (for example: http://localhost:4040).
But the request in the browser's network tab returns a 404.
I want to put this docker-compose setup behind my Nginx Proxy Manager, and I don't want to publish any ports outside the containers.
Any help? :)
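For reference, here is the direction I'm currently experimenting with. This is only a rough sketch, not a working config: I haven't confirmed that Strapi actually listens on 4040 inside its container (its default port is 1337), and STRAPI_BROWSER_URL is just a placeholder name I made up. My understanding is that the service name only resolves for requests made from inside the compose network (e.g. Nuxt SSR), while requests fired from the browser need a URL that is reachable from outside, e.g. through Nginx Proxy Manager:

version: "3.9"
services:
  client:
    image: nuxt-client
    networks:
      - my-network
    environment:
      # server-side (SSR) requests: the compose service name resolves inside
      # the network, and the URL needs a scheme
      STRAPI_URL: http://server:4040
      # browser-side requests cannot resolve "server"; they need a URL reachable
      # from the user's machine, e.g. a host proxied by Nginx Proxy Manager (placeholder)
      STRAPI_BROWSER_URL: https://api.example.com
    depends_on:
      - server
  server:
    image: strapi-server
    networks:
      - my-network
    volumes:
      - ./data:/app/.tmp/
networks:
  my-network:
    driver: bridge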

Related

.NET Core Docker Application with connection refused

I have two containers (both .NET Core), a web application and a web API. The web application can be accessed from the host machine at http://localhost:51217, but I can't access the web API at http://localhost:51218; I get connection refused. To make the web API reachable, I had to change the Kestrel URL configuration from ASPNETCORE_URLS=http://localhost to ASPNETCORE_URLS=http://0.0.0.0, so the web server listens on all IPs.
Any clue why localhost works for the web app but not for the web API, although both have different port mappings?
Below is my docker-compose file, which works fine; if I change the API back to ASPNETCORE_URLS=http://localhost, I get connection refused. The Dockerfiles expose port 80.
version: '3.5'
services:
  documentuploaderAPI:
    image: ${DOCKER_REGISTRY-}documentuploader
    container_name: DocumentUpoaderAPI
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://0.0.0.0
    networks:
      - doc_manager
    ports:
      - "51217:80"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets/:/root/.microsoft/usersecrets
      - ${APPDATA}/ASP.NET/Https/:/root/.aspnet/https/
      - c:\azurite:/root/.unistad/
    build:
      context: .
      dockerfile: DocumentUploader/Dockerfile
  documentmanagerAPP:
    image: ${DOCKER_REGISTRY-}documentmanager
    container_name: DocumentManagerApp
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://localhost;http://localhost
    networks:
      - doc_manager
    ports:
      - "51218:80"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets/:/root/.microsoft/usersecrets
      - ${APPDATA}/ASP.NET/Https/:/root/.aspnet/https/
    build:
      context: .
      dockerfile: Document Manager/Dockerfile
networks:
  doc_manager:
    name: doc_manager
    driver: bridge
Any idea why localhost doesn't work for the API? Any suggestion on how I can trace or sniff the communication from the browser all the way to the web server in the container?
Below is the Docker networking design (diagram not reproduced here), which may help with my question.
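For what it's worth, here is a stripped-down compose sketch of how I currently understand the listen-address difference (illustrative only, not my real file; the image name is made up):

services:
  api:
    image: example/api   # hypothetical image
    environment:
      # http://localhost binds only the container's loopback interface; the
      # published port forwards to the container's bridge IP, so connections
      # are refused:
      # - ASPNETCORE_URLS=http://localhost
      # http://0.0.0.0 binds all interfaces inside the container, so 51218->80 works:
      - ASPNETCORE_URLS=http://0.0.0.0
    ports:
      - "51218:80"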

docker-compose: varnish+apache2 return a 503 error `Backend fetch failed`

I am trying to run a very simple Docker-compose.yml file based on varnish and php7.1+apache2 services:
version: "3"
services:
cache:
image: varnish
container_name: varnish
volumes:
- ./default.vcl:/etc/varnish/default.vcl
links:
- web:webserver
depends_on:
- web
ports:
- 80:80
web:
image: benit/stretch-php-7.1
container_name: web
ports:
- 8080:80
volumes:
- ./index.php:/var/www/html/index.php
The default.vcl contains:
vcl 4.0;
backend default {
    .host = "webserver";
    .port = "8080";
}
I encountered the following error when browsing at http://localhost/:
Error 503 Backend fetch failed
Backend fetch failed
Guru Meditation:
XID: 9
Varnish cache server
The web service works fine when I test it at http://localhost:8080/.
What's wrong?
You need to configure varnish to communicate with "web" on port "80" rather than "webserver" on port "8080".
The "web" comes from the service name in your compose file. There's no need to set a container name, and indeed that breaks the ability to scale or perform rolling updates if you transition to swarm mode. Links have been deprecated in favor of shared networks that docker compose will provide (links are very brittle, breaking if you update the web container). And depends_on does not assure that the other service is ready to receive requests. If you have a hard dependency to hold varnish from starting until the web server is ready to receive requests, then you'll want to update the entrypoint with a task to wait for the remote port to be reachable and have a plan for how to handle the web server going down.
The port 80 comes from the container port. There is no need to publish port 8080 on the docker host if you only want to access it through varnish, and this would be a security risk to many. Containers communicate directly to the container port, not back out to the host and mapped back into a container.
The resulting compose file could look like:
version: "3"
services:
cache:
image: varnish
container_name: varnish
volumes:
- ./default.vcl:/etc/varnish/default.vcl
ports:
- 80:80
web:
image: benit/stretch-php-7.1
volumes:
- ./index.php:/var/www/html/index.php
And importantly, your varnish config would look like:
vcl 4.0;
backend default {
    .host = "web";
    .port = "80";
}
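If you really do need to hold varnish until the web server accepts connections, one rough way to do it is to override the entrypoint with a small wait loop, something like the sketch below (assuming bash is available in the varnish image and that varnishd -F -a :80 -f /etc/varnish/default.vcl matches how your image normally starts varnish; adjust the flags if it doesn't):

  cache:
    image: varnish
    volumes:
      - ./default.vcl:/etc/varnish/default.vcl
    ports:
      - 80:80
    # wait until web:80 accepts TCP connections, then start varnish in the foreground
    entrypoint:
      - bash
      - -c
      - |
        until (echo > /dev/tcp/web/80) 2>/dev/null; do sleep 1; done
        exec varnishd -F -a :80 -f /etc/varnish/default.vcl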

Consume API from a client docker container to the server container

I have two different projects running in different docker containers. Below are the two YML files:
FILE webserver-api/docker-compose.yml
version: "3.1"
services:
webserver:
image: nginx:alpine
container_name: webserver-api
working_dir: /application
volumes:
- .:/application
- ./docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "8005:80"
FILE client-app/docker-compose.yml
version: '3'
services:
  web:
    container_name: client-app
    build:
      context: ./
      dockerfile: deploy/web.docker
    volumes:
      - ./:/var/www
    ports:
      - "8010:80"
    links:
      - app
  app: [...]
  database: [...]
From the client-app I would like to call the webserver-api.
When I try to consume the API from webserver-api, I get a "cURL error: connection refused" message or a timeout error.
For example:
$response = file_get_contents('http://localhost:8005/api/test');
I also tried replacing localhost with the IP of the webserver-api container, like this:
$response = file_get_contents('http://172.25.0.2:8005/api/test');
But I still get a connection timeout error.
What is the correct URL for calling the server container from the client container? Or how should the host URL be set?
Thanks a lot for the help and time.
You need to create a network first, then use this network in both your client and server docker-compose files. Otherwise their networks are isolated from each other.
Another approach is to expose the server's port on localhost and connect to localhost from the client side.
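For example (assuming a pre-created network with the made-up name shared-net, created once with docker network create shared-net), both compose files could join it as an external network:

networks:
  default:
    external:
      name: shared-net

With that in place, the containers in both projects can reach each other by service or container name.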
As per the docker-compose documentation
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
So ideally, if your services are interdependent, you should put them in a single compose file. In that case you could access the service directly by service name and container port:
http://webserver/api/test
But since they are in separate compose files, you can access the service via the host-mapped port:
$response = file_get_contents('http://localhost:8005/api/test');
This should also work.
To debug, you can check whether the port binding to 8005 is actually happening on your host, and whether the endpoint is correct and reachable from the host.
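As a rough illustration of the single-compose-file approach (using the service names and images from your two files; the app and database services are left out):

version: "3.1"
services:
  webserver:
    image: nginx:alpine
    working_dir: /application
    volumes:
      - .:/application
      - ./docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8005:80"
  web:
    build:
      context: ./
      dockerfile: deploy/web.docker
    volumes:
      - ./:/var/www
    ports:
      - "8010:80"

With both services in the same compose project, the client could call http://webserver/api/test (the container port, 80), while http://localhost:8005/api/test would still work from the host.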
Finally I figured it out.
By default, Docker creates a network named [projectname]_default (in my case webserver-api_default, where webserver-api is the name of the folder that contains the YML file).
In the client's client-app/docker-compose.yml I had to specify which network to join:
version: '3'
networks:
  default:
    external:
      name: webserver-api_default
services:
  web:
    container_name: client-app
    build:
      context: ./
      dockerfile: deploy/web.docker
    volumes:
      - ./:/var/www
    ports:
      - "8010:80"
    links:
      - app
  app: [...]
  database: [...]
And from the client container I have to make the call to this URL:
$response = file_get_contents('http://webserver-api:8005/api/test');
Where webserver-api is the name of the server container and not the name of the network.
https://docs.docker.com/compose/networking/

How can I get a LAMP stack application in docker to send an HTTP POST to another service running on my local host?

I have a simple web application that uses HTML and PHP to capture information via an HTML form. It takes this information and uses an HTTP POST to send it, as application/xml, to a service that is also running on my local host, on port 8400. I have this application running in a LAMP stack on macOS and it works perfectly; the XML gets to the service without any issues.
When I moved this app into a containerized LAMP stack using Docker, the application runs, but when my PHP tries to post to the other service on port 8400, it cannot get there.
I am confident that this is an issue with Docker networking, but I am not sure what the problem is. Here is my docker-compose.yml file:
version: "1.0"
services:
php:
build: "./php/"
networks:
- backend
- frontend
volumes:
- ./public_html/:/var/www/html/
apache:
build: "./apache/"
depends_on:
- php
- mysql
networks:
- frontend
- backend
ports:
- "8080:80"
volumes:
- ./public_html/:/var/www/html/
mysql:
image: mysql:5.6.40
networks:
- backend
environment:
- MYSQL_ROOT_PASSWORD=rootpassword
networks:
frontend:
backend:
I think the answer probably lies in allowing the container to reach my localhost network, but being relatively new to Docker, I am unsure.
How can I configure Docker networking to allow posting to services running outside of the Docker network on specific ports?
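From what I have read so far, something along these lines might work, but I have not verified it in my setup (extra_hosts with host-gateway needs Docker Engine 20.10+; on Docker Desktop for Mac, host.docker.internal is said to resolve even without it):

  php:
    build: "./php/"
    networks:
      - backend
      - frontend
    volumes:
      - ./public_html/:/var/www/html/
    # map the host machine to a name the container can resolve, so PHP can
    # post to http://host.docker.internal:8400 instead of http://localhost:8400
    extra_hosts:
      - "host.docker.internal:host-gateway"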

Docker web app can't communicate with API app

I have two .NET Core apps running in Docker (one is a web API, the other is a web app consuming the web API).
I can't seem to communicate with the API via the web app, but I can access the API by going directly to it in my browser at http://localhost:44389.
My web app has an environment variable with that same address, but it can't reach it.
If I point it at the deployed version of my API on Azure instead, it communicates with that address just fine, so the problem seems to be the containers talking to each other.
I read that creating a bridge network should fix the problem, but it doesn't seem to. What am I doing wrong?
Here is my docker compose file:
version: '3.4'
services:
  rc.api:
    image: ${DOCKER_REGISTRY}rcapi
    build:
      context: .
      dockerfile: rc.Api/Dockerfile
    ports:
      - "44389:80"
  rc.web:
    image: ${DOCKER_REGISTRY}rcweb
    build:
      context: .
      dockerfile: rc.Web/Dockerfile
    environment:
      - api_endpoint=http://localhost:44389
    depends_on:
      - rc.api
networks:
  my-net:
    driver: bridge
docker-compose automatically creates a network between your containers. Since your containers are on the same network, you can connect between them using aliases: docker-compose creates an alias for the container name that resolves to the container IP. So in your case the docker-compose file should look like:
version: '3.4'
services:
  rc.api:
    image: ${DOCKER_REGISTRY}rcapi
    build:
      context: .
      dockerfile: rc.Api/Dockerfile
    ports:
      - "44389:80"
  rc.web:
    image: ${DOCKER_REGISTRY}rcweb
    build:
      context: .
      dockerfile: rc.Web/Dockerfile
    environment:
      - api_endpoint=http://rc.api
    depends_on:
      - rc.api
networks:
  my-net:
    driver: bridge
Since rc.api listens on port 80 inside its container, rc.web can reach it at http://rc.api:80, or simply http://rc.api (the port can be omitted because 80 is the default HTTP port).
You need to call http://rc.api because you have two containers, and the API container's localhost is different from the web app container's localhost.
The convention is that each service can be resolved by the name specified in the docker-compose.yml.
Thus you can call the API on its internal port 80 instead of exposing it on a particular host port.
