Caddy as reverse proxy in Docker refuses to connect to other containers

I wanted to try out Caddy in a Docker environment, but it does not seem to be able to connect to other containers. I created a network "caddy" and want to run Portainer alongside it. If I go into Caddy's volume, I can see that certificates are generated, so that part seems to work. Portainer is also running and accessible via the server IP (http://65.21.139.246:1000/). But when I access it via the URL https://smallhetzi.fading-flame.com/ I get a 502, and in Caddy's log I can see this message:
{
  "level": "error",
  "ts": 1629873106.715402,
  "logger": "http.log.error",
  "msg": "dial tcp 172.20.0.2:1000: connect: connection refused",
  "request": {
    "remote_addr": "89.247.255.231:15146",
    "proto": "HTTP/2.0",
    "method": "GET",
    "host": "smallhetzi.fading-flame.com",
    "uri": "/",
    "headers": {
      "Accept-Encoding": [
        "gzip, deflate, br"
      ],
      "Accept-Language": [
        "de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7"
      ],
      "Cache-Control": [
        "max-age=0"
      ],
      "User-Agent": [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36"
      ],
      "Sec-Fetch-Site": [
        "none"
      ],
      "Accept": [
        "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"
      ],
      "Sec-Fetch-Mode": [
        "navigate"
      ],
      "Sec-Fetch-User": [
        "?1"
      ],
      "Sec-Fetch-Dest": [
        "document"
      ],
      "Sec-Ch-Ua": [
        "\"Chromium\";v=\"92\", \" Not A;Brand\";v=\"99\", \"Google Chrome\";v=\"92\""
      ],
      "Sec-Ch-Ua-Mobile": [
        "?0"
      ],
      "Upgrade-Insecure-Requests": [
        "1"
      ]
    },
    "tls": {
      "resumed": false,
      "version": 772,
      "cipher_suite": 4865,
      "proto": "h2",
      "proto_mutual": true,
      "server_name": "smallhetzi.fading-flame.com"
    }
  },
  "duration": 0.000580828,
  "status": 502,
  "err_id": "pq78d9hen",
  "err_trace": "reverseproxy.statusError (reverseproxy.go:857)"
}
Here are my two compose files:
Caddy:
version: '3.9'

services:
  caddy:
    image: caddy:2-alpine
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - certs-volume:/data
      - caddy_config:/config

volumes:
  certs-volume:
  caddy_config:

networks:
  default:
    external:
      name: caddy
Caddyfile:
{
    email simonheiss87@gmail.com
    # acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}

smallhetzi.fading-flame.com {
    reverse_proxy portainer:1000
}
and my portainer file:
version: '3.9'

services:
  portainer:
    image: portainer/portainer-ce
    container_name: portainer
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    entrypoint: /portainer -p :80
    ports:
      - "1000:80"

volumes:
  portainer_data:

networks:
  default:
    external:
      name: caddy
What I think happens is that those two containers are somehow not in the same network, but I don't get why.
What works as a workaround right now is making this change to my Caddyfile:
smallhetzi.fading-flame.com {
    reverse_proxy 65.21.139.246:1000
}
Then I get a valid certificate and the Portainer UI. But I would rather not spread IPs over my Caddyfile. Do I have to configure something else for Caddy to run in Docker?
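For reference, a quick way to check whether both containers are actually attached to the external "caddy" network. This is a minimal sketch using the standard Docker CLI, with the container names from the compose files above:

# List the containers attached to the external "caddy" network;
# both "caddy" and "portainer" should appear.
docker network inspect caddy --format '{{range .Containers}}{{.Name}} {{end}}'

# If one of them is missing, it can be attached manually:
docker network connect caddy portainer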

I just got help from the forum, and it turns out that Caddy connects to the port INSIDE the container, not the published one. In my case, Portainer listens on port 80 internally, so changing the Caddyfile to this:
smallhetzi.fading-flame.com {
    reverse_proxy portainer:80
}
or this
smallhetzi.fading-flame.com {
    reverse_proxy http://portainer
}
does the job. This also means that I can get rid of exposing Portainer directly on port 1000; now I can only access it via the proxy.
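For completeness, a sketch of what the Portainer compose file might look like once the published port is dropped and only the proxy path remains (assuming the same external "caddy" network as above):

version: '3.9'

services:
  portainer:
    image: portainer/portainer-ce
    container_name: portainer
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    entrypoint: /portainer -p :80   # keep Portainer listening on 80 inside the container
    # no "ports:" section any more; Caddy reaches portainer:80 over the shared network

volumes:
  portainer_data:

networks:
  default:
    external:
      name: caddy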
Hope someone gets some help from that :)

Related

Selenium Selenoid File Server not running in Browser Container

Selenium is unable to download any files from the browsers due to a 502 error on my coworker's machine; none of my other coworkers are seeing the issue, just this one dude. We are using Firefox.
After looking at the Selenoid code a bit, I learned that the containers the browser runs in use a file server on port 8080 to allow downloading files from the container, but I discovered that this file server is not running within these containers.
I verified this through this command:
docker exec -it <browser_container> curl 127.0.0.1:8080
On my machine I get a 200 response:
test.xlsx
But when I run this command on his machine I get this error:
Failed to connect to 127.0.0.1 port 8080 after 8 ms: Connection refused
This indicates that the file server is not running within his browser containers. I've been trying many different Firefox arguments, and I've restarted Selenoid and the Docker containers, and still can't figure out what's going on; I'm completely lost right now. If anyone knows what might be going on I would be appreciative, or even if anyone has an idea of how to gain more information into what's going on.
Here are the Firefox options we are using:
options = webdriver.FirefoxOptions()
options.add_argument('--width=1600')
options.add_argument('--height=900')
options.set_preference('browser.download.dir', '/home/selenium/Downloads')
And our browsers.json file
{
  "chrome": {
    "default": "105.0",
    "versions": {
      "105.0": {
        "image": "selenoid/vnc_chrome:105.0",
        "port": "4444",
        "path": "/",
        "env": ["TZ=America/Denver"]
      }
    },
    "caps": {
      "loggingPrefs": {"browser": "ALL"},
      "enableVNC": true,
      "browserName": "chrome",
      "timeZone": "America/Denver",
      "sessionTimeout": "1m30s"
    }
  },
  "firefox": {
    "default": "latest",
    "versions": {
      "latest": {
        "image": "selenoid/firefox",
        "port": "4444",
        "path": "/wd/hub",
        "env": ["TZ=America/Denver"]
      }
    },
    "caps": {
      "loggingPrefs": {"browser": "ALL"},
      "enableVNC": true,
      "browserName": "firefox",
      "timeZone": "America/Denver",
      "sessionTimeout": "1m30s"
    }
  }
}
We do have a custom docker-compose.yml file for starting the selenoid and selenoid_ui containers. Here is the file just in case that setup matters; I doubt the issue lies here.
version: "3.9"
networks:
selenoid_net:
name: selenoid_net
attachable: true
ipam:
config:
- subnet: 172.198.1.0/24
services:
selenoid:
image: aerokube/selenoid
restart: always
networks:
selenoid_net:
ports:
- "4444:4444"
environment:
- OVERRIDE_VIDEO_OUTPUT_DIR=${VIDEO_OUTPUT}/video
- TZ=America/Denver
volumes:
- "/etc/selenoid:/etc/selenoid"
- "/var/run/docker.sock:/var/run/docker.sock"
- "${VIDEO_OUTPUT}/video:${VIDEO_OUTPUT}/video"
- "${VIDEO_OUTPUT}/logs:${VIDEO_OUTPUT}/logs"
- "${PWD}:/etc/browsers"
command: ["-conf", "/etc/browsers/browsers.json",
"-video-output-dir", "${VIDEO_OUTPUT}/video",
"-log-output-dir", "${VIDEO_OUTPUT}/logs",
"-limit", "6",
"-timeout", "1m30s","-container-network", 'selenoid_net']
selenoid-ui:
image: "aerokube/selenoid-ui:latest"
restart: always
networks:
selenoid_net:
links:
- "selenoid"
ports:
- "8080:8080"
command: ["--selenoid-uri", "http://selenoid:4444"]

ocelot redirect to internal container failed

So I have a console app that needs to call the backend running in Docker.
I'm using Ocelot for the API gateway and an API.
When I do a test call in Postman against my gateway, the request comes in and Ocelot tries to forward it to the API, but I get an ENOTFOUND error.
I'm using the docker-compose service name, and I also tried the container name.
Ocelot config:
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/fileagent",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "fileagentapi",
          "Port": 80
        }
      ],
      "UpstreamPathTemplate": "/fileagent",
      "UpstreamHttpMethod": [ "POST" ]
    }
  ],
  "GlobalConfiguration": {
    "RequestIdKey": "ocRequestId"
  }
}
Port 80 is the internal Docker container port.
docker-compose config:
version: '3.4'

services:
  fileagentapi:
    image: ${DOCKER_REGISTRY-}fileagentapi
    container_name: FileAgentApi
    build:
      context: .
      dockerfile: Services/FileAgentApi/Dockerfile
  gateways:
    image: ${DOCKER_REGISTRY-}gateways
    build:
      context: .
      dockerfile: Api-Gateways/Gateways/Dockerfile
    ports:
      - 50000:443
      - 50001:80
After a long time, I found my problem.
In my Program file I still had app.UseHttpsRedirection() enabled.
This will not work when testing over plain HTTP.
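As a quick sanity check, a sketch of testing the route over plain HTTP once the HTTPS redirection is out of the way (port 50001 is the gateway's published HTTP port from the compose file above; whether the endpoint expects a body depends on the API):

# Call the gateway's upstream route over HTTP; Ocelot should forward it to
# http://fileagentapi:80/api/fileagent inside the compose network.
curl -v -X POST http://localhost:50001/fileagent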

Error consume route ApiGateway with ocelot and docker service

I am creating an API gateway with Ocelot that consumes an API service in .NET Core.
The gateway and the API service are deployed on Docker with Docker Compose in this way:
Docker-compose:
tresfilos.webapigateway:
  image: ${DOCKER_REGISTRY-}tresfilosapigateway
  build:
    context: .
    dockerfile: tresfilos.ApiGateway/ApiGw-Base/Dockerfile
tresfilos.users.service:
  image: ${DOCKER_REGISTRY-}tresfilosusersservice
  build:
    context: .
    dockerfile: tresfilos.Users.Service/tresfilos.Users.Service/Dockerfile
Docker-compose.override:
tresfilos.webapigateway:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - IdentityUrl=http://identity-api
  ports:
    - "7000:80"
    - "7001:443"
  volumes:
    - ./tresfilos.ApiGateway/Web.Bff:/app/configuration
tresfilos.users.service:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=https://+:443;http://+:80
  ports:
    - "7002:80"
    - "7003:443"
  volumes:
    - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
    - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
In the Ocelot API gateway configuration I define the .json like this:
"ReRoutes": [
{
"DownstreamPathTemplate": "/api/{version}/{everything}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "tresfilos.users.service",
"Port": 7002
}
],
"UpstreamPathTemplate": "/api/{version}/user/{everything}",
"UpstreamHttpMethod": [ "POST", "PUT", "GET" ]
},
],
"GlobalConfiguration": {
"BaseUrl": "https://localhost:7001"
}
When I consume the ApiGateway from the URL:
http://localhost:7000/api/v1/user/Login/authentication
I get an error in the Docker terminal.
Why does this error occur, and how can I fix it?
What version of Ocelot are you running?
I found another thread with a similar-looking problem, and apparently from version 16.0.0 of Ocelot 'ReRoutes' was changed to 'Routes' in the Ocelot configuration file.
The thread I found was: 404 trying to route the Upstream path to downstream path in Ocelot
I fixed it this way:
Changed ReRoutes to Routes, because the Ocelot version is 16.0.1.
Defined the config like this:
"Routes": [
{
"DownstreamPathTemplate": "/api/{version}/{everything}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "tresfilos.users.service",
"Port": 7002
}
],
"UpstreamPathTemplate": "/api/{version}/User/{everything}"
},
],
"GlobalConfiguration": {
"BaseUrl": "https://localhost:7001"
}
In Postman I send the data in the body as JSON and not as parameters.
(Thanks, JasonS.)
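For illustration, the same Postman call expressed as curl; the JSON field names in the body are hypothetical and depend on what the login endpoint actually expects:

# POST the login payload as a JSON body, not as query parameters.
curl -X POST http://localhost:7000/api/v1/user/Login/authentication \
  -H "Content-Type: application/json" \
  -d '{"userName": "someuser", "password": "somepassword"}'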

how to initial setup consul with defined key/value

I have set up the Docker config using Docker Compose.
This is part of the docker-compose file:
version: '3'

networks:
  pm:

services:
  consul:
    container_name: consul
    image: consul:latest
    restart: unless-stopped
    ports:
      - 8300:8300
      - 8301:8301
      - 8302:8302
      - 8400:8400
      - 8500:8500
      - 8600:8600
    environment:
      CONSUL_LOCAL_CONFIG: >-
        {
          "bootstrap": true,
          "server": true,
          "node_name": "consul1",
          "bind_addr": "0.0.0.0",
          "client_addr": "0.0.0.0",
          "bootstrap_expect": 1,
          "ui": true,
          "addresses" : {
            "http" : "0.0.0.0"
          },
          "ports": {
            "http": 8500
          },
          "log_level": "DEBUG",
          "connect" : {
            "enabled" : true
          }
        }
    volumes:
      - ./data:/consul/data
    command: agent -server -bind 0.0.0.0 -client 0.0.0.0 -bootstrap-expect=1
Then I set the key/value via the browser.
I would like to add the key/value as part of the initial setup in a new environment, so that the extra setup steps in the browser can be avoided.
This is the configuration I exported using the consul kv command:
# consul kv export config/
[
  {
    "key": "config/",
    "flags": 0,
    "value": ""
  },
  {
    "key": "config/drug2/",
    "flags": 0,
    "value": ""
  },
  {
    "key": "config/drug2/data",
    "flags": 0,
    "value": "e30="
  }
]
To my knowledge Docker Compose does not have a way to run a custom command/script after the containers have started.
As a workaround you could write a shell script which executes docker-compose up and then either runs consul kv import or a curl command against Consul's Transaction API to add the data you're trying to load.
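A minimal sketch of that workaround, assuming the export from above is saved as kv-export.json next to the compose file, that port 8500 is published as in the compose file above, and that the consul binary is also available on the host (otherwise the same import could be run inside the container with docker exec):

#!/bin/sh
# Start the stack in the background.
docker-compose up -d

# Wait until Consul's HTTP API answers before importing anything.
until curl -sf http://localhost:8500/v1/status/leader > /dev/null; do
  sleep 1
done

# Load the previously exported key/value tree (kv-export.json is a hypothetical file name).
consul kv import @kv-export.json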

docker-compose referencing wrong ports

I am running multiple docker-compose projects on one host (identical images for different use cases).
For that reason I use different HTTPS (+ REST) ports under which the compositions are reachable remotely. However, Docker references the port range of the first composition in every other composition as well, without using it. Although I cannot see any negative implication at the moment, I would like to get rid of it, fearing that some implication might eventually arise.
docker ps shows this
PORTS
Second container:
**8643-8644/tcp**, 0.0.0.0:8743-8744->8743-8744/tcp
0.0.0.0:27020->27020/tcp
First container:
0.0.0.0:**8643-8644->8643-8644**/tcp
0.0.0.0:27019->27019/tcp
First docker-compose file (excerpt):
version: '2'
services:
  mongo:
    image: *****
    ports:
      - "27019:27019"
    tty: true
    volumes:
      - /data/mongodb
      - /data/db
      - /var/log/mongodb
    entrypoint: [ "/usr/bin/mongod", "--port", "27019" ]
  rom:
    image: *****
    links:
      - mongo
    ports:
      - "8643:8643"
      - "8644:8644"
    environment:
      WEB_PORT_SECURE: 8643
      REST_PORT_SECURE: 8644
      MONGO_PORT: 27019
      MONGO_INST: mongod
    entrypoint: [ "node", "/usr/src/app/app.js" ]
Second docker-compose file (excerpt):
version: '2'
services:
  mongo:
    image: *****
    ports:
      - "27020:27020"
    tty: true
    volumes:
      - /data/mongodb
      - /data/db
      - /var/log/mongodb
    entrypoint: [ "/usr/bin/mongod", "--port", "27020" ]
  rom:
    image: *****
    links:
      - mongo
    ports:
      - "8743:8743"
      - "8744:8744"
    environment:
      WEB_PORT_SECURE: 8743
      REST_PORT_SECURE: 8744
      MONGO_PORT: 27020
      MONGO_INST: mongod
    entrypoint: [ "node", "/usr/src/app/app.js" ]
And finally, docker inspect shows this for the second container:
"Config": {
"Hostname": *****,
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8643/tcp": {},
"8644/tcp": {},
"8743/tcp": {},
"8744/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"MONGO_PORT=27020",
"MONGO_INST=mongodb",
"WEB_PORT_SECURE=8743",
"REST_PORT_SECURE=8744",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_VERSION=4.7.2",
"WORK_DIR=/usr/src/app"
],
"NetworkSettings": {
"Ports": {
"8643/tcp": null,
"8644/tcp": null,
"8743/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8743"
}
],
"8744/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8744"
}
]
},
The last block clearly shows that Docker is not doing anything with ports 8643 and 8644, but still references them there:
"8643/tcp": null,
"8644/tcp": null,
Any idea why this happens and how to avoid it?
They are there because the image exposes them (it was built with EXPOSE).
This is not a problem; it's totally normal. You won't have a problem unless you try to publish the same port on the host more than once. Here, none of your published ports are in conflict.
0.0.0.0:8743-8744->8743-8744/tcp
0.0.0.0:27020->27020/tcp
0.0.0.0:8643-8644->8643-8644/tcp
0.0.0.0:27019->27019/tcp
You are publishing 8643-8644, 8743-8744, 27019, and 27020. No conflicts.
A container can expose whatever ports it wants; what matters is only that the ports published on the host do not conflict with one another.
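To see where those extra entries come from, one can inspect the image rather than the container. A minimal sketch using the standard Docker CLI (the placeholder stands for whichever image the composition uses):

# Ports baked into the image with EXPOSE show up for every container created
# from it, whether or not they are published on the host.
docker image inspect --format '{{json .Config.ExposedPorts}}' <image>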
