I am creating an ApiGateway with Ocelot that consumes an API service in .NET Core.
The ApiGateway and ApiService are deployed on Docker with Docker Compose as follows:
docker-compose.yml:
tresfilos.webapigateway:
  image: ${DOCKER_REGISTRY-}tresfilosapigateway
  build:
    context: .
    dockerfile: tresfilos.ApiGateway/ApiGw-Base/Dockerfile
tresfilos.users.service:
  image: ${DOCKER_REGISTRY-}tresfilosusersservice
  build:
    context: .
    dockerfile: tresfilos.Users.Service/tresfilos.Users.Service/Dockerfile
docker-compose.override.yml:
tresfilos.webapigateway:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - IdentityUrl=http://identity-api
  ports:
    - "7000:80"
    - "7001:443"
  volumes:
    - ./tresfilos.ApiGateway/Web.Bff:/app/configuration
tresfilos.users.service:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=https://+:443;http://+:80
  ports:
    - "7002:80"
    - "7003:443"
  volumes:
    - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
    - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
In the Ocelot configuration of the ApiGateway I define the .json like this:
"ReRoutes": [
{
"DownstreamPathTemplate": "/api/{version}/{everything}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "tresfilos.users.service",
"Port": 7002
}
],
"UpstreamPathTemplate": "/api/{version}/user/{everything}",
"UpstreamHttpMethod": [ "POST", "PUT", "GET" ]
},
],
"GlobalConfiguration": {
"BaseUrl": "https://localhost:7001"
}
When I consume the ApiGateway from the URL:
http://localhost:7000/api/v1/user/Login/authentication
I get an error in the Docker terminal.
Why does this error occur, and how can I fix it?
What version of Ocelot are you running?
I found another thread with a similar-looking problem: apparently, from version 16.0.0 of Ocelot, 'ReRoutes' was changed to 'Routes' in the Ocelot configuration file.
The thread I found was: 404 trying to route the Upstream path to downstream path in Ocelot
I fixed it this way:
I changed ReRoutes to Routes, because my Ocelot version is 16.0.1, and defined the config like:
"Routes": [
{
"DownstreamPathTemplate": "/api/{version}/{everything}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "tresfilos.users.service",
"Port": 7002
}
],
"UpstreamPathTemplate": "/api/{version}/User/{everything}"
},
],
"GlobalConfiguration": {
"BaseUrl": "https://localhost:7001"
}
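One detail worth double-checking (an inference from the compose files above, not something this answer changed): inside the compose network the gateway dials the service container directly, so the downstream port would normally be the container-internal port from ASPNETCORE_URLS (80), not the published host port 7002. A hedged variant of the downstream block:

```json
"DownstreamHostAndPorts": [
  {
    "Host": "tresfilos.users.service",
    "Port": 80
  }
]
```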
In Postman I send the data in the body as JSON, not as parameters.
(Thanks, JasonS.)
Selenium is unable to download any files from the browser due to a 502 error on my coworker's machine. None of my other coworkers are seeing the issue, just this one person. We are using Firefox.
After looking at the Selenoid code a bit, I learned that the containers the browser runs in use a file server on port 8080 to allow downloading files from the container, but I discovered that this file server is not running within these containers.
I verified this through this command:
docker exec -it <browser_container> curl 127.0.0.1:8080
On my machine I get a 200 response:
test.xlsx
But when I run this command on his machine I get this error:
Failed to connect to 127.0.0.1 port 8080 after 8 ms: Connection refused
This indicates that the file server is not running within his browser containers. I've tried many different Firefox arguments, and I've restarted Selenoid and the Docker containers, but I still can't figure out what's going on; I'm completely lost right now. If anyone knows what might be going on, or has any idea how to get more insight into it, I would appreciate it.
Here are the Firefox options we are using:
from selenium import webdriver

options = webdriver.FirefoxOptions()
options.add_argument('--width=1600')
options.add_argument('--height=900')
options.set_preference('browser.download.dir', '/home/selenium/Downloads')
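One Firefox-side thing to rule out (an assumption on my part, not something verified in the question): `browser.download.dir` is only honored when `browser.download.folderList` is set to 2, which the options above never set. A minimal sketch of the download prefs as a dict, so they can be applied in one loop:

```python
# Firefox download preferences; folderList selects the download location:
# 0 = desktop, 1 = default Downloads folder, 2 = the custom browser.download.dir.
download_prefs = {
    "browser.download.folderList": 2,
    "browser.download.dir": "/home/selenium/Downloads",
    "browser.download.useDownloadDir": True,
}

# Applied to the options object from the snippet above:
#   for name, value in download_prefs.items():
#       options.set_preference(name, value)
```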
And our browsers.json file
{
  "chrome": {
    "default": "105.0",
    "versions": {
      "105.0": {
        "image": "selenoid/vnc_chrome:105.0",
        "port": "4444",
        "path": "/",
        "env": ["TZ=America/Denver"]
      }
    },
    "caps": {
      "loggingPrefs": {"browser": "ALL"},
      "enableVNC": true,
      "browserName": "chrome",
      "timeZone": "America/Denver",
      "sessionTimeout": "1m30s"
    }
  },
  "firefox": {
    "default": "latest",
    "versions": {
      "latest": {
        "image": "selenoid/firefox",
        "port": "4444",
        "path": "/wd/hub",
        "env": ["TZ=America/Denver"]
      }
    },
    "caps": {
      "loggingPrefs": {"browser": "ALL"},
      "enableVNC": true,
      "browserName": "firefox",
      "timeZone": "America/Denver",
      "sessionTimeout": "1m30s"
    }
  }
}
We do have a custom docker-compose.yml file for starting the selenoid and selenoid_ui containers. Here is the file in case that setup matters, though I doubt the issue lies there.
version: "3.9"
networks:
selenoid_net:
name: selenoid_net
attachable: true
ipam:
config:
- subnet: 172.198.1.0/24
services:
selenoid:
image: aerokube/selenoid
restart: always
networks:
selenoid_net:
ports:
- "4444:4444"
environment:
- OVERRIDE_VIDEO_OUTPUT_DIR=${VIDEO_OUTPUT}/video
- TZ=America/Denver
volumes:
- "/etc/selenoid:/etc/selenoid"
- "/var/run/docker.sock:/var/run/docker.sock"
- "${VIDEO_OUTPUT}/video:${VIDEO_OUTPUT}/video"
- "${VIDEO_OUTPUT}/logs:${VIDEO_OUTPUT}/logs"
- "${PWD}:/etc/browsers"
command: ["-conf", "/etc/browsers/browsers.json",
"-video-output-dir", "${VIDEO_OUTPUT}/video",
"-log-output-dir", "${VIDEO_OUTPUT}/logs",
"-limit", "6",
"-timeout", "1m30s","-container-network", 'selenoid_net']
selenoid-ui:
image: "aerokube/selenoid-ui:latest"
restart: always
networks:
selenoid_net:
links:
- "selenoid"
ports:
- "8080:8080"
command: ["--selenoid-uri", "http://selenoid:4444"]
So I have a console app that needs to call the backend running on Docker.
I'm using Ocelot for the API gateway, plus an API.
When I do a test call in Postman to my gateway, the request comes in and Ocelot tries to redirect to the API, but I get an ENOTFOUND.
I'm using the docker-compose service name, and I also tried the container name.
ocelot config:
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/fileagent",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "fileagentapi",
          "Port": 80
        }
      ],
      "UpstreamPathTemplate": "/fileagent",
      "UpstreamHttpMethod": [ "POST" ]
    }
  ],
  "GlobalConfiguration": {
    "RequestIdKey": "ocRequestId"
  }
}
Port 80 is the internal Docker container port.
docker-compose config:
version: '3.4'
services:
  fileagentapi:
    image: ${DOCKER_REGISTRY-}fileagentapi
    container_name: FileAgentApi
    build:
      context: .
      dockerfile: Services/FileAgentApi/Dockerfile
  gateways:
    image: ${DOCKER_REGISTRY-}gateways
    build:
      context: .
      dockerfile: Api-Gateways/Gateways/Dockerfile
    ports:
      - 50000:443
      - 50001:80
After a long time, I finally found my problem.
In my Program file, I still had app.UseHttpsRedirection() enabled.
This will not work when testing over HTTP.
I wanted to try out Caddy in a Docker environment, but it does not seem to be able to connect to other containers. I created a network "caddy" and want to run Portainer alongside it. If I go into the Caddy volume, I can see that certs are generated, so that part seems to work. Portainer is also running and accessible via the server IP (http://65.21.139.246:1000/). But when I access it via the URL https://smallhetzi.fading-flame.com/, I get a 502, and in the Caddy log I can see this message:
{
  "level": "error",
  "ts": 1629873106.715402,
  "logger": "http.log.error",
  "msg": "dial tcp 172.20.0.2:1000: connect: connection refused",
  "request": {
    "remote_addr": "89.247.255.231:15146",
    "proto": "HTTP/2.0",
    "method": "GET",
    "host": "smallhetzi.fading-flame.com",
    "uri": "/",
    "headers": {
      "Accept-Encoding": ["gzip, deflate, br"],
      "Accept-Language": ["de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7"],
      "Cache-Control": ["max-age=0"],
      "User-Agent": ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36"],
      "Sec-Fetch-Site": ["none"],
      "Accept": ["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"],
      "Sec-Fetch-Mode": ["navigate"],
      "Sec-Fetch-User": ["?1"],
      "Sec-Fetch-Dest": ["document"],
      "Sec-Ch-Ua": ["\"Chromium\";v=\"92\", \" Not A;Brand\";v=\"99\", \"Google Chrome\";v=\"92\""],
      "Sec-Ch-Ua-Mobile": ["?0"],
      "Upgrade-Insecure-Requests": ["1"]
    },
    "tls": {
      "resumed": false,
      "version": 772,
      "cipher_suite": 4865,
      "proto": "h2",
      "proto_mutual": true,
      "server_name": "smallhetzi.fading-flame.com"
    }
  },
  "duration": 0.000580828,
  "status": 502,
  "err_id": "pq78d9hen",
  "err_trace": "reverseproxy.statusError (reverseproxy.go:857)"
}
Here are the two compose files.
Caddy:
version: '3.9'
services:
  caddy:
    image: caddy:2-alpine
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - certs-volume:/data
      - caddy_config:/config
volumes:
  certs-volume:
  caddy_config:
networks:
  default:
    external:
      name: caddy
Caddyfile:
{
  email simonheiss87@gmail.com
  # acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
smallhetzi.fading-flame.com {
  reverse_proxy portainer:1000
}
and my portainer file:
version: '3.9'
services:
  portainer:
    image: portainer/portainer-ce
    container_name: portainer
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    entrypoint: /portainer -p :80
    ports:
      - "1000:80"
volumes:
  portainer_data:
networks:
  default:
    external:
      name: caddy
What I think happens is that those two containers are somehow not in the same network, but I don't get why.
What works as a workaround right now is making this change to my Caddyfile:
smallhetzi.fading-flame.com {
reverse_proxy 65.21.139.246:1000
}
Then I get a valid certificate and the Portainer UI. But I would rather not spread IPs across my Caddyfile. Do I have to configure something else for Caddy to run in Docker?
I just got help from the forum, and it turns out that Caddy proxies to the port INSIDE the container, not the public one. In my case, Portainer runs on port 80 internally, so changing the Caddyfile to this:
smallhetzi.fading-flame.com {
reverse_proxy portainer:80
}
or this
smallhetzi.fading-flame.com {
reverse_proxy http://portainer
}
does the job. This also means that I could get rid of exposing Portainer directly over port 1000; now I can only access it via the proxy.
Hope someone gets some help from that. :)
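With the proxy in place, the published port can be dropped from the Portainer compose file entirely, since Caddy reaches port 80 over the shared "caddy" network (a sketch based on the file above):

```yaml
version: '3.9'
services:
  portainer:
    image: portainer/portainer-ce
    container_name: portainer
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    entrypoint: /portainer -p :80
    # no "ports:" section - Caddy reaches port 80 over the caddy network
volumes:
  portainer_data:
networks:
  default:
    external:
      name: caddy
```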
I have a docker-compose file which fires up a Mercure container.
docker-compose:
version: '3.8'
services:
  ...
  mercure:
    image: dunglas/mercure
    ports:
      - '8003:443'
      - '8004:80'
    environment:
      - JWT_KEY='so_secret'
      - DEMO=1
      - DEBUG=1
      - ALLOW_ANONYMOUS=1
      - CORS_ALLOWED_ORIGINS=*
      - PUBLISH_ALLOWED_ORIGINS=*
networks:
  default:
But when I POST to http://mercure/.well-known/mercure, I get this from my Mercure container (prettified):
Log #1
{
  "level": "info",
  "ts": 1606379852.84174,
  "logger": "http.handlers.mercure",
  "msg": "Topic selectors not matched or not provided",
  "remote_addr": "192.168.192.3:37534",
  "error": "unable to parse JWT: signature is invalid"
}
Log #2
{
  "level": "error",
  "ts": 1606379852.8418272,
  "logger": "http.log.access",
  "msg": "handled request",
  "request": {
    "remote_addr": "192.168.192.3:37534",
    "proto": "HTTP/1.1",
    "method": "POST",
    "host": "mercure",
    "uri": "/.well-known/mercure",
    "headers": {
      "Authorization": ["Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJtZXJjdXJlIjp7InB1Ymxpc2giOltdfX0.VuGJakeE0mowuQj0ErJjtEE-U4iYey2_XCbESaaGvtU"],
      "User-Agent": ["Symfony HttpClient/Curl"],
      "Accept-Encoding": ["gzip"],
      "Content-Length": ["1339"],
      "Content-Type": ["application/x-www-form-urlencoded"],
      "Accept": ["*/*"]
    }
  },
  "common_log": "192.168.192.3 - - [26/Nov/2020:08:37:32 +0000] \"POST /.well-known/mercure HTTP/1.1\" 401 13",
  "duration": 0.001635684,
  "size": 13,
  "status": 401,
  "resp_headers": {
    "X-Content-Type-Options": ["nosniff"],
    "X-Xss-Protection": ["1; mode=block"],
    "Content-Security-Policy": ["default-src 'self' mercure.rocks cdn.jsdelivr.net"],
    "Content-Type": ["text/plain; charset=utf-8"],
    "Server": ["Caddy"],
    "X-Frame-Options": ["DENY"]
  }
}
Why does it say the signature is invalid when https://jwt.io/ says it is verified? Does the JWT_KEY from docker-compose get ignored?
// EDIT
sudo docker-compose exec mercure env shows JWT_KEY=so_secret, so what else can I check?
I had the same issue. I tried things like:
restarting,
recreating,
changing links
(I thought the Mercure container had some cache).
Then I went to the documentation and got the example payload.
So I changed my JWT_KEY, rebuilt the Authorization token based on the new example, and it started working!
[Working payload][1]
[1]: https://i.stack.imgur.com/4bCFE.png
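For anyone debugging this, an HS256 token can be minted and verified with nothing but the standard library, which makes it easy to confirm that the publisher and the hub really share the same key byte-for-byte (a sketch; so_secret is the key from the compose file above, and the payload shape follows the Mercure documentation):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    sig = b64url(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, key: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

# Mercure publisher claim; the hub verifies the signature with its JWT_KEY.
token = make_jwt({"mercure": {"publish": ["*"]}}, b"so_secret")
assert verify_jwt(token, b"so_secret")
# Any byte difference in the key (e.g. stray quotes) makes verification fail:
assert not verify_jwt(token, b"'so_secret'")
```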
I have set up a Docker config using Docker Compose.
This is part of the docker-compose file:
version: '3'
networks:
  pm:
services:
  consul:
    container_name: consul
    image: consul:latest
    restart: unless-stopped
    ports:
      - 8300:8300
      - 8301:8301
      - 8302:8302
      - 8400:8400
      - 8500:8500
      - 8600:8600
    environment:
      CONSUL_LOCAL_CONFIG: >-
        {
          "bootstrap": true,
          "server": true,
          "node_name": "consul1",
          "bind_addr": "0.0.0.0",
          "client_addr": "0.0.0.0",
          "bootstrap_expect": 1,
          "ui": true,
          "addresses": {
            "http": "0.0.0.0"
          },
          "ports": {
            "http": 8500
          },
          "log_level": "DEBUG",
          "connect": {
            "enabled": true
          }
        }
    volumes:
      - ./data:/consul/data
    command: agent -server -bind 0.0.0.0 -client 0.0.0.0 -bootstrap-expect=1
Then I set the key/value via the browser.
I would like to add the key/value as initial data in a new environment, so that the additional setup steps in the browser could be avoided.
This is the configuration I exported using the consul kv command:
# consul kv export config/
[
  {
    "key": "config/",
    "flags": 0,
    "value": ""
  },
  {
    "key": "config/drug2/",
    "flags": 0,
    "value": ""
  },
  {
    "key": "config/drug2/data",
    "flags": 0,
    "value": "e30="
  }
]
To my knowledge, Docker Compose does not have a way to run a custom command/script after the containers have started.
As a workaround you could write a shell script which executes docker-compose up and then runs either consul kv import or a curl command against Consul's Transaction API to add the data you're trying to load.
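The Transaction API route is straightforward because `consul kv export` already base64-encodes values and `/v1/txn` expects base64 too, so the exported entries map directly onto txn operations. A sketch that builds the payload (the endpoint and "set" verb are from Consul's HTTP API; the entries are the export above):

```python
import json

# Entries as produced by `consul kv export config/` above.
export = [
    {"key": "config/", "flags": 0, "value": ""},
    {"key": "config/drug2/", "flags": 0, "value": ""},
    {"key": "config/drug2/data", "flags": 0, "value": "e30="},
]

# Each txn operation sets one key; values stay base64-encoded, as /v1/txn expects.
txn = [{"KV": {"Verb": "set", "Key": e["key"], "Value": e["value"]}} for e in export]
payload = json.dumps(txn)

# Once Consul is up, PUT the payload to the Transaction API, e.g.:
#   curl -s -X PUT http://localhost:8500/v1/txn -d "$payload"
```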