ocelot redirect to internal container failed - docker

So I have a console app that needs to call the backend running on Docker.
I'm using Ocelot as an API gateway in front of an API.
When I do a test call from Postman to my gateway, the request comes in and Ocelot tries to forward it to the API, but I get an ENOTFOUND error.
I'm using the docker-compose service name as the downstream host, and I also tried the container name.
Ocelot config:
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/fileagent",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "fileagentapi",
          "Port": 80
        }
      ],
      "UpstreamPathTemplate": "/fileagent",
      "UpstreamHttpMethod": [ "POST" ]
    }
  ],
  "GlobalConfiguration": {
    "RequestIdKey": "ocRequestId"
  }
}
Port 80 is the internal Docker container port.
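A check that can rule out DNS from inside the gateway container (assuming curl is available in the image; the container name placeholder is whatever docker ps shows for the gateway):

# Try the downstream URL directly from inside the gateway container.
docker exec -it <gateway_container> curl -i http://fileagentapi:80/api/fileagent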
docker-compose config:
version: '3.4'
services:
  fileagentapi:
    image: ${DOCKER_REGISTRY-}fileagentapi
    container_name: FileAgentApi
    build:
      context: .
      dockerfile: Services/FileAgentApi/Dockerfile
  gateways:
    image: ${DOCKER_REGISTRY-}gateways
    build:
      context: .
      dockerfile: Api-Gateways/Gateways/Dockerfile
    ports:
      - 50000:443
      - 50001:80
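With the port mappings above, the gateway's plain-HTTP endpoint is published on host port 50001, so the Postman test is equivalent to something like this (path per the upstream template above):

curl -i -X POST http://localhost:50001/fileagent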

After a long time, I found my problem:
in my Program file, I still had app.UseHttpsRedirection() enabled.
That will not work when testing over HTTP.
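For context, a minimal sketch of what the fixed Program.cs can look like with the minimal hosting model (an illustration, not the exact file): HTTPS redirection is only enabled outside Development, so plain-HTTP calls between containers keep working.

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

var app = builder.Build();

if (!app.Environment.IsDevelopment())
{
    // Redirecting to HTTPS breaks plain-HTTP traffic inside the compose network,
    // so only enable it outside of Development.
    app.UseHttpsRedirection();
}

app.MapControllers();
app.Run();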

Related

Selenium Selenoid File Server not running in Browser Container

Selenium is unable to download any files from the browsers due to a 502 error on my coworker's machine. None of my other coworkers are seeing the issue, just this one guy. We are using Firefox.
After looking at the Selenoid code a bit, I learned that the containers the browser runs in use a File Server on port 8080 to allow downloading files from the container, but I discovered that this File Server is not running within his containers.
I verified this with the following command:
docker exec -it <browser_container> curl 127.0.0.1:8080
On my machine I get a 200 response:
test.xlsx
But when I run this command on his machine I get this error:
Failed to connect to 127.0.0.1 port 8080 after 8 ms: Connection refused
This indicates that the File Server is not running within his browser containers. I've tried many different Firefox arguments and restarted Selenoid and the Docker containers, and I still can't figure out what's going on; I'm completely lost right now. If anyone knows what might be happening, or has any idea how to get more information, I would appreciate it.
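For anyone wanting to dig further, these are the generic Docker commands for gathering more information (the container name is a placeholder, and ps must exist in the image):

# List processes inside the browser container; a healthy Selenoid browser image
# should show the file-server process alongside the browser.
docker exec -it <browser_container> ps aux

# Check the container logs for startup errors.
docker logs <browser_container>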
Here are the Firefox options we are using:
options = webdriver.FirefoxOptions()
options.add_argument('--width=1600')
options.add_argument('--height=900')
options.set_preference('browser.download.dir', '/home/selenium/Downloads')
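Among the preferences I experimented with: the standard Firefox download settings that are usually paired with browser.download.dir (a sketch; these are plain Firefox preferences, nothing Selenoid-specific):

options = webdriver.FirefoxOptions()
# folderList=2 tells Firefox to use the custom directory from browser.download.dir
options.set_preference('browser.download.folderList', 2)
options.set_preference('browser.download.dir', '/home/selenium/Downloads')
# Save common binary types without showing a download prompt
options.set_preference('browser.helperApps.neverAsk.saveToDisk', 'application/octet-stream')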
And our browsers.json file:
{
  "chrome": {
    "default": "105.0",
    "versions": {
      "105.0": {
        "image": "selenoid/vnc_chrome:105.0",
        "port": "4444",
        "path": "/",
        "env": ["TZ=America/Denver"]
      }
    },
    "caps": {
      "loggingPrefs": { "browser": "ALL" },
      "enableVNC": true,
      "browserName": "chrome",
      "timeZone": "America/Denver",
      "sessionTimeout": "1m30s"
    }
  },
  "firefox": {
    "default": "latest",
    "versions": {
      "latest": {
        "image": "selenoid/firefox",
        "port": "4444",
        "path": "/wd/hub",
        "env": ["TZ=America/Denver"]
      }
    },
    "caps": {
      "loggingPrefs": { "browser": "ALL" },
      "enableVNC": true,
      "browserName": "firefox",
      "timeZone": "America/Denver",
      "sessionTimeout": "1m30s"
    }
  }
}
We do have a custom docker-compose.yml file for starting the selenoid and selenoid_ui containers. Here is the file in case that setup matters, though I doubt the issue lies here.
version: "3.9"
networks:
  selenoid_net:
    name: selenoid_net
    attachable: true
    ipam:
      config:
        - subnet: 172.198.1.0/24
services:
  selenoid:
    image: aerokube/selenoid
    restart: always
    networks:
      selenoid_net:
    ports:
      - "4444:4444"
    environment:
      - OVERRIDE_VIDEO_OUTPUT_DIR=${VIDEO_OUTPUT}/video
      - TZ=America/Denver
    volumes:
      - "/etc/selenoid:/etc/selenoid"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "${VIDEO_OUTPUT}/video:${VIDEO_OUTPUT}/video"
      - "${VIDEO_OUTPUT}/logs:${VIDEO_OUTPUT}/logs"
      - "${PWD}:/etc/browsers"
    command: ["-conf", "/etc/browsers/browsers.json",
              "-video-output-dir", "${VIDEO_OUTPUT}/video",
              "-log-output-dir", "${VIDEO_OUTPUT}/logs",
              "-limit", "6",
              "-timeout", "1m30s",
              "-container-network", "selenoid_net"]
  selenoid-ui:
    image: "aerokube/selenoid-ui:latest"
    restart: always
    networks:
      selenoid_net:
    links:
      - "selenoid"
    ports:
      - "8080:8080"
    command: ["--selenoid-uri", "http://selenoid:4444"]

Caddy as reverse proxy in docker refuses to connect to other containers

I wanted to try out Caddy in a Docker environment, but it does not seem to be able to connect to other containers. I created a network "caddy" and want to run Portainer alongside it. If I look into Caddy's volume, I can see that certificates are generated, so that part seems to work. Portainer is also running and accessible via the server IP (http://65.21.139.246:1000/). But when I access it via the URL https://smallhetzi.fading-flame.com/, I get a 502, and in Caddy's log I can see this message:
{
  "level": "error",
  "ts": 1629873106.715402,
  "logger": "http.log.error",
  "msg": "dial tcp 172.20.0.2:1000: connect: connection refused",
  "request": {
    "remote_addr": "89.247.255.231:15146",
    "proto": "HTTP/2.0",
    "method": "GET",
    "host": "smallhetzi.fading-flame.com",
    "uri": "/",
    "headers": {
      "Accept-Encoding": ["gzip, deflate, br"],
      "Accept-Language": ["de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7"],
      "Cache-Control": ["max-age=0"],
      "User-Agent": ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36"],
      "Sec-Fetch-Site": ["none"],
      "Accept": ["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"],
      "Sec-Fetch-Mode": ["navigate"],
      "Sec-Fetch-User": ["?1"],
      "Sec-Fetch-Dest": ["document"],
      "Sec-Ch-Ua": ["\"Chromium\";v=\"92\", \" Not A;Brand\";v=\"99\", \"Google Chrome\";v=\"92\""],
      "Sec-Ch-Ua-Mobile": ["?0"],
      "Upgrade-Insecure-Requests": ["1"]
    },
    "tls": {
      "resumed": false,
      "version": 772,
      "cipher_suite": 4865,
      "proto": "h2",
      "proto_mutual": true,
      "server_name": "smallhetzi.fading-flame.com"
    }
  },
  "duration": 0.000580828,
  "status": 502,
  "err_id": "pq78d9hen",
  "err_trace": "reverseproxy.statusError (reverseproxy.go:857)"
}
My two compose files:
Caddy:
version: '3.9'
services:
  caddy:
    image: caddy:2-alpine
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - certs-volume:/data
      - caddy_config:/config
volumes:
  certs-volume:
  caddy_config:
networks:
  default:
    external:
      name: caddy
Caddyfile:
{
    email simonheiss87@gmail.com
    # acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
smallhetzi.fading-flame.com {
    reverse_proxy portainer:1000
}
and my Portainer file:
version: '3.9'
services:
  portainer:
    image: portainer/portainer-ce
    container_name: portainer
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    entrypoint: /portainer -p :80
    ports:
      - "1000:80"
volumes:
  portainer_data:
networks:
  default:
    external:
      name: caddy
What I think happens is that those two containers are somehow not in the same network, but I don't get why.
What works as a workaround right now is making this change to my Caddyfile:
smallhetzi.fading-flame.com {
    reverse_proxy 65.21.139.246:1000
}
Then I get a valid certificate and the Portainer UI. But I would rather not spread IPs over my Caddyfile. Do I have to configure something else for Caddy to run in Docker?
I just got help from the forum, and it turns out that Caddy proxies to the port INSIDE the container, not the published one. In my case, Portainer runs on 80 internally, so changing the Caddyfile to this:
smallhetzi.fading-flame.com {
    reverse_proxy portainer:80
}
or this:
smallhetzi.fading-flame.com {
    reverse_proxy http://portainer
}
does the job. This also means I could get rid of exposing Portainer directly over port 1000; now it is only accessible via the proxy.
Hope someone gets some help from that :)
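If you want to double-check the two assumptions behind this fix, standard Docker commands show which containers are attached to the shared network and which internal port maps to the published one:

# Both "caddy" and "portainer" should appear in the Containers section.
docker network inspect caddy

# Shows the internal-to-published mapping, e.g. 80/tcp -> 0.0.0.0:1000
docker port portainer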

Error consuming a route in an API gateway with Ocelot and a Docker service

I am creating an API gateway with Ocelot that consumes an API service in .NET Core.
The gateway and the API service are deployed on Docker with docker-compose, like this:
Docker-compose:
tresfilos.webapigateway:
  image: ${DOCKER_REGISTRY-}tresfilosapigateway
  build:
    context: .
    dockerfile: tresfilos.ApiGateway/ApiGw-Base/Dockerfile
tresfilos.users.service:
  image: ${DOCKER_REGISTRY-}tresfilosusersservice
  build:
    context: .
    dockerfile: tresfilos.Users.Service/tresfilos.Users.Service/Dockerfile
Docker-compose.override:
tresfilos.webapigateway:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - IdentityUrl=http://identity-api
  ports:
    - "7000:80"
    - "7001:443"
  volumes:
    - ./tresfilos.ApiGateway/Web.Bff:/app/configuration
tresfilos.users.service:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=https://+:443;http://+:80
  ports:
    - "7002:80"
    - "7003:443"
  volumes:
    - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
    - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
In the Ocelot configuration .json of the API gateway I define:
"ReRoutes": [
  {
    "DownstreamPathTemplate": "/api/{version}/{everything}",
    "DownstreamScheme": "http",
    "DownstreamHostAndPorts": [
      {
        "Host": "tresfilos.users.service",
        "Port": 7002
      }
    ],
    "UpstreamPathTemplate": "/api/{version}/user/{everything}",
    "UpstreamHttpMethod": [ "POST", "PUT", "GET" ]
  }
],
"GlobalConfiguration": {
  "BaseUrl": "https://localhost:7001"
}
When I call the API gateway at the URL:
http://localhost:7000/api/v1/user/Login/authentication
I get an error in the Docker terminal.
Why does the above error occur, and how can I fix it?
What version of Ocelot are you running?
I found another thread with a similar-looking problem: apparently, as of version 16.0.0 of Ocelot, 'ReRoutes' was changed to 'Routes' in the Ocelot configuration file.
The thread I found was: 404 trying to route the Upstream path to downstream path in Ocelot
I fixed it this way:
I changed ReRoutes to Routes, because the Ocelot version is 16.0.1, and defined the config like:
"Routes": [
  {
    "DownstreamPathTemplate": "/api/{version}/{everything}",
    "DownstreamScheme": "http",
    "DownstreamHostAndPorts": [
      {
        "Host": "tresfilos.users.service",
        "Port": 7002
      }
    ],
    "UpstreamPathTemplate": "/api/{version}/User/{everything}"
  }
],
"GlobalConfiguration": {
  "BaseUrl": "https://localhost:7001"
}
In Postman I send the data in the body as JSON, not as parameters.
(Thanks, JasonS.)
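For reference, the equivalent test outside Postman looks something like this (the JSON body is a placeholder; send whatever the login endpoint actually expects):

curl -X POST http://localhost:7000/api/v1/User/Login/authentication \
     -H "Content-Type: application/json" \
     -d '{"username": "someuser", "password": "somepassword"}'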

AWS ECS containers are not connecting but works perfectly in my local machine

I have an application (runs at http://localhost:8080) that talks to a backend API which runs at http://localhost:8081. I have dockerized the frontend and the backend separately, and running them through docker-compose locally works perfectly without any issues. But when I run it in ECS, the frontend can't reach http://localhost:8081 (the backend).
I am using an Auto Scaling group with an Elastic Load Balancer, and I have both containers defined in a single task definition. I also have the backend linked to the frontend. When I SSH into my ECS instance and run docker ps -a, I can see both of my containers running on the correct ports, exactly like on my local machine, and I can successfully ping each container from the other.
Task Definition:
"CartTaskDefinition": {
  "Type": "AWS::ECS::TaskDefinition",
  "Properties": {
    "ContainerDefinitions": [
      {
        "Name": "cs-cart",
        "Image": "thishandp7/cs-cart",
        "Memory": 400,
        "PortMappings": [
          {
            "ContainerPort": "8080",
            "HostPort": "8080"
          }
        ],
        "Links": [
          "cs-server"
        ]
      },
      {
        "Name": "cs-server",
        "Image": "thishandp7/cs-server",
        "Memory": 450,
        "PortMappings": [
          {
            "ContainerPort": "8081",
            "HostPort": "8081"
          }
        ]
      }
    ]
  }
}
Listeners in my Elastic Load Balancer (the first listener is for the frontend, the second for the backend):
"Listeners": [
  {
    "LoadBalancerPort": 80,
    "InstancePort": 8080,
    "Protocol": "http"
  },
  {
    "LoadBalancerPort": 8081,
    "InstancePort": 8081,
    "Protocol": "tcp"
  }
],
EC2 instance security group ingress rules:
"SecurityGroupIngress": [
  {
    "IpProtocol": "tcp",
    "FromPort": 8080,
    "ToPort": 8080,
    "SourceSecurityGroupId": { "Ref": "ElbSecurityGroup" }
  },
  {
    "IpProtocol": "tcp",
    "FromPort": 8081,
    "ToPort": 8081,
    "SourceSecurityGroupId": { "Ref": "ElbSecurityGroup" }
  },
  {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "CidrIp": "0.0.0.0/0"
  }
],
Docker Compose
version: "3.5"
services:
  cart:
    build:
      context: ..
      dockerfile: docker/Dockerfile
      args:
        APP_LOCATION: /redux-saga-cart/
        PORT: 8080
    networks:
      - server-cart
    ports:
      - 8080:8080
    depends_on:
      - server
  server:
    build:
      context: ..
      dockerfile: docker/Dockerfile
      args:
        APP_LOCATION: /redux-saga-shopping-cart-server/
        PORT: 8081
    ports:
      - 8081:8081
    networks:
      - server-cart
networks:
  server-cart:
Quick update: I have tried awsvpc network mode with an Application Load Balancer; still not working.
Thanks in advance.
What kind of Docker network mode are you using (bridge/host) on ECS? I don't think localhost will work properly in ECS containers. I had the same issue, so as a temporary test I used the private IP or DNS name of the EC2 host for the communication, e.g. http://10.0.1.100:8081.
Note: make sure your security group allows traffic on 8081 from within the EC2 instance (edit the EC2 security group to allow 8081 with the same security group ID as the source).
For production deployments, I would recommend using service discovery to locate the backend service (Consul by HashiCorp, or AWS private service discovery on ECS).
-- Update --
Since you are running both containers under the same task definition (under the same ECS service), ECS will typically place both Docker containers on the same host. Try something like the following.
By default, ECS brings up containers in bridge mode on Linux.
Each container should therefore be able to reach the other through the Docker gateway IP, 172.17.0.1. For your case, try configuring http://172.17.0.1:8081.
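A quick way to verify this from the ECS instance itself (the container name is a placeholder, and this assumes curl exists in the frontend image):

# The backend should answer on the Docker bridge gateway IP from inside the frontend container.
docker exec -it <frontend_container> curl -i http://172.17.0.1:8081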

How to launch a docker bundle with specified exposed ports?

If you are not familiar with Docker bundles, please read this.
So I have tried to create a simple Docker bundle from the following docker-compose.yml:
version: "2"
services:
  web:
    image: cohenaj194/apache-simple
    ports:
      - 32701:80
  nginx:
    image: nginx
    ports:
      - 32700:80
But the ports of the Docker services this bundle created were not exposed, and I could not access any of the containers in my services through ports 32700 or 32701 as I specified in the docker-compose.yml. How am I supposed to expose the ports of Docker bundle services?
Update: I believe my issue may be that the test.dab file created with docker-compose bundle does not contain any mention of ports 32700 or 32701:
{
  "Services": {
    "nginx": {
      "Image": "nginx@sha256:d33834dd25d330da75dccd8add3ae2c9d7bb97f502b421b02cecb6cb7b34a1b6",
      "Networks": [
        "default"
      ],
      "Ports": [
        {
          "Port": 80,
          "Protocol": "tcp"
        }
      ]
    },
    "web": {
      "Image": "cohenaj194/apache-simple@sha256:6196c5bce25e5f76e0ea7cbe8e12e4e1f96bd36011ed37d3e4c5f06f6da95d69",
      "Networks": [
        "default"
      ],
      "Ports": [
        {
          "Port": 80,
          "Protocol": "tcp"
        }
      ]
    }
  },
  "Version": "0.1"
}
Attempting to insert the extra ports into this file also does not work and results in the following error:
Error reading test.dab: JSON syntax error at byte 229: invalid character ':' after object key:value pair
Update 2: My services are accessible over the default ports Docker Swarm assigns to services when the host port is not defined:
user@hostname:~/test$ docker service inspect test_nginx --pretty
ID:            3qimd4roft92w3es3qooa9qy8
Name:          test_nginx
Labels:
 - com.docker.stack.namespace=test
Mode:          Replicated
 Replicas:     2
Placement:
ContainerSpec:
 Image:        nginx@sha256:d33834dd25d330da75dccd8add3ae2c9d7bb97f502b421b02cecb6cb7b34a1b6
Networks: 1v5nyqqjnenf7xlti346qfw8n
Ports:
 Protocol = tcp
 TargetPort = 80
 PublishedPort = 30000
I can then reach my service on port 30000; however, I want to be able to define the host port my services use.
As of the Docker 1.12 release, there is no way to specify a published port in the bundle. The bundle is a portable format, and published ports are non-portable (if two bundles used the same ones, they would conflict).
So published ports are not part of the bundle configuration. Currently the only option is to run docker service update to add the published port. In the future there may be other ways to achieve this.
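For the services above, that update would look something like this (service name taken from the docker service inspect output earlier):

# Publish host port 32700 for the nginx service after deploying the bundle.
docker service update --publish-add 32700:80 test_nginx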
