I'm facing an issue with establishing a SignalR connection on Docker (it works fine on IIS).
The main goal is to run docker compose and send data from a Node-RED container to a web app (.NET Core 3.1 Blazor) and vice versa. I created a Docker network and successfully put both containers in it.
The problem is that my SignalR connection fails with "Connection refused". I suspect it's something trivial, but I can't find it.
page.razor
protected override async Task OnInitializedAsync()
{
    hubConnection = new HubConnectionBuilder()
        .WithUrl("http://AICsystemApp/nodeRedHub")
        .Build();

    hubConnection.On<string, string>("ReceiveCommand", (user, message) =>
    {
        var encodedMsg = $"{destination}: {message}";
        nodeRedOutput.Add(encodedMsg);
        StateHasChanged();
    });

    await hubConnection.StartAsync();
}
Dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
# tried 8024 as well
EXPOSE 80
# tried 44324 as well
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY ["AICsystemApp/AICsystemApp.csproj", "AICsystemApp/"]
RUN dotnet restore "AICsystemApp/AICsystemApp.csproj"
COPY . .
WORKDIR "/src/AICsystemApp"
RUN dotnet build "AICsystemApp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "AICsystemApp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "AICsystemApp.dll"]
docker-compose.override.yml
version: '3.4'

services:
  aicsystemapp:
    container_name: AICsystemApp
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80
    ports:
      - "8024:80"
      - "44324:443"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
    networks:
      - my-network

networks:
  my-network:
    external: true
I set up the SignalR connection following the official documentation and just used "http://AICsystemApp/nodeRedHub" instead of "NavigationManager.ToAbsoluteUri("/chathub")", according to this thread: how to run StartAsync connection of signalr blazor client in docker image?
I am confused because Docker runs the containers over https://, but if I use "https://AICsystemApp/nodeRedHub" I get the error: "The remote certificate is invalid according to the validation procedure."
Edit: I found out that if I use "https://AICsystemApp:44324/nodeRedHub" I no longer get the HTTPS error, but I still get the same "Connection refused" error. It leads nowhere, but it's interesting.
If you need additional information, I am here ready to respond! :)
Thanks in advance.
I also tried ipaddress:port/nodeRedHub, with the same result.
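For reference, the hub is mapped on the server side following the standard pattern from the docs; this is only a sketch (the hub class name NodeRedHub is how I refer to it here, details may differ):

public class NodeRedHub : Hub
{
    public async Task SendCommand(string user, string message)
        => await Clients.All.SendAsync("ReceiveCommand", user, message);
}

// In Startup.Configure:
app.UseEndpoints(endpoints =>
{
    endpoints.MapBlazorHub();
    endpoints.MapHub<NodeRedHub>("/nodeRedHub");
    endpoints.MapFallbackToPage("/_Host");
});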
I had the same error today and was able to get it working with the following configuration:
you have to use https, with the service name only (https://AICsystemApp/nodeRedHub)
you have to configure Kestrel to use the certificates as shown in https://www.yogihosting.com/docker-compose-aspnet-core/
basically, generate and mount the trusted dev-cert certificate into the container and use the following additional environment variables:
- ASPNETCORE_HTTPS_PORT=9000
- ASPNETCORE_Kestrel__Certificates__Default__Password=<password>
- ASPNETCORE_Kestrel__Certificates__Default__Path=<mountedpath>/aspnetapp.pfx
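In case it helps, the "generate and mount" step might look roughly like this on the host; the export path, file name and password below are only examples and have to match the Kestrel variables above and whatever folder you mount into the container:

# Create and trust an ASP.NET Core HTTPS dev certificate, exported as a .pfx.
# Path and password are placeholders.
dotnet dev-certs https -ep "$HOME/.aspnet/https/aspnetapp.pfx" -p mypassword
dotnet dev-certs https --trust

The folder containing the .pfx is then mounted into the container (for example with a volume like ~/.aspnet/https:/https:ro), and <mountedpath> in the variable above would be /https.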
Since you still have an invalid certificate, you should ignore certificate validation in development. Use only valid certificates in production. SignalR does not use the usual ServicePointManager.ServerCertificateValidationCallback; instead, you have to configure it when building the SignalR connection (which I found here: https://github.com/dotnet/aspnetcore/issues/16919):
Connection = new HubConnectionBuilder()
    .WithUrl("https://host.docker.internal:...", conf =>
    {
        if (env.IsDevelopment())
        {
            conf.HttpMessageHandlerFactory = (x) => new HttpClientHandler
            {
                ServerCertificateCustomValidationCallback =
                    HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
            };
        }
    });
I did not test whether you could skip step 2, because I needed it anyway. You can try to skip the certificate entirely, but you will need one in production anyway.
I had this problem too, but to fix it I didn't have to do anything with HTTPS or certificates. The problem in my case was that Kestrel was still listening only on the default network interface inside the Docker container, and I needed to reconfigure it to listen on all interfaces for the connection to work.
So in launchSettings.json I needed to change applicationUrl from "https://localhost:5003;http://localhost:5002" to "https://0.0.0.0:5003;http://0.0.0.0:5002" and then everything worked perfectly.
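For reference, the relevant part of launchSettings.json after the change looked roughly like this (the profile name is only a placeholder):

{
  "profiles": {
    "MyApp": {
      "commandName": "Project",
      "applicationUrl": "https://0.0.0.0:5003;http://0.0.0.0:5002",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}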
I have an ASP.NET Core 6 Web API running in Docker and I want to connect to a local SQL Server database (not dockerized!), but I'm unable to do so. Connecting to a database on a remote server by IP works fine, but using a connection string like the one below does not:
var dbHost = "COM-3195\\IMRANMSSQL";
var dbName = "CustomersDb";
var dbPassword = "prg#321654";
var connectionString = $"Data Source={dbHost};Initial Catalog={dbName};User Id=sa;Password={dbPassword}";
My dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["CustomerWebService/CustomerWebService.csproj", "CustomerWebService/"]
RUN dotnet restore "CustomerWebService/CustomerWebService.csproj"
COPY . .
WORKDIR "/src/CustomerWebService"
RUN dotnet build "CustomerWebService.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "CustomerWebService.csproj" -c Release -o /app/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CustomerWebService.dll"]
My docker-compose.yml:
services:
  customerwebservice:
    image: ${DOCKER_REGISTRY-}customerwebservice
    build:
      context: .
      dockerfile: CustomerWebService/Dockerfile
    extra_hosts:
      - "COM-3195\\IMRANMSSQL:<IP>"
My application is not connecting to the database and shows this in the log:
info: Microsoft.Hosting.Lifetime[14]
Now listening on: https://[::]:443
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: /app/
info: Microsoft.EntityFrameworkCore.Infrastructure[10403]
Entity Framework Core 6.0.5 initialized 'CustomerDbContext' using provider 'Microsoft.EntityFrameworkCore.SqlServer:6.0.5' with options: None
fail: Microsoft.EntityFrameworkCore.Database.Connection[20004]
An error occurred using the connection to database 'master' on server '<IP>,1433'.
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server: Could not open a connection to SQL Server)
It works fine if I run it without Docker. It also works if I do this in my docker-compose.yml file (running SQL Server in a container):
version: '3.4'

networks:
  backend:

services:
  customerdb:
    container_name: customer-db
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - ACCEPT_EULA=Y
      - MSSQL_SA_PASSWORD=prg#321654
    networks:
      - backend
    ports:
      - 8001:1433
  customerwebapi:
    container_name: cutomer-api
    image: ${DOCKER_REGISTRY-}customerwebapi
    build:
      context: .
      dockerfile: CustomerWebAPI/Dockerfile
    networks:
      - backend
    ports:
      - 8002:80
    environment:
      - DB_HOST=customerdb
      - DB_NAME=CustomersDb
      - DB_SA_PASSWORD=prg#321654
That works, but it runs SQL Server in a container (reachable on localhost,8001); what I actually want is to connect to my local, non-dockerized SQL Server.
Please help, I just started learning Docker.
Please help me connect to a SQL Server database (not dockerized) from my Docker image.
To access something on the host, you can add a name for the host-gateway in the extra_hosts section. The usual convention is to call it host.docker.internal, so we'll do that here as well.
Then you need to change the database hostname to host.docker.internal and you should be able to connect.
I'm a little confused as to how you specify the database host name in your program. In the C# snippet, you show a hard-coded value, but in the docker-compose file where you run the database in a container, you set an environment variable. The 'nice' thing to do is to handle it via an environment variable, so I'll show that solution here:
services:
  customerwebservice:
    image: ${DOCKER_REGISTRY-}customerwebservice
    build:
      context: .
      dockerfile: CustomerWebService/Dockerfile
    extra_hosts:
      - host.docker.internal:host-gateway
    environment:
      - DB_HOST=host.docker.internal
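On the application side, the host name can then be read from the environment instead of being hard-coded. A minimal sketch, assuming the same variable names as in your second compose file:

// Read the connection details from environment variables set by docker-compose.
var dbHost = Environment.GetEnvironmentVariable("DB_HOST");            // e.g. host.docker.internal
var dbName = Environment.GetEnvironmentVariable("DB_NAME");            // e.g. CustomersDb
var dbPassword = Environment.GetEnvironmentVariable("DB_SA_PASSWORD");

var connectionString = $"Data Source={dbHost};Initial Catalog={dbName};User Id=sa;Password={dbPassword}";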
You have the following two solutions:
Get the IP of your host on the virtual network that Docker has created.
Use commands like docker inspect <container_id>, docker network ls and docker network inspect <network_id> to find the IP address of the "Gateway".
This IP can be found with docker network inspect ... under IPAM > Config > Gateway (a one-line example follows below).
Use the IP of your host, e.g. from eth0. I am not 100% sure, but you should also be able to use at least the IP address of your main network interface.
Just as general information: even if your container and the DB are running on the same host, you cannot use localhost, as it resolves to different systems (once to the container, once to the host itself).
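If you want to script the gateway lookup from the first option, the address can be read directly with a Go template format string; this example assumes the default bridge network, so substitute your own network name:

docker network inspect -f '{{range .IPAM.Config}}{{.Gateway}} {{end}}' bridge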
I am trying to find a way to import a realm in Keycloak 17.0.1 that can be done when starting up a Docker container (with docker-compose). I want to be able to do this in "start" mode and not "start-dev" mode, since in my experience so far "start-dev" in 17 forces an H2 in-memory database and does not let me point to an external DB, which I would like to do to more closely resemble the dev/prod environments when running locally.
Things I've tried:
1) It appears, according to recent conversations on GitHub (Issue 10216 and Issue 10754, to name a couple), that the environment variable that used to allow this (KEYCLOAK_IMPORT or KC_IMPORT_REALM in some versions) no longer triggers an import. In my attempts it did not work for version 17.0.1 either.
2) I've also tried appending the following command in my docker-compose setup for Keycloak and had no luck (also tried with just "start"); it appears to just ignore the command (no error or anything):
command: ["start-dev", "-Dkeycloak.import=/tmp/my-realm.json"]
3) I tried running the kc.sh "import" command in the Dockerfile (both before and after ENTRYPOINT/start) but got the error: Unmatched arguments from index 1: '/opt/keycloak/bin/kc.sh', 'import', '--file', '/tmp/my-realm.json'
4) I've shifted gears and have tried to see if it is possible to just do it after the container starts (even with manual intervention), just to restore some sanity. I attempted to use the admin-cli, but after quite a few attempts against different ports/endpoints etc. I just get that localhost refuses to connect.
bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password adminpassword
It responds as follows depending on the port:
8080: Failed to send request - Connect to localhost:8080 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
8443: Failed to send request - localhost:8443 failed to respond
I am sure there are other ways that I've tried and am forgetting - I've kind of spun my wheels at this point.
My code (largely the same as the latest docs on the Keycloak website):
Dockerfile:
FROM quay.io/keycloak/keycloak:17.0.1 as builder
ENV KC_METRICS_ENABLED=true
ENV KC_FEATURES=token-exchange
ENV KC_DB=postgres
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:17.0.1
COPY --from=builder /opt/keycloak/lib/quarkus/ /opt/keycloak/lib/quarkus/
WORKDIR /opt/keycloak
# for demonstration purposes only, please make sure to use proper certificates in production instead
ENV KC_HOSTNAME=localhost
RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname "CN=server" -alias server -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -keystore conf/server.keystore
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start" ]
docker-compose.yml:
version: "3"

services:
  keycloak:
    build:
      context: .
    volumes:
      - ./my-realm.json:/tmp/my-realm.json:ro
    env_file:
      - .env
    environment:
      KC_DB_URL: ${POSTGRESQL_URL}
      KC_DB_USERNAME: ${POSTGRESQL_USER}
      KC_DB_PASSWORD: ${POSTGRESQL_PASS}
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: adminpassword
    ports:
      - 8080:8080
      - 8443:8443 # <-- I've tried with only 8080 and with only 8443 as well. 8443 appears to be the only one I can get the admin console UI to work on, though.
    networks:
      - my_net

networks:
  my_net:
    name: my_net
Any suggestions on how to do this in a programmatic, "dev-opsy" way would be greatly appreciated. I'd really like to get this to work but am confused about how to get past this.
Importing a realm upon Docker initialization through configuration is not supported yet. See https://github.com/keycloak/keycloak/issues/10216. They might release this feature in the next release, v18.
The workaround people have shared in the GitHub thread is to create your own Docker image and import the realm from a JSON file when building it:
FROM quay.io/keycloak/keycloak:17.0.1
# Make the realm configuration available for import
COPY realm-and-users.json /opt/keycloak_import/
# Import the realm and user
RUN /opt/keycloak/bin/kc.sh import --file /opt/keycloak_import/realm-and-users.json
# The Keycloak server is configured to listen on port 8080
EXPOSE 8080
EXPOSE 8443
# Import the realm on start-up
CMD ["start-dev"]
As @tboom said, it was not supported yet by Keycloak 17.x. But it is now supported by Keycloak 18.x using the --import-realm option:
bin/kc.[sh|bat] [start|start-dev] --import-realm
This feature does not work the way it did before. The JSON file path must not be specified anymore: the JSON file only has to be copied into the <KEYCLOAK_DIR>/data/import directory (multiple JSON files are supported). Note that the import operation is skipped if the realm already exists, so incremental updates are not possible anymore (at least for the time being).
This feature is documented on https://www.keycloak.org/server/importExport#_importing_a_realm_during_startup.
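A minimal compose sketch of what this can look like on 18.x, assuming the realm export sits next to the compose file; the image tag and admin credentials are only examples:

services:
  keycloak:
    image: quay.io/keycloak/keycloak:18.0.0
    command: ["start-dev", "--import-realm"]
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: adminpassword
    volumes:
      - ./my-realm.json:/opt/keycloak/data/import/my-realm.json:ro
    ports:
      - 8080:8080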
I have created an Azure Function that exposes an API which is being used in another project. I would like to enable the team that works on the front end to have a Docker image of our API to help them in their development. I have packaged my Azure Function using the Dockerfile generated by running func init LocalFunctionsProject --worker-runtime dotnet --docker, as per this guide.
This results in the following Dockerfile contents:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS installer-env
COPY . /src/dotnet-function-app
RUN cd /src/dotnet-function-app && \
mkdir -p /home/site/wwwroot && \
dotnet publish *.csproj --output /home/site/wwwroot
FROM mcr.microsoft.com/azure-functions/dotnet:2.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY --from=installer-env ["/home/site/wwwroot", "/home/site/wwwroot"]
I have then created a docker-compose.yml file in the front-end project:
version: "3.7"

services:
  frontend:
    image: frontend:latest
    build: .
    ports:
      - "3000:3000"
    env_file:
      - frontend.docker.env
  backend:
    image: backend:latest
    env_file:
      - backend.docker.env
    ports:
      - "8080:80"
This successfully spins up two containers, one with the front end and another with the back end. Inside frontend.docker.env I have set my variable to point to the backend, so that calls from the front end are directed to http://backend/api/myendpoint.
However, this is where everything fails: I'm getting a CORS issue.
What I have tried:
Whenever I call my backend on the exposed port 8080 from Postman, everything works fine. I have tried to manually add an Access-Control-Allow-Origin header set to * on my response, which I can verify is coming through in Postman. However, the front end is still getting the CORS issue:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://backend/api/myendpoint. (Reason: CORS request did not succeed)
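For reference, my manual header attempt looked roughly like this; the function name and route are placeholders, not the real ones:

[FunctionName("MyEndpoint")]
public static IActionResult Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "options", Route = "myendpoint")] HttpRequest req,
    ILogger log)
{
    // Manually attach the CORS header to the response (development only).
    req.HttpContext.Response.Headers["Access-Control-Allow-Origin"] = "*";
    return new OkObjectResult(new { message = "hello" });
}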
My other approach was changing the web.config inside the azure-functions-host folder manually, by running the following instruction in my Dockerfile:
RUN sed -i 's/<customHeaders>/<customHeaders><add name="Access-Control-Allow-Origin" value="*" \/><add name="Access-Control-Allow-Headers" value="Content-Type" \/><add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" \/>/g' azure-functions-host/web.config
After that, I see the following headers coming back in the response in Postman:
Access-Control-Allow-Headers = X-Requested-With,content-type
Access-Control-Allow-Methods = GET, POST, OPTIONS, PUT, PATCH, DELETE
Access-Control-Allow-Origin = *
However, I'm still facing the same issue: Postman works, the front end doesn't...
Do you have any idea what the issue might be, or how to set up CORS properly?
When launching an application deployed to Google App Engine Flexible, it fails with too many 307 redirects. It runs successfully locally in the VS IDE.
The development and computing stack includes:
MacOS
.NET Core 3
Visual Studio 2019 for Mac
Docker
Google App Engine
I created a project using the VS api template (weather forecast).
Create API project
Add Docker support (via the menu)
Create and export SSL certificate:
dotnet dev-certs https -v -ep /Users/QQQQQQQ/Projects/CostZzzzzzzzzz/xxxxx.Orchestration.Cost/Certificate/dev-certificate.pfx -p ufo
(which I subsequently moved to the root of the project)
Modify Dockerfile as follows:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
EXPOSE 8080
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster AS build
WORKDIR /src
COPY Xxxxx.Orchestration.Cost/Xxxxx.Orchestration.Cost.csproj Xxxxx.Orchestration.Cost/
RUN dotnet restore "Xxxxx.Orchestration.Cost/Xxxxx.Orchestration.Cost.csproj"
COPY . .
WORKDIR "/src/Xxxxx.Orchestration.Cost"
RUN dotnet build "Xxxxx.Orchestration.Cost.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Xxxxx.Orchestration.Cost.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV ASPNETCORE_ENVIRONMENT=Development
ENV ASPNETCORE_URLS=http://*:8080;https://*:443
ENV ASPNETCORE_HTTPS_PORT=443
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=Xxxxx.Orchestration.Cost/dev-certificate.pfx
ENV ASPNETCORE_Kestrel__Certificates__Default__Password=ufo
ENTRYPOINT ["dotnet", "Xxxxx.Orchestration.Cost.dll"]
Create app.yaml
runtime: custom
env: flex

# This sample incurs costs to run on the App Engine flexible environment.
# The settings below are to reduce costs during testing and are not appropriate
# for production use. For more information, see:
# https://cloud.google.com/appengine/docs/flexible/python/configuring-your-app-with-app-yaml
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
network:
  name: default
  subnetwork_name: default-us-east1
service: get-cost
env_variables:
  # The __ in My__Greeting will be translated to a : by ASP.NET.
  My__Greeting: Hello AppEngine Flex!
Modify Program.cs to support Kestrel and SSL:
public class Program
{
    public static void Main(string[] args)
    {
        //CreateHostBuilder(args).Build().Run();
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseKestrel(options =>
            {
                //options.Listen(IPAddress.Loopback, 5000); // http://localhost:5000
                options.Listen(IPAddress.Any, 8080);        // http://*:8080
                options.Listen(IPAddress.Any, 443, listenOptions =>
                {
                    listenOptions.UseHttps("dev-certificate.pfx", "ufo");
                });
            })
            .UseStartup<Startup>();
}
Deploy service to GAE: gcloud app deploy
This solution is a conflation of several articles describing how to create and deploy .NET Core applications to GAE via Docker.
The error log's primary message is:
XXX.YYY.ZZZ.AAA - "GET /" 307 undefined "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
I am looking for help on how to properly configure the application so that it will run correctly in the GAE.
PS: By removing the ENV instructions in the Dockerfile, the Docker container will run locally on my Mac. However, running it on GAE has eluded me.
It turns out that the solution was quite simple once I understood what the real problem was, part of which stemmed from being new to the newer .NET Core versions such as 3.x, new to GAE Flex, and new to Docker, all at the same time.
In any event, removing app.UseHttpsRedirection(); from the Startup.cs class's Configure method resolved the immediate problem. The issue is explained in this article: https://learn.microsoft.com/en-us/aspnet/core/security/enforcing-ssl?view=aspnetcore-3.1&tabs=visual-studio
Essentially GAE Flex was already providing redirection on port 8080, so the additional redirection instruction in the code was causing endless redirections with HTTP 307 results.
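For illustration, the Configure method from the default API template then looks roughly like this; only the removed redirection line matters, the rest is the template's standard pipeline:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    // app.UseHttpsRedirection();  // removed: GAE Flex already redirects in front of port 8080,
    //                             // so this line caused the endless 307 loop.

    app.UseRouting();
    app.UseAuthorization();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}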
I am learning how to run my .NET Core web applications on Docker (on a Windows host) with SSL. I started with the basic application that .NET Core gives you by running "dotnet new mvc".
Then I moved ahead and modified the code inside Program.cs to listen only on port 443.
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseKestrel(options =>
        {
            options.Listen(IPAddress.Any, 443, listenOptions =>
            {
                listenOptions.UseHttps("test.com.pfx", "1234");
            });
        })
        .Build();
I created a self-signed certificate and specified the password there. Please ignore the way the password is exposed, as I am still doing this as a POC.
Then I modified my dockerfile as:
FROM microsoft/dotnet
WORKDIR /app
COPY bin/Debug/netcoreapp2.0/publish .
EXPOSE 443
ENTRYPOINT ["dotnet", "dotnetssl.dll"]
Now if I build and run the image with:
docker build -t httpssample .
docker run -it -p 443:443 httpssample
I am not able to access the website at https://localhost, and I get the lines below printed in the console:
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {9f9e447d-d766-4a7d-9413-03466512b7a9} may be persisted to storage in unencrypted form.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Error -99 EADDRNOTAVAIL address not available'.
Hosting environment: Production
Content root path: /app
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.