Setting CORS in Azure Functions running inside a Docker container

I have created an Azure Function that exposes an API that is used in another project. I would like the team that works on the front end to have a Docker image of our API to help them in their development. I have packaged my Azure Function using the Dockerfile generated by running func init LocalFunctionsProject --worker-runtime dotnet --docker, as per this guide.
This results in the following Dockerfile contents:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS installer-env
COPY . /src/dotnet-function-app
RUN cd /src/dotnet-function-app && \
    mkdir -p /home/site/wwwroot && \
    dotnet publish *.csproj --output /home/site/wwwroot
FROM mcr.microsoft.com/azure-functions/dotnet:2.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY --from=installer-env ["/home/site/wwwroot", "/home/site/wwwroot"]
I have then created a docker-compose.yml file in the front-end project:
version: "3.7"
services:
frontend:
image: frontend:latest
build: .
ports:
- "3000:3000"
env_file:
- frontend.docker.env
backend:
image: backend:latest
env_file:
- backend.docker.env
ports:
- "8080:80"
This successfully spins up two containers, one with the front end and another with the backend. Inside my frontend.docker.env I have set my variable to point to the backend, so that calls from the front end are directed to http://backend/api/myendpoint.
However, this is where everything fails; I'm getting a CORS issue.
What I have tried:
Whenever I call my backend on the exposed 8080 port from Postman, everything works fine. I have tried to manually add an Access-Control-Allow-Origin header set to * on my response, which I can verify is coming through in Postman. However, the front end is still getting the CORS issue:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://backend/api/myendpoint. (Reason: CORS request did not succeed)
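For reference, the preflight can be simulated from the host with curl (a sketch against the published port; note that a browser running on the host cannot resolve the compose-internal hostname backend, and this particular Firefox message usually indicates a network-level failure rather than rejected headers):
curl -i -X OPTIONS http://localhost:8080/api/myendpoint \
  -H "Origin: http://localhost:3000" \
  -H "Access-Control-Request-Method: GET"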
My other approach is changing the web.config inside the azure-functions-host folder manually, by running the following instruction in my Dockerfile:
RUN sed -i 's/<customHeaders>/<customHeaders><add name="Access-Control-Allow-Origin" value="*" \/><add name="Access-Control-Allow-Headers" value="Content-Type" \/><add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" \/>/g' azure-functions-host/web.config
After that I see the following headers coming from the response in Postman:
Access-Control-Allow-Headers = X-Requested-With,content-type
Access-Control-Allow-Methods = GET, POST, OPTIONS, PUT, PATCH, DELETE
Access-Control-Allow-Origin = *
However, I'm still facing the same issue. Postman works, the front end doesn't...
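For reference, handling CORS inside the function itself would look something like the sketch below (illustrative code, not the project's actual function; it answers the OPTIONS preflight and adds the origin header on normal responses):
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class MyEndpoint
{
    [FunctionName("MyEndpoint")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "options", Route = "myendpoint")] HttpRequest req)
    {
        var headers = req.HttpContext.Response.Headers;
        headers.Add("Access-Control-Allow-Origin", "*");

        if (HttpMethods.IsOptions(req.Method))
        {
            // Preflight: advertise allowed methods and headers, no body needed.
            headers.Add("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
            headers.Add("Access-Control-Allow-Headers", "Content-Type");
            return new OkResult();
        }

        return new OkObjectResult("hello"); // placeholder for the real handler
    }
}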
Do you have any idea what the issue might be, or how to properly set CORS?

Related

Keycloak 17.0.1 Import Realm on Docker / Docker-Compose Startup

I am trying to find a way to import a realm in Keycloak version 17.0.1 that can be done when starting up a Docker container (with docker-compose). I want to be able to do this in "start" mode and not "start-dev" mode, as in my experience so far "start-dev" in 17 forces an H2/in-mem database and does not allow me to point to an external db, which I would like to do to more closely resemble dev/prod environments when running locally.
Things I've tried:
1) It appears, according to recent conversations on GitHub (Issue 10216 and Issue 10754, to name a couple), that the environment variable that used to allow this (KEYCLOAK_IMPORT or KC_IMPORT_REALM in some versions) is no longer a trigger for it. In my attempts it also did not work for version 17.0.1.
2) I've also tried appending the following command in my docker-compose setup for Keycloak and had no luck (also tried with just "start") - it appears to just ignore the command (no error or anything):
command: ["start-dev", "-Dkeycloak.import=/tmp/my-realm.json"]
3) I tried running the kc.sh command "import" in the Dockerfile (both before and after the ENTRYPOINT/start) but got the error: Unmatched arguments from index 1: '/opt/keycloak/bin/kc.sh', 'import', '--file', '/tmp/my-realm.json'
4) I've shifted gears and have tried to see if it is possible to just do it after the container starts (even with manual intervention), just to get some sanity restored. I attempted to use the admin-cli, but after quite a few attempts at different ports/endpoints etc., I just get that localhost refuses to connect.
bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password adminpassword
It responds as shown when hitting the following ports:
8080: Failed to send request - Connect to localhost:8080 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
8443: Failed to send request - localhost:8443 failed to respond
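One variant not shown above is running kcadm from inside the running container, where localhost does resolve to Keycloak; a sketch, assuming the compose service is named keycloak (note that the Quarkus-based distribution drops the old /auth path prefix by default, and that plain HTTP may be disabled in "start" mode, in which case https://localhost:8443 plus a truststore would be needed):
docker-compose exec keycloak \
  /opt/keycloak/bin/kcadm.sh config credentials \
  --server http://localhost:8080 \
  --realm master --user admin --password adminpassword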
I am sure there are other ways that I've tried and am forgetting - I've kind of spun my wheels at this point.
My code (largely the same as the latest docs on the Keycloak website):
Dockerfile:
FROM quay.io/keycloak/keycloak:17.0.1 as builder
ENV KC_METRICS_ENABLED=true
ENV KC_FEATURES=token-exchange
ENV KC_DB=postgres
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:17.0.1
COPY --from=builder /opt/keycloak/lib/quarkus/ /opt/keycloak/lib/quarkus/
WORKDIR /opt/keycloak
# for demonstration purposes only, please make sure to use proper certificates in production instead
ENV KC_HOSTNAME=localhost
RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname "CN=server" -alias server -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -keystore conf/server.keystore
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start" ]
Docker-compose.yml:
version: "3"
services:
keycloak:
build:
context: .
volumes:
- ./my-realm.json:/tmp/my-realm.json:ro
env_file:
- .env
environment:
KC_DB_URL: ${POSTGRESQL_URL}
KC_DB_USERNAME: ${POSTGRESQL_USER}
KC_DB_PASSWORD: ${POSTGRESQL_PASS}
KEYCLOAK_ADMIN: admin
KEYCLOAK_ADMIN_PASSWORD: adminpassword
ports:
- 8080:8080
- 8443:8443 # <-- I've tried with only 8080 and with only 8443 as well. 8443 appears to be the only that I can get the admin console ui to even work on though.
networks:
- my_net
networks:
my_net:
name: my_net
Any suggestion on how to do this in a programmatic + "dev-opsy" way would be greatly appreciated. I'd really like to get this to work but am confused about how to get past this.
Importing a realm upon Docker initialization through configuration is not supported yet. See https://github.com/keycloak/keycloak/issues/10216. They might release this feature in the next release, v18.
The workaround people shared in the GitHub thread is to create your own Docker image and import the realm from a JSON file when building it:
FROM quay.io/keycloak/keycloak:17.0.1
# Make the realm configuration available for import
COPY realm-and-users.json /opt/keycloak_import/
# Import the realm and user
RUN /opt/keycloak/bin/kc.sh import --file /opt/keycloak_import/realm-and-users.json
# The Keycloak server is configured to listen on port 8080
EXPOSE 8080
EXPOSE 8443
# Import the realm on start-up
CMD ["start-dev"]
As @tboom said, it was not supported by Keycloak 17.x. But it is now supported by Keycloak 18.x using the --import-realm option:
bin/kc.[sh|bat] [start|start-dev] --import-realm
This feature does not work as it did before. The JSON file path must not be specified anymore: the JSON file only has to be copied into the <KEYCLOAK_DIR>/data/import directory (multiple JSON files are supported). Note that the import operation is skipped if the realm already exists, so incremental updates are not possible anymore (at least for the time being).
This feature is documented on https://www.keycloak.org/server/importExport#_importing_a_realm_during_startup.
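In docker-compose terms that could look roughly like this (a sketch for Keycloak 18+, reusing the realm file from the question):
services:
  keycloak:
    image: quay.io/keycloak/keycloak:18.0.0
    volumes:
      - ./my-realm.json:/opt/keycloak/data/import/my-realm.json:ro
    command: ["start-dev", "--import-realm"]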

Apache Nifi (on docker): only one of the HTTP and HTTPS connectors can be configured at one time error

I have a problem adding authentication, due to new requirements, while using Apache NiFi (NiFi) without SSL, running it in a container.
The image version is apache/nifi:1.13.0
It's said that SSL is unconditionally required to add authentication, and it's recommended to use the tls-toolkit in the NiFi image to add SSL. I worked through the following process:
I removed the environment variable nifi.web.http.port used for HTTP communication, and started the standalone-mode container with nifi.web.https.port=9443:
docker-compose up
I joined the container and ran the tls-toolkit script in the nifi-toolkit:
cd /opt/nifi/nifi-toolkit-1.13.0/bin && \
sh tls-toolkit.sh standalone \
  -n 'localhost' \
  -C 'CN=yangeok,OU=nifi' \
  -O -o $NIFI_HOME/conf
Attempt 1
I organized the files in the directory $NIFI_HOME/conf. Three files, keystore.jks, truststore.jks, and nifi.properties, were created in the folder localhost, named after the value of the -n option of the tls-toolkit script:
cd $NIFI_HOME/conf &&
cp localhost/*.jks .
I did not copy the file $NIFI_HOME/conf/localhost/nifi.properties over as-is; instead, only the following properties were carried into $NIFI_HOME/conf/nifi.properties:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=9443
Restarted the container:
docker-compose restart
The container died with the below error log:
Only one of the HTTP and HTTPS connectors can be configured at one time
Attempt 2
After executing the tls-toolkit script, all files were overwritten, including nifi.properties:
cd $NIFI_HOME/conf &&
cp localhost/* .
Restarted the container:
docker-compose restart
The container died with the same error log.
Hint
The dead container's volume was still accessible, so I copied out and checked nifi.properties; whenever I did docker-compose up or restart, it changed as follows.
The part I overwrote or modified:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=localhost
nifi.web.https.port=9443
The part that changed after re-running the container:
nifi.web.http.host=a8e283ab9421
nifi.web.http.port=9443
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=a8e283ab9421
nifi.web.https.port=9443
I'd like to know how to run the container with nifi.web.http.host and nifi.web.http.port empty. The docker-compose.yml file is as follows:
version: '3'
services:
  nifi:
    build:
      context: .
      args:
        NIFI_VERSION: ${NIFI_VERSION}
    container_name: nifi
    user: root
    restart: unless-stopped
    network_mode: bridge
    ports:
      - ${NIFI_HTTP_PORT}:8080/tcp
      - ${NIFI_HTTPS_PORT}:9443/tcp
    volumes:
      - ./drivers:/opt/nifi/nifi-current/drivers
      - ./templates:/opt/nifi/nifi-current/templates
      - ./data:/opt/nifi/nifi-current/data
    environment:
      TZ: 'Asia/Seoul'
      ########## JVM ##########
      NIFI_JVM_HEAP_INIT: ${NIFI_HEAP_INIT} # The initial JVM heap size.
      NIFI_JVM_HEAP_MAX: ${NIFI_HEAP_MAX} # The maximum JVM heap size.
      ########## Web ##########
      # NIFI_WEB_HTTP_HOST: ${NIFI_HTTP_HOST} # nifi.web.http.host
      # NIFI_WEB_HTTP_PORT: ${NIFI_HTTP_PORT} # nifi.web.http.port
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTP_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port
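One detail that may be worth double-checking (an observation, not a verified fix): the last entry above sets NIFI_WEB_HTTP_PORT, not NIFI_WEB_HTTPS_PORT, to ${NIFI_HTTPS_PORT}, and the image's start script appears to write these variables back into nifi.properties on every start, which would repopulate nifi.web.http.port=9443 exactly as observed above. A sketch of the Web section with only the HTTPS properties set:
      ########## Web ##########
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTPS_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port
      # NIFI_WEB_HTTP_HOST and NIFI_WEB_HTTP_PORT left unset so nifi.web.http.* stay empty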
Thank you

SignalR can't connect using docker compose/container

I'm facing an issue with establishing a SignalR connection on Docker (IIS works well).
The main goal is to run docker-compose and send data from a Node-RED container to the web app (.NET Core 3.1 Blazor) and vice versa. I made a Docker network and successfully put both containers in it.
The problem is that my SignalR connection fails with "Connection refused". I suspect it's something trivial, but I can't figure it out.
page.razor
protected override async Task OnInitializedAsync()
{
    hubConnection = new HubConnectionBuilder()
        .WithUrl("http://AICsystemApp/nodeRedHub")
        .Build();

    hubConnection.On<string, string>("ReceiveCommand", (user, message) =>
    {
        var encodedMsg = $"{destination}: {message}";
        nodeRedOutput.Add(encodedMsg);
        StateHasChanged();
    });

    await hubConnection.StartAsync();
}
Dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
# tried 8024
EXPOSE 80
# tried 44324
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY ["AICsystemApp/AICsystemApp.csproj", "AICsystemApp/"]
RUN dotnet restore "AICsystemApp/AICsystemApp.csproj"
COPY . .
WORKDIR "/src/AICsystemApp"
RUN dotnet build "AICsystemApp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "AICsystemApp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "AICsystemApp.dll"]
docker-compose.override.yml
version: '3.4'
services:
  aicsystemapp:
    container_name: AICsystemApp
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80
    ports:
      - "8024:80"
      - "44324:443"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
    networks:
      - my-network
networks:
  my-network:
    external: true
I set up the SignalR connection following the official documentation and just used "http://AICsystemApp/nodeRedHub" instead of "NavigationManager.ToAbsoluteUri("/chathub")", according to this thread: how to run StartAsync connection of signalr blazor client in docker image?
I am confused because Docker runs the containers on https://, but if I use "https://AICsystemApp/nodeRedHub" I get the error: "The remote certificate is invalid according to the validation procedure."
Edit: I found that if I use "https://AICsystemApp:44324/nodeRedHub" I no longer get the HTTPS error, but I still get the same "Connection refused" error. Leads nowhere, but interesting.
If you need additional information, I am here ready to respond! :)
Thanks in advance.
Also tried ipaddress:port/nodeRedHub, same result.
I had the same error today and was able to get it working with the following configurations:
1. you have to use https, with the service name only (https://AICsystemApp/nodeRedHub)
2. you have to configure Kestrel to use the certificates as shown in https://www.yogihosting.com/docker-compose-aspnet-core/
basically generate and mount the trusted dev-cert certificate into the container and use the following additional environment variables:
- ASPNETCORE_HTTPS_PORT=9000
- ASPNETCORE_Kestrel__Certificates__Default__Password=<password>
- ASPNETCORE_Kestrel__Certificates__Default__Path=<mountedpath>/aspnetapp.pfx
3. since you still have an invalid certificate, you should ignore certificate validation in development (use only valid certificates in production). SignalR does not use the usual ServicePointManager.ServerCertificateValidationCallback; you have to configure it when building the SignalR connection (which I found here: https://github.com/dotnet/aspnetcore/issues/16919):
Connection = new HubConnectionBuilder()
    .WithUrl("https://host.docker.internal:...", conf =>
    {
        if (env.IsDevelopment())
        {
            // Development only: accept the untrusted dev certificate.
            conf.HttpMessageHandlerFactory = (x) => new HttpClientHandler
            {
                ServerCertificateCustomValidationCallback = HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
            };
        }
    })
    .Build();
I did not test whether you could skip step 2, because I needed it anyway. You can try skipping everything without any certificate, but you will need one in production anyway.
I had this problem, but to fix it I didn't have to do anything related to HTTPS or certificates. The problem in my case was that Kestrel was still listening on the default network interface inside the Docker container, and I needed to reconfigure it to listen on all interfaces for the connection to work.
So in launchSettings.json I needed to change applicationUrl from "https://localhost:5003;http://localhost:5002" to "https://0.0.0.0:5003;http://0.0.0.0:5002", and then everything worked perfectly.
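For reference, that change lands in Properties/launchSettings.json; a sketch (the profile name is illustrative):
{
  "profiles": {
    "AICsystemApp": {
      "commandName": "Project",
      "applicationUrl": "https://0.0.0.0:5003;http://0.0.0.0:5002"
    }
  }
}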

ddev: Call the endpoint of a certain port of the web container from another container

I set up a Shopware 6 project with ddev. Now I want to write Cypress tests for one of my plugins. The Shopware test suite starts a Node Express server on port 8005 in the web container. I have configured the port for ddev so that I can open the Express endpoint in my browser: http://my.ddev.site:8005/cleanup. That is working.
For cypress I have created a new ddev container with a new docker-compose file:
version: '3.6'
services:
  cypress:
    container_name: ddev-${DDEV_SITENAME}-cypress
    image: cypress/included:4.10.0
    tty: true
    ipc: host
    links:
      - web:web
    environment:
      - CYPRESS_baseUrl=https://web
      - DISPLAY
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
    volumes:
      # Project root
      - ../shopware:/project
      # Storefront and Administration
      - ../shopware/vendor/shopware/platform/src/Storefront/Resources/app/storefront/test/e2e:/e2e-Storefront
      - ../shopware/vendor/shopware/platform/src/Administration/Resources/app/administration/test/e2e:/e2e-Administration
      # Custom plugins
      - ../shopware/custom/plugins/MyPlugin/src/Resources/app/administration/test/e2e:/e2e-MyPlugin
      # for Cypress to communicate with the X11 server pass this socket file
      # in addition to any other mapped volumes
      - /tmp/.X11-unix:/tmp/.X11-unix
    entrypoint: /bin/bash
I can now successfully open the Cypress interface and I see my tests. The problem is that before each Cypress test is executed, the Express endpoint is called (with the URL from above), and the Cypress container seems to have no access to the endpoint. This is the output:
cy.request() failed trying to load:
http://my.ddev.site:8005/cleanup
We attempted to make an http request to this URL but the request failed without a response.
We received this error at the network level:
> Error: connect ECONNREFUSED 127.0.0.1:8005
-----------------------------------------------------------
The request we sent was:
Method: GET
URL: http://my.ddev.site:8005/cleanup
So I can call this endpoint in my browser, but Cypress can't. Is there any configuration missing in the Cypress container for calling port 8005 on the web container?
You need to add this to the cypress service:
external_links:
  - "ddev-router:${DDEV_HOSTNAME}"
and then your http URL will be accessed through the router via ".ddev.site".
If you need a trusted https URL it's a little more complicated, but for http this should work fine.
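Merged into the docker-compose file from the question, only the relevant keys change (a sketch):
services:
  cypress:
    # ...existing configuration from above...
    external_links:
      - "ddev-router:${DDEV_HOSTNAME}"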

Docker-in-Docker issues with connecting to internal container network (Anchore Engine)

I am having issues when trying to connect to a docker-compose network from inside of a container. These are the files I am working with. The whole thing runs when I execute ./run.sh.
Dockerfile:
FROM docker/compose:latest
WORKDIR .
# EXPOSE 8228
RUN apk update
RUN apk add py-pip
RUN apk add jq
RUN pip install anchorecli
COPY dockertest.sh ./dockertest.sh
COPY docker-compose.yaml docker-compose.yaml
CMD ["./dockertest.sh"]
docker-compose.yaml
services:
  # The primary API endpoint service
  engine-api:
    image: anchore/anchore-engine:v0.6.0
    depends_on:
      - anchore-db
      - engine-catalog
    #volumes:
    #- ./config-engine.yaml:/config/config.yaml:z
    ports:
      - "8228:8228"
  ..................
  ## A NUMBER OF OTHER CONTAINERS THAT ANCHORE-ENGINE USES ##
  ..................
networks:
  default:
    external:
      name: anchore-net
dockertest.sh
echo "------------- INSTALL ANCHORE CLI ---------------------"
engineid=`docker ps | grep engine-api | cut -f 1 -d ' '`
engine_ip=`docker inspect $engineid | jq -r '.[0].NetworkSettings.Networks."cws-anchore-net".IPAddress'`
export ANCHORE_CLI_URL=http://$engine_ip:8228/v1
export ANCHORE_CLI_USER='user'
export ANCHORE_CLI_PASS='pass'
echo "System status"
anchore-cli --debug system status #This line throws error (see below)
run.sh:
#!/bin/bash
docker build . -t anchore-runner
docker network create anchore-net
docker-compose up -d
docker run --network="anchore-net" -v //var/run/docker.sock:/var/run/docker.sock anchore-runner
#docker network rm anchore-net
Error Message:
System status
INFO:anchorecli.clients.apiexternal:As Account = None
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 172.19.0.6:8228
Error: could not access anchore service (user=user url=http://172.19.0.6:8228/v1): HTTPConnectionPool(host='172.19.0.6', port=8228): Max retries exceeded with url: /v1
(Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
Steps:
run.sh builds the container image and creates the network anchore-net
the container has an entrypoint script, which does multiple things:
firstly, it brings up the docker-compose stack, detached, from inside the container
secondly, it installs anchore-cli so I can run commands against the container network
lastly, it attempts to get a system status of the anchore-engine (docker-compose network), but that's where I am running into HTTP request connection issues.
I am dynamically getting the IP of the API endpoint container of anchore-engine and setting the request URL from it. I have also tried passing those values on the command line, such as:
anchore-cli --u user --p pass --url http://$engine_ip:8228/v1 system status, but that throws the same error.
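Since the runner container is attached to anchore-net, one simplification that may be worth trying (a sketch, assuming the engine-api service name from the compose file resolves on that network) is to skip the docker inspect lookup and address the service by name:
# inside dockertest.sh: rely on Docker's DNS on the shared network
export ANCHORE_CLI_URL=http://engine-api:8228/v1
export ANCHORE_CLI_USER='user'
export ANCHORE_CLI_PASS='pass'
anchore-cli --debug system status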
For those of you who took the time to read through this, I highly appreciate any input you can give me as to where the issue may be lying. Thank you very much.
