I have two microservices (Transactions and Payments) that are accessed through an ApiGateway, and each microservice runs inside a Docker container. I have also implemented my own SwaggerResourcesProvider in order to access both Swagger definitions from a single point, the ApiGateway Swagger, as you can see in this question: Single Swagger
In order to enable CORS, each microservice (including the ApiGateway) has the following code:
@Bean
public CorsFilter corsFilter() {
    final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    CorsConfiguration config = new CorsConfiguration();
    config.setAllowCredentials(true);
    config.addAllowedOrigin("*");   // allow any origin
    config.addAllowedHeader("*");   // allow any header
    config.addAllowedMethod("*");   // allow any HTTP method
    source.registerCorsConfiguration("/**", config);
    return new CorsFilter(source);
}
If I execute each microservice from the IDE, I can access the Swagger of the microservices from the ApiGateway without any problem, as you can see here (the ApiGateway runs on port 8070):
However, when I execute them using docker-compose and try to access the Swagger of any microservice through the ApiGateway (mapping its internal port to 8070), I receive the following error:
The weird thing is that, if I enter the ApiGateway container using bash and execute curl transactionservice/api/v2/api-docs, I receive the corresponding JSON. So the ApiGateway container can reach the Swagger of the other containers, but it cannot be reached from my web browser.
Question: Why is Swagger unable to access the other Swaggers when executed inside Docker?
I finally found the problem: when executing with docker-compose, each microservice communicates with the others using the service name, and docker-compose translates that name to the corresponding IP (which is why, in the second image, the Transactions link shows http://transactionservice/..., because in docker-compose.yml I was using that URL as the resource URL).
So, when I access Swagger in the ApiGateway, it returns that URL as the resource. However, the HTML is executed on my machine, not inside the container, so when it tries to access http://transactionservice/api/v2/api-docs, my machine knows nothing about transactionservice.
The solution was to play a bit with redirections in the ApiGateway, using this configuration:
zuul.routes.transaction.path: /transaction/**
zuul.routes.transaction.url: http://transactionservice/api/transaction
zuul.routes.transaction-swagger.path: /swagger/transaction/**
zuul.routes.transaction-swagger.url: http://transactionservice/api
zuul.routes.payment.path: /payment/**
zuul.routes.payment.url: http://paymentservice/api/payment
zuul.routes.payment-swagger.path: /swagger/payment/**
zuul.routes.payment-swagger.url: http://paymentservice/api
swagger.resources[0].name: transactions
swagger.resources[0].url: /swagger/transaction/v2/api-docs
swagger.resources[0].version: 2.0
swagger.resources[1].name: payments
swagger.resources[1].url: /swagger/payment/v2/api-docs
swagger.resources[1].version: 2.0
This way, all requests go through the ApiGateway, even the Swagger ones.
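For reference, here is a minimal sketch of a SwaggerResourcesProvider that binds the swagger.resources entries above (class and field names are illustrative and it assumes springfox 2.8+; it is not necessarily the exact implementation referenced in the question):

import java.util.ArrayList;
import java.util.List;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Primary;
import org.springframework.stereotype.Component;
import springfox.documentation.swagger.web.SwaggerResource;
import springfox.documentation.swagger.web.SwaggerResourcesProvider;

@Component
@Primary
@ConfigurationProperties(prefix = "swagger")
public class GatewaySwaggerResourcesProvider implements SwaggerResourcesProvider {

    // Populated by Spring Boot from swagger.resources[0], swagger.resources[1], ...
    private final List<Resource> resources = new ArrayList<>();

    public List<Resource> getResources() {
        return resources;
    }

    @Override
    public List<SwaggerResource> get() {
        List<SwaggerResource> swaggerResources = new ArrayList<>();
        for (Resource resource : resources) {
            SwaggerResource swaggerResource = new SwaggerResource();
            swaggerResource.setName(resource.getName());
            // Gateway-relative URL, e.g. /swagger/transaction/v2/api-docs
            swaggerResource.setUrl(resource.getUrl());
            swaggerResource.setSwaggerVersion(resource.getVersion());
            swaggerResources.add(swaggerResource);
        }
        return swaggerResources;
    }

    public static class Resource {
        private String name;
        private String url;
        private String version;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getUrl() { return url; }
        public void setUrl(String url) { this.url = url; }
        public String getVersion() { return version; }
        public void setVersion(String version) { this.version = version; }
    }
}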
Related
I am unable to get the OPA middleware to execute on a service-to-service invocation.
I am using the simple OPA example online and cannot seem to get it to trigger when invoking the service from another service via service invocation. It seems I can hit it if I curl from the service to the sidecar over localhost.
My intent is to add this so that calling services (via Dapr invocation) pass through the pipeline before reaching my service. The goal is to add authz to my services via configuration and inject them into the Dapr ecosystem. I have not been able to get this to trigger when the service is called from another Dapr service using invocation, i.e. service A calls http://localhost:3500/v1.0/invoke/serviceb/method/shouldBlock and ServiceB has a configuration with an HTTP pipeline that defaults to allow = false; however, the pipeline does not get called. If I shell into ServiceB and call that same method via curl, it does get triggered.
For more clarity, I am using this model: https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/ except that the blog post puts the ingress in the default namespace while I am using it from the default ingress-nginx namespace. The call is made successfully from the ingress, but it never propagates through the pipeline to get denied by OPA.
Here is a repo that sets up a kind k8s cluster, an ingress controller, and a Dapr app with an OPA policy. The setup.sh script should demonstrate the issue: https://github.com/ewassef/opa-dapr
As the title states, I'm trying to figure out how to get my .NET Core worker service, deployed inside a Docker container, to communicate with a REST API on my local machine (the Docker host) via HTTPS.
NOTE: The Docker container is able to communicate with the REST API via plain-old HTTP just fine, so this seems to purely be an issue with HTTPS/SSL.
For background, I have an ASP.NET Core REST API deployed on my local machine via IIS. I have one binding for HTTP (port 8001) and one for HTTPS (port 8101). Additionally, said API also interfaces with my company's (model) IdentityServer instance.
Additionally, I have a .NET Core worker service which communicates with the aforementioned REST API (using an HttpClient under the hood). I've packaged this into a Docker image/container (using a Dockerfile, etc.).
When I run the container using the HTTP (not HTTPS) endpoints, everything is fine and the container is able to interface with the REST API on the local machine (provided I use host.docker.internal in place of localhost in URLs).
However, when I switch to HTTPS, things go haywire and I receive the following error:
System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.
I searched around for solutions but nothing seems to work; maybe I'm doing something wrong.
I've tried exporting (via the Windows Certificate Manager) all the relevant certificates I could come up with as .cer files, changing their extensions to .crt, and adding them to the Docker container via the Dockerfile:
COPY FooBar.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
ENTRYPOINT ["dotnet", "My.Project.dll"]
(I've omitted the extraneous build/publish steps, as I know those work fine; this snippet is part of a larger multi-stage Dockerfile.)
NOTE: update-ca-certificates did recognize the certificates, so I'm under the impression they were added. I thought there might be an issue because I'm using a multi-stage Docker build, but even adding the certificates right before the ENTRYPOINT makes no difference.
I'm honestly not sure what else it could be at this point (or if my use case is even supported).
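One thing worth checking, independent of the CA store: if the IIS certificate was issued for localhost, then requests to host.docker.internal will fail hostname validation even when the certificate itself is trusted. A dev-only workaround (not something to ship) is to override certificate validation on the handler behind the HttpClient; a minimal sketch, assuming you control how the HttpClient is constructed:

using System;
using System.Net.Http;
using System.Net.Security;

var handler = new HttpClientHandler
{
    // Dev-only: accept the host certificate even though its subject (e.g. localhost)
    // does not match host.docker.internal. Never do this in production.
    ServerCertificateCustomValidationCallback = (request, certificate, chain, sslErrors) =>
        sslErrors == SslPolicyErrors.None
        || request.RequestUri?.Host == "host.docker.internal"
};

using var client = new HttpClient(handler);
var response = await client.GetAsync("https://host.docker.internal:8101/api/health"); // endpoint path is illustrative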
I have a Docker container running an open-source identity server UI. This consists of several web applications, and I am running them from the same Docker container.
One web application calls an API endpoint in another web application to get a config file. It then redirects to a URL found in that config file. The config file is dynamically generated using the domain name in the request.
I make a call from my localhost to the exposed port. This page then calls another web API using the docker-compose service name in the URL, e.g. https://webapi2/.well-known/openid-configuration. This returns a config file with URLs that use webapi2 as the domain.
This causes a browser redirect to https://webapi2/signin. This fails because my localhost does not know anything about webapi2; it needs to use localhost:44310.
I am running a Docker container consisting of an ASP.NET Core 2.2 API. This API needs access to Azure Key Vault, and I have signed in to Visual Studio with a user that has the right access policies on the Key Vault to retrieve secrets. However, when I use the Visual Studio tools for Docker to debug the container, this sign-in does not seem to propagate inside the locally running container. But when I run the application locally (without running it in a Docker container), the ASP.NET Core configuration provider does pick up my Visual Studio login. Any pointers on this would be helpful.
I had the same problem with Docker and MSI on my Mac. I ended up with the following workaround:
First, get an access token from the Azure CLI and put it in the environment (remembering to pass it on to Docker):
export ACCESS_TOKEN=$(az account get-access-token --resource=https://vault.azure.net | jq -r .accessToken)
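Then pass it through to the container, for example (the image name is a placeholder):

docker run -e ACCESS_TOKEN my-api-image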
In the code, pick the token up if it is present in the environment:
// Requires the Microsoft.Azure.KeyVault and Microsoft.Azure.Services.AppAuthentication packages.
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

KeyVaultClient keyVaultClient;
var accessToken = Environment.GetEnvironmentVariable("ACCESS_TOKEN");
if (accessToken != null)
{
    // Inside the container: use the token obtained from the Azure CLI.
    keyVaultClient = new KeyVaultClient(
        async (string authority, string resource, string scope) => accessToken);
}
else
{
    // Outside the container: let the usual MSI / Visual Studio sign-in flow provide the token.
    var azureServiceTokenProvider = new AzureServiceTokenProvider();
    keyVaultClient = new KeyVaultClient(
        new KeyVaultClient.AuthenticationCallback(
            azureServiceTokenProvider.KeyVaultTokenCallback));
}
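For example (the vault URL and secret name are placeholders), the client can then be used the same way on both paths:

// Fetch a secret; works with either authentication branch above.
var secret = await keyVaultClient.GetSecretAsync(
    "https://my-vault.vault.azure.net/", "MySecretName");
Console.WriteLine(secret.Value);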
I read this post about a month ago while looking for an answer to a similar question. I found that Docker can run Kubernetes, and there is AAD-Pod-Identity (https://github.com/Azure/aad-pod-identity), which does not work for Docker Kubernetes. I forked their repository and made modifications to the mic component. Now it works for Docker Kubernetes; I am not sure whether the Azure team has plans to take these modifications on board or not.
You can find detailed instructions on how to get things running here:
https://github.com/Wallsmedia/aad-pod-identity
One more option, which avoids secret injection, is to use the device code authentication flow to obtain a user_impersonation access token. The downside is that the developer must manually complete the flow every time the container starts up.
These posts outline the process:
https://joonasw.net/view/device-code-flow
https://blog.simonw.se/getting-an-access-token-for-azuread-using-powershell-and-device-login-flow/
Use the well-known PowerShell client ID to avoid registering a new app in your tenant. Works like a charm.
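For reference, a minimal sketch of that flow using MSAL (Microsoft.Identity.Client); the client ID and tenant ID are placeholders, the former standing in for the well-known PowerShell client ID mentioned above:

using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

var app = PublicClientApplicationBuilder
    .Create("<powershell-client-id>")   // the well-known PowerShell client ID
    .WithTenantId("<your-tenant-id>")   // placeholder
    .Build();

// Prints the device-login URL and a one-time code, then waits for the user to complete it.
var result = await app.AcquireTokenWithDeviceCode(
    new[] { "https://vault.azure.net/user_impersonation" },
    deviceCode =>
    {
        Console.WriteLine(deviceCode.Message);
        return Task.CompletedTask;
    }).ExecuteAsync();

// result.AccessToken can then be handed to the container, e.g. via the ACCESS_TOKEN
// environment variable used in the previous answer.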
We are using a Python-based solution that loads and stores files in S3. For development and local testing we use a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions: one for the assisting backend services (mongo, restheart, redis and s3) and one for the Python-based solution exposing the REST API and using those backend services.
When our "front-end" docker-compose group interacts with restheart, this works fine (using the name of the restheart container as the server host in HTTP calls). When we do the same with the scality/s3 server, it does not work.
The interesting part is that we have created a test suite that uses the scality/s3 server from Python running on the host (Windows 10), going over the ports forwarded through Vagrant to the scality/s3 server container within the docker-compose group. There we used localhost as the endpoint_url, and it works perfectly.
In the error case (when the frontend web service wants to write to S3), the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with HTTP 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling the scality/s3 server with this boto3 code:
import boto3

s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')
s3.create_bucket(Bucket='raw-data')  # here the exception is raised
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which I assume you mount into your Docker container, there is a restEndpoints section where you must associate a domain name with a default region. What that means is that your frontend's domain name should be specified in there, matching a default region.
Do note that the default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
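As an illustration only (the exact layout depends on your scality/s3 version), the restEndpoints section of config.json would need an entry for the hostname the frontend uses, here s3server:

"restEndpoints": {
    "localhost": "us-east-1",
    "127.0.0.1": "us-east-1",
    "s3server": "us-east-1"
}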
In the future, I'd recommend you open an issue directly on the Zenko Forum, as that is where most of the community and core developers are.
Cheers,
Laure