I am unable to get the OPA middleware to execute on a service-to-service invocation.
I am using the simple OPA example online and cannot seem to get it to trigger when invoking a service from another service via service invocation. It seems I can hit it if I curl from the service to its sidecar over localhost.
My intent is to add this so that calls from other services (via Dapr invocation) pass through the pipeline before reaching my service; the goal is to add authz to my services via configuration and inject it into the Dapr ecosystem. I have not been able to get it to trigger when I call the service from another Dapr service using invocation, i.e. service A calls http://localhost:3500/v1.0/invoke/serviceb/method/shouldBlock and ServiceB has a configuration with an HTTP pipeline that defaults to allow=false, yet the middleware never gets called. If I shell into ServiceB and call that same method via curl, it does get triggered.
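For reference, the wiring on ServiceB looks roughly like this (a trimmed sketch; the names opa-policy and pipeline-config are placeholders for what is in the repo, and the rego is reduced to a blanket deny). ServiceB's deployment is annotated with dapr.io/config pointing at the Configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: opa-policy
spec:
  type: middleware.http.opa
  version: v1
  metadata:
    - name: defaultStatus
      value: "403"
    - name: rego
      value: |
        package http
        default allow = false
---
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline-config
spec:
  httpPipeline:
    handlers:
      - name: opa-policy
        type: middleware.http.opa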
For more clarity, I am following this model https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/ except that the blog post puts the ingress in the default namespace while I am running it in the ingress-nginx namespace. The call from the ingress succeeds, but it never passes through the pipeline and so never gets denied by OPA.
Here is a repo that sets up a kind k8s cluster, an ingress controller and a Dapr app with an OPA policy; the setup.sh script should demonstrate the issue: https://github.com/ewassef/opa-dapr
Related
I am trying to run the Ambassador API gateway on my local dev environment so I can simulate what I'll end up with in production - the difference is that in prod my solution will be running in Kubernetes. To do so, I'm installing Ambassador into Docker Desktop and adding the required configuration to route requests to my microservices. Unfortunately, it did not work for me and I'm getting the error below:
upstream connect error or disconnect/reset before headers. reset reason: connection failure
I assume that's due to an issue in the mapping file, which is as follows:
apiVersion: ambassador/v2
kind: Mapping
name: institutions_mapping
prefix: /ins/
service: localhost:44332
So what I'm basically trying to do is rewrite all requests coming to http://{ambassador_url}/ins to a service running locally in IIS Express (through Visual Studio) on port 44332.
What am I missing?
I think you may be better off using another one of Ambassador Labs' tools, called Telepresence.
https://www.telepresence.io/
With Telepresence you can take the local service you have running on localhost and project it into your cluster to see how it performs. This way you don't need to spin up a local cluster, and you get real-time feedback on how your service operates with the other services in the cluster.
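As a rough sketch (the exact flags vary between Telepresence versions, and the workload name institutions is just a placeholder matching the mapping above), with Telepresence 2.x it looks something like:
telepresence connect
telepresence intercept institutions --port 44332
which routes the cluster's traffic for that service to the copy running locally on port 44332.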
We are using PCF to run our applications. To build data pipelines we thought of leveraging the Spring Cloud Data Flow server, which is offered as a service inside PCF.
We created a Data Flow server by providing the SQL Server and Maven repo details; for the scheduler we didn't provide any extra parameters while creating the service, so it is disabled by default.
I found some info here on how to enable the scheduler: https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_enabling_scheduling
So I tried updating the existing Data Flow service with the below command:
cf update-service my-service -c '{"spring.cloud.dataflow.features.schedules-enabled":true}'
The Data Flow server restarted, but the scheduler is still not enabled to schedule jobs.
When I check the GET /about endpoint of the Data Flow server, I am still getting
"schedulesEnabled": false
in the response body.
I am not sure why the SCDF service isn't picking up the schedules-enabled property even after you update the service (it is expected to be enabled after that).
Irrespective of that, you can try setting the following as an environment property on the SCDF service instance as well:
SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED: true
Once scheduling is enabled, you need to make sure the following properties are set correctly as well:
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_SERVICES: <all-the-services-for-tasks-along-with-the-scheduler-service-instance>
SPRING_CLOUD_SCHEDULER_CLOUDFOUNDRY_SCHEDULER_URL: <scheduler-url>
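For example, as a sketch only (dataflow-server is an assumed name for the backing server app, and on the SCDF for PCF tile you may instead have to pass these through cf update-service -c as above; the service names and scheduler URL are placeholders for your environment):
cf set-env dataflow-server SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED true
cf set-env dataflow-server SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_SERVICES my-db-service,my-scheduler-service
cf set-env dataflow-server SPRING_CLOUD_SCHEDULER_CLOUDFOUNDRY_SCHEDULER_URL https://scheduler.sys.example.com
cf restage dataflow-server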
I am preparing the environment in Jenkins to integrate SonarQube and GitLab. With SonarQube I have no problem, but when I try to create webhooks, GitLab does not let me enter a localhost URL.
Can someone help me allow access to my URL?
This was reported in gitlab-ce issue 49315, and linked to the documentation "Webhooks and insecure internal web services"
Because Webhook requests are made by the GitLab server itself, these have complete access to everything running on the server (http://localhost:123) or within the server’s local network (http://192.168.1.12:345), even if these services are otherwise protected and inaccessible from the outside world.
If a web service does not require authentication, Webhooks can be used to trigger destructive commands by getting the GitLab server to make POST requests to endpoints like http://localhost:123/some-resource/delete.
To prevent this type of exploitation from happening, starting with GitLab 10.6, all Webhook requests to the current GitLab instance server address and/or in a private network will be forbidden by default.
That means that all requests made to 127.0.0.1, ::1 and 0.0.0.0, as well as IPv4 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 and IPv6 site-local (ffc0::/10) addresses won’t be allowed.
If you really need this:
This behavior can be overridden by enabling the option “Allow requests to the local network from hooks and services” in the “Outbound requests” section inside the Admin area under Settings (/admin/application_settings/network):
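If you prefer to script it, recent GitLab versions also expose this setting through the application settings API (the attribute name below is the one used by recent releases and may differ in older versions; the host and token are placeholders):
curl --request PUT --header "PRIVATE-TOKEN: <admin-token>" \
     "https://gitlab.example.com/api/v4/application/settings?allow_local_requests_from_web_hooks_and_services=true"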
We are using a Python-based solution which shall load and store files from S3. For development and local testing we are using a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions - one for the assisting backend services (mongo, restheart, redis and s3) and the other one containing the Python-based solution that exposes a REST API and uses the backend services.
When our "front-end" docker-compose group interacts with restheart this works fine (using the name of the restheart container as the server host in HTTP calls). When we do the same with the scality/s3 server it does not work.
The interesting part is that we have created a test suite that uses the scality/s3 server from Python tests running on the host (Windows 10), going over the ports forwarded through Vagrant to the scality/s3 server container inside the docker-compose group. There we used localhost as the endpoint_url and it works perfectly.
In the error case (when the frontend web service wants to write to S3), the "frontend" service always fails with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with HTTP 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling Scality with this boto3 code:
import boto3

# Both the resource and the client point at the scality/s3 container via its
# docker-compose service name.
s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')

s3.create_bucket(Bucket='raw-data')  # here the exception is raised
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which I assume you mount in your Docker container, there is a restEndpoints section where you must associate a domain name with a default region. What that means is that your frontend domain name should be specified in there, matching a default region.
Do note that that default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
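As a sketch, the relevant part of config.json would then contain something like the following, assuming s3server is the hostname your frontend uses in endpoint_url (the localhost entry is what already makes your host-side test suite work):
"restEndpoints": {
    "localhost": "us-east-1",
    "127.0.0.1": "us-east-1",
    "s3server": "us-east-1"
}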
In the future, I'd recommend you open an issue directly on the Zenko Forum, as this is where most of the community and core developers are.
Cheers,
Laure
I have two microservices (Transactions and Payments) that are going to be accessed through an ApiGateway, and each microservice runs inside a Docker container. Also, I have implemented my own SwaggerResourcesProvider in order to access both Swaggers from a single point: the ApiGateway Swagger, as you can see in this question: Single Swagger
In order to enable CORS in each microservice, all of them (including ApiGateway) have the following code:
@Bean
public CorsFilter corsFilter() {
    final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    CorsConfiguration config = new CorsConfiguration();
    config.setAllowCredentials(true);
    // Allow any origin, header and method on every route.
    config.addAllowedOrigin("*");
    config.addAllowedHeader("*");
    config.addAllowedMethod("*");
    source.registerCorsConfiguration("/**", config);
    return new CorsFilter(source);
}
If I execute each microservice from the IDE, I can access the Swagger of each microservice from the ApiGateway without any problem, as you can see here (the ApiGateway runs on port 8070):
However, when I run them using docker-compose and I try to access the Swagger of any microservice through the ApiGateway (mapping its internal port to 8070), I get the following error:
The weird thing is that if I enter the ApiGateway container using bash and execute curl transactionservice/api/v2/api-docs, I receive the corresponding JSON, so the ApiGateway container can reach the Swagger of the other containers, but I cannot reach them from my web browser.
Question: Why is Swagger unable to access the other Swaggers when executed using Docker?
I finally found the problem: when running with docker-compose, each microservice communicates with the others using the service name, and docker-compose translates that to the corresponding IP (that is why, in the second image, the Transactions link shows http://transactionservice/..., because in docker-compose.yml I was using that URL as the resource URL).
So, when I access Swagger in the ApiGateway, it returns that URL as the resource. However, that HTML is executed on my machine, not inside Docker, so when it tries to access http://transactionservice/api/v2/api-docs, my machine knows nothing about transactionservice.
The solution was to play a bit with redirections in the ApiGateway, using this configuration:
zuul.routes.transaction.path: /transaction/**
zuul.routes.transaction.url: http://transactionservice/api/transaction
zuul.routes.transaction-swagger.path: /swagger/transaction/**
zuul.routes.transaction-swagger.url: http://transactionservice/api
zuul.routes.payment.path: /payment/**
zuul.routes.payment.url: http://paymentservice/api/payment
zuul.routes.payment-swagger.path: /swagger/payment/**
zuul.routes.payment-swagger.url: http://paymentservice/api
swagger.resources[0].name: transactions
swagger.resources[0].url: /swagger/transaction/v2/api-docs
swagger.resources[0].version: 2.0
swagger.resources[1].name: payments
swagger.resources[1].url: /swagger/payment/v2/api-docs
swagger.resources[1].version: 2.0
This way, all requests go through the ApiGateway, even the Swagger ones.
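For example, when the browser asks the gateway for http://localhost:8070/swagger/transaction/v2/api-docs, Zuul (with its default prefix stripping) removes /swagger/transaction and forwards the request to http://transactionservice/api/v2/api-docs over the Docker network, so the host machine never needs to resolve transactionservice.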