"Rest Assured" requests log: Duplicate requests prints - rest-assured

I am working with IntelliJ, although I don't know if that is important...
When I debug my code which uses "Rest Assured", every request is printed twice to the IntelliJ Run/Debug window.
For example:
@testR
Feature: tests Feature
Run Before Feature
**Request method: POST**
Request URI: https://10.188.10.30:443/auth/api/login
Proxy: <none>
Request params: <none>
Query params: <none>
Form params: username=admin
password=admin
**Request method: POST**
Request URI: https://10.188.10.30:443/auth/api/login
Proxy: <none>
Request params: <none>
Query params: <none>
Form params: username=admin
password=admin
10:34:44: Step: Given Login to tenant "system" with username "admin" and password "admin"(Scenario: New Login)
I defined my requestSpecification as:
requestSpecification =
    RestAssured
        .with()
        .baseUri(baseUri)
        .port(port)
        .filter(new ResponseLoggingFilter())
        .filter(new RequestLoggingFilter())
        .log().all();

Use only one of the two, either
.filter(new RequestLoggingFilter())
or
.log().all();
Both of them log the request, which is why every request is printed twice.
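For example, a minimal sketch of the specification with the duplicate logging removed (the class and method names are made up for illustration; baseUri and port are the same placeholders as in the question):

import io.restassured.RestAssured;
import io.restassured.filter.log.RequestLoggingFilter;
import io.restassured.filter.log.ResponseLoggingFilter;
import io.restassured.specification.RequestSpecification;

public class ApiTestBase {

    protected static RequestSpecification requestSpecification;

    // Hypothetical setup helper; call it once before the tests run.
    public static void configureSpec(String baseUri, int port) {
        requestSpecification =
                RestAssured
                        .with()
                        .baseUri(baseUri)
                        .port(port)
                        .filter(new ResponseLoggingFilter())
                        // RequestLoggingFilter already logs every request,
                        // so .log().all() is dropped to avoid the duplicate print
                        .filter(new RequestLoggingFilter());
    }
}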

oauth2: cannot fetch token: 400 Bad Request Response: in Prometheus targets, while federating metrics

I am trying to use oauth2 in the kube-prom-stack for authentication when federating metrics from an HTTPS node.
Below is my configuration:
additionalScrapeConfigs:
  - job_name: 'test-federation'
    scrape_interval: 20s
    scrape_timeout: 20s
    scheme: https
    oauth2:
      client_id: 'auth-server'
      client_secret: 'XXXXXXXXXXX'
      token_url: 'http://XXX.XXX.XX.XX:80/auth/token '
      endpoint_params:
        grant_type: 'client_credentials'
    metrics_path: /federate
    honor_labels: true
    tls_config:
      insecure_skip_verify: true
    metric_relabel_configs:
      - source_labels: [id]
        regex: '^static-agent$'
        action: drop
    params:
      match[]:
        - '{job="xyz"}'
    static_configs:
      - targets: ['XXX.XX.XX.XX:9090']
But when I check my Prometheus targets I see the error below:
oauth2: cannot fetch token: 400 Bad Request Response: {"code":"400","description":"Invalid credentials"}
Please help.

Implement Envoy OAuth2 filter with disabled routes

I deployed Envoy as a sidecar to manage OAuth2. Everything works fine for all the resources, and the client is redirected to the OIDC provider in order to authenticate.
Here is a part of my conf (managed in a Helm chart):
- name: envoy.filters.network.http_connection_manager
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
    access_log:
      - name: envoy.access_loggers.file
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
          path: /dev/stdout
    codec_type: auto
    stat_prefix: ingress_http
    route_config:
      name: local_route
      virtual_hosts:
        - name: my-service
          domains:
            - "*"
          routes:
            - match:
                prefix: "/"
              route:
                cluster: my-service
    http_filters:
      - name: envoy.filters.http.oauth2
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
          config:
            token_endpoint:
              cluster: {{ .Values.back.envoy.oidc.name }}
              uri: https://{{ .Values.back.envoy.oidc.address }}/oidc/token
              timeout: 5s
            authorization_endpoint: https://{{ .Values.back.envoy.oidc.address }}/oidc/authorize
            redirect_uri: "%REQ(x-forwarded-proto)%://%REQ(:authority)%/oidc/callback"
            redirect_path_matcher:
              path:
                exact: /oidc/callback
            signout_path:
              path:
                exact: /oidc/signout
            credentials:
              client_id: {{ required "back.envoy.oidc.client_id is required" .Values.back.envoy.oidc.client_id }}
              token_secret:
                name: token
                sds_config:
                  resource_api_version: V3
                  path: "/etc/envoy/token-secret.yaml"
              hmac_secret:
                name: hmac
                sds_config:
                  resource_api_version: V3
                  path: "/etc/envoy/hmac-secret.yaml"
            forward_bearer_token: true
            # (Optional): defaults to 'user' scope if not provided
            auth_scopes:
              - user
              - openid
              - email
              - homelan_devices_read
              - homelan_topology_read
              - homelan_devices_write
            # (Optional): set resource parameter for Authorization request
            #resources:
            #- oauth2-resource
            #- http://example.com
      - name: envoy.filters.http.router
        typed_config: {}
Now I'd like some of the exposed resources to not require authentication.
I see in the OAuth filter doc: "Leave this empty to disable OAuth2 for a specific route, using per filter config." (see https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/http/oauth2/v3/oauth.proto#envoy-v3-api-msg-extensions-filters-http-oauth2-v3-oauth2config)
This phrase makes me think that it may be possible.
I tried to manage it by changing my conf through virtual_hosts this way:
virtual_hosts:
  - name: no-oauth
    domains: ["*"]
    typed_per_filter_config:
      envoy.filters.http.oauth2:
        "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
    routes:
      - match:
          prefix: "/api/v1/myResource1"
        route:
          cluster: my-service
  - name: my-service
    domains: ["*"]
    routes:
      - match:
          prefix: "/api/v1/myResource2"
        route:
          cluster: my-service
I get this error: [critical][main] [source/server/server.cc:117] error initializing configuration '/etc/envoy/envoy.yaml': The filter envoy.filters.http.oauth2 doesn't support virtual host-specific configurations
Any idea? Did someone implement the Envoy OAuth2 filter with disabled routes?
After looking at my Envoy logs, I realized that the path is exposed as the ":path" header.
pass_through_matcher matches on headers.
So adding only:
pass_through_matcher:
  - name: ":path"
    prefix_match: "/healthz"
  - name: ":path"
    prefix_match: "/api/v1/myResource1"
to my conf, without the Lua filter (see my previous answer), makes it work.
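For context, here is a trimmed sketch (not a complete, verified config) of where pass_through_matcher sits inside the OAuth2 filter from the config above; the endpoints and credentials are elided:

http_filters:
  - name: envoy.filters.http.oauth2
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
      config:
        # token_endpoint, authorization_endpoint, credentials, etc. as in the question
        forward_bearer_token: true
        # Requests whose :path matches any matcher below skip OAuth validation
        pass_through_matcher:
          - name: ":path"
            prefix_match: "/healthz"
          - name: ":path"
            prefix_match: "/api/v1/myResource1"
  - name: envoy.filters.http.router
    typed_config: {}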
For information, I found a workaround:
I added a Lua filter before my OAuth2 one:
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inline_code: |
      function envoy_on_request(request_handle)
        request_handle:headers():add("X-Path", request_handle:headers():get(":path"))
      end
This copies the path into a header ("X-Path").
Then I can use this element of the OAuth2 conf:
pass_through_matcher
(repeated config.route.v3.HeaderMatcher) Any request that matches any of the provided matchers will be passed through without OAuth validation.
So I added this to my OAuth2 filter:
pass_through_matcher:
  - name: "X-path"
    prefix_match: "/healthz"
  - name: "X-path"
    prefix_match: "/api/v1/myResource1"
Then my /api/v1/myResource1 requests (and /healthz as well) no longer need authentication (they are excluded from OAuth2), while my /api/v1/myResource2 requests still do.
I still have one unanswered question:
What does the OAuth filter doc mean by "Leave this empty to disable OAuth2 for a specific route, using per filter config."?

Is there a way to use action.yml with a GitHub action from a container registry

I want to create a GitHub action that will be used as an action from a container registry, as described here:
https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions#example-using-a-docker-public-registry-action
Is there a way to have an action.yml describing this container?
If not, is there a way to forward an input whose default value comes from an "expression" for getting the GitHub token, like in a regular action.yml:
...
inputs:
  ...
  token:
    description: 'GitHub Temporary Token'
    required: false
    default: ${{ github.token }}
  ...
...
I couldn't find documentation for this on the internet.
You would pass in the input pretty much as you've defined it. However, I would advise you to make the token required and remove the default setting.
Instead, have the workflow that calls your action pass in the token from there:
Your action:
...
inputs:
  ...
  token:
    description: 'GitHub Temporary Token'
    required: true
  ...
...
The workflow:
...
steps:
  - uses: your-username/your-action@v1
    with:
      token: ${{ secrets.GITHUB_TOKEN }}
...
Also, check here for documentation: https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions
Specifically here:
https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#runs-for-docker-actions
So you would call a public image like this:
runs:
  using: 'docker'
  image: 'docker://public-image:tag'
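Putting the two snippets together, a sketch of what a full action.yml for a Docker-based action could look like (the name, description, image reference and args are placeholders, not taken from the question):

name: 'My Container Action'
description: 'Runs from a public Docker image'
inputs:
  token:
    description: 'GitHub Temporary Token'
    required: true
runs:
  using: 'docker'
  image: 'docker://public-image:tag'   # placeholder public image reference
  args:
    - ${{ inputs.token }}              # pass the input through to the container

Inside the container the input should also be available as the INPUT_TOKEN environment variable, but passing it through args keeps it explicit.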

RestAssured. Is it possible to log the body in a ResponseSpecification for all the tests?

I am trying to set up @BeforeClass for test inheritance, and I always need to log the body in all of my tests, so I tried this:
import static io.restassured.filter.log.LogDetail.*;

@BeforeClass
public void config() {
    RequestSpecification jsonServerRequestSpecification =
            new RequestSpecBuilder()
                    .setBaseUri("http://localhost")
                    .setPort(3000)
                    .log(METHOD).log(URI).log(PARAMS)
                    .setContentType(ContentType.JSON)
                    .build();
    ResponseSpecification jsonServerResponseSpecification =
            new ResponseSpecBuilder()
                    .expectContentType(ContentType.JSON)
                    .log(STATUS).log(BODY)
                    .build();
    requestSpecification = jsonServerRequestSpecification;
    responseSpecification = jsonServerResponseSpecification;
}
but after running one of my tests I'm getting this:
Request method: GET
Request URI: http://localhost:3000/comments?postId=1&id=1
Request params: <none>
Query params: postId=1
id=1
Form params: <none>
Path params: <none>
Multiparts: <none>
===============================================
Default Suite
Total tests run: 1, Passes: 1, Failures: 0, Skips: 0
===============================================
Process finished with exit code 0
As you can see, the body is not logged, even though I expected it to be based on the @BeforeClass method.
RestAssured - v4.2.0 | Java - v8

Conditional resource in serverless

I would like to add an AWS resource conditionally, based on the presence of an env var. I tried serverless-cloudformation-parameter-setter but I get a generic error on deployment, and I don't see what I need to do to fix it.
I'm trying to deploy a simple Lambda + SQS stack; if an env var is defined, I also want to subscribe the queue to the topic denoted by the env var, and if the var is not defined, skip that part entirely and deploy just the Lambda and the queue.
This is what I tried:
plugins:
  - serverless-cloudformation-parameter-setter

provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1

functions:
  update:
    handler: index.update
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - Queue
              - Arn

custom:
  cf-parameters:
    SourceTopicArn: "${env:UPDATE_SNS_ARN}"

resources:
  Parameters:
    SourceTopicArn:
      Type: string
  Resources:
    Queue:
      Type: "AWS::SQS::Queue"
    Subscription:
      Type: "AWS::SNS::Subscription"
      Condition: SourceTopicArn
      Properties:
        TopicArn:
          Ref: SourceTopicArn
        Endpoint:
          Ref: Queue
The error I receive is: The CloudFormation template is invalid: Template format error: Unrecognized parameter type: string
If I remove all the parameter stuff it works fine
The Type has to be String, not string. See the supported parameter data types section in the docs.
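So, assuming nothing else changes, the Parameters block from the question would become:

resources:
  Parameters:
    SourceTopicArn:
      Type: String   # CloudFormation parameter types are case-sensitive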
