We are running a set of microservices, and each one exposes its OpenAPI spec at a URL like the following:
https://{domain}/v1/membership/v3/api-docs
https://{domain}/v1/storage/v3/api-docs
https://{domain}/v1/order/v3/api-docs
Each URL returns the OpenAPI JSON (not a UI).
Does anyone know of a tool that can combine these and serve all the APIs, with a UI, from a single URL like https://{domain}/v1/apis?
I searched Google, but everything I found required me to create a single JSON file containing all the APIs instead of serving them dynamically.
You can use the urls property:
springdoc.swagger-ui.urls[0].name = first
springdoc.swagger-ui.urls[0].url = /firstAPI.yaml
springdoc.swagger-ui.urls[1].name = second
springdoc.swagger-ui.urls[1].url = /secondAPI.yaml
You can find this property in the documentation. There is also a nice FAQ for this question:
The springdoc.swagger-ui.urls.* properties are suitable for configuring external /v3/api-docs URLs, for example if you want to aggregate all the endpoints of other services inside one single application. Don't forget that CORS needs to be enabled as well.
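For the services in the question, a minimal sketch of that aggregation could look like the following, assuming the Swagger UI is hosted by a single application (for example an API gateway) on the same {domain} and exposed at /v1/apis via springdoc.swagger-ui.path:
springdoc.swagger-ui.path = /v1/apis
springdoc.swagger-ui.urls[0].name = membership
springdoc.swagger-ui.urls[0].url = /v1/membership/v3/api-docs
springdoc.swagger-ui.urls[1].name = storage
springdoc.swagger-ui.urls[1].url = /v1/storage/v3/api-docs
springdoc.swagger-ui.urls[2].name = order
springdoc.swagger-ui.urls[2].url = /v1/order/v3/api-docs
The UI then shows a drop-down that lets you switch between the three specs.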
Context:
I have Keycloak running inside Docker. I understand there is a reverse proxy doing something like transforming a URL such as "http://example.com" into "http://171.20.2.97:8082" (the place where Keycloak is actually deployed). This is just an example: when my clients need to consume an endpoint from one of my microservices, they do not use the IP address, they use example.com.
So in Keycloak, when you want to see the realm's metadata for SAML 2.0, you can do it by following this link, which is in the realm settings section:
https://example.com/auth/realms/REALM-NAME/protocol/saml/descriptor
As you can see, I am using "example.com", not "171.20.2.97:8082", to access the metadata.
The problem is that inside the metadata, the endpoints for SingleSignOnService, SingleLogoutService, etc. are all configured as "http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml" (notice it uses the IP address and not example.com). As a result, when clients send a SAML request with a "Destination" attribute of "http://example.com/auth/realms/REALM-NAME/protocol/saml", the request fails with an invalid request error and reason invalid_destination, because the Destination attribute was expected to be "http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml", as declared in the metadata.
So my question is: how can I edit the metadata to change the endpoint addresses to example.com, or, if that is not possible, how can I make example.com get translated to 171.20.2.97:8082 inside my Keycloak server? Any other way to solve this is also very welcome.
I feel like a BEAST after finding out how to achieve what I needed, after roughly three weeks of researching Keycloak and SAML (I overcame many obstacles; this was the last one). I finally managed to fix this using the "Frontend URL" setting in my realm settings. Whatever I put there replaces "http://171.20.2.97:8082/auth/" in the metadata URLs, so for example if I set Frontend URL to:
https://example.com/auth/
then all my metadata endpoints look like this:
https://example.com/auth/realms/REALM-NAME/protocol/saml
instead of:
http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml
Now my client is able to log in properly with SAML 2.0 using Keycloak.
How did I manage to find this out? There is not much info around; what gave me the hint was this question: Keycloak behind nginx reverse proxy: SAML Integration invalid_destination.
The person asking said they had configured frontend-url, so I wanted to give that a try, and after checking whether it changed the metadata URLs: surprise, it did =)
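For reference, the Frontend URL does not have to be set through the admin console. As far as I know it is stored as the realm attribute frontendUrl, so something like the following with the kcadm.sh CLI should achieve the same (REALM-NAME, the admin credentials and the URLs are placeholders for your own values):
kcadm.sh config credentials --server http://171.20.2.97:8082/auth --realm master --user admin
kcadm.sh update realms/REALM-NAME -s attributes.frontendUrl=https://example.com/auth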
I have been using response templating to give dynamic responses, where all the request and query parameters used in a template belong to that request itself. However, I want to make a POST request with several parameters and later use those parameters in a stubbed GET method's response body via response templating. Is this possible in WireMock? Any input is greatly appreciated, thank you!
Storing state between requests is not a default feature of WireMock, apart from simulating it through Stateful Behaviour, which is different from being actually stateful.
Without a custom extension, sharing information between several requests is therefore not possible. The WireMock documentation has a section on how to create such an extension yourself, and with a little development experience this is certainly doable.
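To give an idea of what such an extension could look like, here is a rough, untested sketch of a ResponseDefinitionTransformer (WireMock 2.x Java API) that remembers the body of a POST and injects it into later GET responses. The class name and the choice to key the store on the request URL are just assumptions for illustration:
import com.github.tomakehurst.wiremock.client.ResponseDefinitionBuilder;
import com.github.tomakehurst.wiremock.common.FileSource;
import com.github.tomakehurst.wiremock.extension.Parameters;
import com.github.tomakehurst.wiremock.extension.ResponseDefinitionTransformer;
import com.github.tomakehurst.wiremock.http.Request;
import com.github.tomakehurst.wiremock.http.RequestMethod;
import com.github.tomakehurst.wiremock.http.ResponseDefinition;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedStateTransformer extends ResponseDefinitionTransformer {

    // naive in-memory store shared by all requests handled by this WireMock instance
    private static final Map<String, String> STORE = new ConcurrentHashMap<>();

    @Override
    public String getName() {
        return "shared-state";
    }

    @Override
    public ResponseDefinition transform(Request request, ResponseDefinition responseDefinition,
                                        FileSource files, Parameters parameters) {
        if (request.getMethod().equals(RequestMethod.POST)) {
            // remember the POSTed body, keyed by URL
            STORE.put(request.getUrl(), request.getBodyAsString());
            return responseDefinition;
        }
        String saved = STORE.get(request.getUrl());
        if (saved == null) {
            return responseDefinition;
        }
        // replay the previously stored body in this response
        return ResponseDefinitionBuilder.like(responseDefinition)
                .but()
                .withBody(saved)
                .build();
    }
}
You would then register it when starting the server, for example with new WireMockServer(wireMockConfig().extensions(new SharedStateTransformer())).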
On GitHub there are also several ready-made plugins that provide a storage mechanism for this kind of information:
WireMockCsv: store and retrieve information using HSQL Database.
wiremock-redis-extension does something similar using Redis.
An alternative to these approaches is to create the mappings/data just before the test starts, for example by generating all the responses beforehand and then using a templated bodyFileName to retrieve the just-created file. Another way of achieving the same result is to use the Admin API to create the mappings directly.
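As an illustration of that last option, here is a small sketch using the WireMock Java client, which talks to the running server's Admin API under the hood; the host, port, path and response body are made up for the example:
import static com.github.tomakehurst.wiremock.client.WireMock.*;

// point the client at the standalone WireMock server's admin API
configureFor("localhost", 8080);

// responseGeneratedInSetup stands for whatever JSON you produced before the test
String responseGeneratedInSetup = "{\"orderId\": 123, \"status\": \"CREATED\"}";

stubFor(get(urlEqualTo("/orders/123"))
        .willReturn(okJson(responseGeneratedInSetup)));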
In JSBin, I don't see an option to add a service worker. Is it possible? Or are there any other options?
I don't think it's possible to put together an example/demo that registers your own service worker using JSBin.
In terms of other options, what I tend to do is use GitHub's Gists to store my HTML and service worker JavaScript, and then use RawGit to serve the resources. RawGit gives you HTTPS plus proper Content-Type headers, both of which are necessary in order to register a service worker.
Here's an example of a Gist that uses this setup.
You need to get the "Raw" URL for your HTML (click on the "Raw" button in the Gist interface), and then paste that URL into https://rawgit.com/. When registering your service worker from your HTML, always use a relative URL (like navigator.serviceWorker.register('sw.js');), and include the code for your service worker in another file that's part of the same Gist.
You'll end up with a URL, served by RawGit, that lets you access your HTML and register and use your service worker file.
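To make that concrete, here is a minimal sketch of the two files you would put in the Gist (the file names are just examples). In the HTML:
<script>
  if ('serviceWorker' in navigator) {
    // relative URL, so it resolves against the RawGit URL of this page
    navigator.serviceWorker.register('sw.js')
      .then(function (registration) {
        console.log('service worker registered with scope', registration.scope);
      })
      .catch(function (error) {
        console.error('service worker registration failed', error);
      });
  }
</script>
And in sw.js, in the same Gist:
self.addEventListener('install', function () {
  console.log('service worker installed');
});

self.addEventListener('fetch', function (event) {
  // pass-through handler; replace with caching logic as needed
  event.respondWith(fetch(event.request));
});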
I am trying to stream a file from a remote storage service (not s3 :-)) to the client using Ruby on Rails 4.2.
My server needs to stay in the middle: it authenticates the client request and also builds the request to the remote storage service, since all requests to that service must be authenticated using a custom header parameter. This makes it impossible to do a simple redirect_to and let the client download the file directly (but do let me know if this IS in fact possible using Rails!). I also want to keep the file's URL cloaked from the client.
Up until now I have been using a gem called ZipLine, but this does not work either, as it still buffers the remote file before sending it to the client. Since I am using Unicorn/nginx, this might also be due to a setting in either of those two that prevents proper streaming.
As per the Rails docs' instructions, I have tried adding
listen 3000, tcp_nopush: false
to config/unicorn.rb but to no avail.
A solution might be to cache the remote file locally for a certain period and just serve that file. This would make some things easier, but it would also create new headaches, like keeping the remote and cached files in sync, setting the right triggers for cache expiration, etc.
So to sum up:
1) How do I accomplish the scenario above?
2) If this is not an intelligent/efficient way of doing things, should I just cache a copy of the remote file?
3) What are your experiences/recommendations in the given scenario?
I have come across various solutions scattered around the interweb but none inspire a complete solution.
Thanks!
I am assuming the third-party storage service is accessible over HTTP. If you considered using redirect_to, I assume the service also provides a means of per-download authorization, like a unique, expiring key in a header that does not expose your secret API keys, or an HMAC-signed URL with an expiration time as a parameter.
Anyhow, most cloud storage services provide this kind of file access. I would highly recommend letting the service stream the file: your app should simply authorize the user and redirect to the service. Rails allows you to add custom headers while redirecting; this is discussed in the Rails guides:
10.2.1 Setting Custom Headers
If you want to set custom headers for a response then response.headers is the place to do it. The headers attribute is a hash which maps header names to their values, and Rails will set some of them automatically. If you want to add or change a header, just assign it to response.headers.
So your action code would end up being something like this:
def download
  # do_auth_check  # verify the client is allowed to fetch this file
  response.headers["Your-API-Auth-Key"] = "SOME-RANDOM-STRING"
  redirect_to url  # url points at the storage service
end
Don't use up unnecessary server resources by streaming all those downloads through your app. We are paying the cloud services to do that, after all :)
How can I construct an artificial request to log in to Twitter, or any site for that matter that accepts POST forms?
What I've been trying is to extract the headers and POST request parameters from the original request (directed at the action attribute of the form) and copy them to the outgoing URL object that I am making, but it just won't work.
And I am aware of the APIs and I don't want to use them; I am trying this in order to write a web proxy site.
I don't fully understand your question (e.g. "aware of the APIs and I don't want to use them"), but urllib may be useful, particularly urllib.FancyURLopener(...).
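For example, a minimal sketch of an artificial form POST using urllib.request (the Python 3 successor to the urllib interfaces mentioned above); the URL and the form field names are placeholders you would take from the real login form:
import urllib.parse
import urllib.request
from http.cookiejar import CookieJar

# keep cookies between requests so the session survives the login
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(CookieJar()))

# form field names are assumptions; inspect the form's input elements for the real ones
form = urllib.parse.urlencode({
    "username": "myuser",
    "password": "secret",
}).encode("utf-8")

request = urllib.request.Request(
    "https://example.com/login",   # the form's action URL
    data=form,                     # supplying data makes this a POST
    headers={"User-Agent": "Mozilla/5.0"},
)

with opener.open(request) as response:
    print(response.getcode(), response.geturl())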
Are you looking for libcurl?
It's a library that allows you to interact with servers using a number of different protocols, including HTTP. So, for instance, you can simulate POST or GET requests.
You can use it as a command-line tool or as a library from many languages (PHP, C, etc.).
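For example, a form login could be simulated from the command line roughly like this (the URL and field names are placeholders; -c stores any session cookies the site sends back, and -d makes curl send a form-encoded POST):
curl -L -c cookies.txt -d 'username=myuser' -d 'password=secret' https://example.com/login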