Right now my OpenAPI 2.0 YAML file has only one host URL:
host: petstore.test.com
basePath: /
Can I use multiple hosts like this?
host1: petstore.test.com
host2: petstore1.test.com
host3: petstore2.dev.com
OpenAPI 2.0 (Swagger 2.0) only supports a single host with multiple schemes (HTTP/HTTPS/etc.), so you can effectively have two hosts that only vary in the scheme:
host: petstore.test.com
schemes:
  - http
  - https
But OpenAPI 3.x supports multiple hosts with different schemes and base paths, using the servers list with server variables (note the keyword is variables, not templates, and a templated URL must be quoted in YAML):
servers:
  - url: https://petstore.prd.com
    description: Production server
  - url: '{scheme}://petstore.dev.com/subpath'
    description: Development server
    variables:
      scheme:
        enum:
          - http
          - https
        default: https
For more examples, see this answer.
It is now possible in OpenAPI 3.0.
Here is a description:
Multiple hosts are supported in OpenAPI 3.0. 2.0 supports only one
host per API specification (or two if you count HTTP and HTTPS as
different hosts). A possible way to target multiple hosts is to omit
the host and scheme from your specification and serve it from each
host. In this case, each copy of the specification will target the
corresponding host.
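As a sketch of that approach, here is an OpenAPI 2.0 document with host and schemes deliberately omitted (the title and path are illustrative); each host that serves this file becomes its own effective target:

```yaml
swagger: "2.0"
info:
  title: Petstore (host-relative)   # illustrative title
  version: "1.0"
# host and schemes omitted on purpose:
# the spec targets whichever host serves this file
basePath: /
paths:
  /pets:
    get:
      responses:
        "200":
          description: A list of pets
```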
I am using Traefik v2 and I would like to write a rule that applies headers when the client IP is a specific one. This rule must apply to all proxied services, and I wonder if I can define it only once in the YAML file.
The rule should be something like:
http:
  routers:
    from-legacy-client:
      entryPoints:
        - web
      rule: ClientIP(`192.168.1.1`)
      middlewares:
        - legacy-headers
  middlewares:
    legacy-headers:
      headers:
        customRequestHeaders:
          X-dco-role: "DCO"
I know it currently does not work because I have not defined the destination service. Is it possible to "wildcard" the involved services, meaning "all services"?
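One possibly relevant approach, assuming Traefik v2.4 or later: middlewares can be attached to an entryPoint in the static configuration, which applies them to every router that uses that entry point. A minimal sketch (the @file suffix assumes the legacy-headers middleware from the question is declared via a file provider):

```yaml
# traefik.yml (static configuration)
entryPoints:
  web:
    address: ":80"
    http:
      middlewares:
        # applied to every router on this entry point
        - legacy-headers@file
```

The ClientIP condition would then need to live inside the middleware's own matching logic (or the headers are simply applied to all requests), since entry-point middlewares do not take a router rule.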
I have a microservice that sends HTTP requests to an external non-dockerized service.
Can anybody point me to a Docker image of a simple web server that I can start as part of my test environment? Ideally, it should be simple to customize (endpoints, ports, etc.) and provide some meaningful logging of the incoming requests.
It depends on your preference for vendor. Here are some to choose from:
Linux:
Nginx
Apache httpd
Microsoft:
IIS
The links to those pages show a few different distributions for each and contain the configuration information.
You can look at my project: https://github.com/mateuszgruszczynski/cinema. It's a very crude and simple setup I use for performance test trainings. It contains a few containers:
cinema-http / cinema-gateway: Scala/Akka-based microservices
frontend: Apache HTTP server + a simple PHP/JS webpage
haproxy: HAProxy as a load balancer
plus some extra containers: postgres, mysql, jenkins, graphite, grafana
When it comes to the Dockerfiles and the Compose file, it strongly depends on which technology you want to use for the HTTP server.
It does not have any extra logging, but that should be easy to add, or maybe the standard Apache HTTP logs will be enough for you.
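As a minimal sketch of the nginx option mentioned above (the service name, host port, and mounted directory are arbitrary choices), the official image already writes an access log for every incoming request to stdout, so no extra logging configuration is needed:

```yaml
# docker-compose.yml - minimal stub web service for integration tests
services:
  stub-api:
    image: nginx:alpine
    ports:
      - "8081:80"   # host port 8081 is an arbitrary choice
    volumes:
      # optional: serve custom endpoints as static files,
      # or mount a custom nginx.conf for fancier routing
      - ./stub-content:/usr/share/nginx/html:ro
```

Incoming requests then show up via `docker compose logs -f stub-api`.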
I'm testing the Realm database using the test application RealmTasks and found out that synchronization with the server doesn't work. Authentication works well, but sync does not. The Realm server is installed on a CentOS 7 server. The default port 9080 is busy, so I changed the Realm server config file:
http:
  enable: true
  listen_address:'0.0.0.0'
  listen_port:6666
network:
  http:
    listen_address:'0.0.0.0'
    listen_port:27080
As a result I can connect to 27080 from outside but cannot connect to port 6666. All ports are open for outside connections. Is it possible that such a configuration prevents the database from synchronizing?
Update
That config file is just wrong, if that's exactly what you have: the YAML is nested incorrectly, because your first http block is not nested under network.
Experimenting with the Mac Developer Edition, here's a minimal working configuration.yml file:
storage:
  root_path: 'root_dir'
auth:
  public_key_path: 'keys/token-signature.pub'
  private_key_path: 'keys/token-signature.key'
proxy:
  http:
    listen_address: '::'
    listen_port: 9666
Important: it seems port numbers are constrained. The [configuration documentation](https://realm.io/docs/realm-object-server/#configuring-the-server) mentions the need to use 1024 or higher, as the server doesn't run as root. I am not sure why I could not get 6666 to work, although that port is supposedly commonly used for IRC; multiple failure messages appear in the Terminal window of the process launching the server with that port.
Earlier questions
Are you telling the RealmTasks app to connect to that port? (Obvious question but I had to ask.)
Please supply logs from the server, or look at them yourself; you can view the logs and adjust the log level in the web dashboard, e.g. at http://localhost:9080/#!/logs
I want to use the Swagger editor to test my REST service deployed with Grizzly.
My service is on a different port (8081) than the Swagger Editor (8080).
How can I tell the editor (local or online) to use another port?
Thanks
Found the answer, haven't tested it yet:
Under the swagger object, there's a fixed field called host:
The host (name or ip) serving the API. This MUST be the host only and
does not include the scheme nor sub-paths. It MAY include a port. If
the host is not included, the host serving the documentation is to be
used (including the port). The host does not support path templating.
taken from:
https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md
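Based on that description, a minimal sketch (the title and port here are illustrative): adding host, including the service's port, to the spec loaded in the editor should point "Try it out" requests at the Grizzly service:

```yaml
swagger: "2.0"
info:
  title: My REST service
  version: "1.0"
host: localhost:8081   # host MAY include a port, per the spec
basePath: /
paths: {}
```

Note that since the editor runs on a different port, the service may also need to allow cross-origin (CORS) requests for the calls to succeed in a browser.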
Dropbox requires the callback URL to be over HTTPS (when not using localhost).
Using Mule 3.6.0 with the latest dropbox connector, the callback defaults to http - thus only working with localhost. For production I need to use https for the OAuth dance.
What is the correct way to specify a https callback URL?
I've tried:
<https:connector name="connector.http.mule.default">
  <https:tls-key-store path="${ssl.certfile}" keyPassword="${ssl.keyPass}" storePassword="${ssl.storePass}"/>
</https:connector>
<dropbox:config name="Dropbox" appKey="${dropbox.appKey}" appSecret="${dropbox.appSecret}" doc:name="Dropbox">
  <dropbox:oauth-callback-config domain="production.mydomain.com" path="callback" />
</dropbox:config>
But it errors:
Endpoint scheme must be compatible with the connector scheme. Connector is: "https", endpoint is "http://production.mydomain.com:8052/callback"
Here's what I ended up with that solved the problem:
<https:connector name="connector.http.mule.default" doc:name="HTTP-HTTPS">
  <https:tls-key-store path="${ssl.certfile}" keyPassword="${ssl.keyPass}" storePassword="${ssl.storePass}"/>
</https:connector>
<dropbox:config name="Dropbox" appKey="${dropbox.appKey}" appSecret="${dropbox.appSecret}" doc:name="Dropbox">
  <dropbox:oauth-callback-config domain="myserver.domain.com" path="callback" connector-ref="connector.http.mule.default" localPort="8052" remotePort="8052"/>
</dropbox:config>
This works great for localhost, but not if you need the callback to go to something other than localhost (e.g. myserver.domain.com)
Reviewing mule.log, you can see that the connector binds to localhost (127.0.0.1) despite the config pointing to:
domain="myserver.domain.com"
Log Entry:
INFO ... Attempting to register service with name: Mule.Ops:type=Endpoint,service="DynamicFlow-https://localhost:8052/callback",connector=connector.http.mule.default,name="endpoint.https.localhost.8052.callback"
INFO ... Registered Endpoint Service with name: Mule.Ops:type=Endpoint,service="DynamicFlow-https://localhost:8052/callback",connector=connector.http.mule.default,name="endpoint.https.localhost.8052.callback"
INFO ... Registered Connector Service with name Mule.Ops:type=Connector,name="connector.http.mule.default.1"
The workaround is to force Mule to listen on 0.0.0.0 for connectors which define localhost as the endpoint.
In wrapper.conf, set:
wrapper.java.additional.x=-Dmule.tcp.bindlocalhosttoalllocalinterfaces=TRUE