I want to use the Swagger editor to test my REST service deployed with Grizzly.
My service runs on a different port (8081) than the Swagger editor (8080).
How can I tell the editor (local or online) to use another port?
Thanks
Found the answer, haven't tested it yet:
Under the swagger object, there's a fixed field called host:
The host (name or ip) serving the API. This MUST be the host only and does not include the scheme nor sub-paths. It MAY include a port. If the host is not included, the host serving the documentation is to be used (including the port). The host does not support path templating.
taken from:
https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md
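For example, a minimal sketch of a spec with the host set (assuming the Grizzly service is reachable on localhost:8081; the title and the empty paths object are placeholders):

swagger: "2.0"
info:
  title: My Grizzly service     # placeholder
  version: "1.0"
host: localhost:8081            # host (and optional port) only -- no scheme, no sub-paths
basePath: /
schemes:
  - http
paths: {}

With that in place, requests made from the editor go to localhost:8081 instead of the editor's own host and port.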
The isolated host name is not displaying in the FILE adapter handlers list
When I try to add a new send/receive handler to an Isolated Host, I can't see the isolated host name in the FILE adapter options.
Any suggestions about what I should do?
Isolated Host instances should only be used for Receive Locations that run in IIS, so the fact that it is not allowing you to configure one for FILE is correct behavior.
You should not be trying to run the FILE adapter on an Isolated Host; you need to use an In-Process host.
I am using the ActiveMQ extension of AppDynamics. It is good to start with. With JMXRemote (enabled in artemis.profile) it is OK, but I want it to work from localhost. JMX is enabled by default for localhost in AMQ. The AMQ management console uses JMX internally, and it works without JMXRemote enabled. What service URL does Jolokia use internally to connect over JMX from localhost? I have tried the following URL:
serviceUrl: "service:jmx:rmi:///jndi/rmi://:1099/jmxrmi"
The first step is to add a username and password in the etc/users.properties file. For most purposes, it is ok to just
use the default settings provided out of the box. For this, just uncomment the following line:
admin=admin,admin,manager,viewer,Operator, Maintainer, Deployer, Auditor, Administrator, SuperUser
Then, you must bypass credential checks on BrokerViewMBean by adding it to the whitelist ACL configuration. You can do so by replacing this line:
org.apache.activemq.Broker;getBrokerVersion=bypass
with this:
org.apache.activemq.Broker=bypass
In addition to being the correct way, this also lets you adjust several configuration options (e.g. port, listen address, etc.) simply by editing the file org.apache.karaf.management.cfg in the broker's etc directory.
Please keep in mind that JMX access in this case goes through a different JMX connector root: karaf-root instead of the jmxrmi used by the older method. It also uses port 1099 by default, instead of 1616.
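For reference, the relevant entries in etc/org.apache.karaf.management.cfg look roughly like the following (key names and defaults vary between Karaf versions, so treat this as a sketch):

# etc/org.apache.karaf.management.cfg -- defaults shown, adjust as needed
rmiRegistryPort = 1099
rmiServerPort = 44444
jmxRealm = karaf
serviceUrl = service:jmx:rmi://0.0.0.0:${rmiServerPort}/jndi/rmi://0.0.0.0:${rmiRegistryPort}/karaf-root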
Therefore, the uri should be
service:jmx:rmi:///jndi/rmi://<host>:<port>/karaf-root
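To sanity-check that URI and the credentials outside of AppDynamics, a minimal JMX client sketch in Java could look like this (the localhost/1099 address and the admin/admin credentials are assumptions based on the defaults above):

import java.util.HashMap;
import java.util.Map;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class KarafJmxClient {
    public static void main(String[] args) throws Exception {
        // karaf-root connector on the default RMI registry port 1099
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root");

        // credentials from etc/users.properties (admin/admin by default)
        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] { "admin", "admin" });

        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            // simple sanity check that the connection works
            System.out.println("Default JMX domain: "
                    + connector.getMBeanServerConnection().getDefaultDomain());
        }
    }
}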
I have two Express web apps (server and client) that I deploy to Docker Swarm using docker-compose and/or docker stack. They both have APIs that communicate with each other via their service names, as they are both connected to the same overlay network. A snippet of the config file that client uses to make REST calls to server follows:
"server": {
"url":"http://server:8085",
"endpoints": {
"devices": "/devices",
"temperature": "/temperature",
"mock": "/mock"
}
}
Finding the server by host name is no issue from the Node side, as it runs directly inside the Docker container. However, both Express apps serve web pages. The CSS and JS dependencies of client and server are almost identical, and I do not want to write each stylesheet twice. I'd rather serve a single copy from server that both index.html files (server's and client's) can use.
In server's index.html I can use relative paths because the host is the same, and thus implied. But in client's index.html I need a fully qualified URL, something like:
<link rel="stylesheet" href="http://server:8085/style.css">
Obviously this would not work once I serve index.html from client to a browser, because the browser is going to look for http://server over the internet rather than on the Docker overlay network.
I thought about downloading the files in client's Node app before it serves index.html, but that's not the cleanest solution.
Is there an elegant way to accomplish this without binding server to a static ip / domain or programmatically downloading these files first?
If your external users' browsers need to access files on client and server, then you will need to publish both Swarm services on the external IPs of the Swarm nodes, put those IPs behind DNS names or an external LB, and only use those URLs for remote connectivity.
When you do that, you'll likely need to bind both services to the same port (443). If that's the case, then you also need another layer of proxy that routes traffic to the proper container based on path or DNS name.
Both http://proxy.dockerflow.com/ and https://traefik.io/ work for that purpose.
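Neither of those tools is shown here, but as an illustration of that routing layer, a rough sketch of path-based routing with a plain nginx reverse proxy attached to the same overlay network could look like this (the domain, certificate paths, /assets prefix, and client port 3000 are all assumptions):

server {
    listen 443 ssl;
    server_name app.example.com;                   # hypothetical external DNS name

    ssl_certificate     /etc/nginx/certs/app.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/app.key;

    # /assets/* is forwarded to the "server" service; the trailing slash on
    # proxy_pass strips the /assets/ prefix, so /assets/style.css maps to
    # http://server:8085/style.css
    location /assets/ {
        proxy_pass http://server:8085/;
    }

    # everything else goes to the "client" app
    location / {
        proxy_pass http://client:3000/;
    }
}

With a routing layer like that in front, client's index.html can reference the stylesheet with a same-origin path such as /assets/style.css instead of a fully qualified http://server URL.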
By default, Rails uses localhost:3000 in development mode. This is not written in any of the project's config files. I am currently trying to edit the ./config/environments/development.rb file to use CORS.
There is a host_and_port method which may be used in controllers to get the HTTP request's HOST value as defined in its header (correct me if I am wrong).
I can write my host:port in the config files manually and change it whenever my development host and port change... But I want to reconfigure my development environment as rarely as possible, so I need to access the host and port configuration from within the config files.
So... how do I access my HOST and PORT in config files?
I have deployed an ASP.NET MVC app on server x. I added the site name to the hosts file:
127.0.0.1 weeral.com
Also, in IIS 7 I have added a site binding with the host name weeral.com.
When I hit http://weeral.com it responds fine on the server.
However when I ping weeral.com from a different machine in the network it goes:
Ping request could not find host weeral.com. Please check the name...
What am I doing wrong?
The other machine doesn't have the same hosts file entry.
You need to map weeral.com to server x's network IP in the hosts file on every one of the machines you use; 127.0.0.1 only resolves to the machine itself, so the other machines need server x's actual address. If you used Dropbox (or something similar) you could symbolically link each machine's hosts file to the one in your Dropbox. I've done this for other config files, so I would think it would work for the hosts file.
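For example, if server x's LAN address were 192.168.1.20 (a made-up address), each machine's hosts file (C:\Windows\System32\drivers\etc\hosts on Windows) would need a line like:

192.168.1.20    weeral.com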