I'm using Swashbuckle 5.6.0 in a Web API project.
The configuration says:
c.Schemes(new[] { "https" });
Despite this, when I access the Swagger documentation page for my site, it tries to load http://{swagger-docs-url}/swagger/v1 instead of https, and fails because of mixed content.
What am I doing wrong here?
The c.Schemes(new[] { "https" }); call in the configuration only affects the generated swagger document; it forces the following output:
{
"swagger": "2.0",
"info": {
"version": "V1",
"title": "Swagger_Test"
},
"host": "yourhost.azurewebsites.net",
"schemes": [
"https"
],
Your question is not complete, but I believe what you are looking for is the rootUrl: https://github.com/domaindrivendev/Swashbuckle#rooturl
By default, Swashbuckle detects the URI of the incoming request and uses its scheme and port for {swagger-docs-url}, so if your host runs on HTTP you only get HTTP. You can customize the rootUrl to fix that.
//SwaggerConfig.cs
config.EnableSwagger(c =>
{
c.RootUrl(ResolveBasePath);
c.Schemes(new[] { "http", "https" });
});
internal static string ResolveBasePath(HttpRequestMessage message)
{
//fix for Cloudflare Flexible SSL and localhost test
var scheme = message.RequestUri.Host.IndexOf("localhost") > -1 ? "http://" : "https://";
return scheme + message.RequestUri.Authority;
}
I have a site behind Cloudflare Flexible SSL: the server runs on HTTP, but the inbound connection to Cloudflare is HTTPS, so I need to force the URI scheme to HTTPS unless I'm testing on localhost.
I'm trying to run my ASP.NET 6.0 app in Docker (a Linux container on Windows) and having issues. It runs perfectly fine when I don't configure Kestrel, but whenever I add the code below, I get the error "This site can't be reached: localhost unexpectedly closed the connection."
builder.WebHost.ConfigureKestrel(serverOptions =>
{
serverOptions.Listen(IPAddress.Any, 5005, options =>
{
options.Protocols = HttpProtocols.Http2;
});
serverOptions.Listen(IPAddress.Any, 7173, options =>
{
options.Protocols = HttpProtocols.Http1AndHttp2;
});
});
I'm using port 5005 for gRPC and port 7173 to expose the REST API endpoints. I'm using Visual Studio 2022 and generated the Dockerfile by adding Docker support.
Here are the docker-compose and compose-override YAML files and container snapshots.
I have also tried adding HTTPS support, but no luck.
serverOptions.Listen(IPAddress.Any, 7173, options =>
{
options.Protocols = HttpProtocols.Http1AndHttp2;
options.UseHttps("appname.pfx", "password");
});
Please note: all of the above code works great when I'm not running in Docker.
I think you can configure this in appsettings.json too:
{
"Logging": {
"LogLevel": {
"Default": "Warning",
"Microsoft.Hosting.Lifetime": "Information"
}
},
"AllowedHosts": "*",
"Kestrel": {
"Endpoints": {
"WebApi": {
"Url": "http://localhost:7173",
"Protocols": "Http1"
},
"gRPC": {
"Url": "http://localhost:5005",
"Protocols": "Http2"
}
}
}
}
I had to expose the same ports in the Dockerfile, as pointed out in a comment by @CodingMytra.
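For reference, a minimal sketch of that Dockerfile change, assuming the two ports from the config above (5005 for gRPC, 7173 for REST):

```dockerfile
# Expose the same ports Kestrel listens on; the Dockerfile that Visual Studio
# generates typically only exposes 80/443, so these lines must be added.
EXPOSE 5005
EXPOSE 7173
```

The compose file then needs matching ports: mappings (e.g. "5005:5005" and "7173:7173") so the host can reach the container.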
I have two services, a client and a server. I am using Next.js and React on the client and Express on the server, with a docker-compose file. I need to implement some endpoints on the backend and make requests from the client to the backend using axios.
During development I run docker-compose up. While working on the app I created an address form in the client and wanted to see the results in the browser, but when I submit the form and send the request to the server I get a 404. This is the code in the client that makes the request to the backend:
import axios from 'axios'
const postNewAddress = async (address) => {
return axios.post('/address', address)
.then(function (response) {
console.log(response);
})
.catch(function (error) {
console.log(error);
});
}
export { postNewAddress }
And this is what I currently have on the backend:
const express = require( 'express' );
const app = express();
const port = process.env.PORT || 3001
app.use(express.json())
app.get( '/', ( req, res ) => {
res.send({ greeting: 'Hello world!' });
});
app.post('/address', ( req, res ) => {
const address = req.body
console.log(address)
res.json(address)
})
app.listen(port, err => {
if (err) throw err;
console.log(`Listening on PORT ${port}!`)
})
When I change the URL to http://server:3001/address in the axios request, I get a net::ERR_NAME_NOT_RESOLVED error. I did some research, and that happens because the browser and the Docker containers are running on different networks. But I couldn't find any solution that would allow the browser to make requests to the container.
Here is the gist for docker-compose.yml
Docker compose file
Let's say the address variable, with which the POST request was made, has a single property:
address = { 'id': 123 };
Now, to read that in your backend code, you do something like this:
app.post('/address', ( req, res ) => {
const id = req.body.id
console.log(id)
res.json(id)
})
Browser applications can never use Docker-container hostnames. Even if the application is served from inside Docker, it ultimately runs inside the browser, outside of Docker space.
If this is a development setup, where the backend container and your development environment are on the same machine, you can generally connect to localhost and the published ports: of your container. If your docker-compose.yml declares ports: [3001:3001] for the backend, then you can connect to http://localhost:3001/address.
You can also set this address in the Webpack dev server proxy configuration, so the /address relative URL you have in your code now continues to work.
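A minimal sketch of that proxy entry, assuming webpack-dev-server and a backend published on host port 3001 (the /address route is taken from the question; everything else is illustrative):

```javascript
// webpack.config.js, dev-server section only (3001 is the backend port
// published by docker-compose in the question)
const devServerConfig = {
  devServer: {
    proxy: {
      // forward backend routes to the container's published port so the
      // relative '/address' URL used in the client keeps working
      '/address': {
        target: 'http://localhost:3001',
        changeOrigin: true,
      },
    },
  },
};

module.exports = devServerConfig;
```

With this in place, the browser only ever talks to the dev server's origin, and the dev server forwards backend requests over the published port.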
As per the Saturn docs, to enable HSTS in Saturn you need to specify force_ssl in the application block:
application {
url ("http://0.0.0.0:" + port.ToString() + "/")
force_ssl
...
}
This works for the deployed version of the site, but it breaks local development: the server does not return responses, and the log only says Request redirected to HTTPS.
Is it possible to force SSL and keep local development convenient at the same time?
The SAFE stack assumes the use of webpack and webpack-dev-server, which acts as a proxy in front of the real server, so some adjustments are needed there as well.
The webpack config should now have https in the target of the proxy section:
devServer: {
proxy: {
'/api/*': {
target: 'https://localhost:<port>',
...
},
...
},
...
},
This is not enough: as per the docs, to avoid security exceptions you also need to unset the secure flag:
devServer: {
proxy: {
'/api/*': {
target: 'https://localhost:<port>',
secure: false,
...
},
...
},
...
},
And the last thing is to modify the server application accordingly:
application {
url ("https://0.0.0.0:" + port.ToString() + "/")
force_ssl
...
}
That should do it for both the dev and prod versions of the site.
In Ghost 0.x, config was provided via a single config.js file with keys for each env.
In Ghost 1.0, config is provided via multiple config.json files
How do you provide environment variables in Ghost 1.0?
I would like to dynamically set the port value using process.env.port on Cloud9 IDE like so.
config.development.json
{
"url": "http://localhost",
"server": {
"port": process.env.port,
"host": process.env.IP
}
}
When I run the application using ghost start with the following config, it says You can access your publication at http://localhost:2368. But when I go to http://localhost:2368 on http://c9.io, it gives me an error saying No application seems to be running here!
{
"url": "http://localhost:2368",
"server": {
"port": 2368,
"host": "127.0.0.1"
}
}
I managed to figure out how to do this.
Here is the solution, in case someone else is trying to do the same thing.
In your config.development.json file, add the following:
{
"url": "http://{workspace_name}-{username}.c9users.io:8080",
"server": {
"port": 8080,
"host": "0.0.0.0"
}
}
Alternatively, run the following commands in the terminal. They pick up the host and port from the Cloud9 environment variables and write the same content to the config.development.json file:
ghost config url http://$C9_HOSTNAME:$PORT
ghost config server.port $PORT
ghost config server.host $IP
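Ghost's config layer is built on nconf, which can also read values straight from environment variables, using double underscores for nested keys. A hedged sketch (assuming your shell exports $PORT and $IP, as Cloud9 does):

```shell
# equivalent to setting server.port and server.host in config.development.json
server__port=$PORT server__host=$IP ghost start
```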
I'm trying to get Nightwatch tests to work in Microsoft Edge, but I'm getting an error saying connection refused. What's the right configuration to get the tests to work on Edge? Windows 10, Edge 13.
Could you post your configuration?
Make sure you have installed the Microsoft Edge WebDriver (you can download it here: https://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/).
You have to specify the path to your Edge driver using the webdriver.edge.driver parameter, e.g.:
"selenium": {
"start_process": true,
"server_path": "./node_modules/file_dependencies/selenium-server-standalone.jar",
"log_path": "",
"host": "127.0.0.1",
"port": seleniumPort,
"cli_args": {
"webdriver.chrome.driver": "./node_modules/file_dependencies/chromedriver.exe",
"webdriver.ie.driver": "./node_modules/file_dependencies/IEDriverServer.exe",
"webdriver.edge.driver": "C:/Program Files (x86)/Microsoft Web Driver/MicrosoftWebDriver.exe",
"webdriver.gecko.driver": "./node_modules/file_dependencies/geckodriver.exe",
"webdriver.firefox.profile": ""
}
}
and example capabilities
"edge": {
"desiredCapabilities": {
"browserName": "MicrosoftEdge",
"javascriptEnabled": true,
"acceptSslCerts": true,
"pageLoadStrategy": "eager"
}
}
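With that environment defined, Nightwatch can then be pointed at Edge from the command line (assuming the environment key in your config is named edge, matching the capabilities block above):

```shell
nightwatch --env edge
```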