I have the docker-compose file below:
version: '3.1'
services:
  generator:
    image: my-registry:55000/gen:ci-8
    ports:
      - "8080:80"
  mail:
    image: mailhog/mailhog
    ports:
      - "8025:8025"
  integration:
    image: my-registry:55000/gen:integration-9
    build: .
From the integration service, I call the generator service like this:
public const string GeneratorApiRoot = "http://generator:80";

var client = new HttpClient();
var sendEmail = new HttpRequestMessage
{
    Method = HttpMethod.Post,
    RequestUri = new Uri($"{GeneratorApiRoot}/EmailRandomNames")
};
using (var response = await client.SendAsync(sendEmail))
{
    response.EnsureSuccessStatusCode();
}
The call returns a 404 Not Found status code. But when I access http://localhost:8080/EmailRandomNames from the host, I get a 200 status code.
What am I doing wrong?
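For what it's worth, here is a small diagnostic variation of the same call (just a sketch that reuses the GeneratorApiRoot constant above, nothing new in the configuration) that logs the status code and body of whatever answered, which can help show which application actually produced the 404:

var client = new HttpClient();
var sendEmail = new HttpRequestMessage
{
    Method = HttpMethod.Post,
    RequestUri = new Uri($"{GeneratorApiRoot}/EmailRandomNames")
};

using (var response = await client.SendAsync(sendEmail))
{
    // Instead of throwing immediately, inspect what actually came back.
    Console.WriteLine($"Status: {(int)response.StatusCode} from {response.RequestMessage?.RequestUri}");
    Console.WriteLine(await response.Content.ReadAsStringAsync());
}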
Related
I have a working dotnet application that I can run locally, and the same code runs in an Azure web app. I have been able to containerize it. However, when I run it in the container it fails to read the environment variables:
Code to get/check environment variable in the controller:
public ReportController(ILogger<ReportController> logger, IConfiguration iconfig)
{
    _logger = logger;
    _config = iconfig;
    _storageConnString = Environment.GetEnvironmentVariable("AzureWebJobsStorage");
    _containerName = Environment.GetEnvironmentVariable("ReportContainer");
    string CredentialConnectionString = Environment.GetEnvironmentVariable("CredentialConnectionString");
    if (CredentialConnectionString == null)
    {
        throw new Exception("Credential connection string is null");
    }
}
Code at startup:
public static void Main(string[] args)
{
    CreateHostBuilder(args).Build().Run();
}

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        })
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            config.AddEnvironmentVariables();
        });
My docker-compose file that sets the variables:
services:
  myreports:
    image: myreports
    build:
      context: .
      dockerfile: myreports/Dockerfile
    ports: [5000:5000]
    environment:
      - "APPSETTINGS_AzureWebJobsStorage = DefaultEndpointsProtocol=https;AccountName=mystorage;AccountKey=xxxx+xx/xx==;EndpointSuffix=core.windows.net"
      - "APPSETTINGS_HarmonyConnectionString = Data Source=mydb.database.windows.net;AttachDbFilename=;Initial Catalog=Harmony;Integrated Security=False;Persist Security Info=False;User ID=sqlreporter;Password=mypass"
      - "APPSETTINGS_CredentialConnectionString = Data Source=mydb.database.windows.net;AttachDbFilename=;Initial Catalog=Credential;Integrated Security=False;Persist Security Info=False;User ID=sqlreporter;Password=mypass"
      - "CredentialConnectionString = Data Source=mydb.database.windows.net;AttachDbFilename=;Initial Catalog=Credential;Integrated Security=False;Persist Security Info=False;User ID=sqlreporter;Password=mypass"
      - "APPSETTINGS_ReportContainer = taxdocuments"
As you can see, I'm attempting it both with and without the APPSETTINGS_ prefix, but when I hit the port on the app, the container returns:
myreports-1 | System.Exception: Credential connection string is null
The code works fine in the app service, where it does get the variables.
You don't need to add APPSETTINGS_ in front of the variable names. What's causing the issue is the spaces around the equals sign in your docker-compose file. The quotes are not needed, so I'd remove them.
This should work
services:
  myreports:
    image: myreports
    build:
      context: .
      dockerfile: myreports/Dockerfile
    ports: [5000:5000]
    environment:
      - AzureWebJobsStorage=DefaultEndpointsProtocol=https;AccountName=mystorage;AccountKey=xxxx+xx/xx==;EndpointSuffix=core.windows.net
      - HarmonyConnectionString=Data Source=mydb.database.windows.net;AttachDbFilename=;Initial Catalog=Harmony;Integrated Security=False;Persist Security Info=False;User ID=sqlreporter;Password=mypass
      - CredentialConnectionString=Data Source=mydb.database.windows.net;AttachDbFilename=;Initial Catalog=Credential;Integrated Security=False;Persist Security Info=False;User ID=sqlreporter;Password=mypass
      - ReportContainer=taxdocuments
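With that in place, both lookup styles already shown in the question should return the value. A minimal sketch (the ReportSettings class name is made up for illustration), assuming ASP.NET Core's default configuration, which includes the environment variables provider:

using System;
using Microsoft.Extensions.Configuration;

public class ReportSettings
{
    public ReportSettings(IConfiguration config)
    {
        // Direct environment lookup, as in the question's controller:
        var fromEnvironment = Environment.GetEnvironmentVariable("CredentialConnectionString");

        // Equivalent lookup through IConfiguration, which also sees environment variables
        // because CreateDefaultBuilder/AddEnvironmentVariables registers that provider:
        var fromConfiguration = config["CredentialConnectionString"];

        if (fromEnvironment == null && fromConfiguration == null)
        {
            throw new Exception("Credential connection string is null");
        }
    }
}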
I have two .NET Core web APIs running as Docker containers. Using NetMQ, I want one of them to send messages and the other to listen for them. But I think I have some problem connecting the two over a TCP connection.
I use Docker Compose.
version: '3.4'
services:
  gateway:
    image: ${DOCKER_REGISTRY-}gateway
    build:
      context: .
      dockerfile: Gateway/Dockerfile
    ports:
      - 5000:80
  pictureperfect:
    image: ${DOCKER_REGISTRY-}pictureperfect
    build:
      context: .
      dockerfile: PicturePerfect/Dockerfile
    ports:
      - 5001:80
      - 5002:5002
I want this one to send the message:
private void Send() {
    using (var requester = new RequestSocket("tcp://0.0.0.0:5002")) {
        try {
            requester.SendFrame("message from pp");
            Console.WriteLine(requester.ReceiveFrameString());
        } catch (Exception) {
            throw;
        }
    }
}
This is the one that I want to listen:
public static void Main(string[] args) {
    CreateHostBuilder(args).Build().Run();
    using (var responder = new ResponseSocket()) {
        responder.Bind("tcp://pictureperfect:5002");
        while (true) {
            Console.WriteLine(responder.ReceiveFrameString());
            Thread.Sleep(1000);
            responder.SendFrame("message from gateway");
        }
    }
}
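For comparison, here is a minimal NetMQ request/reply sketch, assuming the listener runs inside the pictureperfect container (the service that publishes port 5002) and the sender runs inside the gateway container and reaches it through the Compose service name. This is only an illustration of the bind/connect pattern, not the poster's code:

using NetMQ;
using NetMQ.Sockets;

public static class NetMqSketch
{
    // Listener side (inside the pictureperfect container): bind to all local interfaces.
    public static void Listen()
    {
        using (var responder = new ResponseSocket())
        {
            responder.Bind("tcp://*:5002");
            while (true)
            {
                var request = responder.ReceiveFrameString();
                responder.SendFrame("reply from pictureperfect");
            }
        }
    }

    // Sender side (inside the gateway container): connect via the Compose service name.
    public static void Send()
    {
        using (var requester = new RequestSocket())
        {
            requester.Connect("tcp://pictureperfect:5002");
            requester.SendFrame("message from gateway");
            var reply = requester.ReceiveFrameString();
        }
    }
}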
My goal is to seed users when the database is created.
I'm using IdentityServer4 with Npgsql and docker-compose.
The current behavior creates the database as well as the IdentityServer user-manager tables (AspNetUsers, AspNetUserTokens, AspNetUserRoles, etc.), so I know it is migrating that data to the database. But it skips the task of running the user seed because it throws a password exception:
Npgsql.NpgsqlException (0x80004005): No password has been provided but the backend requires one (in MD5)
Here's the code in my Program.cs.
public static void Main(string[] args)
{
    var host = CreateHostBuilder(args).Build();
    using (var scope = host.Services.CreateScope())
    {
        var services = scope.ServiceProvider;
        try
        {
            var userManager = services.GetRequiredService<UserManager<User>>();
            var roleManager = services.GetRequiredService<RoleManager<IdentityRole>>();
            var context = services.GetRequiredService<ApplicationDbContext>();
            context.Database.Migrate(); // ERROR HAPPENS HERE
            Task.Run(async () => await UserAndRoleSeeder.SeedUsersAndRoles(roleManager, userManager)).Wait(); // I NEED THIS TO RUN
        }
        catch (Exception ex)
        {
            var logger = services.GetRequiredService<ILogger<Program>>();
            logger.LogError(ex, "Error has occured while migrating to the database.");
        }
    }
    host.Run();
}
Here is the code where it gets the connection string in Startup.cs:
services.AddDbContext<ApplicationDbContext>(options =>
{
    options.UseNpgsql(Configuration.GetConnectionString("DefaultConnection"),
        b =>
        {
            b.MigrationsAssembly("GLFManager.App");
        });
});
If I set a breakpoint here, it shows that the connection string is obtained along with the user id and password. I verified the password was correct; otherwise I don't think it would create the IdentityServer user-manager tables in the first place.
Here is my appsettings.json file where the connection string lives:
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*",
  "ConnectionStrings": {
    "DefaultConnection": "Host=localhost;Port=33010;Database=glfdb;User Id=devdbuser;Password=devdbpassword"
  }
}
I'm thinking some configuration in the docker-compose file is not registering. This is the docker-compose file:
version: '3.4'
services:
  glfmanager.api:
    image: ${DOCKER_REGISTRY-}glfmanagerapi
    container_name: "glfmanager.api"
    build:
      context: .
      dockerfile: GLFManager.Api/Dockerfile
    ports:
      - "33000:80"
      - "33001:443"
    environment:
      - ConnectionStrings__DefaultConnection=Server=glfmanager.db;Database=glfdb;User Id=devdbuser:password=devdbpassword;
      - Identity_Authority=http://glfmanager.auth
    volumes:
      - .:/usr/src/app
    depends_on:
      - "glfmanager.db"
  glfmanager.auth:
    image: ${DOCKER_REGISTRY-}glfmanagerauth
    container_name: "glfmanager.auth"
    build:
      context: .
      dockerfile: GLFManager.Auth/Dockerfile
    ports:
      - "33005:80"
      - "33006:443"
    environment:
      - ConnectionStrings__DefaultConnection=Server=glfmanager.db;Database=glfdb;User Id=devdbuser:password=devdbpassword;
    volumes:
      - .:/usr/src/app
    depends_on:
      - "glfmanager.db"
  glfmanager.db:
    restart: on-failure
    image: "mdillon/postgis:11"
    container_name: "glfmanager.db"
    environment:
      - POSTGRES_USER=devdbuser
      - POSTGRES_DB=glfdb
      - POSTGRES_PASSWORD=devdbpassword
    volumes:
      - glfmanager-db:/var/lib/postresql/data
    ports:
      - "33010:5432"
volumes:
  glfmanager-db:
I used this code from a class I took on backend development, and it is identical to the project I built there, which works. So I'm stumped as to why this is giving me that password error.
Found the problem: I used a ':' instead of a ';' between User Id and Password in my docker-compose file.
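In other words, the environment entry should end up looking like this (only the separator changes; the credentials are exactly as in the question):

environment:
  - ConnectionStrings__DefaultConnection=Server=glfmanager.db;Database=glfdb;User Id=devdbuser;Password=devdbpassword;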
I am working on a project where we would like to use IdentityServer4 as a token server and have other services authenticate against it. My dev environment is Windows, using Docker with Linux containers. I configured IdentityServer and it's working, and I configured the API client and it's working, but when I configured the MVC client to authenticate, it failed to reach the token server through Docker. OK, I realized that Docker works with external/internal ports, so I configured the API and MVC clients this way.
MVC Client
services.AddAuthentication(opts =>
{
    opts.DefaultScheme = "Cookies";
    opts.DefaultChallengeScheme = "oidc";
})
.AddCookie("Cookies", opts =>
{
    opts.SessionStore = new MemoryCacheTicketStore(
        configuration.GetValue<int>("AppSettings:SessionTimeout"));
})
.AddOpenIdConnect("oidc", opts =>
{
    opts.ResponseType = "code id_token";
    opts.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    opts.ClientId = "Mvc.Application";
    opts.ClientSecret = "Secret.Mvc.Application";
    opts.Authority = "http://authorization.server/";
    //opts.Authority = "http://localhost:5001/";
    //opts.MetadataAddress = "http://authorization.server/";
    opts.UsePkce = true;
    opts.SaveTokens = true;
    opts.RequireHttpsMetadata = false;
    opts.GetClaimsFromUserInfoEndpoint = true;
    opts.Scope.Add("offline_access");
    opts.Scope.Add("Services.Business");
    opts.ClaimActions.MapJsonKey("website", "website");
});
This part is working, because document discovery works. However, the browser fails to access the http://authorization.server URL, because it's an internal container address, not one that is externally accessible through the web browser. So I tried to set two different URLs: MetadataAddress, from which the OpenID discovery document should be fetched, and Authority, to which all unauthorized requests are redirected. However, when I set both MetadataAddress and Authority in OpenIdConnectOptions when calling AddOpenIdConnect, it uses MetadataAddress instead of Authority. I checked the logs: discovery of the document is successful, because I'm hitting http://authorization.server/.well-known..., but it also initiates the authentication request to IdentityServer with the same URL, http://authorization.server/connect...
Api Client
services.AddAuthorization()
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddIdentityServerAuthentication(opts =>
    {
        opts.RequireHttpsMetadata = false;
        opts.ApiName = "Api.Services.Business";
        opts.ApiSecret = "Secret.Api.Services.Business";
        opts.Authority = "http://authorization.server/";
    });
This is working fine using the internal container address.
IdentityServer configuration
services.AddIdentityServer(opt =>
{
    opt.IssuerUri = "http://authorization.server/";
})
.AddAspNetIdentity<User>()
.AddSigningCredential(Certificate.Get())
.AddProfileService<IdentityProfileService>()
.AddInMemoryApiResources(Configuration.ApiResources())
.AddInMemoryIdentityResources(Configuration.IdentityResources())
.AddInMemoryClients(Configuration.Clients());
Configuration.cs
public static IEnumerable<Client> Clients(string redirectUri, string allowedCorsOrigins)
{
    return new List<Client>
    {
        new Client
        {
            ClientId = "Services.Business",
            ClientName = "Api Business",
            AllowedGrantTypes = GrantTypes.ResourceOwnerPassword,
            AllowedScopes =
            {
                "Services.Business"
            },
            ClientSecrets =
            {
                new Secret("Secret.Services.Business".Sha256())
            }
        },
        new Client
        {
            ClientId = "Mvc.Application",
            ClientName = "Mvc Application",
            RequireConsent = false,
            AllowOfflineAccess = true,
            AllowedGrantTypes = GrantTypes.Hybrid,
            AllowedScopes =
            {
                "Services.Business",
                IdentityServerConstants.StandardScopes.OpenId,
                IdentityServerConstants.StandardScopes.Profile
            },
            ClientSecrets =
            {
                new Secret("Secret.Mvc.Application".Sha256())
            },
            RedirectUris =
            {
                $"{redirectUri}/signin-oidc"
            },
            PostLogoutRedirectUris =
            {
                $"{redirectUri}/signout-callback-oidc"
            }
        }
    };
}
Docker-compose.yml
version: '3.4'
networks:
  fmnetwork:
    driver: bridge
services:
  authorization.server:
    image: authorization.server
    container_name: svc.authorization.server
    build:
      context: .
      dockerfile: Authorization.Server/Dockerfile
    ports:
      - "5000:80"
      - "5100:443"
    environment:
      ASPNETCORE_HTTPS_PORT: 5100
      ASPNETCORE_ENVIRONMENT: Staging
      ASPNETCORE_URLS: "https://+;http://+"
      ASPNETCORE_Kestrel__Certificates__Default__Password: "devcertaspnet"
      ASPNETCORE_Kestrel__Certificates__Default__Path: /root/.dotnet/https/aspnetapp.pfx
    depends_on:
      - sql.server
    volumes:
      - D:\Docker\Data\Fm:/root/.dotnet/https
      - D:\Docker\Data\Fm\Logs:/Fm.Logs
    networks:
      - fmnetwork
  services.business:
    image: services.business
    container_name: api.services.business
    build:
      context: .
      dockerfile: Services.Business/Dockerfile
    ports:
      - "5001:80"
      - "5101:443"
    environment:
      ASPNETCORE_ENVIRONMENT: Staging
      ASPNETCORE_HTTPS_PORT: 5101
      ASPNETCORE_URLS: "https://+;http://+"
      ASPNETCORE_Kestrel__Certificates__Default__Password: "devcertaspnet"
      ASPNETCORE_Kestrel__Certificates__Default__Path: /root/.dotnet/https/aspnetapp.pfx
    depends_on:
      - sql.server
    volumes:
      - D:\Docker\Data\Fm:/root/.dotnet/https
      - D:\Docker\Data\Fm\Logs:/Fm.Logs
    networks:
      - fmnetwork
  mvc.application:
    image: mvc.application
    container_name: svc.mvc.application
    build:
      context: .
      dockerfile: Mvc.Application/Dockerfile
    ports:
      - "5002:80"
      - "5102:443"
    environment:
      ASPNETCORE_ENVIRONMENT: Staging
      ASPNETCORE_HTTPS_PORT: 5102
      ASPNETCORE_URLS: "https://+;http://+"
      ASPNETCORE_Kestrel__Certificates__Default__Password: "devcertaspnet"
      ASPNETCORE_Kestrel__Certificates__Default__Path: /root/.dotnet/https/aspnetapp.pfx
    volumes:
      - D:\Docker\Data\Fm:/root/.dotnet/https
      - D:\Docker\Data\Fm\Logs:/Fm.Logs
    networks:
      - fmnetwork
I just faced this same issue and was able to solve it as follows.
Some things to keep in mind:
This is not an issue with Identity Server itself but with the mismatch between the internal Docker URL (http://authorization.server) that your container sees and the local host URL (http://localhost:5001) that your browser sees.
You should keep using the local URL for Identity Server (http://localhost:5001) and add a special case to handle the container to container communication.
The following fix is only for development when working with Docker (Docker Compose, Kubernetes), so ideally you should check for the environment (IsDevelopment extension method) so the code is not used in production.
IdentityServer configuration
services.AddIdentityServer(opt =>
{
    if (Environment.IsDevelopment())
    {
        // It is not advisable to override this in production
        opt.IssuerUri = "http://localhost:5001";
    }
})
MVC Client
services.AddAuthentication(... /*Omitted for brevity*/)
    .AddOpenIdConnect("oidc", opts =>
    {
        // Your working, production ready configuration goes here.
        // It is important this matches the local URL of your identity server, not the Docker internal URL.
        opts.Authority = "http://localhost:5001";

        if (Environment.IsDevelopment())
        {
            // This will allow the container to reach the discovery endpoint
            opts.MetadataAddress = "http://authorization.server/.well-known/openid-configuration";
            opts.RequireHttpsMetadata = false;
            opts.Events.OnRedirectToIdentityProvider = context =>
            {
                // Intercept the redirection so the browser navigates to the right URL in your host
                context.ProtocolMessage.IssuerAddress = "http://localhost:5001/connect/authorize";
                return Task.CompletedTask;
            };
        }
    })
You can tweak the code a little bit by passing said URLs via configuration.
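For example, a minimal sketch of that (the configuration keys Identity:PublicAuthority and Identity:InternalMetadataAddress are invented here for illustration; it otherwise mirrors the code above):

// Hypothetical keys; supply them via appsettings.json, environment variables, or docker-compose.
var publicAuthority = Configuration["Identity:PublicAuthority"];          // e.g. http://localhost:5001
var internalMetadata = Configuration["Identity:InternalMetadataAddress"]; // e.g. http://authorization.server/.well-known/openid-configuration

services.AddAuthentication(/* ... */)
    .AddOpenIdConnect("oidc", opts =>
    {
        opts.Authority = publicAuthority;
        if (Environment.IsDevelopment())
        {
            opts.MetadataAddress = internalMetadata;
            opts.RequireHttpsMetadata = false;
            opts.Events.OnRedirectToIdentityProvider = context =>
            {
                context.ProtocolMessage.IssuerAddress = $"{publicAuthority}/connect/authorize";
                return Task.CompletedTask;
            };
        }
    });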
My tests run against a Docker grid using the Selenium Docker images for the hub and Chrome. What I am trying to do is access the Chrome DevTools Protocol in the Chrome node so that I can access/intercept a request. Any help is appreciated.
I was able to get it working locally without Docker, but could not figure out a way to connect to DevTools in the Chrome node of the Docker grid. Below are my docker-compose file and code.
docker compose
version: "3.7"
services:
  selenium_hub_ix:
    container_name: selenium_hub_ix
    image: selenium/hub:latest
    environment:
      SE_OPTS: "-port 4445"
    ports:
      - 4445:4445
  chrome_ix:
    image: selenium/node-chrome-debug:latest
    container_name: chrome_node_ix
    depends_on:
      - selenium_hub_ix
    ports:
      - 5905:5900
      - 5903:5555
      - 9222:9222
    environment:
      - no_proxy=localhost
      - HUB_PORT_4444_TCP_ADDR=selenium_hub_ix
      - HUB_PORT_4444_TCP_PORT=4445
      - NODE_MAX_INSTANCES=5
      - NODE_MAX_SESSION=5
      - TZ=America/Chicago
    volumes:
      - /dev/shm:/dev/shm
Here is sample code showing how I got it working locally without the grid (ChromeDriver on my Mac):
const CDP = require('chrome-remote-interface');
let webDriver = require("selenium-webdriver");

module.exports = {
    async openBrowser() {
        this.driver = await new webDriver.Builder().forBrowser("chrome").build();
        let session = await this.driver.session_;
        let debuggerAddress = await session.caps_.map_.get("goog:chromeOptions").debuggerAddress;
        let AddressString = debuggerAddress.split(":");
        console.log(AddressString);
        try {
            // Connect to the DevTools port reported by the driver.
            const protocol = await CDP({
                port: AddressString[1]
            });
            const { Network, Fetch } = protocol;
            await Fetch.enable({
                patterns: [{
                    urlPattern: "*",
                }]
            });
            await Fetch.requestPaused(async ({ interceptionId, request }) => {
                console.log(request);
            });
        } catch (err) {
            console.log(err.message);
        }
        return this.driver;
    },
};
When running against the grid, I just change the way I build the driver to this:
this.driver = await new webDriver.Builder().usingServer(process.env.SELENIUM_HUB_IP).withCapabilities(webDriver.Capabilities.chrome()).build();
With that I get the port number, but I could not create a CDP session; I get a connection refused error.