I'm using Azurite to run local tests for some functionality that uploads files to Azure Blob Storage. I'm running it with Docker Compose, and I would like to run it on a non-default port for the tests. The configuration I came up with is the following:
storage:
image: mcr.microsoft.com/azure-storage/azurite
environment:
- AZURITE_ACCOUNTS=account:QUJDRA==
ports:
- "10020:10000"
I'm using the following configuration to register the BlobServiceClient service in ASP.NET Core:
services.AddAzureClients(builder =>
{
builder.AddBlobServiceClient(
new Uri("http://localhost:10020/account"),
new StorageSharedKeyCredential("account", "QUJDRA=="));
});
And the code that uploads files is as follows:
public async Task<string> UploadFile(BlobServiceClient blobServiceClient, Stream file)
{
var blobContainerClient = blobServiceClient.GetBlobContainerClient("container");
await blobContainerClient.CreateIfNotExistsAsync(PublicAccessType.BlobContainer);
var blobClient = blobContainerClient.GetBlobClient("blob");
await blobClient.UploadAsync(file);
return blobClient.Uri.ToString();
}
If I run this configuration on the default port (10000), it all works as expected, and I get the following logs from the Azurite container:
storage-1 | 172.21.0.1 - - [20/Jan/2023:11:02:35 +0000] "PUT /account/container?restype=container HTTP/1.1" 409 -
storage-1 | 172.21.0.1 - - [20/Jan/2023:11:02:37 +0000] "PUT /account/container/blob?comp=block&blockid=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA HTTP/1.1" 201 -
storage-1 | 172.21.0.1 - - [20/Jan/2023:11:02:37 +0000] "PUT /account/container/blob?comp=blocklist HTTP/1.1" 201 -
However, if I try to run it on the non-default port (10020), the line that uploads the file, await blobClient.UploadAsync(file), produces the following exception:
Azure.RequestFailedException : Service request failed.
Status: 400 (Bad Request)
storage-1 | 172.25.0.1 - - [20/Jan/2023:11:18:43 +0000] "PUT /account/container?restype=container HTTP/1.1" 201 -
storage-1 | 172.25.0.1 - - [20/Jan/2023:11:18:44 +0000] "PUT /account/blob?comp=block&blockid=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA HTTP/1.1" 400 -
If you look closely at the second line of the logs, which corresponds to the upload of the file, the URL is missing the /container part after the account name. I guess that's the reason for the 400 error.
Why does a change in the port change the URL in this way? Is there any configuration that I'm missing?
The issue comes down to how BlobContainerClient.GetBlobClient() attempts to determine the account name from the URI in Azure.Storage.Blobs (12.14.1) and Azure.Storage.Common (12.13.0).
The GetBlobClient method internally creates an instance of BlobUriBuilder, passing it the container client's URI. The builder then deconstructs that URI, but because the account name is placed differently for Azure Storage (before the domain) and for Azurite (as the first segment of the path), the port is used to decide which style of URI it is dealing with.
The ports whitelisted to identify an Azurite instance are:
10000, 10001, 10002, 10003, 10004,
10100, 10101, 10102, 10103, 10104,
11000, 11001, 11002, 11003, 11004,
11100, 11101, 11102, 11103, 11104
I have not found any documentation that exposes this beyond the source code.
Source: BlobUriBuilder constructor calls uri.IsHostIPEndPointStyle()
...
if (uri.IsHostIPEndPointStyle())
{
_isPathStyleUri = true;
var accountEndIndex = path.IndexOf("/", StringComparison.InvariantCulture);
// Slash not found; path has account name & no container name
if (accountEndIndex == -1)
{
AccountName = path;
startIndex = path.Length;
}
else
{
AccountName = path.Substring(0, accountEndIndex);
startIndex = accountEndIndex + 1;
}
}
else
{
AccountName = uri.GetAccountNameFromDomain(Constants.Blob.UriSubDomain) ?? string.Empty;
}
...
Source: IsHostIPEndPointStyle(), which references Constants.Sas.PathStylePorts
// See remarks at https://docs.microsoft.com/en-us/dotnet/api/system.net.ipaddress.tryparse?view=netframework-4.7.2
/// <summary>
/// Check to see if Uri is using IP Endpoint style.
/// </summary>
/// <param name="uri">The Uri.</param>
/// <returns>True if using IP Endpoint style.</returns>
public static bool IsHostIPEndPointStyle(this Uri uri) =>
(!string.IsNullOrEmpty(uri.Host) &&
uri.Host.IndexOf(".", StringComparison.InvariantCulture) >= 0 &&
IPAddress.TryParse(uri.Host, out _)) ||
Constants.Sas.PathStylePorts.Contains(uri.Port);
Source: Constants.Sas.PathStylePorts, the list of whitelisted ports
/// <summary>
/// List of ports used for path style addressing.
/// Copied from Microsoft.Azure.Storage.Core.Util
/// </summary>
internal static readonly int[] PathStylePorts = { 10000, 10001, 10002, 10003, 10004, 10100, 10101, 10102, 10103, 10104, 11000, 11001, 11002, 11003, 11004, 11100, 11101, 11102, 11103, 11104 };
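To make the parsing difference concrete, here is a minimal sketch (assuming Azure.Storage.Blobs 12.x and the account/URIs from the question; the expected output in the comments is what the quoted source implies, so treat it as illustrative rather than authoritative):
using System;
using Azure.Storage.Blobs;

// Whitelisted port (10000): the URI is treated as path-style, so the first
// path segment is taken as the account name.
var emulatorStyle = new BlobUriBuilder(new Uri("http://localhost:10000/account/container/blob"));
Console.WriteLine($"{emulatorStyle.AccountName} | {emulatorStyle.BlobContainerName} | {emulatorStyle.BlobName}");
// -> account | container | blob

// Non-whitelisted port (10020): the account name is expected in the host, so the
// "account" segment is consumed as the container name and the real container is lost.
var customPort = new BlobUriBuilder(new Uri("http://localhost:10020/account/container/blob"));
Console.WriteLine($"{customPort.AccountName} | {customPort.BlobContainerName} | {customPort.BlobName}");
// ->  | account | container/blob
Given this behaviour, one pragmatic workaround should be to map Azurite to one of the whitelisted host ports in the compose file (for example "11002:10000") and point the BlobServiceClient at that port, so the path-style detection still kicks in.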
I run RabbitMQ through Docker Desktop with the following settings:
rabbitmq:
container_name: rabbitmq
restart: always
ports:
- "5672:5672"
- "15672:15672"
The second port number is for the RabbitMQ dashboard. I have a basic REST API endpoint which is supposed to publish a RabbitMQ message as follows:
private readonly IMediator _mediator;
private readonly IPublishEndpoint _publish;
public FlightController(IMediator mediator, IPublishEndpoint publish)
{
_mediator = mediator;
_publish = publish;
}
[HttpPost(Name = "CheckoutCrew")]
[ProducesResponseType((int)HttpStatusCode.Accepted)]
public async Task<IActionResult> CheckoutCrew([FromBody] ScheduleFlightCommand command)
{
var crewIds = new List<string>() { command.SeniorCrewId, command.Crew1Id, command.Crew2Id, command.Crew3Id };
var hasSchedule = true;
var crewCheckoutEvent = new CrewCheckoutEvent() { EmployeeNumbers = crewIds, HasSchedule = hasSchedule };
await _publish.Publish(crewCheckoutEvent);
return Accepted();
}
And the code below represents the RabbitMQ-related configuration:
services.AddMassTransit(config => {
config.UsingRabbitMq((ctx, cfg) => {
cfg.Host(Configuration["EventBusSettings:HostAddress"]);
cfg.UseHealthCheck(ctx);
});
});
services.AddMassTransitHostedService();
The Configuration["EventBusSettings:HostAddress"] line points to this entry in appsettings.json:
"EventBusSettings": {
"HostAddress": "amqp://guest:guest#localhost:5672"
}
After running my API (named Flight.API), I check the RabbitMQ logs via Docker Desktop and see these:
2022-03-31 12:52:41.794701+00:00 [info] <0.1020.0> accepting AMQP connection <0.1020.0> (xxx.xx.x.x:45292 -> xxx.xx.x.x:5672)
2022-03-31 12:52:41.817563+00:00 [info] <0.1020.0> Connection <0.1020.0> (xxx.xx.x.x:45292 -> xxx.xx.x.x:5672) has a client-provided name: Flight.API
2022-03-31 12:52:41.820704+00:00 [info] <0.1020.0> connection <0.1020.0> (xxx.xx.x.x:45292 -> xxx.xx.x.x:5672 - Flight.API): user 'guest' authenticated and granted access to vhost '/'
Everything seems okay, doesn't it?
I have also wrapped the .Publish method in a try...catch, but it doesn't throw any exceptions. When my endpoint returns Accepted without any issue, I go and check the RabbitMQ dashboard, but it shows Connections: 0, Channels: 0, etc. The Message rates section also stays idle.
I cannot see what I am missing.
(Currently I do not have any consumers, but I should still see some signs of life, right? Those Connections and Channels counters shouldn't stay at 0 after I have successfully published my payload.)
Thank you in advance.
Edit after adding a consumer class
Still no changes on the RabbitMQ management screens. Everything is at its default values, empty, or idle. Below is my configuration in the consumer project:
services.AddMassTransit(config => {
config.AddConsumer<CrewChecoutConsumer>();
config.UsingRabbitMq((ctx, cfg) => {
cfg.Host(Configuration["EventBusSettings:HostAddress"]);
cfg.UseHealthCheck(ctx);
cfg.ReceiveEndpoint(EventBusConstants.CrewCheckoutQueue, config => {
config.ConfigureConsumer<CrewChecoutConsumer>(ctx);
});
});
});
services.AddMassTransitHostedService();
services.AddScoped<CrewChecoutConsumer>();
The appsettings.json file in the consumer project is changed accordingly:
"EventBusSettings": {
"HostAddress": "amqp://guest:guest#localhost:5672"
}
And, below is my complete consumer class:
public class CrewChecoutConsumer : IConsumer<CrewCheckoutEvent>
{
private readonly IMapper _mapper;
private readonly IMediator _mediator;
public CrewChecoutConsumer(IMapper mapper, IMediator mediator)
{
_mapper = mapper;
_mediator = mediator;
}
public async Task Consume(ConsumeContext<CrewCheckoutEvent> context)
{
foreach (var employeeNumber in context.Message.EmployeeNumbers)
{
var query = new GetSingleCrewQuery(employeeNumber);
var crew = await _mediator.Send(query);
crew.HasSchedule = context.Message.HasSchedule;
var updateCrewCommand = new UpdateCrewCommand();
_mapper.Map(crew, updateCrewCommand, typeof(CrewModel), typeof(UpdateCrewCommand));
var result = await _mediator.Send(updateCrewCommand);
}
}
}
If you do not have any consumers, the only thing you will see is a message rate on the published message's exchange, as messages are delivered to the exchange but then discarded because there are no receive endpoints (queues) bound to that message type's exchange.
Until you have a consumer, you won't see any messages in any queues.
Also, you should pass the controller's CancellationToken to the Publish call.
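A minimal sketch of that last point, reusing the controller action from the question (the only change is the extra CancellationToken parameter, which ASP.NET Core binds to the request's token, and forwarding it to Publish):
[HttpPost(Name = "CheckoutCrew")]
[ProducesResponseType((int)HttpStatusCode.Accepted)]
public async Task<IActionResult> CheckoutCrew(
    [FromBody] ScheduleFlightCommand command,
    CancellationToken cancellationToken) // bound to HttpContext.RequestAborted
{
    var crewIds = new List<string>() { command.SeniorCrewId, command.Crew1Id, command.Crew2Id, command.Crew3Id };
    var crewCheckoutEvent = new CrewCheckoutEvent() { EmployeeNumbers = crewIds, HasSchedule = true };

    // Forward the request's token so the publish is abandoned if the client aborts the call.
    await _publish.Publish(crewCheckoutEvent, cancellationToken);

    return Accepted();
}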
I added a Grails filter to redirect URLs with www to the non-www URL. After this change, many errors of the following nature have been triggered.
The change was to add a filter as shown below:
class DomainFilters {
def filters = {
wwwCheck(uri:'/**') {
before = {
if (request.getServerName().toLowerCase().startsWith("www.")) {
int port = request.getServerPort();
if (request.getScheme().equalsIgnoreCase("http") && port == 80) {
port = -1;
}
URL redirectURL = new URL(request.getScheme(), request.getServerName().replaceFirst("www.",""), port, request.forwardURI);
response.setStatus(301)
response.setHeader("Location", redirectURL.toString())
response.flushBuffer()
}
}
}
}
}
The point where the error occurs is
session['products-ids'] = sizes.join(",")
and the error is as follows
ERROR 2021-07-15 13:48:48,478 [ajp-bio-8109-exec-720] errors.GrailsExceptionResolver: IllegalStateException occurred when processing request: [GET] /payment/productsPurchaseSummary/976634
Cannot create a session after the response has been committed. Stacktrace follows:
java.lang.IllegalStateException: Cannot create a session after the response has been committed
at registration.PaymentController.productsPurchaseSummary(PaymentController.groovy:621)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I think the cause is linked to the added filter, but I am not sure what is causing the "cannot create session" error. I appreciate any insights. Thanks!
UPDATE:
In the logs, at the point of the error, is this request, which was 301-redirected because of the filter above.
185.191.171.18 - - [15/Jul/2021:13:48:48 -0600] "GET /payment/productsPurchaseSummary/976634 HTTP/1.1" 301 950
185.191.171.5 - - [15/Jul/2021:13:48:49 -0600] "GET /payment/productsPurchaseSummary/976634 HTTP/1.1" 200 6341
Adding return false seems to have fixed it. I think that without the return, the filter continues on to execute the controller action after the response has already been committed, which is what triggers the error.
class DomainFilters {
def filters = {
wwwCheck(uri:'/**') {
before = {
if (request.getServerName().toLowerCase().startsWith("www.")) {
int port = request.getServerPort();
if (request.getScheme().equalsIgnoreCase("http") && port == 80) {
port = -1;
}
URL redirectURL = new URL(request.getScheme(), request.getServerName().replaceFirst("www.", ""), port, request.forwardURI);
response.setStatus(301)
response.setHeader("Location", redirectURL.toString())
response.flushBuffer()
return false
}
}
}
}
}
I have an Ingress / Terraform / NGINX / Kubernetes setup that has issues with redirecting properly. It currently serves a Vue.js frontend and a .NET Core backend, both of which work online. However, when adding another Vue.js instance, it doesn't seem to redirect properly to said URL.
My Terraform setup:
resource "kubernetes_ingress" "ingress" {
metadata {
name = "ingress"
namespace = var.namespace_name
annotations = {
"nginx.ingress.kubernetes.io/force-ssl-redirect" = true
"nginx.ingress.kubernetes.io/from-to-www-redirect" = true
"nginx.ingress.kubernetes.io/ssl-redirect": true
"kubernetes.io/ingress.class": "nginx"
}
}
spec {
tls {
hosts = [var.domain_name, "*.${var.domain_name}"]
secret_name = "tls-secret"
}
rule {
host = var.domain_name
http {
path {
path = "/"
backend {
service_name = "frontend"
service_port = 80
}
}
path {
path = "/api"
backend {
service_name = "api"
service_port = 80
}
}
path {
path = "/backend/*"
backend {
service_name = "backend"
service_port = 80
}
}
path {
path = "/payment/*"
backend {
service_name = "payment"
service_port = 80
}
}
}
}
}
wait_for_load_balancer = true
}
When running kubectl describe the following is returned
Name: ingress
Namespace: [redacted]
Address: [ip-address]
Default backend: default-http-backend:80 (<none>)
TLS:
tls-secret terminates [url-name],*.[url-name]
Rules:
Host Path Backends
---- ---- --------
[url-name]
/ frontend:80 (10.244.0.97:80)
/api api:80 (10.244.0.121:80)
/backend/ backend:80 (10.244.0.96:80)
/payment/ payment:80 (10.244.0.32:80)
Annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: true
nginx.ingress.kubernetes.io/from-to-www-redirect: true
nginx.ingress.kubernetes.io/ssl-redirect: true
I was thinking I might be missing proxy settings, but I have no idea how to configure that. Furthermore, this entire solution is deployed with CI to DigitalOcean. I've tried various other configurations, such as removing the asterisk in the paths (/backend/), but this didn't change anything.
Adding nginx.ingress.kubernetes.io/rewrite-target: / to the annotations only broke the /api URL and didn't fix the others.
EDIT: adding "kubernetes.io/ingress.class": "nginx" as @Vitalii mentioned unfortunately did not fix the issue. The question has been updated for completeness' sake.
Adding nginx.ingress.kubernetes.io/rewrite-target: / was actually part of the solution. It did break the .NET C# API, which led me to ask a separate question that can be found here. For consistency and future searches' sake, the solution I used was as follows: apart from adding the rewrite-target line to my annotations, I changed the API path from
path {
path = "/api(.*)"
backend {
service_name = "api"
service_port = 80
}
}
into
path {
path = "/(api.*)"
backend {
service_name = "olc-api"
service_port = 80
}
}
With this, /api is matched to my .NET Core app instead of NGINX trying to find the URL within the Vue.js container(s).
I am working on a project where we would like to use IdentityServer4 as a token server and have other services authenticate against it. I have a dev environment on Windows using Docker and Linux containers. I configured IdentityServer and it's working, and I configured the API client and it's working, but when I configured the MVC client to authenticate, it failed to access the token server through Docker. OK, I realized that Docker works with a pair of external/internal ports, so I configured the API and MVC clients this way.
MVC Client
services.AddAuthentication(opts =>
{
opts.DefaultScheme = "Cookies";
opts.DefaultChallengeScheme = "oidc";
})
.AddCookie("Cookies", opts =>
{
opts.SessionStore = new MemoryCacheTicketStore(
configuration.GetValue<int>("AppSettings:SessionTimeout"));
})
.AddOpenIdConnect("oidc", opts =>
{
opts.ResponseType = "code id_token";
opts.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
opts.ClientId = "Mvc.Application";
opts.ClientSecret = "Secret.Mvc.Application";
opts.Authority = "http://authorization.server/";
//opts.Authority = "http://localhost:5001/";
//opts.MetadataAddress = "http://authorization.server/";
opts.UsePkce = true;
opts.SaveTokens = true;
opts.RequireHttpsMetadata = false;
opts.GetClaimsFromUserInfoEndpoint = true;
opts.Scope.Add("offline_access");
opts.Scope.Add("Services.Business");
opts.ClaimActions.MapJsonKey("website", "website");
});
This part is working, because document discovery works. However, it fails to access the http://authorization.server URL, because that is the internal container address, not one accessible externally through the web browser. So I tried to set two different URLs: MetadataAddress, from which the OpenID discovery document should be fetched, and Authority, to which all unauthorized requests are redirected. However, when I set both MetadataAddress and Authority in OpenIdConnectOptions when calling AddOpenIdConnect, it uses MetadataAddress instead of Authority. I checked the logs: discovery of the document is successful, because I'm hitting http://authorization.server/.well-known..., but it also initiates the authentication request to IdentityServer with the same URL, http://authorization.server/connect...
Api Client
services.AddAuthorization()
.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddIdentityServerAuthentication(opts =>
{
opts.RequireHttpsMetadata = false;
opts.ApiName = "Api.Services.Business";
opts.ApiSecret = "Secret.Api.Services.Business";
opts.Authority = "http://authorization.server/";
});
This is working fine using the internal container address.
IdentityServer configuration
services.AddIdentityServer(opt =>
{
opt.IssuerUri = "http://authorization.server/";
})
.AddAspNetIdentity<User>()
.AddSigningCredential(Certificate.Get())
.AddProfileService<IdentityProfileService>()
.AddInMemoryApiResources(Configuration.ApiResources())
.AddInMemoryIdentityResources(Configuration.IdentityResources())
.AddInMemoryClients(Configuration.Clients());
Configuration.cs
public static IEnumerable<Client> Clients(string redirectUri, string allowedCorsOrigins)
{
return new List<Client>
{
new Client
{
ClientId = "Services.Business",
ClientName = "Api Business",
AllowedGrantTypes = GrantTypes.ResourceOwnerPassword,
AllowedScopes =
{
"Services.Business"
},
ClientSecrets =
{
new Secret("Secret.Services.Business".Sha256())
}
},
new Client
{
ClientId = "Mvc.Application",
ClientName = "Mvc Application",
RequireConsent = false,
AllowOfflineAccess = true,
AllowedGrantTypes = GrantTypes.Hybrid,
AllowedScopes =
{
"Services.Business",
IdentityServerConstants.StandardScopes.OpenId,
IdentityServerConstants.StandardScopes.Profile
},
ClientSecrets =
{
new Secret("Secret.Mvc.Application".Sha256())
},
RedirectUris =
{
$"{redirectUri}/signin-oidc"
},
PostLogoutRedirectUris =
{
$"{redirectUri}/signout-callback-oidc"
}
}
};
}
Docker-compose.yml
version: '3.4'
networks:
fmnetwork:
driver: bridge
services:
authorization.server:
image: authorization.server
container_name: svc.authorization.server
build:
context: .
dockerfile: Authorization.Server/Dockerfile
ports:
- "5000:80"
- "5100:443"
environment:
ASPNETCORE_HTTPS_PORT: 5100
ASPNETCORE_ENVIRONMENT: Staging
ASPNETCORE_URLS: "https://+;http://+"
ASPNETCORE_Kestrel__Certificates__Default__Password: "devcertaspnet"
ASPNETCORE_Kestrel__Certificates__Default__Path: /root/.dotnet/https/aspnetapp.pfx
depends_on:
- sql.server
volumes:
- D:\Docker\Data\Fm:/root/.dotnet/https
- D:\Docker\Data\Fm\Logs:/Fm.Logs
networks:
- fmnetwork
services.business:
image: services.business
container_name: api.services.business
build:
context: .
dockerfile: Services.Business/Dockerfile
ports:
- "5001:80"
- "5101:443"
environment:
ASPNETCORE_ENVIRONMENT: Staging
ASPNETCORE_HTTPS_PORT: 5101
ASPNETCORE_URLS: "https://+;http://+"
ASPNETCORE_Kestrel__Certificates__Default__Password: "devcertaspnet"
ASPNETCORE_Kestrel__Certificates__Default__Path: /root/.dotnet/https/aspnetapp.pfx
depends_on:
- sql.server
volumes:
- D:\Docker\Data\Fm:/root/.dotnet/https
- D:\Docker\Data\Fm\Logs:/Fm.Logs
networks:
- fmnetwork
mvc.application:
image: mvc.application
container_name: svc.mvc.application
build:
context: .
dockerfile: Mvc.Application/Dockerfile
ports:
- "5002:80"
- "5102:443"
environment:
ASPNETCORE_ENVIRONMENT: Staging
ASPNETCORE_HTTPS_PORT: 5102
ASPNETCORE_URLS: "https://+;http://+"
ASPNETCORE_Kestrel__Certificates__Default__Password: "devcertaspnet"
ASPNETCORE_Kestrel__Certificates__Default__Path: /root/.dotnet/https/aspnetapp.pfx
volumes:
- D:\Docker\Data\Fm:/root/.dotnet/https
- D:\Docker\Data\Fm\Logs:/Fm.Logs
networks:
- fmnetwork
I just faced this same issue and was able to solve it as follows.
Some things to keep in mind:
This is not an issue with Identity Server itself but with the mismatch between the internal Docker URL (http://authorization.server) that your container sees and the local host URL (http://localhost:5001) that your browser sees.
You should keep using the local URL for Identity Server (http://localhost:5001) and add a special case to handle the container to container communication.
The following fix is only for development when working with Docker (Docker Compose, Kubernetes), so ideally you should check for the environment (IsDevelopment extension method) so the code is not used in production.
IdentityServer configuration
services.AddIdentityServer(opt =>
{
if (Environment.IsDevelopment())
{
// It is not advisable to override this in production
opt.IssuerUri = "http://localhost:5001";
}
})
MVC Client
services.AddAuthentication(... /*Omitted for brevity*/)
.AddOpenIdConnect("oidc", opts =>
{
// Your working, production ready configuration goes here
// It is important this matches the local URL of your identity server, not the Docker internal URL
opts.Authority = "http://localhost:5001";
if (Environment.IsDevelopment())
{
// This will allow the container to reach the discovery endpoint
opts.MetadataAddress = "http://authorization.server/.well-known/openid-configuration";
opts.RequireHttpsMetadata = false;
opts.Events.OnRedirectToIdentityProvider = context =>
{
// Intercept the redirection so the browser navigates to the right URL in your host
context.ProtocolMessage.IssuerAddress = "http://localhost:5001/connect/authorize";
return Task.CompletedTask;
};
}
})
You can tweak the code a little bit by passing said URLs via configuration.
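For example, a minimal sketch of that configuration-driven variant (the "Identity:Authority" and "Identity:MetadataAddress" keys are illustrative names, not part of the original setup):
// appsettings.Development.json (illustrative):
//   "Identity": {
//     "Authority": "http://localhost:5001",
//     "MetadataAddress": "http://authorization.server/.well-known/openid-configuration"
//   }
services.AddAuthentication(/* ... as in the original post ... */)
    .AddOpenIdConnect("oidc", opts =>
    {
        // Browser-facing URL
        opts.Authority = Configuration["Identity:Authority"];

        if (Environment.IsDevelopment())
        {
            // Container-to-container URL for the discovery document
            opts.MetadataAddress = Configuration["Identity:MetadataAddress"];
            opts.RequireHttpsMetadata = false;
            // The OnRedirectToIdentityProvider handler shown above would read its
            // IssuerAddress from configuration in the same way.
        }
    });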
I'm quite new to MassTransit/RabbitMQ and I encountered a problem I cannot deal with.
I have a RabbitMQ server running in Docker, and also a small microservice in a Docker container which consumes an event. Besides this, I run a Windows service on the host machine whose task is to send the event to the microservice via the MassTransit request/response model. The interesting thing is that the event arrives at the consumer as expected, but when I try to respond with context.RespondAsync from the Consume method, I get an exception:
R-FAULT rabbitmq://autbus/exi_bus 80c60000-eca5-3065-0093-08d62a09d168 HwExi.Extensions.Events.ReservationCreateOrUpdateEvent HwExi.Api.Consumers.ReservationCrateOrUpdateConsumer(00:00:07.8902444) The host was not found for the specified address: rabbitmq://127.0.0.1/bus-SI-GEPE-HwService.Api-oddyyy8cwwagkoscbdmnwncfrg?durable=false&autodelete=true, MassTransit.EndpointNotFoundException: The host was not found for the specified address: rabbitmq://127.0.0.1/bus-SI-GEPE-HwService.Api-oddyyy8cwwagkoscbdmnwncfrg?durable=false&autodelete=true
I'm using this model for messaging between microservices without any problem, and it's working properly on another queue.
Here is the YAML for the microservice / bus:
exiapi:
image: exiapi
build:
context: .
dockerfile: Service/HwExi.Api/Dockerfile
ports:
- "54542:80"
environment:
"BUS_USERNAME": "guest"
"BUS_PASSWORD": "guest"
"BUS_HOST": "rabbitmq://autbus"
"BUS_URL": "exi_bus"
autbus:
image: rabbitmq:3-management
hostname: autbus
ports:
- "15672:15672"
- "5672:5672"
- "5671:5671"
volumes:
- ~/rabbitmq:/var/lib/rabbitmq/mnesia
The config of the Windows service:
"Bus": {
"Username": "guest",
"Password": "guest",
"Host": "rabbitmq://127.0.0.1",
"Url": "exi_bus"
},
The Windows service connects like this:
var builder = new ContainerBuilder();
builder.Register(context =>
{
return Bus.Factory.CreateUsingRabbitMq(rmq =>
{
var host = rmq.Host(new Uri(options.Value.Bus.Host), "/", h =>
{
h.Username(options.Value.Bus.Username);
h.Password(options.Value.Bus.Password);
});
rmq.ExchangeType = ExchangeType.Fanout;
});
}).As<IBusControl>().As<IBus>().As<IPublishEndpoint>().SingleInstance();
The microservice inside the container connects like this:
public static class BusExtension
{
public static void InitializeBus(this ContainerBuilder builder, Assembly assembly)
{
builder.Register(context =>
{
return Bus.Factory.CreateUsingRabbitMq(rmq =>
{
var host = rmq.Host(new Uri(Constants.Bus.Host), "/", h =>
{
h.Username(Constants.Bus.UserName);
h.Password(Constants.Bus.Password);
});
rmq.ExchangeType = ExchangeType.Fanout;
rmq.ReceiveEndpoint(host, Constants.Bus.Url, configurator =>
{
configurator.LoadFrom(context);
});
});
}).As<IBusControl>().As<IBus>().As<IPublishEndpoint>().SingleInstance();
builder.RegisterConsumers(assembly);
}
public static void StartBus(this IContainer container, IApplicationLifetime lifeTime)
{
var bus = container.Resolve<IBusControl>();
var busHandler = TaskUtil.Await(() => bus.StartAsync());
lifeTime.ApplicationStopped.Register(() => busHandler.Stop());
}
}
Then the Windows service fires the event like this:
var reservation = ReservationRepository.Get(message.KeyId, message.KeySource);
var operation = await ReservationCreateOrUpdateClient.Request(new ReservationCreateOrUpdateEvent { Reservation = reservation });
if (!operation.Success)
{
Logger.LogError("Fatal error while sending reservation create or update message to exi web service");
return;
}
Finally, the microservice catches the event like this:
public class ReservationCrateOrUpdateConsumer : IConsumer<ReservationCreateOrUpdateEvent>
{
public async Task Consume(ConsumeContext<ReservationCreateOrUpdateEvent> context)
{
await context.RespondAsync(new MessageOperationResult<bool>
{
Result = true,
Success = true
});
}
}
I'm using Autofac to register the request client in the Windows service:
Timeout = TimeSpan.FromSeconds(20);
ServiceAddress = new Uri($"{Configurarion.Bus.Host}/{Configurarion.Bus.Url}");
builder.Register(c => new MessageRequestClient<ReservationCreateOrUpdateEvent, MessageOperationResult<bool>>(c.Resolve<IBus>(), ServiceAddress, Timeout))
.As<IRequestClient<ReservationCreateOrUpdateEvent, MessageOperationResult<bool>>>().SingleInstance();
Can anybody help me debug this? Also, please share your opinion on whether this structure is a proper one: maybe I should use HTTPS to send the message from the client machine to my microservice environment and convert it to the bus via a gateway, or would a similar approach be more suitable? Thanks.