Docker-RabbitMQ-NestJS microservices error 406 PRECONDITION_FAILED - docker

I'm new to Docker and RabbitMQ and I've been trying for two days to solve an error in my Docker setup, which contains three containers: api_client, api_consumer, and RabbitMQ. I've done some research and read as many threads about this problem as I could find, but unfortunately nothing helped.
So here is my code:
compose.yml
services:
  api_client:
    build:
      context: ""
      dockerfile: apps/api_client/Dockerfile
    env_file:
      - ./config/.env.local
    restart: always
    ports:
      - "3000:3000"
    depends_on:
      - rabbitmq
  api_consumer:
    build:
      context: ""
      dockerfile: apps/api_consumer/Dockerfile
    env_file:
      - ./config/.env.local
    restart: always
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.9.2-management
    container_name: rabbitmq
    hostname: rabbitmq
    volumes:
      - /var/lib/rabbitmq
      - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    ports:
      - "5672:5672"
      - "15672:15672"
main.ts (in api_consumer)
async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(
    ApiConsumerModule,
    {
      transport: Transport.RMQ,
      options: {
        queue: 'test_queue',
        urls: ['amqp://guest:guest@rabbitmq:5672'],
        queueOptions: {
          durable: true
        }
      }
    },
  );
  const AWSAppConfig = app.get(AwsAppconfigLoaderService);
  const Log = new Logger(ApiClientService.name);
  await AWSAppConfig.loadAWSAppConfig()
    .then((_) => {
      Log.log(AWSAppConfig.getAppName());
    })
    .catch((err) => {
      Log.error(
        `Error occurred while downloading AWS Config: ${JSON.stringify(
          err,
        )}`,
      );
    });
  await app.listen();
}
bootstrap();
api-client.module.ts (in api_client)
@Module({
  imports: [
    ConfigModule.forRoot({
      isGlobal: true,
      load: [AppConfig],
    }),
    ClientsModule.register([
      {
        name: GET_MATCHED_DEVICES,
        transport: Transport.RMQ,
        options: {
          queue: 'test_queue',
          urls: ['amqp://guest:guest@rabbitmq:5672'],
          queueOptions: {
            durable: true
          }
        }
      },
    ]),
    AwsAppconfigLoaderModule,
  ],
  controllers: [ApiClientController],
  providers: [ApiClientService],
})
export class ApiClientModule {}
The functionality is simple: when a GET on localhost:3000 (api_client) is called, the controller calls return this.client.send('getSample', "hello"), and then in api_consumer the following handler (in its controller) should be invoked:
@MessagePattern('getSample')
getSample(data): string {
  Logger.debug(data)
  return "It works!";
}
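For context, the client-side controller is wired up roughly like this (a simplified sketch, not the exact code; the injection token is the same GET_MATCHED_DEVICES constant registered in the module above, and the import path for it is assumed):
import { Controller, Get, Inject } from '@nestjs/common';
import { ClientProxy } from '@nestjs/microservices';
import { Observable } from 'rxjs';
import { GET_MATCHED_DEVICES } from './constants'; // hypothetical path for the token constant

@Controller()
export class ApiClientController {
  constructor(@Inject(GET_MATCHED_DEVICES) private readonly client: ClientProxy) {}

  @Get()
  getSample(): Observable<string> {
    // send() performs request/response over the shared 'test_queue' queue
    return this.client.send('getSample', 'hello');
  }
}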
When all the Docker services start, this first error appears:
Disconnected from RMQ. Trying to reconnect.
{
  "err": {
    "code": 406,
    "classId": 60,
    "methodId": 40
  }
}
And then when I try to access localhost:3000, this error always occurs:
Error: Channel closed by server: 406 (PRECONDITION-FAILED) with message "PRECONDITION_FAILED - fast reply consumer does not exist"
Both errors come from api_client.
What I've tried that didn't help:
- changing durable to false or removing the durable option completely
- adding noAck
- removing the queue in the admin UI on localhost:15672 (which works fine)
- removing the port from the urls in both microservices
- as you can see, the queue options are the same in both microservices
Now the most absurd thing is that this code worked absolutely fine until I started working on a second compose file (and Dockerfiles) for local (faster) development with volumes. Then these errors suddenly started to occur, and even after I undid all my code changes the errors are still there. Because of this I've wiped all my volumes (with docker system prune -a --volumes) many times, but still nothing. My OS is Ubuntu 20.04.
I am completely out of ideas, so I'm writing here in the hope of some help.

The failure occurs because the app cannot connect to RabbitMQ; the problem resides in your Docker Compose setup. Make sure the services are on the same Docker network and are able to communicate.
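For example, if the setup was split across compose files, one explicit shared network could look roughly like this (a sketch only; the network name is an assumption):
# sketch: put all three services on one explicit shared network
services:
  api_client:
    networks: [backend]
  api_consumer:
    networks: [backend]
  rabbitmq:
    networks: [backend]

networks:
  backend:
    driver: bridge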

I know it sounds strange, but I had the same problem yesterday: also using a docker-compose file, and without making any changes to it or to the RabbitMQ logic, it broke. I tried many things, and when I changed
return this.client.send('getSample', "hello") and @MessagePattern
to
return this.client.emit('getSample', "hello") and @EventPattern
thinking it wouldn't make sense for that to fix the issue, it actually did.
I suggest you try that and tell me if it works; sorry I can't help you more.
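In case it helps, the variant I mean looks roughly like this on the consumer side (a simplified sketch, not exact code; the client side just swaps client.send(...) for client.emit(...)):
import { Controller, Logger } from '@nestjs/common';
import { EventPattern } from '@nestjs/microservices';

@Controller()
export class ApiConsumerController {
  // @EventPattern + emit() is fire-and-forget: no reply is sent back to the caller
  @EventPattern('getSample')
  handleGetSample(data: string): void {
    Logger.debug(data);
  }
}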

So after 5 days of figuring out what causes the error, I've found that the problem lies in Nest itself. The bug has been documented here. I've deleted all the code associated with @nestjs/microservices and tried this approach using only amqplib, and the request/response functionality finally works.
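For anyone hitting the same wall, a request/response round trip with plain amqplib can be sketched roughly like this (a minimal sketch, not the project's exact code; the queue name and URL are the ones from the setup above):
import * as amqp from 'amqplib';
import { randomUUID } from 'crypto';

async function rpcRequest(payload: unknown): Promise<string> {
  const conn = await amqp.connect('amqp://guest:guest@rabbitmq:5672');
  const ch = await conn.createChannel();
  const correlationId = randomUUID();

  let resolveReply!: (value: string) => void;
  const reply = new Promise<string>((resolve) => (resolveReply = resolve));

  // The consumer on the direct reply-to pseudo-queue has to exist *before*
  // publishing; otherwise RabbitMQ answers with the same
  // "fast reply consumer does not exist" precondition failure as above.
  await ch.consume(
    'amq.rabbitmq.reply-to',
    (msg) => {
      if (msg && msg.properties.correlationId === correlationId) {
        resolveReply(msg.content.toString());
      }
    },
    { noAck: true },
  );

  ch.sendToQueue('test_queue', Buffer.from(JSON.stringify(payload)), {
    replyTo: 'amq.rabbitmq.reply-to',
    correlationId,
  });

  const result = await reply;
  await ch.close();
  await conn.close();
  return result;
}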

Related

How can you conditionally set services in the docker-compose.yml file?

I'm new to using Docker and Cake. At the moment we have a simple Cake task that runs the DockerComposeUp() method that takes a DockerComposeUpSettings object. The docker-compose.yaml file holds some info on a service that I want to conditionally run (serviceA):
version: "1.0"
services:
serviceA:
image: someImage
ports:
-"000000"
-"000001"
serviceB:
image: someOtherImage
anotherProperty: somethingElse
ports:
-"111111"
I've tried splitting out serviceA into a separate docker-compose file called 'docker-compose.serviceA.yaml' and calling it by adding to the DockerComposeUpSettings.ArgumentCustomization the following:
if(some setting)
{
    dockerComposeUpSettings.ArgumentCustomization = builder => builder.Append("-f docker-compose.yaml -f docker-compose.serviceA.yaml");
}
However, Cake throws the following error:
"unknown shorthand flag: 'f' in -f"
How can I merge two docker-compose files as part of the DockerComposeUp method using Cake?
Update
I've found there is a 'Files' property on the DockerComposeUpSettings object (inherited from DockerComposeSettings object), where you can declare the configuration files. So I've added:
if(some flag)
{
    dockerComposeSettings.Files = new[]{ "docker-compose.yaml", "docker-compose.serviceA.yaml" };
}
I don't know much about Docker, but looking at the docs here and here, it seems it would be important to have the -f option set before the command specified on the command line. Your customization (builder.Append()) puts them at the end of the command line.
Have you tried setting the Files property of the DockerComposeUpSettings? That looks like what you are looking for.

Login does not work in docker container for abp io app

I have created an ABP.IO app from the Blazor + SQL Server (non-tiered) template. I ran the app locally and it works fine. Then I built a Docker image for it and made a compose file for SQL Server and my app image. The container is running fine and I can connect to the DB.
The problem is that auth is not working. When I try to log in, nothing happens. I'm not sure what's wrong or where to look.
dockerfile
# base image assumed from the note below (ASP.NET 5 runtime)
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "SimplyAir.Blazor.dll"]
P.S. There is a ps1 build script that builds the app DLLs; this Dockerfile just copies them and uses the ASP.NET 5 runtime.
docker-compose.yml
services:
  simply-air-ms-sql-server:
    image: mcr.microsoft.com/mssql/server:2017-latest-ubuntu
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Pa55word."
      MSSQL_PID: Express
    ports:
      - "1445:1433"
  simplyair_host:
    image: simplyair/host
    environment:
      ASPNETCORE_ENVIRONMENT: Release
    ports:
      - 8081:80
    volumes:
      - "./Host-Logs:/app/Logs"
appsettings.Release.json
{
  "App": {
    "SelfUrl": "http://localhost:8081",
    "CorsOrigins": "http://localhost:8081"
  },
  "ConnectionStrings": {
    "Default": "Server=simply-air-ms-sql-server;Database=SimplyAirRelease;User Id=SA;Password=Pa55word."
  },
  "AuthServer": {
    "Authority": "https://localhost:8081",
    "RequireHttpsMetadata": "true"
  }
}
The only error in Log.txt in the container is:
[ERR] An exception was thrown while deserializing the token.
Microsoft.AspNetCore.Antiforgery.AntiforgeryValidationException: The antiforgery token could not be decrypted.
---> System.Security.Cryptography.CryptographicException: The key {0f13b215-a101-449a-8a97-389b992dc5fd} was not found in the key ring.
at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.UnprotectCore(Byte[] protectedData, Boolean allowOperationsOnRevokedKeys, UnprotectStatus& status)
at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.DangerousUnprotect(Byte[] protectedData, Boolean ignoreRevocationErrors, Boolean& requiresMigration, Boolean& wasRevoked)
at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.Unprotect(Byte[] protectedData)
at Microsoft.AspNetCore.Antiforgery.DefaultAntiforgeryTokenSerializer.Deserialize(String serializedToken)
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Antiforgery.DefaultAntiforgeryTokenSerializer.Deserialize(String serializedToken)
at Microsoft.AspNetCore.Antiforgery.DefaultAntiforgery.GetCookieTokenDoesNotThrow(HttpContext httpContext)
and some warnings:
[WRN] The cookie 'XSRF-TOKEN' has set 'SameSite=None' and must also set 'Secure'.
[WRN] The cookie 'idsrv.session' has set 'SameSite=None' and must also set 'Secure'.
[WRN] The cookie '.AspNetCore.Identity.Application' has set 'SameSite=None' and must also set 'Secure'.
and also there is something in the dev console.
I'm not sure if I have provided enough info, so please tell me if you need more.
I had a similar issue with an abp.io MVC non-tiered app. I built a Docker container and the login screen would not work, as you describe. I used the network tool in Chrome and saw that the Account/Login API endpoint was coming up as 404. I had to change my Docker configuration to use HTTPS endpoints instead of HTTP, and then it worked. I also had to pass in the following environment variables in docker-compose for Kestrel to work with HTTPS:
- ASPNETCORE_Kestrel__Certificates__Default__Path=/etc/ssl/certs/localhost.pfx
- ASPNETCORE_Kestrel__Certificates__Default__Password=YOURPASSWORD
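In the compose file that ends up looking roughly like this (a sketch only; ASPNETCORE_URLS, the certificate path/password, the volume mount and the port mapping are placeholders I'm assuming, not values from the question):
simplyair_host:
  image: simplyair/host
  environment:
    - ASPNETCORE_URLS=https://+:443
    - ASPNETCORE_Kestrel__Certificates__Default__Path=/etc/ssl/certs/localhost.pfx
    - ASPNETCORE_Kestrel__Certificates__Default__Password=YOURPASSWORD
  volumes:
    # mount the .pfx into the container at the path referenced above
    - "./certs/localhost.pfx:/etc/ssl/certs/localhost.pfx:ro"
  ports:
    - "8081:443"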

Posting API request from one Docker container to another

I've been following this post on Medium to learn how to create and run a dotnet core console app in a docker container, and post to a dotnet core API in another container.
When I run the two applications side-by-side (without docker, i.e. just debugging in vscode), everything works OK - the console app can post to the API. However, when I run the applications in containers using docker-compose up --build, I get an error when the application tries to post to the api:
Unhandled exception. System.AggregateException: One or more errors occurred. (The SSL connection could not be established, see inner exception.)
System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.
System.IO.IOException: The handshake failed due to an unexpected packet format.
Searching for solutions to this error hasn't helped much, and I feel that the problem may simply be connectivity between the two containers, but I've had no luck trying to resolve it.
My docker-compose file is as follows:
version: '3.4'
services:
  publisher_api:
    image: my_publisher_api:latest
    container_name: my_publisher_api_container
    build:
      context: ./publisher_api
      dockerfile: Dockerfile
  worker:
    image: my_worker
    container_name: my_worker_container
    depends_on:
      - "publisher_api"
    build:
      context: ./worker
      dockerfile: Dockerfile
My console app code (or at least the relevant part) is:
public static async Task PostMessage(object postData)
{
    var json = JsonConvert.SerializeObject(postData);
    var content = new StringContent(json, UnicodeEncoding.UTF8, "application/json");
    using (var httpClientHandler = new HttpClientHandler())
    {
        httpClientHandler.ServerCertificateCustomValidationCallback = (message, cert, chain, errors) => { return true; };
        using (var client = new HttpClient(httpClientHandler))
        {
            var result = await client.PostAsync("https://my_publisher_api_container:80/values", content);
            string resultContent = await result.Content.ReadAsStringAsync();
            Console.WriteLine($"Server returned {resultContent}");
        }
    }
}
I won't post any of the API code, as I don't think any of it is relevant, but please let me know if you think it would help.
If anyone has any idea on what the cause of this error is or how to resolve it, I'd appreciate the help.
Edit
Thought it would be useful to include the versions being used:
dotnet core: 3.0.101
docker: 19.03.5, build 633a0ea838
Looks like I had made a couple of fairly obvious mistakes; however, they're not so obvious when you're completely new to Docker, like me.
1. The hostname to post to should be the name of the service, not the container. In my case, I had to change the console app to post to the name of the API service declared in the docker-compose file, publisher_api.
2. Use HTTP instead of HTTPS. When I debugged the API locally, it launched with HTTPS by default. I assumed I would use HTTPS when running the container in Docker, but this doesn't seem to work by default. Changing to HTTP resolved the issue (although this will ideally be a short-term solution).
So just for completeness, here's my updated code. Only the URL that the console app posts to had to change:
public static async Task PostMessage(object postData)
{
    var json = JsonConvert.SerializeObject(postData);
    var content = new StringContent(json, UnicodeEncoding.UTF8, "application/json");
    using (var httpClientHandler = new HttpClientHandler())
    {
        httpClientHandler.ServerCertificateCustomValidationCallback = (message, cert, chain, errors) => { return true; };
        using (var client = new HttpClient(httpClientHandler))
        {
            // post to the service name from docker-compose, over plain HTTP
            var result = await client.PostAsync("http://publisher_api:80/values", content);
            string resultContent = await result.Content.ReadAsStringAsync();
            Console.WriteLine($"Server returned {resultContent}");
        }
    }
}

Blazor SSR - Browser error when routing to different pages with docker (https-portal)

I'm currently trying to run a .NET Core 3 (preview 8) SSR Blazor project in Docker. The pages seem to load fine until you start navigating (using NavLink), which gives me the following error in the browser console:
Error: There was an exception invoking 'NotifyLocationChanged' on
assembly 'Microsoft.AspNetCore.Components.Server'. For more details
turn on detailed exceptions in 'CircuitOptions.DetailedErrors'
My current docker-compose.yml file looks like this:
version: '2'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - "database"
  database:
    image: "mcr.microsoft.com/mssql/server:2017-latest-ubuntu"
    environment:
      MSSQL_SA_PASSWORD: "Hidden"
      ACCEPT_EULA: "Y"
  https-portal:
    image: steveltn/https-portal:1
    ports:
      - '80:80'
      - '443:443'
    links:
      - app
    restart: always
    environment:
      - WEBSOCKET: true
      - DOMAINS: 'somesite.com -> http://app:5000'
      # - STAGE: 'local'
      - STAGE: 'staging'
      # - STAGE: 'production'
I thought it had something to do with the WEBSOCKET: true environment variable or app.UseForwardedHeaders(); in code, but the results are the same.
Edit 1:
So I added the following code in my startup and it started working:
services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders =
        ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
});
Got it from https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-2.2
Edit 2:
Never mind, it stopped working. It seems like it only works the first time or after a long period of inactivity. After that, I get the same error.
Edit 3:
So I created a brand new Blazor (preview 8) project with the same Docker structure, and what's weird is that it works in this project. I tried comparing this new project with mine (made in preview 5 but upgraded over time); however, I could not find any major differences. I'm currently migrating some old code over to the new project to see when it stops working. I hope this gives me the answer, because I'm absolutely lost at this point.
I finally got it working. I had to add the following to my startup:
app.Use((ctx, next) =>
{
    ctx.Request.Scheme = "https";
    return next();
});
Now the routing works perfectly.

ECONNRESET when opening a large number of connections in a small time period

I have a situation where I want to create a large number of entities in Orion. I am using the Docker version of Orion and Mongo with this docker-compose:
version: "3"
services:
mongo:
image: mongo:3.4
volumes:
- /data/docker-mongo/db:/data/db
- /data/docker-mongo/log/mongodb.log:/var/log/mongodb/mongod.log
command: --nojournal
orion:
image: fiware/orion
volumes:
- /data/docker-mongo/log/contextBroker.log:/tmp/contextBroker.log
links:
- mongo
ports:
- "1026:1026"
command: -dbhost mongo
Now the problem happens when I want to upload 2000 entities (opening a new connection for each; I know it can be done differently, but for now this is the requirement). I successfully create no more than 600 of them (or fewer, never an exact number); the rest fail to create with this error:
"error": {
"errno": "ECONNRESET",
"code": "ECONNRESET",
"syscall": "read"
},
So I assume this issue has something to do with the maxConnections, reqPoolSize, etc. settings in Orion. But in Docker I failed to locate the Orion config file, and when I type commands like contextBroker -maxConnections 123456 I have no way of knowing whether that setting is actually accepted by Orion inside the Docker container.
Also, the Orion log is empty, and I cannot determine what is causing this issue when Orion is running in Docker.
So my main questions are:
- Can Orion running in Docker be used in the same manner as Orion running on a VM (are there any drawbacks)?
- How do I debug this problem when Orion is running in Docker? I've read a lot of docs/issues but had no luck (or I missed something).
If you have any advice/solution it would really help.
Thanks
{
  "orion" : {
    "version" : "1.13.0-next",
    "uptime" : "2 d, 15 h, 46 m, 34 s",
    "git_hash" : "ae72acf9e8eeaacaf4eb138f7de37bfee4514c6b",
    "compile_time" : "Fri May 4 10:12:18 UTC 2018",
    "compiled_by" : "root",
    "compiled_in" : "1901fd6bb51a",
    "release_date" : "Fri May 4 10:12:18 UTC 2018",
    "doc" : "https://fiware-orion.readthedocs.org/en/master/"
  }
}
{ Error: socket hang up
at createHangUpError (_http_client.js:313:15)
at Socket.socketOnEnd (_http_client.js:416:23)
at Socket.emit (events.js:187:15)
at endReadableNT (_stream_readable.js:1090:12)
at process._tickCallback (internal/process/next_tick.js:63:19) code: 'ECONNRESET' }
error:
{ Error: connect ECONNREFUSED ipofvirtualm:1026
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'read',
address: 'ipofvm',
port: 1026 },
options:
{ method: 'POST',
uri: 'http://ip:1026/v2/entities?options=keyValues',
headers:
{ 'Fiware-Service': 'some service',
'Fiware-ServicePath': 'some servicepath' },
body:
{ id: 'F0B935',
type: 'Transaction',
refEmitter: 'F0B935',
refReceiver: '7501JXG',
refCapturer: 'testtdata',
date: '12/12/2017 13:25',
refTransferredResources: 'testtdata',
transferredLoad: 92 },
json: true,
callback: [Function: RP$callback],
transform: undefined,
simple: true,
resolveWithFullResponse: false,
transform2xxOnly: false },
I am using the request-promise library for making the calls; I tried others and they had the same issue. Since I cannot send you all 2000 responses, I will try to describe the behaviour: when I start sending, it creates around 30 entities, then the next few (or more) return a response saying ECONNRESET, then it starts creating again, and so on.
What confuses me is that it is not failing completely, meaning it works, but not as intended. It also seems that Orion closes or hangs up the socket for some period, then it is open again and creates entities as normal, and so on. If you need any more info, just ask, and thanks for the quick answer.
Instead of opening a new connection per entity, why don't you use
POST /v2/op/update
and create all entities in just one batch, or a couple of batches?
See some code at
https://github.com/Fiware/dataModels/blob/master/Weather/WeatherObserved/harvest/spain_weather_observed_harvest.py#L235
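For reference, a batch create against /v2/op/update can be sketched roughly like this (a sketch only; it reuses the request-promise style, headers and options=keyValues from the question, while the chunk size and function name are assumptions):
import rp = require('request-promise');

// Create many entities per request instead of one connection per entity.
async function createEntitiesInBatches(entities: object[]): Promise<void> {
  const chunkSize = 100; // arbitrary; a couple of batches also works
  for (let i = 0; i < entities.length; i += chunkSize) {
    await rp({
      method: 'POST',
      uri: 'http://ip:1026/v2/op/update?options=keyValues',
      headers: {
        'Fiware-Service': 'some service',
        'Fiware-ServicePath': 'some servicepath',
      },
      body: {
        actionType: 'append',
        entities: entities.slice(i, i + chunkSize),
      },
      json: true,
    });
  }
}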
With regard to CLI argument passing to the context broker running inside Docker, use the command line in the docker-compose file, e.g.:
command: -dbhost mongo -maxConnections 123456
However, I'm not sure that would help solve the problem, as Orion should deal with your use case without any special customization. Looking at the error message (which seems to be about some problem at the TCP layer), I wonder if the Docker networking layer is acting as a bottleneck in some way...
In addition, the suggestion by Jose Manuel Cantera about using POST /v2/op/update would be a good idea. It would reduce connection stress at the network layer and may help alleviate the problem.
If you cannot change your update strategy, maybe using an inter-request delay (100-200 ms) could also help.
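If the one-request-per-entity approach has to stay, the inter-request delay could be sketched roughly like this (a sketch only; createOne() stands in for the existing POST /v2/entities call):
const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Send the requests sequentially with a small pause instead of ~2000 at once.
async function createWithDelay(
  entities: object[],
  createOne: (entity: object) => Promise<void>,
): Promise<void> {
  for (const entity of entities) {
    await createOne(entity);
    await delay(150); // 100-200 ms, as suggested above
  }
}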
