How to set the maxConcurrentConnections config for the Hystrix stream in a Dropwizard project

I am using the com.netflix.hystrix:hystrix-metrics-event-stream:1.5.12 Maven dependency to get the Hystrix metrics stream. If I keep reloading the page continuously, the application sometimes throws a 503. The reason, as I found in https://github.com/Netflix/Hystrix/issues/1644, is that the maxConcurrentConnections limit is breached. I need to increase hystrix.stream.maxConcurrentConnections, but I can't figure out how to provide this config.
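The servlet reads that limit through Archaius, so one way to raise it is to set the property before the stream servlet is registered. Below is a minimal sketch of a Dropwizard run() method, assuming the servlet is registered manually; MyConfiguration, the servlet name, and the mapping are illustrative, and the same property can also be passed as a JVM flag (-Dhystrix.stream.maxConcurrentConnections=20):

import com.netflix.config.ConfigurationManager;
import com.netflix.hystrix.contrib.metrics.eventstream.HystrixMetricsStreamServlet;

import io.dropwizard.Application;
import io.dropwizard.setup.Environment;

public class MyApplication extends Application<MyConfiguration> {
    @Override
    public void run(MyConfiguration configuration, Environment environment) {
        // Raise the SSE connection limit (the default is 5) before the
        // metrics stream servlet starts accepting connections.
        ConfigurationManager.getConfigInstance()
                .setProperty("hystrix.stream.maxConcurrentConnections", 20);

        environment.servlets()
                .addServlet("hystrixMetricsStream", new HystrixMetricsStreamServlet())
                .addMapping("/hystrix.stream");
    }
}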

Related

dotnet-isolated azure function container loads 0 of 1 functions from metadata and then gives HTTP status 204 (No Content)

I have a .NET 6 isolated function Docker container that works locally but not in Azure. The Dockerfile copies the build output binaries to the home/site/wwwroot directory of the container, which is based on the image mcr.microsoft.com/azure-functions/dotnet-isolated:4-dotnet-isolated6.0.
When I look at the live log stream, I can see that the configuration is set up correctly as far as I can tell, but I don't have full access. It's set up as dotnet-isolated and Functions version 4, and I can see it's pointing at the right Docker image.
I'm not sure what else to check to troubleshoot why it doesn't start properly. Are the files in the correct location in the Dockerfile? Does it need anything else that I have missed?
Any advice will be greatly appreciated.
Thanks
Thanks, I should have mentioned that this is for a timer trigger only, so there are no HTTP triggers.
In Azure Functions:
For HTTP triggers, the response comes back as an HTTP status code.
For timer triggers, failed responses surface as exceptions, not as status codes.
I found an article on the dontcodetired site where the author mentions that you can write the status code manually; in some situations it is returned automatically by the Azure Functions runtime.
One of those situations is a failed operation that throws no exception: the function completes execution without a proper result, which is a kind of internal server problem. The request (of any trigger type) is processed, but without a proper response or operation result.

How to debug a microservice in the Cumulocity platform

I wrote microservices using Spring Boot. Sometimes the status shows as active and sometimes as inactive. I can't understand this behaviour of the microservice, and I don't know how to debug it.
Have you tested running the microservice locally?
I've been getting inconsistent reports from the status tab in the UI. Sometimes it says the service is down when it's actually up. I check the /health endpoint to be sure (it's not available right after you upload the zip; it takes 5-6 minutes).
The logs in the UI are a bit clunky, so I've added a rolling file appender to logback.xml and a rest endpoint to expose the log file for debugging.
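A minimal sketch of that setup, assuming Spring Boot with Logback (the file paths, appender name, and endpoint URL are all illustrative). In logback.xml, a time-based rolling appender keeps the file small enough to fetch:

<configuration>
  <appender name="ROLLING" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/var/log/app/service.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>/var/log/app/service.%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>3</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="ROLLING" />
  </root>
</configuration>

And a small REST endpoint returns the current file for debugging:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical debug-only endpoint that exposes the current log file.
@RestController
public class LogFileController {

    @GetMapping(value = "/debug/log", produces = MediaType.TEXT_PLAIN_VALUE)
    public String currentLog() throws IOException {
        return new String(
                Files.readAllBytes(Paths.get("/var/log/app/service.log")),
                StandardCharsets.UTF_8);
    }
}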
Try to override the health check timeout value (the timeoutSeconds property of the probe). By default it's 1 second, and that's often not enough. Please refer to our specification: https://cumulocity.com/guides/reference/microservice-manifest/
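A sketch of what that could look like in the microservice manifest (cumulocity.json); the probe fields mirror the Kubernetes probe schema per the linked reference, and the path, port, and timing values here are illustrative:

{
  "apiVersion": "1",
  "version": "1.0.0",
  "provider": { "name": "example" },
  "livenessProbe": {
    "httpGet": { "path": "/health", "port": 80 },
    "initialDelaySeconds": 60,
    "timeoutSeconds": 10
  },
  "readinessProbe": {
    "httpGet": { "path": "/health", "port": 80 },
    "timeoutSeconds": 10
  }
}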
In the administration application you will find the status details for each of your applications.
When the status is switching all the time, the Docker container is probably terminating repeatedly (most likely because the application is crashing). You should see that on the status tab of the application in the event log (container is restarted all the time).
If you are on the newest Cumulocity version (9.19.x) you should also have access to the logs of the microservice in the same place in the UI. You need to log to stdout in order to get the logs through the administration application.

Neo4j http.log empty

I'm trying to turn on http logging for an Enterprise 2.0 Neo4j server.
After following this documentation, and adding the following lines to neo4j-server.properties:
org.neo4j.server.http.log.enabled=true
# Logging policy file that governs how HTTP log output is presented and
# archived. Note: changing the rollover and retention policy is sensible, but
# changing the output format is less so, since it is configured to use the
# ubiquitous common log format
org.neo4j.server.http.log.config=conf/neo4j-http-logging.xml
the data/log/http.log file is still zero bytes, even after restarting the server and then running a basic Ruby script that inserts nodes (script available upon request if needed).
I'm guessing I'm missing something completely obvious here so bear with me. Thanks.
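For reference, the file pointed to by org.neo4j.server.http.log.config is a logback-access configuration; the copy shipped with Neo4j 2.x looks roughly like this (the exact pattern and rollover settings may differ in your install):

<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>data/log/http.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>data/log/http.%d{yyyy-MM-dd_HH}.log</fileNamePattern>
      <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
      <!-- common log format, matching the comment in neo4j-server.properties -->
      <pattern>%h %l %user [%t{dd/MMM/yyyy:HH:mm:ss Z}] "%r" %s %b "%i{Referer}" "%i{User-Agent}"</pattern>
    </encoder>
  </appender>
  <appender-ref ref="FILE" />
</configuration>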
UPDATE on 9/26/14:
I'm still seeing this issue with Neo4j 2.1.2.
Has anyone managed to get the HTTP logs to work?
There was a possible solution on Google Groups suggesting that you touch the http.log file before starting the server, but we still get an empty log file.
For the time being we might put a reverse proxy in front to log the requests and responses.
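A minimal sketch of that workaround with nginx, assuming Neo4j on its default port 7474 (the listen port and log path are illustrative); the access log records the request line, status code, and response size for every call that passes through:

server {
    listen 7475;
    access_log /var/log/nginx/neo4j-http.log combined;

    location / {
        proxy_pass http://127.0.0.1:7474;
    }
}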
I am seeing this problem in Neo4j 2.0.1. I added an issue to the Neo4j GitHub issue tracker in hopes of a resolution.
https://github.com/neo4j/neo4j/issues/2219

Web deployment failing due to file in use

I'm using Microsoft's Web Deploy Remote Agent service to allow me to easily publish code to the server from within Visual Studio.
The web site I am deploying is using log4net to log messages to log files, and every time I try to deploy a new version of the code, I get this error in Visual Studio stating that the current log4net log file is in use:
An error occurred when the request was processed on the remote computer. The file 'Web.log' is in use.
The process cannot access 'C:\inetpub\wwwroot\Logs\Web.log' because it is being used by another process.
I can solve this by going onto the server and doing an iisreset before publishing... but that is kind of defeating the point of 'easy' publishing from Visual Studio :)
Is there some way I can get the publish task to issue an iisreset automatically, or some other way I can work round this?
I kept poking around and found some tidbits about the file being locked in a few other forums. Have you tried adding
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
to your <appender> element in the web.config file? From the Apache docs:
Opens the file once for each AcquireLock/ReleaseLock cycle, thus holding the lock for the minimal amount of time. This method of locking is considerably slower than FileAppender.ExclusiveLock but allows other processes to move/delete the log file whilst logging continues.
As far as the performance considerations go, I suppose you would need to test whether this affects you, as I assume it really depends on how often you write to the log file. I can't believe that getting/releasing a lock could take all that much time, though.
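For context, a complete appender element with the locking model set would look something like this (the appender name, file path, and layout are illustrative):

<log4net>
  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
    <file value="C:\inetpub\wwwroot\Logs\Web.log" />
    <appendToFile value="true" />
    <!-- Release the file handle between writes so other processes
         (such as Web Deploy) can move or replace the file -->
    <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="INFO" />
    <appender-ref ref="RollingFile" />
  </root>
</log4net>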
There is an MSDEPLOY provider called recycleApp which is used exactly for this. You can include it in your deployment manifest.
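A hedged sketch of how that provider is commonly invoked from the command line (the site path, server name, and recycle modes are placeholders; check the Web Deploy documentation for your version):

rem Stop the app pool so the log file handle is released
msdeploy.exe -verb:sync -source:recycleApp -dest:recycleApp="Default Web Site/MyApp",recycleMode="StopAppPool",computerName="myserver"

rem ... run the normal content sync here ...

rem Start the app pool again once the deployment has finished
msdeploy.exe -verb:sync -source:recycleApp -dest:recycleApp="Default Web Site/MyApp",recycleMode="StartAppPool",computerName="myserver"

Stopping the pool before the sync avoids the "file in use" error without a full iisreset.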
Another option is to use the ignoreOnErrors flag, which will skip the file in use and continue with the deployment.

Gerrit - Application Error - Intraline difference not available due to server error

For one of our gerrit projects, while navigating the file differences we get this error:
Application Error
Intraline difference not available due to server error
[Continue]
It doesn't happen for all projects, currently we've detected the error on only one project.
I looked on Google and on the gerrit documentation. Found a reference on their source code, but don't know what causes it and how it can be resolved.
The web page with the error contains a "Continue" button. Once clicked it will take you to the file you selected, but the error is annoying.
Do you know how to fix this?
That is caused while caching the intraline difference of one file when comparing two commits. The default timeout value is 5 seconds. If the file is huge and the computation takes longer than the timeout, the worker thread is terminated, an error message is shown, and no intraline difference is displayed for the file pair.
The following should fix it:
Add this config to gerrit.conf:
[cache "diff_intraline"]
  timeout = 15000 ms # or another length of time, as you prefer
Restart the Gerrit service.
Run the SSH command gerrit flush-caches, using a user with the ViewCaches global capability:
ssh -p port userxxx@host gerrit flush-caches
Then it should work.
Cause of the error:
It is a result of Gerrit taking too long to diff the file, and marking the diff in one of its caches as non-available.
The relevant error log is here:
[2012-06-08 11:14:08,547] WARN com.google.gerrit.server.patch.IntraLineLoader : 5000 ms timeout reached for IntraLineDiff in project xxxxxxx on commit 354dd67ad54578cf801d8cda64a4ae8484ebb0b7 for path xxxxxxx.java comparing bf9fbc21520af7bfd0841c8b9f955ca6e215b059..f6b9c7992c12cfdca253acd033966f98f70f3543. Killing IntraLineDiff-6
