I have various loggers with different log levels, initially loaded from a log4j2.xml configuration, for example:
<Logger name="a.b.c" level="INFO"/>
I want to provide an option to update the log levels at runtime through an API. The following code works fine; I am only showing the relevant lines.
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.Logger;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.LoggerConfig;

LoggerContext cxt = (LoggerContext) LogManager.getContext(false);
for (Logger l : cxt.getLoggers()) {
    // ...
    // Get the logger config of the logger (I do this update only once per unique LoggerConfig)
    LoggerConfig lc = l.get();
    // The level won't be hardcoded; it will be obtained from the request
    lc.setLevel(Level.ERROR);
}
cxt.updateLoggers();
As mentioned, this works fine when I test the application locally; it updates all the loggers.
I was going through the log4j2 documentation on configuration updates, specifically the "Managing Logging Configuration" section of https://logging.apache.org/log4j/2.x/manual/cloud.html, which says:
Also, in a micro-services, clustered environment it is quite likely that these changes will need to be propagated to multiple servers at the same time. Trying to achieve this via REST calls could be difficult.
I am just wondering how this propagation can be achieved in a clustered environment, or with multiple Docker containers of the application running. I know I can call the update on each individual container, but my question is whether there is any way to update the log levels in all containers programmatically in one go.
Thanks for your help.
I would suggest reviewing Logging in the Cloud and taking a look at the Log4j Spring Cloud Sample Application. If you are using a Spring Boot application, it can automatically be notified when changes are made to the logging configuration hosted in Spring Cloud Config, and multiple applications can share a single configuration file.
You can accomplish the same thing without Spring Boot, but it will be more difficult: you would have to implement your own RabbitMQ event listener and trigger Log4j's Watcher yourself.
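For illustration, here is a minimal sketch of that approach, assuming Spring AMQP and Log4j 2.12.0 or later; the queue name and configuration URL are placeholders I made up:

import java.net.URI;

import org.apache.logging.log4j.core.config.Configurator;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

// Hypothetical listener: when a "config changed" event arrives on the queue,
// every container re-reads the shared log4j2.xml and reconfigures itself.
@Component
public class LoggingConfigListener {

    @RabbitListener(queues = "logging-config-changed") // placeholder queue name
    public void onConfigChanged(String message) {
        // Configurator.reconfigure(URI) is available since Log4j 2.12.0.
        Configurator.reconfigure(URI.create("http://config-server/app/default/main/log4j2.xml"));
    }
}

Because every container subscribes to the same event, a single publish updates the log levels everywhere in one go.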
Related
I'm looking for ideas on how to send the Docker logs of each run to my application in realtime. I want to build a feature similar to Netlify or Vercel, where all build logs are shown on the UI in realtime, but for my Node application. Please let me know if you have done this already or know how it can be achieved.
You can achieve this with Vercel and Log Drains.
Log Drains make it easy to collect logs from your deployments and forward them to archival, search, and alerting services by sending them via HTTPS, HTTP, TLS, and TCP once a new log line is created.
At the time of writing, we support three types of Log Drains:
JSON
NDJSON
Syslog
Along with Log Drains, we are introducing two new open-source integrations with logging services for you to start using today: LogDNA and Datadog.
Install the integration: https://vercel.com/integrations?category=logging
See the announcement blog post: https://vercel.com/blog/log-drains
Note that Vercel does not allow Docker deployments, but does support Serverless Functions.
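As a rough sketch of the receiving side, assuming an NDJSON drain (the endpoint path is arbitrary, and in a Node app you would write the equivalent with Express), each line of the POST body is one JSON log entry that you could push to the browser for the realtime view:

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical drain receiver: Vercel POSTs batches of NDJSON log lines to the
// registered drain URL; here each line is just printed, but it could be pushed
// to connected clients over a websocket for a realtime log view.
@RestController
public class LogDrainController {

    @PostMapping("/api/log-drain") // placeholder path registered as the drain URL
    public void receive(@RequestBody String body) {
        for (String line : body.split("\n")) {
            if (!line.trim().isEmpty()) {
                System.out.println("log entry: " + line);
            }
        }
    }
}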
I am upgrading our old application to Serilog... One piece of the existing functionality is: when the log level is ERROR, it logs into a local file and sends a WCF request to the remote server, and the remote server updates a database...
Basically, it logs to multiple destinations (local file, remote database via a WCF request) if the level is ERROR.
I understand how to use the rolling file sink to log into a local file.
However, I do not know how to configure a WCF service call for Serilog... Is there a WCF sink that can help me achieve this?
As of this writing there is no generic sink that makes WCF calls... You'd have to build your own sink, implementing the calls you need.
You can see a list of documented sinks on the "Provided Sinks" page of Serilog's wiki, and you can also find available sinks on NuGet.org.
We have a couple of web applications written in Java Spring; we are using spring-data-redis and @EnableRedisHttpSession. I was wondering about the Spring Session internals: would it check the Redis database for duplicate session keys before creating a new session?
I looked at the Spring documentation and also did a Google search, but couldn't get a definitive answer.
Found the solution after going through the Spring Session project's GitHub issues. The answer provided by @Avnish doesn't work because in a cluster configuration Redis does not provide multiple databases; there is just the single database 0, and SELECT commands are not supported.
spring-session 1.1.0.RELEASE solves this issue by providing session namespaces. If you are using the @EnableRedisHttpSession annotation, you can add the redisNamespace property to it, or you can set the spring.session.redis.namespace property in your .properties or .yml file.
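For example, a minimal sketch of the annotation-based variant (the namespace value is arbitrary):

import org.springframework.context.annotation.Configuration;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

// Each application in the cluster gets its own namespace, so their session
// keys cannot collide in the shared Redis instance.
@Configuration
@EnableRedisHttpSession(redisNamespace = "spring:session:app1") // placeholder namespace
public class SessionConfig {
}

The property-based equivalent would be spring.session.redis.namespace=spring:session:app1.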
As far as spring-session is concerned, it will assume that the other application is part of the cluster and will try to reuse an existing session if one is found for a given id, although it is very unlikely that two different applications will generate the same session id, since ids are generated via random UUIDs. The following are the options you can go with to safeguard yourself anyway.
If you are using Spring Boot, use a different value of the spring.redis.database property for each of your applications (details here; search for "# REDIS").
If you are using spring-data-redis directly, then you should set this value on the JedisConnectionFactory bean that you are using in your application. For XML configuration, the following would do:
<bean id="jedisConnectionFactory"
      class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory">
    <property name="database" value="1" />
</bean>
Hope it helps!!
I want to enforce HTTPS for a Spring Boot application to be hosted on Pivotal Cloud Foundry, and I think most applications would want this today. The common way of doing it, as far as I know, is using
http.requiresChannel().anyRequest().requiresSecure()
But this is causing a redirect loop. The cause, as I understand from posts like this, is that the load balancer converts https back to http. That means it has to be handled at the load balancer level.
So, is there some option to tell Cloud Foundry to enforce HTTPS for an application? If not, shouldn't this be a feature request? And what would be a good way to do this today?
Update: Did anyone from the Cloud Foundry or Spring Security teams see this post? I think this is an essential feature before one can host an application on Cloud Foundry. Googling, I found no easy solution other than telling the users to use https instead of http. But even if I tell them so, when an anonymous user tries to access a restricted page, Spring Security redirects him back to the http login page.
Update 2: Of course, we have the x-forwarded-proto header, as many answers suggest, but I don't know how hard it would be to customize Spring Security to use it. Then we have other things like Spring Social integrating with Spring Security, and I just faced an issue there as well. I think either Spring Security and tons of other frameworks will need to come out with solutions that use x-forwarded-proto, or Cloud Foundry needs to have some way to handle it transparently. I think the latter would be far more convenient.
Normally, when you push a WAR file to Cloud Foundry, the Java buildpack will take it and deploy it to Tomcat. This works great because the Java buildpack can configure Tomcat for you and automatically include a RemoteIpValve, which is what takes the x-forwarded-* headers and reconfigures your request object.
If you're using Spring Boot and pushing a JAR file, you'll have an embedded Tomcat in your application. Because Tomcat is embedded in your app, the Java buildpack cannot configure it for the environment (i.e. it cannot configure the RemoteIpValve). This means you need to configure it yourself. Instructions for doing that with Spring Boot can be found here.
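For reference, a minimal sketch of that configuration in application.properties, assuming embedded Tomcat; the header names are the ones Cloud Foundry's router sends:

# Tell the embedded Tomcat's RemoteIpValve which headers carry the original
# client address and scheme, so request.isSecure() reflects the real request.
server.tomcat.remote-ip-header=x-forwarded-for
server.tomcat.protocol-header=x-forwarded-proto

With the RemoteIpValve configured this way, requiresSecure() sees the original scheme, which breaks the redirect loop.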
If you're deploying a web application as a JAR file but using a different framework or embedded container, you'll need to look up the docs for your framework / container and see if it handles the x-forwarded-* headers automatically. If not, you'll need to handle that manually, as the other answers suggest.
You need to check the x-forwarded-proto header. Here is a method that does this:
public boolean isSecure(HttpServletRequest request) {
    // The load balancer sets x-forwarded-proto to the scheme of the original request.
    return "https".equals(request.getHeader("x-forwarded-proto"));
}
Additionally, I have created an example servlet that does this as well.
https://hub.jazz.net/git/jsloyer/sslcheck
git clone https://hub.jazz.net/git/jsloyer/sslcheck
The app is running live at http://sslcheck.mybluemix.net and https://sslcheck.mybluemix.net.
Requests forwarded by the load balancer will have an http header called x-forwarded-proto set to https or http. You can use this to affect the behavior of your application with regard to SSL termination.
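For completeness, the same header check can be wired into Spring Security's channel security. A rough sketch, assuming Spring Security's Java config (this is the commonly cited pattern for proxied platforms, not anything Cloud Foundry specific):

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

// Redirect to https only when the load balancer says the original request was plain http.
@Configuration
@EnableWebSecurity
public class SslConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.requiresChannel()
            .requestMatchers(r -> "http".equals(r.getHeader("x-forwarded-proto")))
            .requiresSecure();
    }
}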
We have a custom Docker web app running in an Elastic Beanstalk Docker container environment.
We would like the application logs to be available for viewing outside, without downloading them through the instances or the AWS console.
So far none of the solutions below has been acceptable. Maybe someone has achieved centralized logging for Elastic Beanstalk Dockerized apps?
Solution 1: AWS Console log download
Not acceptable: it requires downloading and extracting the logs every time, and it is not real-time.
Solution 2: S3 + Elasticsearch + Fluentd
Fluentd does not have a plugin to retrieve logs from S3. There is an excellent S3 plugin, but it is only for log output to S3, not for reading logs from S3.
Solution 3: S3 + Elasticsearch + Logstash
Cons: it can only pull all logs from the entire bucket, or nothing.
The problem lies with the Elastic Beanstalk S3 log storage structure: you cannot specify a file name pattern, so it's either all logs or nothing.
Elastic Beanstalk saves logs on S3 in a path containing random environment and instance ids:
s3.bucket/resources/environments/logs/publish/e-<random environment id>/i-<random instance id>/my.log
The Logstash s3 plugin can only be pointed at resources/environments/logs/publish/; pointing it at environments/logs/publish/*/my.log does not work.
This means you cannot pull a particular log and tag/type it so it can be found in Elasticsearch. Since AWS saves logs from all your environments and instances in the same folder structure, you cannot even choose which instance to pull from.
Solution 4: AWS CloudWatch Console log viewer
It is possible to forward your custom logs to the CloudWatch console. To achieve that, put configuration files in the .ebextensions path of your app bundle:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html
There's a file called cwl-webrequest-metrics.config which allows you to specify log files along with alerts, etc.
Great!? Except that the configuration file format is neither YAML, XML, nor JSON, and it is not documented. There are absolutely zero mentions of that file or its format, either on the AWS documentation website or anywhere else on the net.
And getting one log file to appear in CloudWatch is not simply a matter of adding a configuration line.
The only possible way to get this working seems to be trial and error. Great!? Except that for every attempt you need to re-deploy your environment.
There is only one reference to how to make this work with a custom log: http://qiita.com/kozayupapa/items/2bb7a6b1f17f4e799a22. I have no idea how that person reverse engineered the file format.
cons:
CloudWatch does not seem to be able to split logs into columns when displaying them, so you can't easily filter by priority, etc.
The AWS console log viewer does not auto-refresh to follow logs.
Nightmarishly undocumented configuration file format, with no way of testing; trial and error requires re-deploying the whole environment.
Perhaps an AWS Lambda function is applicable?
Write some JavaScript that dumps all notifications, then see what you can do with them.
After an object is written, you could rename it within the same bucket?
Or notify your own log-management service about the creation of the new object?
Lots of possibilities there...
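A minimal sketch of that idea in Java rather than JavaScript, using the aws-lambda-java-events types; the forwarding step is left as a stub since it depends on your log-management service:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.event.S3EventNotification.S3EventNotificationRecord;

// Hypothetical handler: subscribed to "object created" notifications on the
// Elastic Beanstalk log bucket; records each new object's location so it can
// be forwarded to your own ingestion endpoint.
public class LogObjectHandler implements RequestHandler<S3Event, Void> {
    @Override
    public Void handleRequest(S3Event event, Context context) {
        for (S3EventNotificationRecord rec : event.getRecords()) {
            String bucket = rec.getS3().getBucket().getName();
            String key = rec.getS3().getObject().getKey();
            // e.g. resources/environments/logs/publish/e-xxxx/i-xxxx/my.log
            context.getLogger().log("New log object: s3://" + bucket + "/" + key);
            // notifyLogService(bucket, key); // stub: call your own service here
        }
        return null;
    }
}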
I've started using Sumo Logic for the moment. There's a free trial and then a free tier (500 MB/day, 7-day retention). I'm not out of the trial period yet, and my EB app does literally nothing (it's just a few HTML pages served by Nginx in a Docker container). It looks like it could get expensive once you hit any serious amount of logs, though.
It works OK so far. You need to create an IAM user that has access to the S3 bucket you want to read from, and then it pulls the logs over to the Sumo Logic servers and does all the processing and searching over there. A bit fiddly to set up, but I don't really see how it could be simpler, and it's reasonably well documented.
It lets you provide different path expressions with wildcards and assign a "sourceCategory" to each of those paths. You then use those sourceCategories to filter your log searches to a specific type of logging.
My long-term plan is to use something like your Solution 3, but this got me going in very short order so I can move on to other things.
You can use a multicontainer environment, sharing the log folder with another Docker container that runs the tool of your preference to centralize the logs. In our case we connected Apache Flume to move the files to HDFS. Hope this helps.
The easiest method I found to do this was using Papertrail via rsyslog and .ebextensions; however, it is very expensive if you log everything.
The good part is that with rsyslog you can essentially send your logs anywhere, and you are not tied to Papertrail.
An example .ebextensions config is sketched below.
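A rough sketch of what such a config might look like, assuming the standard .ebextensions files/commands keys; the Papertrail host and port are placeholders:

# .ebextensions/papertrail.config
files:
  "/etc/rsyslog.d/90-papertrail.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      # Placeholder destination: forward everything to your Papertrail endpoint
      *.* @logsN.papertrailapp.com:XXXXX

commands:
  01_restart_rsyslog:
    command: "service rsyslog restart"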
I've found Loggly to be the most convenient.
It is a hosted service, which might not be what you want. However, if you check out their setup page you can see a number of ways your situation is supported (Docker-specific solutions, as well as about ten Amazon-specific options). Even if Loggly isn't to your taste, you can look at those solutions and easily see how some of them could be applied to almost any centralized logging solution you might use or write.