I am using a SLURM cluster and want to add custom logs inside my task that appear in the logs on the dashboard when inspecting a particular worker.
Alternatively, I would like to extract the name of the worker so I can use the log_event function and include the worker's name in such a log in a way that matches the name shown on the dashboard.
The reason is so that I can see the logs of any long-running workers that seem to be hanging or having issues.
I wanted to ask "How do I get the location of the service worker from inside the service worker?", but then I found the existing question "get service worker id or date from within service worker". I was also looking at the spec but didn't find anything useful.
So my question is: is any information about the service worker available from inside the worker, or do I need to post a message to the worker to get any metadata?
I'm interested in the URL of the worker, or the URL of the website where the worker was installed, since I need this URL for routing inside the worker.
It seems that there is a location object inside the service worker that points to the URL of the worker file.
To get the path of the root where the worker was installed (which is what I need) you can use:
const root_url = location.pathname.replace(/[^\/]+$/, '');
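For example, assuming the worker handles requests under that same root, the derived root_url can be used to strip the prefix and route on the remainder inside a fetch handler. A minimal sketch (the 'ping' route is made up for illustration):
// sw.js - derive the directory the worker script was served from
const root_url = location.pathname.replace(/[^\/]+$/, '');
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  // only handle requests under the worker's own root
  if (!url.pathname.startsWith(root_url)) return;
  // path relative to the root, e.g. 'ping' (hypothetical route)
  const route = url.pathname.slice(root_url.length);
  if (route === 'ping') {
    event.respondWith(new Response('pong'));
  }
});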
I want to write error logs to GCP, but I can't find out how to filter messages by a string, e.g. level='error'.
I have read this documentation.
Currently, the only approach I can think of is to write to Fluentd => filter the messages => write to GCP. But that adds an unnecessary step in my case.
Do we have a straightforward way to filter and send logs directly to GCP?
The simplest way is to just go to the Logs Explorer and change "severity" to "error", like so:
This way you will only see error messages for all your VMs.
It's another matter if you want Fluentd to send just the errors to GCP. In this case you need to reconfigure it. Have a look at the documentation on how to send structured logs to GCP and make the appropriate changes.
Depending on your needs, the first method will work out of the box. The second one needs some tinkering but will also work.
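If the application itself can emit structured (JSON) log lines, the severity ends up as a real field rather than free text. A minimal Node.js sketch of the idea, assuming the logging agent on the VM is configured to parse JSON lines and map the severity field onto the log entry severity (the orderId field is made up for illustration):
// emit one JSON object per line; an agent configured for structured logs
// can map "severity" to the entry severity, so severity=ERROR can be
// filtered directly in the Logs Explorer
function logError(message, details) {
  console.error(JSON.stringify({ severity: 'ERROR', message, ...details }));
}
logError('payment failed', { orderId: 'abc-123' });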
I register my service worker like this:
navigator.serviceWorker.register('/sw.js', {
  scope: '/'
}).then(function(registration) {
}).catch(function(err) {
  console.log(err);
});
In the production environment, I caught some errors like "The request to fetch the script was interrupted." and "The Service Worker system has shutdown."
What are the possible reasons for the above errors?
There is a comment explaining them which I think is useful: https://github.com/w3c/ServiceWorker/issues/1275
It's because the path to your service worker might not be correct. If your service worker is on the same level as the page that is trying to load it, then use navigator.serviceWorker.register('sw.js'). /sw.js tries to load from the root of the project, e.g. http://localhost:8080/sw.js. Also look at the network logs in the developer console to figure out the path the browser is using to fetch the service worker.
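As a quick sanity check, you can also log what the browser actually resolved, both the scope and the worker's script URL. A small sketch:
navigator.serviceWorker.register('sw.js').then(function (registration) {
  // the scope the browser resolved for this registration
  console.log('scope:', registration.scope);
  // the worker currently installing, waiting, or active
  var sw = registration.installing || registration.waiting || registration.active;
  if (sw) {
    console.log('script URL:', sw.scriptURL);
  }
}).catch(function (err) {
  console.log('registration failed:', err);
});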
I am developing a UI in which I need to show the live logs (stdout and stderr) of jobs running on a Mesos slave. I am trying to find a way to generate a URL that will point to the Mesos logs for the job. Is there a way to do that? Basically, I need to know the slave ID, executor ID, master ID, etc. to generate the URL. Is there a way to find this information?
The sandbox URL is of the form http://$slave_url:5050/read.json?$work_dir/work/slaves/$slave_id/frameworks/$framework_id/executors/$executor_id/runs/$container_id/stdout, and you can even use the browse.json endpoint to browse around within the sandbox.
Alternatively, you can use the mesos tail $task_id CLI command to access these logs.
For more details, see the following mailing list thread: http://search-hadoop.com/m/RFt15skyLE/Accessing+stdout%252Fstderr+of+a+task+programmattically
How about using the reverse approach? You need to present live logs from stderr and stdout, so how about storing them outside of the Mesos slave, e.g. in Elasticsearch? You will get near-live updates, old logs remain available afterwards, and you get nice search options.
From version 0.27.0 Mesos supports the ContainerLogger module. You can write your own ContainerLogger implementation that pushes logs to a central log repository (Graylog, Logstash, etc.) and then expose them in your UI.
Mesos offers a REST interface where you can get the information you want. Visit http://<MESOS_MASTER_IP>:5050/help in your browser (using the default port) to check the options you have to query (for example, you can get the information you need from http://<MESOS_MASTER_IP>:5050/master/state.json). Check this link to see an example of using it.
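To illustrate the state.json approach, here is a rough Node.js sketch that pulls the master state and picks out the IDs needed to build a sandbox URL for a given task. The field names (frameworks, tasks, slave_id, executor_id) are assumptions from memory and may differ between Mesos versions, so treat it as a starting point only:
const MASTER = 'http://<MESOS_MASTER_IP>:5050';
async function findTask(taskId) {
  const res = await fetch(`${MASTER}/master/state.json`);
  const state = await res.json();
  for (const framework of state.frameworks || []) {
    for (const task of framework.tasks || []) {
      if (task.id === taskId) {
        return {
          frameworkId: framework.id,
          slaveId: task.slave_id,
          // executor_id may be empty, in which case it defaults to the task id
          executorId: task.executor_id || task.id,
        };
      }
    }
  }
  return null;
}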
We have a custom Docker web app running in an Elastic Beanstalk Docker container environment.
We would like the application logs to be available for viewing externally, without downloading them through the instances or the AWS console.
So far none of the solutions has been acceptable. Maybe someone has achieved centralised logging for Elastic Beanstalk Dockerized apps?
Solution 1: AWS Console log download
Not acceptable: it requires downloading and extracting the logs every time, and it is not real-time.
Solution 2: S3 + Elasticsearch + Fluentd
Fluentd does not have a plugin to retrieve logs from S3.
There is an excellent S3 plugin, but it is only for log output to S3, not for reading logs from S3.
Solution 3: S3 + Elasticsearch + Logstash
Cons: it can only pull all logs from the entire bucket, or nothing.
The problem lies with the Elastic Beanstalk S3 log storage structure: you cannot specify a file name pattern, so it's either all logs or nothing.
Elastic Beanstalk saves logs on S3 in a path containing random instance and environment IDs:
s3.bucket/resources/environments/logs/publish/e-<random environment id>/i-<random instance id>/my.log
The Logstash S3 plugin can only be pointed to resources/environments/logs/publish/. When you try to point it to environments/logs/publish/*/my.log, it does not work.
This means you cannot pull a particular log and tag/type it so that it can be found in Elasticsearch. Since AWS saves logs from all your environments and instances in the same folder structure, you cannot even choose the instance.
Solution 4: AWS CloudWatch Console log viewer
It is possible to forward your custom logs to the CloudWatch console. To achieve that, put configuration files in the .ebextensions path of your app bundle:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html
There's a file called cwl-webrequest-metrics.config which allows you to specify log files along with alerts, etc.
Great!? Except that the configuration file format is neither YAML, XML, nor JSON, and it's not documented. There are zero mentions of that file or its format on the AWS documentation website or anywhere else on the net.
And getting one log file to appear in CloudWatch is not as simple as adding a configuration line.
The only possible way to get this working seems to be trial and error. Great!? Except that for every attempt you need to re-deploy your environment.
There's only one reference to how to make this work with a custom log: http://qiita.com/kozayupapa/items/2bb7a6b1f17f4e799a22 I have no idea how that person reverse-engineered the file format.
Cons:
CloudWatch does not seem to be able to split logs into columns when displaying them, so you can't easily filter by priority, etc.
The AWS console log viewer does not auto-refresh to follow logs.
Nightmare undocumented configuration file format with no way of testing; trial and error requires re-deploying the whole instance.
Perhaps an AWS Lambda function is applicable?
Write some JavaScript that dumps all notifications, then see what you can do with those.
After an object is written, you could rename it within the same bucket?
Or notify your own log-management service about the creation of a new object?
Lots of possibilities there...
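For instance, here is a minimal Node.js sketch of a Lambda handler subscribed to the log bucket's object-created notifications; it only logs which object arrived, and the forwarding step (renaming, or notifying your own log-management service) is left as a placeholder:
exports.handler = async (event) => {
  // each record describes one object-created notification from S3
  for (const record of event.Records || []) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    console.log(`new log object: s3://${bucket}/${key}`);
    // TODO: copy/rename the object, or push its location to your own
    // log-management service
  }
};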
I've started using Sumologic for the moment. There's a free trial and then a free tier (500 MB/day, 7-day retention). I'm not out of the trial period yet, and my EB app does literally nothing (it's just a few HTML pages served by Nginx in a Docker container). It looks like it could get expensive once you hit any serious amount of logs, though.
It works OK so far. You need to create an IAM user that has access to the S3 bucket you want to read from, and then it sucks the logs over to the Sumologic servers and does all the processing and searching over there. It's a bit fiddly to set up, but I don't really see how it could be simpler, and it's reasonably well-documented.
It lets you provide different path expressions with wildcards, then assign a "sourceCategory" to those different paths. You then use those sourceCategories to filter your log searching to a specific type of logging.
My plan long-term is to use something like your solution 3, but this got me going in very short order so I can move on to other things.
You can use a multicontainer environment, sharing the log folder with another Docker container that runs the tool of your preference to centralize the logs. In our case we connected Apache Flume to move the files to HDFS. Hope this helps.
The easiest method I found to do this was using Papertrail via rsyslog and .ebextensions; however, it is very expensive for logging everything.
The good part is that with rsyslog you can essentially send your logs anywhere, and you are not tied to Papertrail.
example ebextension
I've found loggly to be the most convenient.
It is a hosted service, which might not be what you want. However, if you check out their setup page you can see a number of ways your situation is supported (Docker-specific solutions, as well as around 10 Amazon-specific options). Even if Loggly isn't to your taste, you can look at those solutions and easily see how some of them could be applied to almost any centralized logging solution you might use or write.