I'm trying to get Telegraf working with InfluxDB and I've just hit a wall. I added the following block to my Telegraf config file:
[[inputs.win_perf_counters.object]]
# Process metrics, in this case for IIS only
ObjectName = "Process"
Instances = ["W3SVC"]
Counters = ["% Processor Time","Handle Count","Private Bytes","Thread Count","Virtual Bytes","Working Set"]
Measurement = "win_proc"
However, when I search my DB, I never see that measurement. I know that the process is running, so it should be outputting something. The problem is that even though I have logging turned on, there's no log file. There's also nothing in Event Viewer. Short of downloading the source code and running the program in a local debugger, I have no idea how to proceed. Does anyone have any ideas?
[agent]
## Default data collection interval for all inputs
interval = "10s"
## Log at debug level.
debug = true
## Log only error level messages.
quiet = false
## Log target controls the destination for logs and can be one of "file",
## "stderr" or, on Windows, "eventlog". When set to "file", the output file
## is determined by the "logfile" setting.
# logtarget = "file"
## Name of the file to be logged to when using the "file" logtarget. If set to
## the empty string then logs are written to stderr.
# logfile = ""
You can specify debug = true in the agent config to print debug-level logs. If you don't specify a log file, the logs are printed to the terminal.
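To get an actual log file on Windows, you can also point the agent at one explicitly. A minimal sketch, based on the commented-out settings in the snippet above (the file name is just a placeholder; use whatever path suits your install):
[agent]
  debug = true
  logtarget = "file"
  logfile = "telegraf.log"   # placeholder path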
You have probably solved it by now, but for future reference, you could add a file output:
[[outputs.file]]
files = ["stdout"]
Add that to your telegraf.conf and then watch the console (stdout) for output.
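Another quick sanity check (assuming the telegraf binary is on your PATH; adjust the config path for your install) is to run Telegraf in test mode, which samples the inputs once and prints the gathered metrics to stdout without writing anything to InfluxDB:
telegraf --config /path/to/telegraf.conf --test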
When I save a new entry, I get this huge error, where the last line says "You're seeing this error because you have DEBUG = True in your Django settings file. Change that to False, and Django will display a standard page generated by the handler for this status code."
When I do so, I get another error: CommandError: You must set settings.ALLOWED_HOSTS if DEBUG is False.
I am following a tutorial to learn Django and I think I have done everything as explained in the tutorial.
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'products.apps.ProductsConfig'
]
snippet of settings.py
from django.contrib import admin
from .models import Product
# Register your models here.
admin.site.register(Product)
admin.py
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=255)
    price = models.FloatField()
    stock = models.IntegerField()
    image_url = models.CharField(max_length=2083)

class Offer(models.Model):
    code = models.CharField(max_length=10)
    description = models.CharField(max_length=255)
    discount = models.FloatField()
models.py
This is where I give my entry
This is when I click the save button
You don't say which error the debug output shows, but you'll want to read that and solve that error as well.
In the meantime, since you've turned debug off, you still want your app to run, even though it will still have the error above.
Django has to know which host is serving the app under runserver, as a protection mechanism against things like HTTP Host header attacks. If ALLOWED_HOSTS is blank, it only assumes localhost, so the site would only work from a browser or proxy on the same server.
To enable runserver to work, set the IP address of your server in
(project directory)/settings.py
ALLOWED_HOSTS = [ '10.1.1.10', ] # Use IP address of your host, not this example IP.
Since you've turned debug off, you'll need to restart your runserver
(venv)$ python manage.py runserver 0:8080
Your port number may be different.
Now you may be able to browse to your app from another system.
A similar question is also discussed here
I want to save the standard out from my dag separately from the logging that airflow generates.
My scripts print to standard out and I want to capture that (and only that) into a specified log file each time it runs. I've tried
import sys
from airflow.operators.python_operator import PythonOperator

logfile = 'mylog_01.log'

def myfunc(**kwargs):
    ...  # the real callable; it prints a lot to stdout

log = open(logfile, "a")
sys.stdout = log

print_params = PythonOperator(task_id="print_params",
                              python_callable=myfunc,
                              provide_context=True,
                              dag=dag)
... but sys.stdout is reset within each PythonOperator. How do I control where standard output goes within all Python operators?
Also, I am using hundreds of print statements within the scripts, so modifying each of those isn't practical in this case.
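One approach worth trying, sketched here as an assumption rather than a tested answer (the capture_stdout decorator and the log path are my own names), is to redirect stdout inside the callable itself, so the redirection happens in the worker process that actually executes the task rather than in the DAG-definition file:
from contextlib import redirect_stdout
from functools import wraps

def capture_stdout(path):
    """Route everything printed inside the wrapped callable to the given file."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with open(path, "a") as log, redirect_stdout(log):
                return func(*args, **kwargs)
        return wrapper
    return decorator

@capture_stdout("mylog_01.log")
def myfunc(**kwargs):
    print("this ends up in mylog_01.log, not in the Airflow task log")
The existing PythonOperator(..., python_callable=myfunc, ...) call would stay the same.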
I am using Flume to write to Google Cloud Storage. Flume listens on HTTP port 9000. It took me some time to make it work (adding the GCS libraries, using a credentials file, ...), but now it seems to communicate over the network.
I am sending a very small HTTP request for my tests, and I have plenty of RAM available:
curl -X POST -d '[{ "headers" : { timestamp=1417444588182, env=dev, tenant=myTenant, type=myType }, "body" : "some body ONE" }]' localhost:9000
I encounter this memory exception on the first request (then, of course, it stops working):
2014-11-28 16:59:47,748 (hdfs-hdfs_sink-call-runner-0) [INFO - com.google.cloud.hadoop.util.LogUtil.info(LogUtil.java:142)] GHFS version: 1.3.0-hadoop2
2014-11-28 16:59:50,014 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:467)] process failed
java.lang.OutOfMemoryError: Java heap space
at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:76)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.<init>(GoogleHadoopOutputStream.java:79)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.create(GoogleHadoopFileSystemBase.java:820)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
(see complete stack trace as a gist for full details)
The strange part is that the folders and files are created the way I want, but the files are empty.
gs://my_bucket/dev/myTenant/myType/2014-12-01/14-36-28.1417445234193.json.tmp
Is something wrong with the way I configured Flume + GCS, or is it a bug in the GCS jar?
Where should I check to gather more data ?
PS: I am running flume-ng inside Docker.
My flume.conf file:
# Name the components on this agent
a1.sources = http
a1.sinks = hdfs_sink
a1.channels = mem
# Describe/configure the source
a1.sources.http.type = org.apache.flume.source.http.HTTPSource
a1.sources.http.port = 9000
# Describe the sink
a1.sinks.hdfs_sink.type = hdfs
a1.sinks.hdfs_sink.hdfs.path = gs://my_bucket/%{env}/%{tenant}/%{type}/%Y-%m-%d
a1.sinks.hdfs_sink.hdfs.filePrefix = %H-%M-%S
a1.sinks.hdfs_sink.hdfs.fileSuffix = .json
a1.sinks.hdfs_sink.hdfs.round = true
a1.sinks.hdfs_sink.hdfs.roundValue = 10
a1.sinks.hdfs_sink.hdfs.roundUnit = minute
# Use a channel which buffers events in memory
a1.channels.mem.type = memory
a1.channels.mem.capacity = 10000
a1.channels.mem.transactionCapacity = 1000
# Bind the source and sink to the channel
a1.sources.http.channels = mem
a1.sinks.hdfs_sink.channel = mem
Related question in my Flume/GCS journey: What is the minimal setup needed to write to HDFS/GS on Google Cloud Storage with flume?
When uploading files, the GCS Hadoop FileSystem implementation sets aside a fairly large (64MB) write buffer per FSDataOutputStream (file open for write). This can be changed by setting "fs.gs.io.buffersize.write" to a smaller value, in bytes, in core-site.xml. I imagine 1MB would suffice for low-volume log collection.
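As a rough sketch of what that looks like (the exact location of core-site.xml depends on your Hadoop/Flume setup, and 1 MB is just the figure suggested above):
<property>
  <name>fs.gs.io.buffersize.write</name>
  <value>1048576</value> <!-- 1 MB instead of the 64 MB default -->
</property>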
In addition, check what the maximum heap size is set to when launching the JVM for flume. The flume-ng script sets a default JAVA_OPTS value of -Xmx20m to limit the heap to 20MB. This can be set to a larger value in flume-env.sh (see conf/flume-env.sh.template in the flume tarball distribution for details).
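For example, in conf/flume-env.sh (the 512 MB value here is an arbitrary illustration, not a recommendation from this answer):
export JAVA_OPTS="-Xmx512m"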
I've set up a custom logger in an initializer:
# /config/initializers/logging.rb
log_file = File.open("#{Example::Application.config.root}/log/app.log", "a")
AppLogger = ActiveSupport::BufferedLogger.new(log_file)
AppLogger.level = Logger::DEBUG
AppLogger.auto_flushing = true
AppLogger.debug "App Logger Up"
Although it creates the log file when I start the application, it doesn't write to the log file, either from the AppLogger.debug "App Logger Up" call in the initializer or from similar code elsewhere in the running application.
However, I do intermittently find log statements in the file, though seemingly without any pattern. It would seem that it is buffering the log messages and dumping them when it feels like it. However, I'm setting auto_flushing to true, which should cause it to flush the buffer immediately.
What is going on and how can I get it working?
Note: Tailing the log with $ tail -f "log/app.log" also doesn't pick up changes.
I'm using POW, but I've also tried with WEBrick and get the same (lack of) results.
Found the answer in a comment on the accepted answer to this question:
I added:
log_file.sync = true
And it now works.
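For reference, the initializer from the question with that one line added would look like this (same file and settings as above):
# /config/initializers/logging.rb
log_file = File.open("#{Example::Application.config.root}/log/app.log", "a")
log_file.sync = true  # write through to the file immediately instead of letting Ruby buffer
AppLogger = ActiveSupport::BufferedLogger.new(log_file)
AppLogger.level = Logger::DEBUG
AppLogger.auto_flushing = true
AppLogger.debug "App Logger Up"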
I couldn't find a solution anywhere on how to make queue_classic write logs to a file. Scrolls, which queue_classic uses for logging, doesn't seem to have any example either.
Could someone provide a working example?
The logging within the method called by QC will be the source of the logging. For example, in Rails, any call to Rails.logger will go to the log file appropriate to your RAILS_ENV. The log data coming from Scrolls goes to stdout, so you could pipe stdout from your queues to a log file when you start them.
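For example (the rake task name matches the one used in the god.rb config below; the log path is just an illustration), you could start a worker like this:
bundle exec rake queue:work >> log/qc.stdout.log 2>&1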
You could control your queues with god.rb, giving it a configuration similar to this (I've left your configuration for number of queues, directories, etc. up to you):
number_queues.times do |queue_num|
  God.watch do |w|
    w.name = "QC-#{queue_num}"
    w.group = "QC"
    w.interval = 5.minutes
    w.start = "bundle exec rake queue:work" # This is your rake task to start QC listening
    w.gid = 'nginx'
    w.uid = 'nginx'
    w.dir = rails_root
    w.keepalive
    w.env = {"RAILS_ENV" => rails_env}
    w.log = "#{log_dir}/qc.stdout.log" # Or.... "#{log_dir}//qc-#{queue_num}.stdout.log"
    # determine the state on startup
    w.transition(:init, { true => :up, false => :start }) do |on|
      on.condition(:process_running) do |c|
        c.running = true
      end
    end
  end
end
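To load a configuration like this you'd point god at the file (the file name here is my own example):
god -c config/queue_classic.god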
FWIW, I find the STDOUT log data to be less than helpful and may end up just sending it to the bit bucket (/dev/null).
If the log data is useless, we should consider removing it.
DEBUG is very nice when I'm running it from a command line, for the times when I'm trying to figure out what I've done to bork everything up (usually bundling issues, path issues, etc.), or when I'm going to demo what is going on in a queue.
For me, INFO logging consists of the standard lib=queue_classic level=info action=insert_job elapsed=16 message and any STDOUT/STDERR from forked executions or from PostgreSQL. I've not extended any of the logging classes since scrolls goes to STDOUT and my tasks are in an environment that provides logging.
Sure, it could be removed. I think that REALLY depends on the environment and what the queue is doing. If I were doing something that did not have Rails.logger, then I would use QC.log and Scrolls more effectively and instrument my tasks in that manner.
As I play with it, I may keep my configuration as-is just because of the output coming from methods/applications called by the tasks themselves. I might decide to override QC.log code to add date/time. I am still working to determine what fits my needs.
Sorry, my last line was really focused on the environment example I had given.