Daphne not logging errors - daphne

I have the following systemd config
[Service]
WorkingDirectory=/srv/www/project_a/project_a/
Environment=JSON_SETTINGS=/srv/www/project_a/project_a.json
ExecStart=/srv/www/project_a/bin/daphne -b 0.0.0.0 -p 8000 project_a.asgi:channel_layer
Restart=always
KillSignal=SIGTERM
NotifyAccess=all
StandardOut=file:/tmp/daphne-access.log
StandardError=file:/tmp/daphne-error.log
But the daphne-access.log and daphne-error.log files are both empty. I thought Daphne's default logging would output to stdout?
Tried: --access-log=/tmp/daphne-access.log, which worked for the access logs, but I don't know where to find Django errors when they happen.

Just add the following to your LOGGING['loggers'] settings:
'daphne': {
    'handlers': [
        'console',
    ],
    'level': 'DEBUG',
},

For Django errors you need to configure Django's own loggers; please refer to the documentation for more information:
https://docs.djangoproject.com/en/2.2/topics/logging/
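For reference, here is a minimal sketch of what the combined LOGGING setting could look like, assuming you want everything sent to the console so that systemd (or your file redirection) picks it up; the handler and formatter names are only illustrative:

# settings.py -- illustrative sketch only
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '%(asctime)s %(levelname)s %(name)s %(message)s',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',  # writes to stderr by default
            'formatter': 'verbose',
        },
    },
    'loggers': {
        'daphne': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
        # unhandled view exceptions are logged under 'django' / 'django.request'
        'django': {
            'handlers': ['console'],
            'level': 'INFO',
        },
    },
}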

Related

How to use ruby-debug-ide with unicorn_rails?

I'd like to use VScode as an integrated debugger with Ruby on Rails. There seem to be pretty good guides on how to do this, whether launching the process in VScode or attaching to a running debug server.
However, I cannot find good guides on how to do this when running unicorn. Consider this typical command to start the debug server:
rdebug-ide --host 0.0.0.0 --port 1234 --dispatcher-port 1234 -- ./bin/rails s
It's expecting bin/rails s to start the rails server. This is the command we currently use to start unicorn:
bundle exec unicorn_rails -E "develop_against_staging" -p 3010 -c "${PWD}/config/unicorn.rb"
Is there a way to start unicorn from within rails? Or is there another way to tell rdebug-ide what to do? I can't even find good documentation for rdebug-ide. I'll keep fiddling and answer this myself if I figure something out.
First, you need to install the VS Code Ruby extension.
Add the following gems to your Gemfile:
gem 'debase'
gem 'ruby-debug-base', :platforms => [:jruby, :ruby_18, :mingw_18]
gem 'ruby-debug-base19x', '>= 0.11.30.pre4', :platforms => [:ruby_19, :mingw_19]
gem 'ruby-debug-ide', '~> 0.6.1'
Then you need to let rdebug-ide know you are using unicorn (a multi-process app) by providing the --dispatcher-port option. Take a look at the rdebug-ide executable to see all the available options.
--dispatcher-port: the same port you use to run unicorn; in your case, 3010.
So it should look like this:
bundle exec rdebug-ide --debug --port 1234 --dispatcher-port 3010 -- vendor/bundle/ruby/2.6.0/bin/unicorn -E "develop_against_staging" -p 3010 -c "${PWD}/config/unicorn.rb"
Running the above command alone won't start debugging; in fact, your Unicorn server won't have started yet. Looking at the terminal output after running the command, you will notice a message something like this:
Fast Debugger (ruby-debug-ide 0.6.1, debase 0.2.4.1, file filtering is supported) listens on 127.0.0.1:1234
This tells us rdebug-ide is ready to accept a connection on port 1234. Create a launch.json file if one does not already exist and add this configuration:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "1234 Listen for rdebug-ide",
      "type": "Ruby",
      "request": "attach",
      "remoteHost": "127.0.0.1",
      "remotePort": "1234",
      "remoteWorkspaceRoot": "${workspaceRoot}",
      "cwd": "${workspaceRoot}"
    }
  ]
}
Once you have added the entry, select it and click the Play button to start debugging.
Now that your unicorn server has started, you still won't be able to access your application, because the worker processes haven't started yet.
Keep watching the logs carefully and you will notice:
122: Ide process dispatcher notified about sub-debugger which listens on 34865
This tells us a new sub-debugger process has started on port 34865. The port is randomly generated (see find_free_port). Note: there will be one port per unicorn worker.
Once you see that log line, add another entry to your launch.json file with the newly generated port, like this:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "1234 Listen for rdebug-ide",
      "type": "Ruby",
      "request": "attach",
      "remoteHost": "127.0.0.1",
      "remotePort": "1234",
      "remoteWorkspaceRoot": "${workspaceRoot}",
      "cwd": "${workspaceRoot}"
    },
    {
      "name": "34865 Listen for sub-rdebug-ide",
      "type": "Ruby",
      "request": "attach",
      "remoteHost": "127.0.0.1",
      "remotePort": "34865",
      "remoteWorkspaceRoot": "${workspaceRoot}",
      "cwd": "${workspaceRoot}"
    }
  ]
}
Once added, select the new configuration and click the Play button. If you set the number of workers to one in your unicorn config file, you should see a log line something like this:
I, [2022-07-13T19:44:26.914412 #122] INFO -- : worker=0 ready
Now set a breakpoint and start using your application; it will break once it reaches that code path.
If you have successfully set everything up and got to this point, there are still some gotchas you may need to deal with:
Worker timing out
Re-linking to sub-debugger with different random port.
...
This complexity comes from unicorn's master-worker design.
I'm answering this in a bit of a rush, so please let me know if you have any questions. I apologise if I've made this more confusing for you.

How to capture logs from workers from a Dask-Yarn job?

I have tried using the following in ~/.config/dask/distributed.yaml and ~/.config/dask/yarn.yaml,
logging-file-config: "/path/to/config.ini"
or
logging:
  version: 1
  disable_existing_loggers: false
  root:
    level: INFO
    handlers: [consoleHandler]
  handlers:
    consoleHandler:
      class: logging.StreamHandler
      level: INFO
      formatter: sample_formatter
      stream: ext://sys.stderr
  formatters:
    sample_formatter:
      format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
and then in my function that gets evaluated at the worker:
import logging
from distributed.worker import logger
import dask
from dask.distributed import Client
from dask_yarn import YarnCluster

log = logging.getLogger(__name__)

#dask.delayed
def worker_func(args):
    logger.info("This will show up in the worker logs")
    log.info("This does not show up in worker logs")
    return

if __name__ == "__main__":
    dag_1 = {'worker_func': (worker_func, arg_1)}
    tasks = dask.get(dag_1, 'load-1')
    log.info("This also shows up in logs, and custom formatted")
    cluster = YarnCluster()
    client = Client(cluster)
    dask.compute(tasks)
When I try to view the yarn logs using:
yarn logs -applicationId {application_id}
I do not see the log from log.info inside worker_func, but I do see the logs from distributed.worker.logger and from outside that function on the console. I also tried using client.get_worker_logs(), but that returned an empty dictionary. Is there a way to see customized logs from inside the function that gets evaluated at a worker?
There's a lot going on in this question, so I'm going to answer "How do I configure logging for dask-yarn workers" and hope everything else becomes clear by answering that.
Dask's configuration system is loaded locally on the machine you start a dask cluster from (usually the edge node). This configuration is not distributed to the workers automatically; you're responsible for doing that yourself. You have a few options here:
Have admin/IT put configuration in /etc/dask/ on every node, which will affect all users.
Bundle configuration with your packaged environment. Dask will load configuration from {prefix}/etc/dask/, where prefix is sys.prefix.
For example, if you have a conda environment at /path/to/environment, you'd do the following to bundle the configuration
# Create the configuration directory in the environment
mkdir -p /path/to/environment/etc/dask/
# Add your configuration to this directory
mv config.yaml /path/to/environment/etc/dask/config.yaml
# Package the environment
conda pack -p /path/to/environment -o environment.tar.gz
Any configuration values set in config.yaml will now be available on all the worker nodes. An example configuration file setting some logging configuration would be:
logging:
  version: 1
  root:
    level: INFO
    handlers: [consoleHandler]
  handlers:
    consoleHandler:
      class: logging.StreamHandler
      level: INFO
      formatter: sample_formatter
      stream: ext://sys.stderr
  formatters:
    sample_formatter:
      format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
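If you want to confirm that the workers actually picked up the bundled configuration, a quick check (a sketch, assuming the packaged environment.tar.gz built above is what the cluster uses) is to ask every worker what its dask config contains:

from dask.distributed import Client
from dask_yarn import YarnCluster
import dask

# Assumes the cluster uses the packaged environment built above
cluster = YarnCluster(environment="environment.tar.gz")
client = Client(cluster)

def show_logging_config():
    # Runs on each worker and returns the 'logging' section it loaded, if any
    return dask.config.get("logging", default={})

# Maps each worker address to the logging config it sees
print(client.run(show_logging_config))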
Logs from completed dask-yarn applications can be retrieved using the YARN cli at
yarn logs -applicationId <application-id>
Logs for running dask-yarn applications can be retrieved using client.get_worker_logs(). Note that these logs will only contain logs written to the distributed.worker logger. You cannot write to your own logger and have them appear in the output of client.get_worker_logs(). To write to this logger, get it via
import logging
logger = logging.getLogger("distributed.worker")
logger.info("Writing with the worker logger")
Any logger appropriately configured to log to stdout or stderr will appear in the logs accessed via the yarn CLI, but only the distributed.worker logger output will also be available to get_worker_logs().
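So if you want a custom named logger inside your task to show up in the YARN-aggregated logs (rather than in get_worker_logs()), it needs a handler that writes to stdout or stderr on the worker, either via the bundled logging configuration above or set up in code. A rough sketch of the in-code variant, with an illustrative logger name:

import logging
import sys

def worker_func(args):
    # Configure the logger inside the worker process; guard against adding
    # duplicate handlers if the task runs more than once on the same worker.
    log = logging.getLogger("my_app.worker")
    if not log.handlers:
        handler = logging.StreamHandler(stream=sys.stderr)
        handler.setFormatter(logging.Formatter(
            "%(asctime)s - %(name)s - %(levelname)s - %(message)s"))
        log.addHandler(handler)
        log.setLevel(logging.INFO)
    log.info("This should appear in the YARN container logs")
    return args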
Side note
I have tried using the following in ~/.config/dask/distributed.yaml and ~/.config/dask/yarn.yaml
The name of the config files doesn't matter; dask loads all YAML files in all config directories and merges their contents. For more information, please read the configuration docs.

how to configure jmx exporter in tomcat for prometheus

I am trying to configure the JMX exporter to monitor my Java metrics, but I am facing the issue described below.
My current process:
I set the parameters below in my catalina.sh file:
Prometheus_JMX_OPTS="-javaagent:/home/centos/jmx_prometheus_javaagent-0.11.0.jar=7777:/home/centos/config.yml"
JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=3000 -Dcom.sun.management.jmxremote.rmi.port=3000 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
JAVA_OPTS="-Xms${JVM_MINIMUM_MEMORY} -Xmx${JVM_MAXIMUM_MEMORY} ${JAVA_OPTS} ${OPC_JVM_ARGS} ${JVM_REQUIRED_ARGS} ${DISABLE_NOTIFICATIONS} ${JVM_SUPPORT_RECOMMENDED_ARGS} ${JVM_EXTRA_ARGS} ${JIRA_HOME_MINUSD} ${JMX_OPTS} ${Prometheus_JMX_OPTS}"
I downloaded the jmx_prometheus_javaagent-0.11.0.jar file to /home/centos.
Created a config file with the content below:
startDelaySeconds: 0
ssl: false
lowercaseOutputName: false
lowercaseOutputLabelNames: false
Opened port 7777 in the security groups.
Now when I try to access http://localhost:7777/metrics, it shows as unreachable.
Can anyone help me with this? I am stuck here. ☺
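Since the symptom is that http://localhost:7777/metrics is unreachable, one way to narrow this down is to check whether the java agent is actually listening on the Tomcat host itself, which separates a catalina.sh/javaagent problem from a firewall or security-group problem. A minimal Python sketch of that check, run on the Tomcat host and assuming port 7777 as configured above:

import socket
import urllib.request

HOST, PORT = "127.0.0.1", 7777  # matches the javaagent port configured above

# 1) Is anything listening on the port at all?
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(2)
    listening = sock.connect_ex((HOST, PORT)) == 0
print("port open:", listening)

# 2) If so, does the exporter actually answer with metrics?
if listening:
    with urllib.request.urlopen(f"http://{HOST}:{PORT}/metrics", timeout=5) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    print("\n".join(body.splitlines()[:5]))  # first few metric lines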

WARNING: no logs are available with the 'none' log driver

I am following the URL below for the logging driver:
https://docs.docker.com/engine/admin/logging/overview/#configure-the-default-logging-driver
Now I want to remove this logging driver.
I have removed the file (daemon.json) from the /etc/docker folder too.
But when I build the container, the system still shows me the warning:
WARNING: no logs are available with the 'none' log driver
How can I get rid of this warning?
Finally solved:
1) Delete the daemon.json file from the /etc/docker folder.
2) Restart the docker service.
My case: in your docker-compose.yml file it is very possible that you have:
logging:
  driver: none
To get rid of the warning WARNING: no logs are available with the 'none' log driver, comment out or remove those two lines.
By default /etc/docker/daemon.json does not exist, and the default driver is json-file. To verify the current driver, use:
docker info | grep Logging
Logging Driver: fluentd
I use fluentd (td-agent), so my /etc/docker/daemon.json is:
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "127.0.0.1:24224"
  }
}
For details about logging drivers, see:
https://docs.docker.com/config/containers/logging/

puppet tomcat6 service does not receive environment variables

I am using Debian OS and tomcat6.
I export the CATALINA_OPTS="-Xms1024m -Xmx2048m" environment variable and create a puppet service:
class tomcat6::service {
  service { 'tomcat6':
    ensure     => running,
    hasstatus  => true,
    hasrestart => true,
    enable     => true,
  }
}
As /usr/share/tomcat6/bin/catalina.sh reads the CATALINA_OPTS variable when starting the tomcat6 service, the process should receive CATALINA_OPTS, but it does not show up in the process command line. I executed ps aux | grep catalina to show the command details:
tomcat6 10658 1.0 2.0 2050044 189572 ? Sl 18:04 0:16 /usr/lib/jvm/default-java/bin/java -Djava.util.logging.config.file=/var/lib/tomcat6/conf/logging.properties -Djava.awt.headless=true -Xmx128m -XX:+UseConcMarkSweepGC -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/usr/share/tomcat6/endorsed -classpath /usr/share/tomcat6/bin/bootstrap.jar -Dcatalina.base=/var/lib/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.io.tmpdir=/tmp/tomcat6-tomcat6-tmp org.apache.catalina.startup.Bootstrap start
The service started by puppet does not receive CATALINA_OPTS properly.
My question is: how can I get the tomcat6 service started by puppet to pick up CATALINA_OPTS?
Thank you.
instead of
hasstatus => true,
put
hasstatus => false,
By doing this, you force puppet to look at the process table to find the daemon; in other words, it makes puppet run ps auxw | grep tomcat6 before doing anything else.
hasstatus => true means that if puppet receives a status != running it will act as directed, but some daemons don't report their status correctly (probably due to multiple threading being involved).
I fixed the issue by setting CATALINA_OPTS in setenv.sh for tomcat6. It works properly.
