I'm debugging my code and I see that my thread is blocked in the following Log4j TextEncoderHelper method. I'm using Log4j 2.8.2.
None of my threads were able to run, and it basically blocked the whole application.
Does anyone know what the code below does? If I have two threads logging, does that mean it is a deadlock?
(I'm running with these parameters:
-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -DAsyncLogger.RingBufferSize=32768*32768 -DAsyncLogger.WaitStrategy=Sleep -Dlog4j2.AsyncQueueFullPolicy=Discard)
private static void copyDataToDestination(final ByteBuffer temp, final ByteBufferDestination destination) {
    synchronized (destination) { // only one thread at a time may touch the destination's buffer
        ByteBuffer destinationBuffer = destination.getByteBuffer();
        if (destinationBuffer != temp) { // still need to write to the destination
            temp.flip();
            if (temp.remaining() > destinationBuffer.remaining()) {
                // not enough room left: ask the destination to flush (drain) first
                destinationBuffer = destination.drain(destinationBuffer);
            }
            destinationBuffer.put(temp);
            temp.clear();
        }
    }
}
If the debugger shows the application is blocked trying to write to the underlying appender, then the underlying appender probably cannot keep up with the workload.
The question doesn't mention which appender is being used, so I initially assumed the file appender, but from the comments it turns out the JmsAppender is used. (Please mention details like this in future questions: without this information I was thinking in the wrong direction.)
JMS is a big subject in itself, but it is generally not known for being highly performant. The actual throughput that can be achieved depends on the JMS implementation and its configuration.
I suggest enabling your JMS provider's debug options to confirm that the JMS queue is indeed the bottleneck.
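Another quick sanity check is to push comparable messages straight at the queue with the plain JMS API and time it, bypassing Log4j entirely. A rough sketch; the JNDI names, message size, and count are placeholders for whatever your provider and workload actually use:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.naming.InitialContext;

public class JmsThroughputCheck {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext(); // assumes a jndi.properties on the classpath
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory"); // placeholder JNDI name
        Destination destination = (Destination) jndi.lookup("logQueue");                  // placeholder JNDI name

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(destination);

            String payload = new String(new char[500]).replace('\0', 'x'); // ~500-char dummy "log line"
            int count = 100_000;
            long start = System.nanoTime();
            for (int i = 0; i < count; i++) {
                producer.send(session.createTextMessage(payload));
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(count + " messages in " + elapsedMs + " ms");
        } finally {
            connection.close();
        }
    }
}

If this standalone producer cannot sustain the rate at which your application logs, the ring buffer will eventually fill up regardless of the Log4j settings.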
Gradle does not work correctly in a Docker environment: it is destined to use too much memory and eventually be killed by the container for exceeding its memory limit.
The memory manager takes its snapshots using the following class:
https://github.com/gradle/gradle/blob/master/subprojects/process-services/src/main/java/org/gradle/process/internal/health/memory/MemInfoOsMemoryInfo.java
In particular, Gradle determines how much free memory is left by reading /proc/meminfo, which gives an inaccurate reading inside a container.
Gradle only kills off Worker Daemons when a request comes in to create a new Worker Daemon with a larger minimum heap size than is available according to this reading.
Thus Gradle keeps making workers until it uses up the amount allotted to the container and gets killed.
Does anyone have a workaround for this? I don't really understand how this hasn't been a problem for more people. I suppose it only really becomes an issue if your worker daemons can't be reused and so new ones get created, which is the case for me as I have a large number of modules.
I have a temporary workaround wherein I give every spawned JVM a huge -Xms, so the requested minimum heap size always exceeds the reported available memory and prior worker daemons are always removed, but this is not satisfactory.
-- edit
To preempt some things: --max-workers does not affect the number of Worker Daemons allowed to exist, it merely affects the number that are allowed to be active. Even with --max-workers = 1, arbitrarily many idle Worker Daemons are allowed.
Edit: ignore the below. It somewhat works, but I have since patched Gradle by overriding the MemInfoOsMemoryInfo class, and that works a lot better. I will provide a link to the merge request against Gradle soon.
Found a reasonable workaround: we listen for the OS memory updates, and every time a task is done we request more memory than is reported to be free, ensuring a worker daemon is stopped.
import org.gradle.process.internal.health.memory.OsMemoryStatus
import org.gradle.process.internal.health.memory.OsMemoryStatusListener
import org.gradle.process.internal.health.memory.MemoryManager

task expireWorkers {
    doFirst {
        long freeMemory = 0
        def memoryManager = services.get(MemoryManager.class)

        // After every task, ask for more memory than is reported free,
        // which forces Gradle to expire an idle worker daemon.
        gradle.addListener(new TaskExecutionListener() {
            void beforeExecute(Task task) {
            }

            void afterExecute(Task task, TaskState state) {
                println "Freeing up memory"
                memoryManager.requestFreeMemory(freeMemory * 2)
            }
        })

        // Track the most recent free-memory reading from the memory manager.
        memoryManager.addListener(new OsMemoryStatusListener() {
            void onOsMemoryStatus(OsMemoryStatus osMemoryStatus) {
                freeMemory = osMemoryStatus.freePhysicalMemory
            }
        })
    }
}
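For reference, the patched approach mentioned in the edit above comes down to asking the cgroup for the container's memory limit and usage instead of trusting /proc/meminfo. A standalone sketch of that reading follows (cgroup v1 paths; cgroup v2 uses /sys/fs/cgroup/memory.max and memory.current instead, and Gradle's internal interfaces are not shown here):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CgroupMemory {
    // cgroup v1 locations inside a typical Docker container
    private static final Path LIMIT = Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes");
    private static final Path USAGE = Paths.get("/sys/fs/cgroup/memory/memory.usage_in_bytes");

    private static long readBytes(Path path) throws IOException {
        return Long.parseLong(Files.readAllLines(path).get(0).trim());
    }

    public static void main(String[] args) throws IOException {
        long limit = readBytes(LIMIT); // an effectively unlimited container reports a very large number here
        long usage = readBytes(USAGE);
        System.out.printf("limit=%d usage=%d free=%d%n", limit, usage, limit - usage);
    }
}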
I am running helgrind to check for data races in my program. Helgrind reports 222 errors, all of them are:
Thread #21: Bug in libpthread: pthread_cond_wait succeeded without prior pthread_cond_post
I could not find anything about this error message on Google. Within the Valgrind source code, it seems to originate here:
if (!timeout && !libhb_so_everSent(cvi->so)) {
/* Hmm. How can a wait on 'cond' succeed if nobody signalled
it? If this happened it would surely be a bug in the threads
library. Or one of those fabled "spurious wakeups". */
HG_(record_error_Misc)( thr, "Bug in libpthread: pthread_cond_wait "
"succeeded"
" without prior pthread_cond_post");
}
However, I cannot believe that I got 222 spurious wake-ups in about a second.
What could be the cause of this?
There are two condition variables, both in shared memory. The error always seems to happen with one of them, not with the other.
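One thing worth checking, given that the condition variables live in shared memory: Helgrind only intercepts the pthread calls made by the process it is running, so if the matching pthread_cond_signal comes from a different process attached to the same shared memory, Helgrind has no record of any prior signal and can report exactly this message even though nothing is wrong. For reference, a minimal sketch (not the poster's code) of how a process-shared condition variable is normally set up and waited on:

#include <pthread.h>

/* A block intended to live in shared memory, used by more than one process. */
typedef struct {
    pthread_mutex_t mtx;
    pthread_cond_t  cond;
    int             ready;   /* the predicate, protected by mtx */
} shared_block;

void shared_block_init(shared_block *sb)
{
    pthread_mutexattr_t ma;
    pthread_condattr_t  ca;

    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&sb->mtx, &ma);
    pthread_mutexattr_destroy(&ma);

    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&sb->cond, &ca);
    pthread_condattr_destroy(&ca);

    sb->ready = 0;
}

void wait_until_ready(shared_block *sb)
{
    pthread_mutex_lock(&sb->mtx);
    while (!sb->ready)                        /* loop guards against spurious wakeups */
        pthread_cond_wait(&sb->cond, &sb->mtx);
    pthread_mutex_unlock(&sb->mtx);
}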
I need to repetitively send 4 bytes, 00 00 00 F6, every two seconds.
But TIdUDPClient.SendBuffer does not return after the transmission.
If I try sending a few more bytes instead, it returns every time.
This works:
UDP.SendBuffer(RawToBytes(#0 + #0 + #0 + #1 + #127 + #128 + #246, 7));
This does not work:
UDP.SendBuffer(RawToBytes(#0 + #0 + #0 + #246, 4));
I have unsuccessfully tried many of the suggestions I have found in various related StackExchange questions.
I have seen at least three scenarios:
Hanging, Wireshark sees 1 transmission.
Working repetitive transmissions, but NOT with 4 bytes of data.
Sometimes bytes > 7F are sent as 3F.
Can someone point out what I am doing wrong?
Edit: The above happens in a thread. If the TIdUDPClient is put as a visible component on a form, then it works fine.
Could this be a threading/reentrancy issue???
It would definitely help to see more of your code. Have you made sure that your thread is actually running (i.e. that Execute is being called)? Otherwise your code won't run, obviously. :)
The difference when the TIdUDPClient component sits on a visible form is that it is auto-created: its constructor
TIdUDPClient.Create(AOwner: TComponent)
is run automatically when the form is created, so the component ends up being created in the context of the main thread (the "GUI thread", if you like).
A few things I'd suggest you do in order to hunt this down:
First, make sure that the thread is actually executing.
The problem could also be that the TIdUDPClient has no owner when it is instantiated with TIdUDPClient.Create(nil) inside the thread. Note that the thread itself cannot be the owner (TThread is not a TComponent), so either manage its lifetime yourself or pass another component as the owner, for example:
TIdUDPClient.Create(Application)
Hope this helps, but as I said, posting more of your code would make it much easier to help you.
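For what it's worth, here is a minimal sketch of the pattern being suggested, with the client created and used entirely inside the thread. TSenderThread stands in for your TThread descendant, and the host, port, and timing are placeholders; building the payload as a TIdBytes array avoids any string-to-byte conversion, which may also be where your bytes above $7F turn into $3F:

// uses Classes, Windows, IdGlobal, IdUDPClient
procedure TSenderThread.Execute;
var
  UDP: TIdUDPClient;
  Buf: TIdBytes;
begin
  UDP := TIdUDPClient.Create(nil);   // no owner: freed manually below
  try
    UDP.Host := '192.168.1.10';      // placeholder
    UDP.Port := 5000;                // placeholder
    SetLength(Buf, 4);
    Buf[0] := $00; Buf[1] := $00; Buf[2] := $00; Buf[3] := $F6;
    while not Terminated do
    begin
      UDP.SendBuffer(Buf);           // uses the Host/Port set above
      Sleep(2000);
    end;
  finally
    UDP.Free;
  end;
end;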
In a production app with the debug information stripped out, how do you convert the output of:
NSLog(@"Stack Trace: %@", [exception callStackSymbols]);
to a legible class and method name? A line number would be a blessing.
Here's the output I'm getting:
0 CoreFoundation 0x23d82f23 <redacted> + 154
1 libobjc.A.dylib 0x23519ce7 objc_exception_throw + 38
2 CoreFoundation 0x23cb92f1 <redacted> + 176
3 MyApp 0x23234815 MyApp + 440341
The final line is the bread-and-butter line, but when I use dwarfdump to look up the address, nothing is found:
dwarfdump --arch armv7 MyApp.dSYM --lookup 0x00234815 | grep 'Line table'
I've read here that you need to convert the stack address to something else for dwarf or atos:
https://stackoverflow.com/a/12464678/2317728
How would I find the load address or slide address to perform the calculation? Is there not a way to calculate all this prior to sending the stacktrace to the log from within the app? If not, how would I determine and calculate these values after receiving the stack trace? Better yet, is there an easier solution I'm missing?
Note I cannot just wait for crash reports since the app is small and they will never come. I'm planning on sending the stack traces to our server to be fixed as soon as they appear.
EDITORIAL
The crash-reporting tools in iOS are very rough, particularly when compared to Android. On Android, the buggy lines are sent to Google Analytics and you use the mapping file to find the offending line; comparatively simple. For iOS you are confronted with: a) waiting on user bug reports (not reasonable for a small app), b) sending stack traces to a server, where there are scant tools or information on how to symbolicate them, or c) relying on large quasi-commercial third-party libraries. This definitely makes it harder to build and scale up; hoping Apple will eventually take notice. Even more hopeful someone has spotted an easier solution I might have missed ;)
Thanks for your help!
A suggestion: you can easily get the method name, exception reason, and line number using:
NSLog(@"%@ Exception in %s on %d due to %@", [exception name], __PRETTY_FUNCTION__, __LINE__, [exception reason]);
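As for symbolicating a frame like 3 MyApp 0x23234815 MyApp + 440341 after the fact: the number after the binary name is the offset from the image's load address, so the load address for that run was 0x23234815 - 440341 = 0x231C9000. With the matching dSYM, atos can then resolve the runtime address directly; something along these lines, where the dSYM path is a placeholder for wherever yours lives:

atos -arch armv7 -o MyApp.dSYM/Contents/Resources/DWARF/MyApp -l 0x231C9000 0x23234815

The -l option supplies the load address, so atos removes the ASLR slide itself; recording the raw frame (address plus offset) on your server is enough to repeat this calculation there.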
I am new to OpenCV and OpenCL.
OpenCV's ocl module provides wrappers for calling OpenCL utility functions, so I do not need to do much: ocl::Context::getContext() gets me the context and I can pass it to all OpenCL-related execution. I do not need a command queue for that. But I want to measure the performance of my kernels using OpenCL's profiling events, and for that I need to create a custom command queue. How can I create a command queue with the same context that I used for executing the kernels, given that I created this context with OpenCV's ocl::Context::getContext()?
I do not want to create the command queue from scratch (obtaining the platform ID, device ID, and context one by one); that would mean changing code in a lot of places. I want to reuse OpenCV's context to create a command queue with profiling (event) capability.
You are in a tricky situation, since the OpenCV code does not expose an interface for the underlying OpenCL queue options:
void CommandQueue::create(ContextImpl* context)
{
    release();
    cl_int status = 0;
    // TODO add CL_QUEUE_PROFILING_ENABLE
    cl_command_queue clCmdQueue = clCreateCommandQueue(context->clContext, context->clDeviceID, 0, &status);
    openCLVerifyCall(status);
    context_ = context;
    clQueue_ = clCmdQueue;
}
I think you should either release and re-create the internal queue along these lines:
cl_int status = 0;
ocl::Context* ctx = ocl::Context::getContext();
// create a new queue on the same context and device, this time with profiling enabled
cl_command_queue Queue = clCreateCommandQueue((cl_context)ctx->getOpenCLContextPtr(),
                                              (cl_device_id)ctx->getOpenCLDeviceIDPtr(),
                                              CL_QUEUE_PROFILING_ENABLE, &status);
ocl::CommandQueue::Release();        // release the old queue
ocl::CommandQueue::clQueue_ = Queue; // overwrite it internally with the new one
or do everything yourself (creating all the devices and queues manually).
But beware: this is unsafe (and untested). However, the documentation says that those classes have public attributes, so they can be written to from the outside.
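Once you do have a queue created with CL_QUEUE_PROFILING_ENABLE (however you obtain it), reading the timings is plain OpenCL: pass an event to the enqueue call and query it afterwards. A small sketch, where queue, kernel, and the work sizes stand for whatever you already use elsewhere:

#include <CL/cl.h>

/* Returns the device-side execution time of one kernel launch, in milliseconds.
   The queue must have been created with CL_QUEUE_PROFILING_ENABLE. */
double time_kernel_ms(cl_command_queue queue, cl_kernel kernel,
                      const size_t global[2], const size_t local[2])
{
    cl_event evt;
    cl_ulong t_start = 0, t_end = 0;

    clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, local, 0, NULL, &evt);
    clWaitForEvents(1, &evt);

    /* timestamps are reported in nanoseconds */
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_START, sizeof(t_start), &t_start, NULL);
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_END,   sizeof(t_end),   &t_end,   NULL);
    clReleaseEvent(evt);

    return (t_end - t_start) * 1e-6;
}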