I have a problem calculating OR confidence intervals from a glm in the latest version of R, which I have not had before. With any glm where family="binomial", no matter how simple the model is, it will happily let me extract the summary and exp(coef(model)). However, when I try to extract confint() or exp(confint(model)), the "Waiting for profiling to be done..." message is displayed and nothing happens (I've waited up to 10 minutes and then cancelled; this usually takes only seconds on my machine). Any ideas what might be tripping this function up? I've tried it on multiple datasets and variables, with the same result. Any ideas why it is taking so long or failing to finish?
Well, for some unknown reason, specifying exp(confint.default(model)) resolves the problem instantly. (For a glm, confint() computes profile-likelihood intervals, which refit the model repeatedly and can take a long time or stall; confint.default() computes Wald intervals directly from the estimated standard errors, with no iteration.)
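For reference, the Wald interval that confint.default returns is straightforward to compute by hand. A short Python sketch, with made-up coefficient and standard-error values (not from a real fit):

```python
import math

# Hypothetical logistic-regression output (invented for illustration):
beta, se = 0.8, 0.25          # log-odds coefficient and its standard error
z = 1.959963984540054         # 97.5% quantile of the standard normal

lo, hi = beta - z * se, beta + z * se   # Wald CI on the log-odds scale
or_ci = (math.exp(lo), math.exp(hi))    # odds-ratio CI, like exp(confint.default(model))
```

Because this is a closed-form expression rather than an iterative profiling procedure, it returns immediately regardless of how awkward the likelihood surface is.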
I have had some issues with the visualization of the progress on my tasks.
I used to use the status page on port 8787, but the visualizations don't always seem as smooth (refresh rates) as what I see in blog posts.
I was quite happy using dask.diagnostics.progress, since it ran smoothly and had a timer.
But this time, when trying to persist a dataframe, my progress got blocked: it stayed at 2.4 seconds and never advanced. Updating it doesn't fix the problem, and I am not getting much info from the 8787 page either. Any ideas why this happens?
The progress bar will stop if the computation has failed. You might want to check that users1 is producing a good result. A simple way to do this is to trigger a cheap computation on the full dataframe, such as len:
>>> len(users1)
Exception ...
Other reasons for a non-responsive progress bar include a poor internet connection or a very saturated scheduler (if there are millions of tasks), but neither of these seems to be the case in your situation.
Say I have sensor data from a device that samples every 100 ms. A switch is automatically turned on based on some set of features of the device, and later turned off again based on the values of those same features.
I have built a model that looks only at the sensor data in the temporal vicinity of the turn-on trigger point (labelled 1), and likewise around the turn-off point (labelled 0).
But the classifier performs very poorly: it detects the turn-on point at random, never detects turn-off, and sometimes misses an on/off cycle entirely.
Any suggestions on how to attack these kinds of problems?
There could be a number of reasons for the failure. I do not know what hardware or software is being employed for the problem, but here are a few possibilities that come to mind:
Timing - Perhaps there is some delay in your system
Algorithm - Perhaps there is something that is incorrect in your code that is causing random behavior in the system.
Wiring - Perhaps there is something that is sending the wrong signals.
The best way to attack these problems is to try and break the system down and test each individual component. This way, you may be able to diagnose the fault and resolve the issue.
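On the data side specifically, it helps to verify the windowing step in isolation. Here is a minimal sketch (assuming NumPy, with made-up event indices) of extracting the fixed-width windows around each trigger point, so you can inspect exactly what the classifier sees:

```python
import numpy as np

def extract_windows(signal, event_indices, half_width):
    """Collect fixed-width windows of samples centred on each event index.

    signal: 1-D array of sensor readings (one sample every 100 ms).
    event_indices: sample indices where the switch toggled.
    half_width: number of samples to keep on each side of the event.
    """
    windows = []
    for i in event_indices:
        lo, hi = i - half_width, i + half_width + 1
        if lo >= 0 and hi <= len(signal):   # skip events too close to the edges
            windows.append(signal[lo:hi])
    return np.array(windows)

# Hypothetical data: 100 samples, with trigger events at samples 30 and 70
signal = np.arange(100.0)
X = extract_windows(signal, [30, 70], half_width=5)
```

Plotting a few of these windows per class (and checking how many of each class you actually have) often reveals problems like class imbalance or misaligned timestamps before any modelling is attempted.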
Good luck!
I'm coding an emulator in XNA. The emulator's accuracy depends greatly on the real fps; it must be exactly 60 frames per second. Microsoft claims that with a fixed timestep the Update function is called exactly 60 times per second, yet in my emulator (and in a blank XNA 4.0 project) I've measured these values so far on different processors:
- AMD Athlon X2 6000+ (the unsynced ones that need an extra optimization update from AMD to run correctly): 60-61 tps
- Intel i5 (2.8 GHz, I think): 55 tps
- An old Lenovo laptop with a slow Centrino: 59-60 tps (the desired value)
- An old Intel P4 with Hyper-Threading: exactly 60 tps
- A Dell machine (I don't remember the CPU): 52-53 tps
So what should I do? Timer accuracy on separate threads is horrible too. Besides, the Update function will also be called in parallel (using almost 100% of the CPU).
Make a different profile for each CPU? (Assume that the tps [times per second] counting is correct.)
The Update method calls are not 100% fixed. On a PC the runtime will try to make them 60 times per second, but that depends on the machine and on the logic of your game.
You can check if the system is working slower than it should by checking the GameTime.IsRunningSlowly property on your game (link here).
Also, you can modify the number of ticks per second; for example, a phone project starts configured at 33 ticks per second, against the 60 of a normal Windows game project.
It would be wise to check Shawn Hargreaves' blog post about GameTime; it won't fix your problem, but it will give you a better understanding of how the Update method works, and maybe an idea of how to fix it.
I quote part of his post here:
By default XNA runs in fixed timestep mode, with a TargetElapsedTime of 60 frames per second. This provides a simple guarantee:
• We will call your Update method exactly 60 times per second
• We will call your Draw method whenever we feel like it
Digging into exactly what that means, you will realize there are
several possible scenarios depending on how long your Update and Draw
methods take to execute.
The simplest situation is that the total time you spend in Update +
Draw is exactly 1/60 of a second. In this case we will call Update,
then call Draw, then look at the clock and notice it is time for
another Update, then Draw, and so on. Simple!
What if your Update + Draw takes less than 1/60 of a second? Also
simple. Here we call Update, then call Draw, then look at the clock,
notice we have some time left over, so wait around twiddling our
thumbs until it is time to call Update again.
What if Update + Draw takes longer than 1/60 of a second? This is
where things get complicated. There are many reasons why this could
happen:
1. The computer might be slightly too slow to run the game at the desired speed.
2. Or the computer might be way too slow to run the game at the desired speed!
3. The computer might be basically fast enough, but this particular frame might have taken an unusually long time for some reason. Perhaps there were too many explosions on screen, or the game had to load a new texture, or there was a garbage collection.
4. You could have paused the program in the debugger.
We do the same thing in response to all four causes of slowness:
• Set GameTime.IsRunningSlowly to true.
• Call Update extra times (without calling Draw) until we catch up.
• If things are getting ridiculous and we are too far behind, we just give up.
If this doesn't help you, then posting some code showing what you think may be making the process slower would help.
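The catch-up behaviour described in the quoted post can be modelled in a few lines. This is a hedged sketch in Python: TARGET matches XNA's default timestep, but MAX_CATCH_UP is an invented give-up threshold, not XNA's actual limit.

```python
TARGET = 1.0 / 60.0   # XNA's default TargetElapsedTime
MAX_CATCH_UP = 8      # invented give-up threshold (XNA's real limit differs)

def run_frame(accumulated, update, draw):
    """One iteration of a fixed-timestep loop with catch-up.

    accumulated: seconds of not-yet-simulated time.
    Returns (leftover_time, is_running_slowly).
    """
    pending = int(accumulated // TARGET)
    running_slowly = pending > 1         # behind by more than one step
    if pending > MAX_CATCH_UP:           # hopelessly behind: give up,
        pending = MAX_CATCH_UP           # dropping the extra backlog
        accumulated = pending * TARGET
    for _ in range(pending):             # call Update extra times, without Draw
        update(TARGET)
    draw()
    return accumulated - pending * TARGET, running_slowly
```

A frame that arrives 3.5 steps late triggers three Update calls before the next Draw; a frame hundreds of steps late is capped rather than replayed in full. This is why measured tps can deviate from 60 on a loaded machine.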
I have a PIC32MX340F512 board developed for us by another company. The board has a DS1338 RTCC, a 24LC32A EEPROM, and a display unit on an I2C bus; on this bus I included a TSL2561 I2C light sensor. I wrote C code to poll the light sensor continuously; when the reading reaches a certain level, I save the time, date, and light-sensor value to an SD card.

This all works fine, but if I leave the system without exposure to light (inside the tunnel where incident light at one end is to be monitored), it becomes unresponsive no matter how much light you apply; if I then switch power off and back on, everything works normally again. I am a one-man development team and have been trying to find the problem for months. I activated the watchdog timer to prevent the system from hanging, but the problem persisted. I then decided to find out whether the problem was the sensor by including a push button to trigger a light measurement, but still, once 4-5 hours elapse, the PIC can't even detect a change on the input pin. Under the impression that a hardware reset overrides anything going on, I included a reset button; it also works fine for the first few hours, after which the PIC doesn't seem to respond to anything, including a reset.

I was getting convinced that there is nothing wrong with the firmware. Meanwhile, with all this happening, the display unit (a PIC16F1933 and LCD) on the same I2C bus shares power with the main unit and doesn't seem to be affected, as it constantly alternates between its messages. Does anybody have an idea what could be wrong (hardware, firmware, or my sensor)? I am using a separately purchased 24 V DC power supply. The PIC seems to go into a deep sleep, although I did not implement any kind of SLEEP mode in my code.

NB: we use the same board for many other projects, and I haven't come across such a problem before. Thanks in advance.
I think you need to (if you haven't already) explore the wonderful world of in-circuit debugging (such as with the ICD3 or PICkit 2/3). It allows you to run the processor in a special mode that lets you pause execution, see exactly which line of code is being executed, inspect variable values, and step through the code to see which parts are running and not running, or see exactly where execution takes a wrong turn.

If the problem takes hours to reproduce, that's okay. You can just leave it running overnight in debug mode, and hopefully it will be locked up or 'sleeping' in the morning. At that point you will be able to pause the processor and poke around to see if you got caught in some kind of infinite loop or something.

This is often the only way to dig inside a running piece of code to see why things aren't working as you expect. But as you say, those bugs that take hours or days to manifest are the trickiest. Good luck!
It sounds like you can break your design into three main parts: SD card interfacing, reading the RTC, and reading the light sensor. If it were me, I would upload a version of the code that mimics reading the light sensor but only returns fake data, and see if that cures the problem. Then do the same with the other two modules separately, and see whether any of the three versions of your project stops showing the problem. From there, keep narrowing it down until you find the block of code that's causing problems.
If two or more versions of your debug code show the same problem, then my guess is it has to do with one of the communication protocols. I had a problem with a PIC32 silicon revision that blocked when using DMA in conjunction with the SPI peripherals, so I would suggest checking the errata for your chip.
If you still can't find the problem, my only other suggestion would be to check for memory leaks or arrays that are growing into reserved memory.
Hope that helps, good luck!
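The fake-data approach above can be sketched schematically (Python for illustration only; the names are invented and the real firmware is C on the PIC32):

```python
USE_FAKE_SENSOR = True  # flip per test build to isolate one module at a time

def read_light_sensor_hw():
    # Placeholder for the real TSL2561 I2C read in the firmware
    raise NotImplementedError("real I2C read lives in the C firmware")

def read_light_sensor():
    """Return a plausible constant instead of touching the I2C bus."""
    if USE_FAKE_SENSOR:
        return 123  # fake but valid lux-ish reading
    return read_light_sensor_hw()
```

If the lock-up disappears with the fake sensor in place, suspicion shifts to the sensor or the I2C bus; if it persists, move the same fake-data treatment to the SD card and RTC modules in turn.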
I am building an iOS game, and I notice that the game performs fine while running normally in the Xcode debugger. However, when I run it from within Instruments (Product -> Profile, to trace Leaks), the game freezes when Instruments displays 'Analyzing Process' in the left sidebar. After that the game's state is corrupted, since the parts of the game that were being analyzed froze up while other parts kept going.
Is this something I can/need to fix or is it sufficient to make sure the game runs in release?
If a fix is needed, what do I need to do to make it work?
Update 1:
So we found the issue: the same problem repros even if we are playing the game, press the home button, then tap the game icon and continue playing.
The issue is that most of our work is done in the update method, and it relies on the value of the (ccTime)dt parameter. The value of dt is usually < 0.1 seconds (occasionally somewhere up to 0.5 seconds). When we pause (either by clicking the home button, or when Instruments pauses the game to take a snapshot) and then resume playing, the value of dt is several seconds! And that throws all our calculations out of range.
We tried a temporary (but ugly) workaround that fixes the issue: at the beginning of the update method, we add this:
if(dt > 1)
return;
And it now works as expected - doesn't go out of sync. However, this is not a permanent solution, since sometimes, the values of dt are legitimately close to 1 second, and in resource crunched situations, this may lead to stutter (or worse).
We also considered another (equally ugly) solution of storing the previous value of dt, and then check in the update method:
if(dt > 10 * prevDt)
{
return;
}
We tried calling unscheduleUpdate in AppDelegate.m's applicationDidEnterBackground, and called scheduleUpdate in the applicationWillEnterForeground method, but that approach did not work.
What is the best way to deal with updates with erratic time values due to external pauses?
Thanks
Anand
I don't know much about how cocos2d works, but if you've run out of options, I would simply clamp the delta time to an upper bound. In whatever update methods use delta time, I would do the following:
double delta = (dt > 0.5) ? 0.5 : dt;
Use delta instead of dt from that point onward. The result is that any frame with a delta of over half a second will be limited to a delta of half a second. You don't want to skip frames, because then you could potentially skip many frames in a row which would be bad for the user; they'd probably think the app crashed or something. However, you don't want to run a frame with a large delta because then delta-dependent objects and forces will get multiplied many times for that frame, preventing you from catching collisions and stuff. So, we prevent the game from getting completely screwed over while not skipping frames by simply clamping the value downwards.
The negative is that the app will appear to move more slowly when your frame rate drops to 2 frames per second. You lose the benefit of a time-based update method, which is the ability to always make the game appear to run at a well defined speed, when the engine is overburdened.
To be honest, if you ever run at 2 frames per second for an extended period of time, you've got bigger problems to worry about. I don't think the user will notice if objects move slightly more slowly per second if the game is only rendering once every who-knows-when anyways.
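The clamping idea, restated as a tiny Python sketch (the 0.5 s cap is this answer's choice, not a cocos2d constant):

```python
MAX_DELTA = 0.5  # seconds: the cap chosen above, not a cocos2d constant

def clamped(dt):
    """Limit one frame's delta time so a single huge step can't blow up the simulation."""
    return min(dt, MAX_DELTA)

def advance(position, velocity, dt):
    # Delta-dependent integration: a multi-second dt here would let objects
    # tunnel through walls; a clamped dt just makes time appear to slow down.
    dt = clamped(dt)
    return position + velocity * dt
```

A normal frame (dt around 1/60 s) passes through untouched; a multi-second resume-from-background frame is treated as half a second, so the frame is neither skipped nor allowed to explode.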
Unfortunately this is a problem for which there is no sure answer, at least not without access to your system to run a whole variety of checks.
The failure in the profiler may be because your game runs tight loops whose timing gets upset in unpredictable ways, and your game is crashing due to timing or resource issues (issues that don't crop up under the debugger in the same way). If that's the case, there is probably not much you can do about it. Or it may be because there is a problem in your code. The difficulty is that it can be very hard to tell which of these is the case. It's probably best to assume the problem is in your code, though, and do some further investigation.
The first thing to do, if you haven't done it already, is run the static analysis tool (Analyse from the Product menu in Xcode). Consider each of the raised errors carefully, and work to remove all of them. Sometimes they might seem obvious and you think you can ignore them, but some prodding reveals they are a symptom of a deeper problem.
If you haven't tried already, run the instrument that checks for zombies. There's a high chance this will fail too if the allocation instrument is failing, but if there are stale references to deallocated objects hanging around, they could be causing the problem you are experiencing. Another instrument you can try is the performance analyser, to check where your app is spending most of its time. There may be some really significant over-allocation of resources you are not aware of. If you can't run the memory profiler it will be difficult to see whether this is the case, but using the performance analyser it might be possible to see if your app is getting hung up too long somewhere it shouldn't be.
Lastly, if all else fails (this may be a sledgehammer to crack a nut, and may not provide the solution in any case): if you aren't using ARC, consider how long it would take to convert your app to it (definitely create a branch first). Apple's algorithms for object allocation and deallocation are very efficient, and if you have subtle memory-management errors there is a very good chance they will be eliminated by Automatic Reference Counting.