Could you overheat a monitor through simple C code?

I was told by a professor that using C code one could heat a single pixel on an old monitor to the point that the monitor would overheat and smoke. Have any of you come across anything that would support this? I am having a debate in my office on whether this is possible or not.

With old PC monochrome monitors, you could programmatically turn off the horizontal sync signal, which would cause internal components in the monitor to overheat and physically fail.

Well, the old multi-sync CRTs were a bit flaky. Get them into a state (resolution) where the vertical and horizontal deflection coils would stop moving the electron beam around (without turning the beam off), and it would burn a nice pinhole in the phosphor coating. Messing with the signals sent to the CRT wasn't hard; you could reprogram the CRT controller with some simple OUT instructions. Smoke? Nah, that was all on the inside. It was a problem for a year or two; I was just a pup back then.
Never actually smoked one myself, but great urban professor myth.
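For flavor, here is a minimal sketch of what such a poke looked like on a DOS-era PC, assuming a 6845-based MDA adapter with the CRTC index/data ports at the standard 0x3B4/0x3B5 addresses (CGA used 0x3D4/0x3D5) and Turbo C's outportb(); the register values are deliberate nonsense, and feeding a fixed-frequency monitor sync values like these is exactly the kind of thing that could damage it, so treat this as historical illustration only:

    /* Sketch only: poking the 6845 CRT controller on a DOS-era PC.
       Assumes an MDA adapter with the CRTC index/data ports at
       0x3B4/0x3B5 (CGA used 0x3D4/0x3D5). outportb() is Turbo C's
       port-output routine from dos.h; Microsoft C used outp() from
       conio.h. Do not run this on real vintage hardware. */
    #include <dos.h>

    #define CRTC_INDEX 0x3B4
    #define CRTC_DATA  0x3B5

    static void crtc_write(unsigned char reg, unsigned char val)
    {
        outportb(CRTC_INDEX, reg);  /* select a CRTC register    */
        outportb(CRTC_DATA, val);   /* write a new value into it */
    }

    int main(void)
    {
        crtc_write(0, 0xFF);  /* horizontal total: out-of-range value  */
        crtc_write(2, 0x00);  /* horizontal sync position: nonsense    */
        return 0;
    }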

Ah, the killer poke. This clearly was possible on certain very early computer models. The Commodore PET is the one that springs to mind (reportedly POKE 59458,62, which sped up screen access on early PETs but overdrove the video circuitry on later models).

This myth probably pops up because monitors have an effect where pixels that don't change become "burnt in", but the term is slang; nothing literally burns.
On a CRT or plasma, you can "burn in" a pixel by using it excessively, causing the pixel to get stuck. An LCD will appear to do this too, but if you simply leave the monitor off for a few hours, the burnt-in image will go away; CRTs and plasmas, on the other hand, are damaged forever.

You could definitely burn out old (pre-1990) monitors by writing bad data to the CRT controller on the old PCs; they would smoke and cease to work.
I did it when I was a BIOS writer.
Ed


Control motor/position over slow bus

I seem to have coded myself into a corner with the following issue: I'm trying to control a motor on a robot through a slow RS485-based bus connection. Unfortunately, I don't have access to the firmware on the motor, so I'm stuck with the current setup.
The biggest issue is that I can only control the motor's target speed. While I can retrieve its absolute position through a built-in encoder, there is no positioning function built into the firmware on the motor itself.
The second issue is that the bus connection is really slow; the somewhat awkward protocol needs 25 ms for a full cycle. Is controlling a position via speed adjustments even feasible this way?
I have tried a naive approach: estimating the position 25 ms ahead, subtracting the current position, and dividing by 25 ms to calculate the speed required to reach the next desired position. However, this oscillates badly at certain speeds when targeting a fixed position, I assume because the long cycle time produces a lot of overshoot.
Maybe a PID controller could help, but I am unsure what the target value would be -- every PID I have used so far had a fixed target. A completely moving target (i.e. the position) is hard to imagine, at least for me.
What's the usual way to deal with a situation like this? Maybe combine the naive approach and add PID-control only for an additional offset term? Or do I need to buy different motors?
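One standard structure for this, as a hedged sketch rather than a drop-in solution: treat the fixed target position as the PID setpoint, the encoder reading as the process value, and make the controller output the speed command you send over the bus, clamped to the motor's limits. The setpoint is not actually moving during a go-to-position move; only the error shrinks. Note that the naive approach above is effectively a pure proportional controller with Kp = 1/0.025 = 40, which is aggressive enough to oscillate once the bus dead time is factored in. A minimal C sketch under these assumptions, stepping once per 25 ms cycle, with the real RS485 I/O replaced by a trivial simulated motor and purely illustrative gains:

    /* Minimal sketch of a position PID whose output is a speed
       command, stepped once per 25 ms bus cycle. The real RS485
       read/write calls are replaced by a trivial simulated motor
       (a pure integrator); the gains would need tuning. */
    #include <stdio.h>

    #define DT 0.025  /* one full bus cycle: 25 ms */

    typedef struct {
        double kp, ki, kd;
        double integral, prev_error;
    } Pid;

    static double pid_step(Pid *p, double error)
    {
        p->integral += error * DT;
        double derivative = (error - p->prev_error) / DT;
        p->prev_error = error;
        return p->kp * error + p->ki * p->integral + p->kd * derivative;
    }

    static double clamp(double v, double lo, double hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    int main(void)
    {
        double position = 0.0;        /* would come from the encoder  */
        const double target = 100.0;  /* fixed setpoint: the position */
        Pid pid = { 1.5, 0.0, 0.02, 0.0, 0.0 };

        for (int cycle = 0; cycle < 400; cycle++) {
            double error = target - position;
            double speed = clamp(pid_step(&pid, error), -50.0, 50.0);
            position += speed * DT;   /* stand-in for sending the speed
                                         and reading back the encoder */
            if (cycle % 40 == 0)
                printf("t=%5.2fs  pos=%8.3f  speed=%8.3f\n",
                       cycle * DT, position, speed);
        }
        return 0;
    }

Tuning the gains against the 25 ms dead time is the hard part; the usual advice is to start with a low Kp and no Ki, and add a little Kd to damp the overshoot.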
If you want to keep the benefits of RS485 (it has some genuinely positive properties), then you would likely need to rethink how you drive this motor.
It might be easier to change the motor control so that you only have to send some numeric data as an "end position" and leave it to your smart controller to handle the move. In that situation your RS485 communication is minimal.
In industrial environments I always tend to keep the "brains" where they are needed, so you keep your I/O down; otherwise someday you end up with behemoths such as industrial Ethernet.

Reaching clock regions using BUFIO and BUFG

I need to realize a source-synchronous receiver in a Virtex 6 that receives data and a clock from a high speed ADC.
For the SERDES module I need two clocks, which are basically the incoming clock buffered by BUFIO and BUFR (as recommended). I hope my picture makes the situation clear.
[Figure: clock distribution]
My problem is that I have some IOBs that cannot be reached by the BUFIO because they are in a different, non-adjacent clock region.
A friend recommended using the MMCM and connecting the output to a BUFG, which can reach all IOBs.
Is this a good idea? Can't I connect my LVDS clock buffer directly to a BUFG, without using the MMCM first?
My knowledge of FPGA architecture and clock regions is still very limited, so it would be nice if anybody has some good ideas, wise words, or has maybe worked out a solution to a similar problem in the past.
It is quite common to use an MMCM for external inputs, if only to clean up the signal and gain some other nice features (like 90/180/270 degree phase shifts for quad-data-rate sampling).
With the 7-series they introduced the multi-region clock buffer (BUFMR) that might help you here. Xilinx has published a nice answer record on which clock buffer to use when: 7 Series FPGA Design Assistant - Details on using different clocking buffers
I think your friend's suggestion is correct.
Also check this application note for some suggestions: LVDS Source Synchronous 7:1 Serialization and Deserialization Using Clock Multiplication

How can I investigate failing calibration on Spartan 6 MIG DDR

I'm having problems with a Spartan 6 (XC6SLX16-2CSG225I) and DDR (IS43R86400D) memory interface on some custom hardware. I've tried it on an SP601 dev board and all works as expected.
Using the example project, when I enable soft_calibration, it never completes and calib_done stays low.
If I disable calibration, I can write to the memory perfectly, as far as I can see. But when I try to read from it, I get a variable number of successful read commands before the Xilinx memory controller stops servicing commands. Once this happens, the command FIFO fills up and stays full. The number of successful commands varies from 8 to 300.
I'm fairly convinced it's a timing issue, probably related to DQS centering. But because I can't get calibration to complete when enabled, I don't have continuous DQS tuning, so I'm assuming it works with calibration disabled until the timing drifts.
Are there any obvious places I should be looking to find out why calibration fails?
I know this isn't a typical Stack Overflow question, so if this is an inappropriate place then I'll withdraw.
Thanks
Unfortunately, the calibration process just tries to write and read content successively while adjusting taps internally. It finds one end of the window of success, then goes in the other direction and identifies the last successful tap, and then finally settles somewhere in the middle.
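Schematically, that tap search looks something like the C sketch below; check_tap() is a hypothetical stand-in for the controller's internal write/read test, not actual Xilinx code. If no tap ever passes, there is no window to center in, and calib_done stays low, which matches your symptom:

    /* Rough sketch of delay-tap centering: sweep the taps, record
       the window where write/read tests pass, settle in the middle.
       check_tap() is a hypothetical stand-in for the MIG's internal
       test -- this is not the actual Xilinx calibration code. */
    #include <stdio.h>

    #define NUM_TAPS 64

    static int check_tap(int tap)
    {
        /* Pretend taps 20..44 read back correctly, for demonstration. */
        return tap >= 20 && tap <= 44;
    }

    int main(void)
    {
        int first = -1, last = -1;
        for (int tap = 0; tap < NUM_TAPS; tap++) {
            if (check_tap(tap)) {
                if (first < 0) first = tap;  /* one end of success  */
                last = tap;                  /* track the other end */
            }
        }
        if (first < 0) {
            printf("calibration failed: no working tap found\n");
            return 1;
        }
        printf("valid window [%d..%d], centering on tap %d\n",
               first, last, (first + last) / 2);
        return 0;
    }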
This is probably more HW-centric as well, so I'll post what I think and let someone else move the thread.
1. Is it just this board, or are all of them doing it? Have you checked? If it's one board and the RAM is a BGA package, it could be a bad solder job. Push your finger down slightly on the chip and see if you get different results. After this it gets more HW-centric.
2. Does the FPGA image you are running on your custom board have the ability to work on your devkit? A lot of times that isn't practical, I know, but I thought I would ask, as it rules out that the image you are using on the devkit has FPGA constraints you aren't getting in your custom image.
3. Check your length tolerances on the traces. There should have been a length constraint, plus or minus 50 mils, something like that. No one likes to hear they need a board re-spin, but if those are out, it explains a lot.
4. Signal integrity. Did you get your termination resistors in there, and are they the right values? Don't suppose you have an active probe?
5. Did you get the right DDR memory? Sometimes they use a different speed grade, and that can cause all sorts of issues.
Slowing down the interface will usually help with items 4 and 5, so if you are just trying to get work done, you might ask for a new FPGA image with a slower clock.

Watchdog timeout during call to file.format?

This question is entirely unrelated to my code, but to satisfy the obligatory "show your code" directive:
file.format()
Before the call above returns, on this one SoC I always get a WDT reset. Sometimes, but not always, the flash does appear to be formatted when the chip is started again. And sometimes it freezes after the WDT reset message and has to be powered off (it looks like wrong comm parameters after pressing hardware reset, but none of the terminal app options seemed to match).
(Note: since starting this draft I built another copy of my device, using another new, recently received ESP8266-12E, and it behaves identically. Previously built copies still work normally, with the identical firmware.)
So this must be a bad chip, right? Or maybe bad on-board flash? It is a brand new one I just bought. I've also seen file.write issues, with buffer size always 255 bytes or less, though no read issues at all.
One other quirk, after burning a cloud-built nodemcu image to this ESP8266-12E device, adc.read returned 65535 and adc.readvdd33 returned an apparently valid value. (I corrected that by burning esp_init_data_default.bin to 0x3FC000.) This was the first (out of 15, maybe 20) I have seen that was like that. I did not check to see if an older version of nodemcu was already on it.
This wouldn't be the first chip with which I've had issues on arrival; it's at least the 2nd, likely the 3rd or 4th.
So maybe the larger question: what percentage of the ESP8266s that you buy are either DOA or suffer infant mortality? (Not counting the ones that you have reason to believe were inadvertently killed.)
The problem can be something other than the ESPs, like an inappropriate power supply. I know from my own experience that the Arduino Uno and most USB-TTL converters cannot safely deliver enough current for ESPs. If you're not already, consider using a dedicated power supply circuit that is connected to a USB power source.
It does indeed seem to be a hardware issue, 2 bad out of 6, not good! I think it might be a certain vendor but don't want to name names without being sure... Whatever is wrong with the chip hangs it up long enough to make the watchdog bark.
Much more than the cost of the part, the time consumed figuring out whether it's the Lua code, the firmware, the supporting connections, the peripherals or the chip itself is the costly thing (not to mention the frustration, and the wasted storage on SO).

Hunting down EOutOfResources

Question:
Is there an easy way to get a list of the types of resources that leak in a running application? In other words, by connecting to the running application?
I know MemProof can do it, but it slows things down so much that the application won't even last a minute. Most task-manager-like tools can show the number of resources, but not the type.
It is not a problem if the check itself is catastrophic (halts the app process), since I can check with a task manager whether I'm getting close (or at least I hope so).
Any other insights on resource leak hunting (so, not memory) are also welcome.
Background:
I have a Delphi 7/2006/2009 app (it compiles with all three), and after about a week it starts acting funny. However, this happens in only one of the places it runs; on several other systems it runs until the power goes out.
I've tried to put in some debug code to narrow the problem down, and found out that the exception is EOutOfResources on a save of a file (the file save can happen thousands of times a day).
I have tried to reason out memory leaks (with FastMM), but since the data flow is quite high (60 MByte/s from gigabit industrial cameras), I can only rule out "creeping" memory leaks with FastMM, not quick flashes of memory leaks that exhaust memory around the time it happens. If something goes wrong, the app fills memory in under half a minute.
Main suspects are file handles that are somehow left open on some error, and TMetafiles (which are streamed to these files). Minor suspects are VST, popup menus and TFrames.
Updates:
Another possible tip: it ran fine for two years with D7, and now the problems are with Turbo Explorer (which I use for stable projects not converted to D2009).
Paul-Jan: Since it only happens once a week (and that can happen at night), information acquisition is slow, which is why I ask this question; I need to combine stuff for when I'm there Thursday. In short: no, I don't know 100% for sure. I intend to bring the entire Systemtools collection to see if I can find something (because then it will be running for days). There is also a chance that I'll see open files (maybe I should try to find some MinGW lsof and schedule it).
But the app sees very little GUI action (it is a machine vision inspection app), except for a screen refresh of roughly 15/s, which is a TBitmap StretchDraw plus a TMetafile. Yet I get this error when saving to disk (TFileStream), so handles are probably really exhausted. However, a TMetafile is also saved to the same stream with SaveToStream, something which later versions of the app no longer do, and those can run for months.
------------------- UPDATE
I've searched and searched and searched, and managed to reproduce the problem in vitro two or three times. The problem happened when memory usage was around 256 MB (the systems have 2 GB), with about 200 user objects and 500 GDI objects, and not one file more open than expected.
This is not really exceptional. I do notice that I leak small numbers of handles, probably due to reparenting frames (something in the VCL seems to leak HPALETTEs), but I suspect the core cause is a different problem. I reuse TMetafile and .Clear it in between. I think clearing the metafile doesn't really (always?) shrink the resource, so eventually every metafile in the pool ends up at maximum size, and with 20-40+ TMetafiles (which can be several hundred KB each) this will hit the desktop heap limit.
That's the theory, but I'll try to verify it by setting the desktop heap limit to 10 MB at the customer's site, though it will be several weeks before I have confirmation whether this changes anything. The theory would also explain why this machine is special (it's possible that this machine naturally has slightly larger metafiles on average). Occasionally freeing and recreating a TMetafile in the pool might also help.
Luckily all these problems (both tmetafile and reparenting) have already been designed out in newer generations of the apps.
Due to the special circumstances (and the fact that I have very limited test windows), this is going to take a while, but I decided to accept the desktop heap answer for now (though the GDILeaks material was also somewhat useful).
Another thing the audit revealed was GDI-type usage in a thread (though only saving TMetafiles, which weren't used or connected otherwise, to streams).
------------- Update 2.
Increasing the desktop heap limit only seemed to slightly increase the time until the problem occurred.
Unfortunately, I won't be able to follow up on this further, since the machines were updated to a newer version of the framework that doesn't have the problem.
In summary I can only state what the three core modifications were going from the old to the new framework:
1. I no longer change screens by reparenting frames; I now work with forms that I hide and show. I changed this because I also had very rare crashes or exceptions (that could be clicked away) due to it. The crashes all happened while operating the GUI though, not spontaneously like the main problem.
2. The routine where the crash happened dealt with TMetafile. TMetafile has been designed out and replaced by a simpler home-grown format (basically arrays of OpenGL vertices).
3. Drawing no longer happens with a TBitmap with a TMetafile overlay stretch-drawn over it, but with OpenGL.
Of course it could be something else too, something that got changed in the rewrite of the above parts, fixing some very nasty detail bug. It would have to be an extremely bad one, since I analysed the above system as much as I could.
Update, Nov 2012, after some private mail discussion: in retrospect, the next step would have been adding a usage counter to the metafile objects and simply reinstantiating them every x * 1000 uses or so, to see if that changes anything. If you have similar problems, try to see whether you can somewhat regularly destroy and reinitialize long-lived, dynamically allocated resources.
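A minimal C sketch of that counted-reuse idea; the Resource type and its buffer are hypothetical stand-ins for a pooled object like TMetafile:

    /* Counted reuse: destroy and recreate a pooled resource after a
       fixed number of uses, so any internal growth it accumulates
       (the suspected TMetafile behaviour) is periodically discarded.
       The Resource type here is a hypothetical stand-in. */
    #include <stdio.h>
    #include <stdlib.h>

    #define REUSE_LIMIT 1000  /* recreate after this many uses */

    typedef struct {
        char  *buffer;        /* stand-in for storage that only grows */
        size_t capacity;
        int    uses;
    } Resource;

    static void res_init(Resource *r)
    {
        r->buffer = malloc(64);
        r->capacity = 64;
        r->uses = 0;
    }

    static void res_use(Resource *r)
    {
        if (++r->uses >= REUSE_LIMIT) {  /* reset accumulated growth */
            free(r->buffer);
            res_init(r);
        }
        /* ... actual work with r->buffer would go here ... */
    }

    int main(void)
    {
        Resource r;
        res_init(&r);
        for (int i = 0; i < 5000; i++)
            res_use(&r);
        printf("capacity after 5000 uses: %zu bytes\n", r.capacity);
        free(r.buffer);
        return 0;
    }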
There is a slim chance that the error is misleading. The VCL naively reports EOutOfResources if it is unable to obtain a DC for a window (see TWinControl.GetDeviceContext in Controls.pas).
I say "naively" because there are other reasons why GetDC() might return a NULL handle and the VCL should report the OS error, not assume an out of resources condition (there is a Windows version check required for this to be reliably possible, but the VCL could and should take of that too).
I had a situation where I was getting the EOutOfResources error as the result of a window handle becoming invalid. Once I'd discovered the true problem, finding the cause and fixing it was simple, but I wasted many, many hours trying to find a non-existent resource leak.
If possible I would examine the stack trace leading to this exception - if it is coming from TWinControl.GetDeviceContext then the problem may not be what you think (it's impossible to say what it might be of course, but eliminating the impossible is always the first step toward discovering the solution, no matter how improbable).
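To illustrate Deltics' point outside Delphi, here is a small C/Win32 sketch: GetDC() returns NULL for an invalid window handle just as it does under genuine resource exhaustion, so a caller that maps every NULL to "out of resources" is guessing:

    /* Demonstrates that GetDC() returning NULL does not by itself
       mean "out of resources": an invalid window handle fails the
       same way. Win32; compile with a C compiler on Windows. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HWND bogus = (HWND)(INT_PTR)0xDEADBEEF;  /* deliberately invalid */
        HDC dc = GetDC(bogus);
        if (dc == NULL) {
            /* A naive caller reports EOutOfResources here; the actual
               cause is the dead handle, as IsWindow() reveals. */
            printf("GetDC failed; IsWindow() says the handle is %s\n",
                   IsWindow(bogus) ? "valid" : "invalid");
        } else {
            ReleaseDC(bogus, dc);
        }
        return 0;
    }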
If they are GDI handle leaks you can have a look at MSDN Magazine January 2003 which uses the tool GDILeaks. Other tools are GDIObj or GDIView. Also see here.
Another source of EOutOfResources could be that the Desktop Heap is full. I've had that issue on busy terminal servers with large screens.
If you are leaking lots of file handles, you could check out Process Explorer and have a look at the open file handles of your process to see if any are out of the ordinary. Or use WinDbg with the !htrace command.
I've run into this problem before. From what I've been able to tell, Delphi may throw an EOutOfResources any time the Windows API returns ERROR_NOT_ENOUGH_MEMORY, and (as the other answers here discuss) Windows may return ERROR_NOT_ENOUGH_MEMORY for a variety of conditions.
In my case, EOutOfResources was being caused by a TBitmap - in particular, TBitmap's call to CreateCompatibleBitmap, which it uses with its default PixelFormat of pfDevice. Apparently Windows may enforce fairly strict systemwide limits on the memory available for device-dependent bitmaps (see, e.g., this discussion), even if your system otherwise has plenty of memory and plenty of GDI resources. (These systemwide limits apparently exist because Windows may allocate device-dependent bitmaps in the video card's memory.)
The solution is simply to use device-independent bitmaps (DIBs) instead (although these may not perform quite as well). To do this in Delphi, set TBitmap.PixelFormat to anything other than pfDevice. This KB article describes how to pick the optimal DIB format for a device, although I generally just use pf32bit instead of trying to determine the optimal format for each of the monitors the application is displayed on.
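The same distinction at the Win32 level, as a rough C sketch: a TBitmap with PixelFormat pfDevice ultimately goes through CreateCompatibleBitmap(), as described above, while a non-device pixel format corresponds to a DIB section:

    /* Contrast a device-dependent bitmap (subject to the systemwide
       limits discussed above, possibly living in video memory) with
       a device-independent DIB section in ordinary process memory. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HDC screen = GetDC(NULL);

        /* Device-dependent: what TBitmap with pfDevice boils down to. */
        HBITMAP ddb = CreateCompatibleBitmap(screen, 1024, 768);

        /* Device-independent: a 32-bit DIB section. */
        BITMAPINFO bmi;
        ZeroMemory(&bmi, sizeof(bmi));
        bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth       = 1024;
        bmi.bmiHeader.biHeight      = -768;   /* negative: top-down rows */
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;
        void *bits = NULL;
        HBITMAP dib = CreateDIBSection(screen, &bmi, DIB_RGB_COLORS,
                                       &bits, NULL, 0);

        printf("DDB %s, DIB %s\n", ddb ? "created" : "failed",
                                   dib ? "created" : "failed");

        if (ddb) DeleteObject(ddb);
        if (dib) DeleteObject(dib);
        ReleaseDC(NULL, screen);
        return 0;
    }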
Most of the times I saw EOutOfResources, it was some sort of handle leak.
Did you try something like MadExcept?
--jeroen
"I've tried to put in some debug code to narrow the problem down. and found out that the exception is EOutofResources on a save of a file. (the file save can happen thousands of times a day)."
I'm shooting in the dark here, but could it be that you're using the Windows API (GetTempFileName) to create a temp file, and you're blowing out some file system indexes or forgetting to close a file handle?
Either way, I do agree with your supposition about it being a file handle problem. That seems the most likely thing given your symptoms and diagnosis.
Also try checking the handle count of the application with Process Explorer from SysInternals. Handle leaks can be very dangerous, and they build up slowly over time.
I am currently having this problem in software that is clearly not leaking any handles in my own code, so if there are leaks, they could be happening in a component's source code or the VCL source code itself.
The handle count and the GDI and user object counts are not increasing, nor is anything being created. Deltics' answer shows corner cases where the message is something of a red herring, and Allen suggests that even a file write can cause this error.
So far, the best strategy I have found for hunting them down is to use either JCL's JCLDebug stack tracebacks or the exception-report save features in MadExcept to generate the context information needed to find out what is actually failing.
Secondly, AQTime contains many tools to help you, including a resource profiler that keeps the links between the code that created the resources, how it was called, and counts of the total numbers of handles. It can grab results mid-run, so it is not limited to detecting unfreed resources after you exit. So run AQTime, capture results mid-run, wait several hours, and capture again: you then have two points in time to compare handle counts, just in case it is the obvious thing. But as Deltics wisely points out, this exception class is raised in cases where it probably shouldn't be.
I spent all of today chasing this issue down. I found plenty of helpful resources pointing me in the direction of GDI, given that I'm using GDI+ to produce high-speed animations directly onto the main form via timer/invalidate/OnPaint (the animation is performed in a separate thread). I also have a panel on this form with some dynamically created controls that let the user change the animation.
It was extremely random and spontaneous. It wouldn't break anywhere in my code, and when the error dialog appeared, the animation on the main form would continue to work. At one point, two of these errors popped up at the same time (as opposed to sequentially).
I carefully reviewed my code and made sure I wasn't leaking any GDI-related handles. In fact, my entire application tends to keep fewer than 300 handles, according to Task Manager. Regardless, this error would randomly pop up, and it would always correspond with the simplest UI-related action, such as just moving the mouse over a standard VCL control.
Solution
I believe I have solved it by changing the logic to perform the drawing within a custom control, rather than directly onto the main form as I had been doing before. I think that because I was rapidly drawing on the same form canvas that other controls shared, they somehow interfered. Now that the animation has its own dedicated canvas to draw on, the problem seems to be completely fixed.
That is with about 1 hour of vigorous testing at least.
[Fingers crossed]
