Video capture dialog keeps popping up with embedded PC webcam Delphi app

I have a Delphi app that takes snapshots from a webcam at 1-second intervals. On the development PC it works fine, but on the target platform (an Atom-based tablet PC running embedded Windows 7, with a different camera) it is extremely flaky. After a reboot, the first time the app is run it normally manages to initialise the webcam OK and gets regular frames from it. But the next time the app is run, it fails to locate the webcam driver, and also pops up a dialog asking me to specify the video source, presumably because it can't find one.
My question: I'm sure this is related to video capture API calls not being in the right order or something, but is there a tool (like Wireshark, but for API calls) that will let me sniff the calls, so I can compare what happens on the embedded Windows 7 system against the XP development system that works?
I am using the following calls/messages (a trimmed sketch of the initialisation follows the list):
Initialisation:
capCreateCaptureWindow
WM_CAP_DRIVER_CONNECT
WM_CAP_SET_PREVIEW (false)
WM_CAP_SET_VIDEOFORMAT (as the camera after boot is in a format I can't handle)
WM_CAP_GET_VIDEOFORMAT
On 1 sec timer:
WM_CAP_SET_CALLBACK_FRAME
WM_CAP_GRAB_FRAME_NOSTOP
On callback:
WM_CAP_SET_CALLBACK_FRAME (nil)
On finish:
WM_CAP_ABORT
WM_CAP_STOP
WM_CAP_DRIVER_DISCONNECT
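
For reference, here is a trimmed sketch of that initialisation in Delphi. The declarations follow the usual vfw.h translations (constant values from vfw.h), and TCamForm/FCapWnd are just illustrative names from my form, not anything standard:

```delphi
const
  WM_CAP_START          = WM_USER;
  WM_CAP_DRIVER_CONNECT = WM_CAP_START + 10;
  WM_CAP_SET_PREVIEW    = WM_CAP_START + 50;

function capCreateCaptureWindow(lpszWindowName: PAnsiChar; dwStyle: DWORD;
  x, y, nWidth, nHeight: Integer; hwndParent: HWND; nID: Integer): HWND;
  stdcall; external 'avicap32.dll' name 'capCreateCaptureWindowA';

procedure TCamForm.InitCapture;
begin
  FCapWnd := capCreateCaptureWindow('cap', WS_CHILD, 0, 0, 320, 240, Handle, 0);
  if FCapWnd = 0 then
    raise Exception.Create('capCreateCaptureWindow failed');

  // WM_CAP_DRIVER_CONNECT returns TRUE on success; if the result is
  // ignored and capture messages are sent anyway, some drivers answer
  // with the "select video source" dialog instead.
  if SendMessage(FCapWnd, WM_CAP_DRIVER_CONNECT, 0 { driver index }, 0) = 0 then
    raise Exception.Create('Could not connect to capture driver 0');

  SendMessage(FCapWnd, WM_CAP_SET_PREVIEW, 0 { false }, 0);
  // ... WM_CAP_SET_VIDEOFORMAT / WM_CAP_GET_VIDEOFORMAT as listed above ...
end;
```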

The first check is a lot easier: did you make absolutely sure you have the same driver on both machines?
It might also be that the detect/start-acquisition sequence is too fast for this slow system. See if introducing a few seconds of sleep in between helps.
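
For example (purely illustrative; FCapWnd, Connected and Attempt are placeholder names, with FCapWnd being the capture window handle from the question's sketch):

```delphi
// Retry the driver connect a few times, pausing between attempts,
// to give a slow driver time to finish initialising after boot.
Connected := False;
for Attempt := 1 to 5 do
begin
  if SendMessage(FCapWnd, WM_CAP_DRIVER_CONNECT, 0, 0) <> 0 then
  begin
    Connected := True;
    Break;
  end;
  Sleep(2000); // a couple of seconds between attempts
end;
if not Connected then
  raise Exception.Create('Capture driver not ready');
```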


AcquireNextFrame() fails with different errors

I have some working code that captures the current desktop cyclically, using the approach described at DirectX Screen Capture - Desktop Duplication API - limited frame rate of AcquireNextFrame / https://github.com/microsoft/Windows-classic-samples/tree/master/Samples/DXGIDesktopDuplication
This works well except on one machine (where I unfortunately do not have physical access for detailed debugging, and only get reports from users). On this machine, when I call AcquireNextFrame() with a timeout value of 500, it repeatedly fails with error code 0x887A0027 / DXGI_ERROR_WAIT_TIMEOUT. To make this clear: the call does not fail only a few times, it fails all the time, so AcquireNextFrame() never returns a frame, no matter how often one calls it.
When I increase the timeout value to 850, it fails with error 0x887A0026 / DXGI_ERROR_ACCESS_LOST.
So... any idea what can cause these errors and how one can prevent them from happening?
Thanks!
The behavior is normal.
Windows does not normally render the desktop at 60 Hz; that would be a waste of resources and electricity. DXGI_ERROR_WAIT_TIMEOUT simply means the computer is showing the same image as before. AcquireNextFrame returns S_OK and gives you another frame only when some window visible on the desktop has updated something.
I think that one machine doesn't run any programs that continuously update the GUI on the desktop being captured.
You have to work around it. For instance, maintain a copy of the desktop texture in your capturing app: when AcquireNextFrame returns S_OK, update the copy with CopyResource; when it returns DXGI_ERROR_WAIT_TIMEOUT, use the old desktop texture.
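A minimal sketch of that workaround (error handling trimmed; the duplication, context and cachedDesktop objects are assumed to have been created beforehand as in the linked sample):

```cpp
#include <d3d11.h>
#include <dxgi1_2.h>

// One iteration of the capture loop. cachedDesktop is a texture created
// once with the same size/format as the duplicated output.
void CaptureOnce(IDXGIOutputDuplication* duplication,
                 ID3D11DeviceContext* context,
                 ID3D11Texture2D* cachedDesktop)
{
    DXGI_OUTDUPL_FRAME_INFO frameInfo = {};
    IDXGIResource* resource = nullptr;

    HRESULT hr = duplication->AcquireNextFrame(500, &frameInfo, &resource);
    if (hr == DXGI_ERROR_WAIT_TIMEOUT)
    {
        // Nothing on screen changed: keep using cachedDesktop as-is.
        return;
    }
    if (hr == DXGI_ERROR_ACCESS_LOST)
    {
        // Per the DXGI docs, the duplication became invalid (mode switch,
        // secure desktop, ...): release it and recreate it with
        // IDXGIOutput1::DuplicateOutput.
        return;
    }
    if (SUCCEEDED(hr))
    {
        ID3D11Texture2D* frame = nullptr;
        resource->QueryInterface(__uuidof(ID3D11Texture2D), (void**)&frame);
        context->CopyResource(cachedDesktop, frame);  // refresh our copy
        frame->Release();
        resource->Release();
        duplication->ReleaseFrame();  // release every acquired frame
    }
}
```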

Using AVAudioSequencer to send MIDI to third-party AUv3 instruments

I'm having trouble controlling third-party AUv3 instruments with MIDI using AVAudioSequencer (iOS 12.1.4, Swift 4.2, Xcode 10.1) and would appreciate your help.
What I'm doing currently (sketched in code after the list):
Get all AUs of type kAudioUnitType_MusicDevice.
Instantiate one and connect it to the AVAudioEngine.
Create some notes, and put them on a MusicTrack.
Hand the track data over to an AVAudioSequencer connected to the engine.
Set the destinationAudioUnit of the track to my selected Audio Unit.
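
In code, the steps above look roughly like this (condensed, no error handling; midiData stands in for the track data from steps 3/4):

```swift
import AVFoundation

let engine = AVAudioEngine()

// 1. Find all music-device (instrument) Audio Units; zeroed fields of the
//    component description act as wildcards.
var desc = AudioComponentDescription()
desc.componentType = kAudioUnitType_MusicDevice
let components = AVAudioUnitComponentManager.shared().components(matching: desc)

// 2. Instantiate the chosen one and wire it into the engine.
AVAudioUnit.instantiate(with: components[0].audioComponentDescription,
                        options: []) { avAudioUnit, _ in
    guard let instrument = avAudioUnit else { return }
    engine.attach(instrument)
    engine.connect(instrument, to: engine.mainMixerNode, format: nil)

    // 3 + 4. Hand the track data to a sequencer tied to the engine.
    let sequencer = AVAudioSequencer(audioEngine: engine)
    try? sequencer.load(from: midiData, options: [])   // midiData: Data

    // 5. Route the track to the selected Audio Unit.
    sequencer.tracks.first?.destinationAudioUnit = instrument

    try? engine.start()
    sequencer.prepareToPlay()
    try? sequencer.start()
}
```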
So far, so good, but...
When I play the sequence using AVAudioSequencer, it plays fine the first time, using the selected Audio Unit. The second time I get either silence or a sine wave sound (and I wonder who is making that). I'm thinking the Audio Unit should not be going out of scope between playbacks of the sequence, but I do stop the engine and restart it for the new round. (It should even be possible to swap AUs while the engine is running, so I think this is OK.)
Are there some steps that I'm missing? I would love to include code, but it is really hard to condense it down to its essence without posting a wall of text. If you want to ask for specifics, I can answer. Or if you can point me to a working example that shows how to reliably send MIDI to an AUv3 using AVAudioSequencer, that would be great.
Is AVAudioSequencer even supposed to work with other Audio Units than Apple's? Or should I start looking for other ways to send MIDI over to AUv3?
I should add that I can consistently send MIDI to the AUv3 using the InstrumentPlayer method from Apple's AUv3Host sample, but that involves a concurrent thread, and results in all sorts of UI sync and timing problems.
EDIT: I added an example project to GitHub:
https://github.com/jerekapyaho/so54753738
It seems that it's now working in iPadOS 13.7, but I don't think I'm doing anything very different from earlier, except that this version loads a MIDI file from the bundle instead of generating the data on the fly.
If someone still has iOS 12, it would be interesting to know if it's broken there, but working on iOS 13.x (x = ?)
In case you are using AVAudioUnitSampler as an audio unit instrument, the sine tone happens when you stop and start the audio engine without reloading the preset. Whenever you start the engine you need to load any instruments back into the sampler (e.g. a SoundFont), otherwise you may hear the sine. This is an issue with the Apple AUSampler, not with 3rd party instruments.
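For example, something along these lines after every engine (re)start (sketch; engine and sampler are assumed to be your already-attached AVAudioEngine and AVAudioUnitSampler, and MySoundFont.sf2 is a placeholder name):

```swift
// Reload the instrument after every engine (re)start; without this the
// AUSampler falls back to its built-in sine wave after a stop/start cycle.
try engine.start()
if let url = Bundle.main.url(forResource: "MySoundFont", withExtension: "sf2") {
    try sampler.loadSoundBankInstrument(
        at: url,
        program: 0,
        bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
        bankLSB: UInt8(kAUSampler_DefaultBankLSB))
}
```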
Btw you can test it under iOS 12 using the simulator.

How to put a specific monitor into standby mode? [duplicate]

I have 3 monitors, but I don't need them all turned on all the time. I can just shut them down with the power button, but I would rather use their standby mode, like Windows does when the PC idles for a while (it shuts down the monitors, HDD, etc.).
But of course, I want to keep using the PC with just that one monitor on standby. The others must remain on, and the standby one shouldn't wake up even while I'm using the PC.
Is it possible to do that? It would be great to have shortcuts like Winkey+1, 2, 3, etc. to shut down and wake each monitor.
A ready-made app with this feature is not likely to exist, but is there a Windows API function that can control the power state of each monitor in a multi-monitor system?
The display control panel applet calls SetDisplayConfig to start or stop forced projection on a particular target.
You can probably use MS Detours or some other API-hooking tool to inspect the usage pattern of the API while using the applet to adjust display settings.
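For reference, the CCD API that applet drives looks roughly like this. This is a hedged, untested sketch that disables a path outright, which removes the monitor from the desktop rather than putting it into standby:

```cpp
#include <windows.h>
#include <vector>

// Sketch: query the active display topology, clear the ACTIVE flag on one
// path, and apply the modified configuration (Windows 7+ CCD API, user32.lib).
void TurnOffSecondMonitor()
{
    UINT32 numPaths = 0, numModes = 0;
    GetDisplayConfigBufferSizes(QDC_ONLY_ACTIVE_PATHS, &numPaths, &numModes);

    std::vector<DISPLAYCONFIG_PATH_INFO> paths(numPaths);
    std::vector<DISPLAYCONFIG_MODE_INFO> modes(numModes);
    QueryDisplayConfig(QDC_ONLY_ACTIVE_PATHS, &numPaths, paths.data(),
                       &numModes, modes.data(), nullptr);

    if (numPaths > 1)
        paths[1].flags &= ~DISPLAYCONFIG_PATH_ACTIVE;  // e.g. the second path

    SetDisplayConfig(numPaths, paths.data(), numModes, modes.data(),
                     SDC_APPLY | SDC_USE_SUPPLIED_DISPLAY_CONFIG | SDC_ALLOW_CHANGES);
}
```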
You'll want to try DisplayFusion. You should be able to do what you're asking for using its monitor configuration feature.
I know I'm late on this, but use DDC to control your displays. You can easily create hotkeys that send a command via DDC to a display to turn it off; this is equivalent to turning it off with the button. Works like a charm for me. The only trick is that DDC command specs vary across monitor manufacturers, but it's not hard to find the right codes to send with the help of Google.
Ready-made tools also exist for this; search for anything related to DDC or EDID and you should find them.
Be aware, though, that this does not remove the display from Windows, which means that apps may find their way onto displays that are off, and you will be left looking for them.
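On Windows, this DDC route is exposed through the Monitor Configuration API in dxva2.dll. A minimal sketch, assuming your monitor supports VCP code 0xD6 (power mode):

```cpp
#include <windows.h>
#include <vector>
#include <physicalmonitorenumerationapi.h>
#include <lowlevelmonitorconfigurationapi.h>
#pragma comment(lib, "dxva2.lib")

// Put one physical monitor into standby via DDC/CI.
// hMonitor comes from MonitorFromPoint/EnumDisplayMonitors.
void SetMonitorStandby(HMONITOR hMonitor)
{
    DWORD count = 0;
    GetNumberOfPhysicalMonitorsFromHMONITOR(hMonitor, &count);

    std::vector<PHYSICAL_MONITOR> mons(count);
    GetPhysicalMonitorsFromHMONITOR(hMonitor, count, mons.data());

    // VCP code 0xD6 = power mode; 0x01 = on, 0x04 = standby/off.
    // As noted above, the exact values supported vary by manufacturer.
    SetVCPFeature(mons[0].hPhysicalMonitor, 0xD6, 0x04);

    DestroyPhysicalMonitors(count, mons.data());
}
```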

PIC32 becomes unresponsive after a few hours

I have a PIC32MX340F512 board developed for us by another company. The board has a DS1338 RTCC, a 24LC32A EEPROM, and a display unit on an I2C bus; on this bus I added a TSL2561 I2C light sensor. I wrote code in C to poll the light sensor continuously, and when the light level reaches a certain threshold I save the time, date, and sensor value to SD card. This all works fine, but if I leave the system without exposure to light (inside a tunnel, where the incident light at one end of the tunnel is to be monitored), the system becomes unresponsive no matter how much light you apply; if I then switch power off and back on, everything works normally again.
I am a one-man development team and have been trying to find the problem for months. I activated the watchdog timer to prevent the system from hanging, but the problem persisted. I then tried to establish whether the problem is with the sensor by adding a push button to trigger a light measurement, but still, once 4-5 hours have elapsed, the PIC can't even detect a change on the input pin. Under the impression that a hardware reset overrides anything going on, I added a reset button; it too works OK for the first few hours, but after that the PIC doesn't seem to respond to anything, including the reset. That was making me think the problem is not in the firmware. Also, through all of this the display unit (a PIC16F1933 and LCD) on the I2C bus, which shares power with the main unit, doesn't seem to be affected, as it keeps alternating between its messages constantly.
Does anybody have an idea what could be wrong (hardware, firmware, or my sensor)? I am using a 24 V DC power supply purchased separately. The PIC seems to go into a deep sleep, although I did not implement any kind of SLEEP mode in my code.
NB: we use the same board for many other projects and I haven't come across such a problem before. Thanks in advance.
I think you need to (if you haven't already) explore the wonderful world of in-circuit-debugging (such as with the ICD3 or PICkit 2/3). It allows you to run the processor in a special mode that lets you pause execution, see exactly which line of code is being executed, inspect variable values, and step through the code to see which parts are running and not running, or see exactly where execution takes a wrong turn. If the problem takes hours to reproduce, that's okay. You can just leave it overnight running in debug mode and hopefully it will be locked-up or 'sleeping' in the morning. At this point, you will be able to pause the processor and poke around to see if you got caught in some kind of infinite loop or something. This is often the only way to dig inside a running piece of code to see why things aren't working as you expect. But as you say, those bugs that take hours or days to manifest are the trickiest. Good luck!
It sounds like you can break your design into three main parts: SD card interfacing, reading the RTC, and reading the light sensor. If it were me, I would upload a version of the code that mimics reading the light sensor but only returns fake data (see the sketch below), and see if that cures the problem. Then do the same with the other two modules separately, and see whether any of the three versions of your project avoids the problem. From there, keep narrowing it down until you find the block of code that's causing trouble.
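For instance, something like this compile-time switch around the sensor read (function names are placeholders for your own driver code):

```c
#include <stdint.h>

/* Build-time switch: substitute fake light-sensor data for the real
 * TSL2561 read, to test whether the sensor/I2C path causes the hang.
 * tsl2561_read_channel0() stands in for your real driver function. */
#define USE_FAKE_LIGHT_SENSOR 1

uint16_t read_light_level(void)
{
#if USE_FAKE_LIGHT_SENSOR
    static uint16_t fake = 0;
    fake = (uint16_t)((fake + 7u) % 1024u);  /* deterministic ramp */
    return fake;
#else
    return tsl2561_read_channel0();          /* real I2C read */
#endif
}
```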
If two or more versions of your debug code show the same problem, then my guess is that it has to do with one of the communication protocols. I had a problem with a PIC32 silicon revision blocking when using DMA in conjunction with the SPI peripherals, so I would suggest checking the errata for your chip.
If you still can't find the problem, my only suggestion would be to check for memory leaks or arrays that are growing into reserved memory.
Hope that helps, good luck!

How to synchronize audio playback on 2 or more iOS devices?

I would like to write a web application that allows me to sync audio playback of an MP3 down to ~50ms, or close enough that the human ear can't detect the difference.
The idea would be that two or more smartphones could each be paired to a bluetooth speaker, and two or more speakers would play the same audio at the exact same time.
How would you suggest I go about setting this up, both client-side and server-side? I'm planning to use Rails/Ruby for the backend, and iOS/Objective-C for mobile dev.
I had thought of syncing to a global/atomic clock on the server, and having the server tell clients when to start playing or jump into an already-playing track. My concern is that, if I want to stream the audio, it will be impossible to load a song into memory and start playback accurately at the millisecond level.
Thoughts?
The jitter in internet packet delivery will be too large, so forget about syncing over the internet. However, you could check the accuracy of NTP, which the OS still uses (I believe; older UNIXes did) when you switch on automatic date/time in Settings, but my guess is that it won't be good enough either. Perhaps the OS also uses other time sources like GPS; I don't know how iOS does it, but accuracy within 20 ms is not to be expected. You could create an experimental app to check it out.
So, what's left is a sync closer to home, meaning between the devices directly. Of course you need to make sure that all devices have loaded (enough of) the song, and have preloaded it in AVAudioPlayer or whatever you're using, to be able to start playing immediately. (It may actually not be the best idea to use higher-level AVAudioPlayer APIs, as they may give higher delays, and more importantly higher jitter, than lower-level APIs.)
Here are three ideas (one device needs to be the master triggering the start of playback, the others are slaves waiting for the trigger); a scheduling sketch follows the list:
Use an audio trigger pulse, like a high tone of a defined length and frequency. Then use FFT to recognise this tone.
Connect the devices via GameKit Bluetooth and transmit the trigger on these connections.
Use the iPhone 4+ flash as the trigger: flash in a certain pattern. This would require you to sample the video data, which is quite doable and can be very fast.
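Whichever trigger you use, once a slave knows the agreed start moment, schedule playback against the device's audio clock instead of calling play() immediately. A sketch, where startDelay is whatever offset your trigger/handshake negotiated (a placeholder, not an API value):

```swift
import AVFoundation

// Each device preloads the file, then starts playback at an agreed point
// on its own audio device clock, so all devices start together.
func scheduleSynchronizedStart(player: AVAudioPlayer, startDelay: TimeInterval) {
    player.prepareToPlay()                    // preload buffers now
    let startTime = player.deviceCurrentTime + startDelay
    _ = player.play(atTime: startTime)        // deferred, clock-based start
}
```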
I'm going with a solution that uses an atomic clock for synchronization, and an external service that allows server instructions/messages to be sent to all devices in close sync.
