Is it possible to use NetStream to constantly publish the stage to an FMS?
I have tried attaching a camera to the NetStream, which works perfectly. However, I want to publish a stream showing the stage and all of its elements/objects, including the case where a user interacts with the elements and changes their position/appearance.
Thank you very much.
As far as I know, it's not possible this way.
You can't feed a custom input into the NetStream for encoding.
You have the following options:
if you can reproduce the same elements on the other side, create an API that only passes the interactions (e.g. drawLine(startX,startY,endX,endY), loadImage(url), etc.). This way everything is shown on both PCs, with much less data traffic and CPU usage (see the sketch after this list)
if you have a very complex stage and it's somehow impossible to reproduce it on the other side, you can take bitmap snapshots, JPEG-encode them, and send them through FMS (not too nice)
use a webcam splitter that grabs the stage and acts as a webcam source (not too nice)
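For illustration, here is a minimal sketch of the first option's idea, written in Python only for brevity (a real implementation would send ActionScript objects over a NetConnection or a shared object; the message names drawLine/loadImage come from the list above, everything else is an assumption):

import json

# Sender side: pack one interaction as a small JSON message.
def encode_interaction(name, **params):
    return json.dumps({"cmd": name, "params": params})

# Receiver side: replay each message against local drawing routines.
def draw_line(startX, startY, endX, endY):
    print(f"drawing line ({startX},{startY}) -> ({endX},{endY})")

def load_image(url):
    print(f"loading image from {url}")

HANDLERS = {"drawLine": draw_line, "loadImage": load_image}

def apply_interaction(message):
    msg = json.loads(message)
    HANDLERS[msg["cmd"]](**msg["params"])

# A few dozen bytes per interaction instead of a full video frame:
apply_interaction(encode_interaction("drawLine", startX=0, startY=0, endX=100, endY=50))
apply_interaction(encode_interaction("loadImage", url="http://example.com/bg.png"))

The point of the pattern: both sides run the same drawing code, so only the tiny command messages cross the wire.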
I want to know if there is any way to send a byte array (that represents a simple image) to some application, and have that application show the image on a screen connected to the current machine.
I have 2 screens connected to my machine.
On the first screen I want to show the operator application that I wrote.
On the other screen I want to show the output of the video that I hold, i.e. the second screen will show running images.
Is there a way to do it?
If there is, how?
Most operating systems today do not allow direct access to the hardware from user mode programs. However, they do provide interfaces that can accomplish what you need.
Typical examples are APIs like OpenGL, DirectX, or SDL.
You should choose and use one, depending on your OS and exact requirements.
Most operating systems support multi-monitor display. Your app must create two windows (using whatever native windowing-system API is available) and you can arrange them, either manually or programmatically, according to what you specified. For video output you need to pick a video format and use a library (e.g. ffmpeg) to decode and display it.
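Here is a minimal sketch of that idea (Python with Tkinter and Pillow, just to keep it short; the second monitor's x-offset of 1920 is an assumption you would replace with your real layout):

import tkinter as tk
from PIL import Image, ImageTk

WIDTH, HEIGHT = 320, 240

# Pretend this byte array arrived from the other application:
# a solid red RGB frame, 3 bytes per pixel.
raw_bytes = bytes([255, 0, 0] * WIDTH * HEIGHT)

root = tk.Tk()                      # operator window on the primary screen
root.title("operator application")

viewer = tk.Toplevel(root)          # output window for the second screen
viewer.title("video output")
# Assumed layout: primary monitor is 1920 px wide, so the second
# monitor starts at x=1920. Adjust to your actual configuration.
viewer.geometry(f"{WIDTH}x{HEIGHT}+1920+0")

image = Image.frombytes("RGB", (WIDTH, HEIGHT), raw_bytes)
photo = ImageTk.PhotoImage(image)
tk.Label(viewer, image=photo).pack()

root.mainloop()

For real video you would decode frames with a library (e.g. ffmpeg, as mentioned above) and update the label's image at the frame rate.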
I need my app to scan barcodes automatically. I have the barcodes and I have the app; how can I make the app read physical barcodes using automation in Appium?
Manually, I can scan a code by pointing the camera at a barcode.
I don't know how to do that while executing a test suite.
I had the idea of placing the mobile device on a stand or tripod and placing a barcode in front of it.
But the problem is that this way we can test only one barcode. I want to run through about 100-200 barcodes and check that app performance does not degrade. Can anyone suggest some ways?
This is a very interesting case. If you really want to test your app scanning the barcodes through the camera, then instead of looking for a solution through Appium, I think you have to look for a solution that exactly matches your manual process.
You can click the scan button using Appium (I assume); for example, you can write a script that clicks this button every 10 seconds.
The challenge is to point the camera at the next barcode as soon as the first scan is complete. Possible solution: I believe all the barcodes can be captured as image files on a PC. Copy these barcode images into a PowerPoint deck (or use any other program) so that the images are displayed automatically one by one.
Put your device in front of this PC, as you are already planning to do with the tripod stand. Focus it on the screen (you may need to do all these adjustments the first time). Run your script, do some trial runs, and synchronize the process with the right timeouts. I think this is feasible, though really not the best way to automate this scenario; a sketch of the Appium side follows below.
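For concreteness, here is a minimal sketch of the timed-scan loop using the Appium Python client; every capability value and the scan button's resource id below are assumptions, not taken from the question:

import time
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options().load_capabilities({
    "platformName": "Android",
    "appium:deviceName": "barcode-rig",             # assumed device name
    "appium:appPackage": "com.example.scannerapp",  # hypothetical app
    "appium:appActivity": ".MainActivity",
})

driver = webdriver.Remote("http://127.0.0.1:4723", options=options)

NUM_BARCODES = 200          # the 100-200 codes mentioned in the question
SECONDS_BETWEEN_SCANS = 10  # keep in sync with the slideshow timing

try:
    for _ in range(NUM_BARCODES):
        # Hypothetical resource id of the in-app scan button.
        driver.find_element(AppiumBy.ID, "com.example.scannerapp:id/scan").click()
        # Give the scan time to finish and the slideshow time to advance.
        time.sleep(SECONDS_BETWEEN_SCANS)
finally:
    driver.quit()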
I haven't tested it, but this blog post may be your answer: http://www.mobileqazone.com/profiles/blogs/simulating-camera-in-android-emulator. If not, you can try to bypass it by creating an API to upload an image to your server instead of reading it from the camera. I think the impact on your QA will not change dramatically (besides, it's very easy and fast to check that part manually).
We have an app that scans plenty of items such as barcodes, and also tracks the dimensions of objects through the camera.
I read the idea of synchronizing images into a slideshow, which is absolutely hilarious. The way I do it is with my own Node server app using WebSockets, which toggles images in response to HTTP requests. When this app is hosted on a laptop/iPad positioned exactly in front of the AUT, the test has full control over which barcode is shown in a particular time frame.
No synchronization is required at all, and it does the job.
It is a modified version of https://github.com/JangoSteve/websockets-demo
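The original demo is Node + WebSockets; as an illustration of the same remote-toggle idea, here is a minimal Python sketch using plain HTTP polling (the file names are assumptions): a tiny server shows the current barcode image in a browser page, and the test driver hits /next to advance to the next barcode.

import http.server

BARCODES = ["barcode_001.png", "barcode_002.png", "barcode_003.png"]
current = 0

# Browser page that reloads itself every second to pick up changes.
PAGE = ('<html><head><meta http-equiv="refresh" content="1"></head>'
        '<body style="margin:0"><img src="/image" style="width:100%"></body></html>')

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        global current
        if self.path == "/next":       # the test driver calls this to advance
            current = (current + 1) % len(BARCODES)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(BARCODES[current].encode())
        elif self.path == "/image":    # serve the current barcode image
            self.send_response(200)
            self.send_header("Content-Type", "image/png")
            self.end_headers()
            with open(BARCODES[current], "rb") as f:
                self.wfile.write(f.read())
        else:                          # the page shown on the laptop/iPad
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE.encode())

http.server.HTTPServer(("", 8000), Handler).serve_forever()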
There's a Stack Overflow thread elsewhere that points out that FireMonkey has to display video through the primary thread. I am trying to use a DirectX camera to snag a series of images (in Win 8.1 for now; other OSes can wait). So I use the SampleBufferReady and SampleBufferSync approaches from the Embarcadero example code (which just has a TImage on a form), but with enough changes that I never see anything. I need to do my display in a TImageViewer; pointing the TBitmap in SampleBufferSync at that viewer's bitmap is easy, but nothing displays. From a procedural viewpoint, pseudocode of what I want is
setup whatever
camera.startcapture
repeat
  repeat until framecaptured  {what SampleBufferReady should do: only fire when ready}
  Imageviewer.repaint  {inside SampleBufferReady?}
  inc(mycounter)  {inside SampleBufferReady?}
until (mycounter > mylimit) or (user interrupts video input)
camera.stopcapture
One could add a TTimer to slow things down. What I don't "get" is:
must I define my own TEvent to find out that the camera has snagged an image, or does this event already exist? I would have thought that SampleBufferReady would fire on the arrival of an image and that I could process whatever I like inside that event.
to display an image in something other than a TImage, will I need to turn off the camera, paint the bitmap, then turn the camera back on? If so, will SampleBufferReady need to contain a command to turn the camera off? Boy, does that sound clunky!
Suggestions?
Here is a complete source example. I tested the C++ version, which is the same as the Pascal one in terms of function calls and mechanism; only the syntax differs.
Download the Pascal version here.
The code works fine for both Android and desktop (I tested the C++ version), so download it, test it, and confirm the Pascal code for me.
I'm creating a turn-based game, and I was thinking of using Game Center to handle it, but the passed game object is evidently 64 KB max. Is there another way to pass objects between devices for this use, without having to create a database or storage server as a middleman? The game object itself in my case is probably a lot less than 64 KB, but there is some initial data I would like to send, such as images. By my calculations, the initial data for one game is about 500 KB, but after receiving those images once, the passed game object is just a couple of KB and never needs to include those images again.
Is there a way to send these images directly?
There are a few ways to get around the limit.
This answer mentions AllJoyn, which would allow you to transfer files of that size.
You could also send them indirectly by uploading them to your own server and passing a link to the file to the other player. For a turn-based game, this has the advantage of better reliability: you can retry on error for both the upload to the server and the download to the device, and you control it yourself. I would also recommend AFHTTPClient for this.
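To make the flow concrete, here is a minimal sketch in Python (just to illustrate the shape of the exchange; on iOS you would do the equivalent with AFHTTPClient, and the endpoint URL and response format below are assumptions):

import json
import requests

UPLOAD_URL = "https://example.com/game-assets"   # your own server (assumed)

def upload_initial_assets(image_path):
    # Upload a large image once; get back a URL to reference from then on.
    with open(image_path, "rb") as f:
        resp = requests.post(UPLOAD_URL, files={"file": f})
    resp.raise_for_status()
    return resp.json()["url"]        # assumed response shape: {"url": "..."}

def build_match_data(image_url, state):
    # The small object actually passed through Game Center (a few KB).
    payload = {"image_url": image_url, "state": state}
    return json.dumps(payload).encode("utf-8")

image_url = upload_initial_assets("board_background.png")
match_data = build_match_data(image_url, {"turn": 1, "scores": [0, 0]})
assert len(match_data) < 64 * 1024   # stays well under the 64 KB limit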
Is there another way to pass objects between devices for this use, without having to create a database or storage-server as middle man?
Without your own server, there isn't.
I'm working on an embedded home surveillance system. I want to interface a couple of serial-enabled JPEG capture cameras, maybe a couple of door sensors, etc. Problem is, I can't for the life of me figure out how to interface a camera to a microcontroller. Stills, streaming video, it doesn't matter - I can't find any how-to documentation on this.
I understand serial communications, and most of the camera documentation I've found out there describes the protocol necessary to instruct the camera to send the datastream down to the uC for capture. What they don't show is what you're supposed to do with the data once you get it.
Here's an example.
They show a great little video, and the datasheet describes which bytes must be sent to the camera to retrieve the image. What I need is an example or tutorial of some sort that will explain what to do with the stream of bytes that make up the image itself. How do I arrange those bytes into an image and save it as a file?
I've looked all over the place for a tutorial of some sort, but have come up dry. I'm not sure which processor I'll use for this project just yet, but this question isn't really processor-dependent. All I need is the algorithm, maybe a peek at a library, if one exists. I'll take that process and adapt it to my hardware, I just can't seem to find a place to get started.
Have any of you done this?
I think the details are pretty clear on page 10 of this document:
http://www.4dsystems.com.au/downloads/micro-CAM/Docs/uCAM-DS-rev4.pdf
First, one package is between 64 and 512 bytes, flexibly defined by the programmer. The image size is the actual JPEG image itself, nothing more or less, just pure JPEG data. The equation to calculate the number of packages from image_size and package_size is given on page 10.
Next, (package_size - 6) is to be used consistently everywhere, because 6 bytes of each package are used for non-data purposes, so (package_size - 6) is just the data, but you have to reassemble it yourself.
To assemble the data from the packages, you have to strip the 4-byte header and the 2-byte trailer from each package and concatenate what remains, sequentially, one after another.
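For example (numbers assumed for illustration): with package_size = 512, each package carries 512 - 6 = 506 data bytes, so a 38,400-byte JPEG arrives as ceil(38400 / 506) = 76 packages, with the last package only partially filled.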
Other facts:
a. "Set Package Size" command must be sent from host to CAM - before "SNAPSHOT" command, which capture the image from the camera into the CAM memory buffer.
b. Next is to send "SNAPSHOT" command to capture the image into memory buffer.
c. Last is to send "GET PICTURE" command (only one time, but data will come back multiple times - see diagram in page 15) to extract out all the images....and it will come back in the form of "package" as we have defined the size earlier in "set package size". Since u have calculate the formula u will know when to stop asking for the next package. And there is a verification byte - u have to used that to make sure data is correct.
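Here is a minimal sketch of that receive loop in Python with pySerial (the ACK command bytes are placeholders, not the real opcodes; take those, the header layout, and the verify rule from the datasheet):

import serial

PACKAGE_SIZE = 512                    # as configured with "Set Package Size"
DATA_PER_PACKAGE = PACKAGE_SIZE - 6   # 4-byte header + 2-byte trailer

def read_jpeg(port, image_size):
    # Reassemble one JPEG image of image_size bytes from the camera.
    num_packages = -(-image_size // DATA_PER_PACKAGE)   # ceiling division
    jpeg = bytearray()
    for _ in range(num_packages):
        # ACK the camera to request the next package (placeholder bytes).
        port.write(bytes([0xAA, 0x0E, 0x00, 0x00, 0x00, 0x00]))
        package = port.read(PACKAGE_SIZE)
        # Assumed header layout: 2-byte package ID + 2-byte data length,
        # little-endian; then the data; then the 2-byte trailer holding the
        # verify byte (check it before accepting the payload).
        data_len = int.from_bytes(package[2:4], "little")
        jpeg.extend(package[4:4 + data_len])
    return bytes(jpeg)

with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as port:
    image = read_jpeg(port, image_size=38400)  # size reported by GET PICTURE
    with open("capture.jpg", "wb") as f:
        f.write(image)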
I have not used this camera, but it looks like it works exactly the same as a camera I have used (the C328). Send an image resolution/colour command; when you want to get an image, send an image capture command. The camera responds by sending a binary file over the serial link.