I'm working on an embedded home surveillance system. I want to interface a couple of serial-enabled JPEG capture cameras, maybe a couple of door sensors, etc. Problem is, I can't for the life of me figure out how to interface a camera to a microcontroller. Stills, streaming video, it doesn't matter - I can't find any how-to documentation on this.
I understand serial communications, and most of the camera documentation I've found out there describes the protocol necessary to instruct the camera to send the datastream down to the uC for capture. What they don't show is what you're supposed to do with the data once you get it.
Here's an example.
They show a great little video, and the datasheet describes which bytes must be sent to the camera to retrieve the image. What I need is an example or tutorial of some sort that will explain what to do with the stream of bytes that make up the image itself. How do I arrange those bytes into an image and save it as a file?
I've looked all over the place for a tutorial of some sort, but have come up dry. I'm not sure which processor I'll use for this project just yet, but this question isn't really processor-dependent. All I need is the algorithm, maybe a peek at a library, if one exists. I'll take that process and adapt it to my hardware, I just can't seem to find a place to get started.
Have any of you done this?
I think the details are pretty clear on page 10 of this document:
http://www.4dsystems.com.au/downloads/micro-CAM/Docs/uCAM-DS-rev4.pdf
First, one package is between 64 and 512 bytes, flexibly defined by the programmer. The image size is the actual JPEG image itself, nothing more or less, just the pure JPEG data. The equation to calculate the number of packages from image_size and package_size is given on page 10.
Next, (package_size - 6) is what you have to use consistently everywhere, because 6 bytes of each package are used for non-data purposes, so (package_size - 6) is the actual data, and you have to reassemble it yourself.
To assemble the data from the packages, you have to strip the 4-byte header and 2-byte trailer from each package and concatenate the remaining data from all packages sequentially, one after another.
Other facts:
a. "Set Package Size" command must be sent from host to CAM - before "SNAPSHOT" command, which capture the image from the camera into the CAM memory buffer.
b. Next is to send "SNAPSHOT" command to capture the image into memory buffer.
c. Last is to send "GET PICTURE" command (only one time, but data will come back multiple times - see diagram in page 15) to extract out all the images....and it will come back in the form of "package" as we have defined the size earlier in "set package size". Since u have calculate the formula u will know when to stop asking for the next package. And there is a verification byte - u have to used that to make sure data is correct.
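To make the reassembly concrete, here is a rough Python sketch. The function name and the exact checksum rule are my assumptions (check them against the datasheet); the package layout (4-byte header, data, 2-byte trailer) is as described above, and the command bytes for "Set Package Size" / "SNAPSHOT" / "GET PICTURE" are in the datasheet and not shown here.

import math

def reassemble_jpeg(packages, image_size, package_size):
    # Rebuild the JPEG from the raw packages returned by "GET PICTURE".
    data_per_package = package_size - 6              # 6 bytes of overhead per package
    expected = math.ceil(image_size / data_per_package)
    if len(packages) != expected:
        raise ValueError("expected %d packages, got %d" % (expected, len(packages)))

    jpeg = bytearray()
    for pkg in packages:
        # Assumed layout per package: 4-byte header (ID + data length),
        # then the data, then a 2-byte trailer containing the verify code.
        payload = pkg[4:-2]
        # Assumption: the verify byte is the low byte of the sum of all bytes
        # before the verify code -- check this detail against the datasheet.
        if (sum(pkg[:-2]) & 0xFF) != pkg[-2]:
            raise ValueError("package checksum mismatch")
        jpeg += payload
    return bytes(jpeg[:image_size])                  # the last package may be padded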
I have not used this camera, but it looks like it works exactly the same as a camera (C328) I have used. Send an image resolution/colour command. When you want to get an image, send an image capture command. The camera responds by sending a binary file over the serial link.
Summary:
While a video is streaming in the internet browser, packets of mp4 files are being downloaded.
I would like to write some code that saves the packets into one file so the video is preserved after it is done streaming.
I think this could be done by taking the packets in as byte files, removing however many bytes make up the addressing header at the front, and then concatenating the rest, but I will be the first to admit that I have never done anything like this before and don't really know what I am doing.
If anyone out there has experience with this kind of thing, I am not looking for much, just a few words to point me in the right direction, or at least keep me from going too far in the wrong direction. I am really just coming at this project with half a CS degree and some notions about what to google right now.
Specific Questions:
Do you know how I can access the data packets being downloaded from the internet? For example, if I wanted to make a script that grabbed each of the mp4 data packets as they came in, what kind of command would I use to access them?
Do you know what would be the best way to save the video file? Should I access the data packets as they come in, or should I access them later, after they have been strung together by the media player? Is the second option even possible, or would the data be protected by that time?
If you don't have answers to these questions, are you able to give me a reference to a place that does? I haven't been able to find this subject covered here. Even some good keywords to search for would be useful.
Thanks for taking the time to read,
-Coder 357
TLDR: Trying to make code or script to save the mp4 packets into video files as they are streamed.
Progress So Far:
I suppose all I have done is press F12 and open up the network activity section of the page streaming the video I want to capture. I can see that it is rapidly fetching mp4 files in the KB and MB size range. That is what led me to the conclusion that I will likely need to string these mp4 files together into one file, after removing any extra bits and bytes at the front and end.
That being said, I don't have an understanding of how my media player holds the data, so there may be a much more convenient way to do this that I am not seeing.
Goal:
My best-case expectation for the project is a program or script which can be run from the command line and will combine mp4 data packets, from a given website, into a single video file.
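Roughly, the naive approach I have in mind is something like the following Python sketch (pure guesswork on my part; the URLs are made up, and I realise this only works if the segments really are pieces of one contiguous mp4):

import requests

# Hypothetical: segment URLs as they appear in the browser's network tab.
segment_urls = [
    "https://example.com/video/seg-0001.mp4",
    "https://example.com/video/seg-0002.mp4",
    # ... and so on for the rest of the stream
]

with open("combined.mp4", "wb") as out:
    for url in segment_urls:
        resp = requests.get(url)
        resp.raise_for_status()
        out.write(resp.content)   # append the raw bytes of each segment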
The device is a label printer. It can be connected to via Bluetooth and USB. I would imagine it is running some kind of Linux, as it has a fairly complex interface/screen, but I am not sure; in fact, this is something I would like to determine. My goal is to get a shell, or some other 'meaningful' connection, through which I can send commands/data that will trigger print events on the printer without using the manufacturer's software.
Connecting to the device in ubuntu via USB creates /dev/usb/lp0. I tried connecting to this using python's serial module, but it couldn't connect to the serial port.
Via Bluetooth I was also able to connect, using hcitool scan to get the device's MAC address and then rfcomm to connect (using this approach). This created /dev/rfcomm0, which I was able to connect to and send data to using python.
Is it feasible to mimic the data normally sent over usb/bluetooth by the manufacturer's software to print without the software? I assume getting this would be possible by 'sniffing' data sent over bluetooth while a normal print command is sent by the manufacturer's software (although I suppose there's no reason it would look intelligible to a human).
If this kind of mimicry is possible, I am wondering whether simply sending the equivalent data over bluetooth, for example, would result in a print event. So far I have no reason to believe that data I send via the bluetooth connection is not being received, but I have yet to get any kind of response (data or physical) from the bluetooth connection.
Any advice/suggestions on how I might achieve my overall goal would be appreciated.
This is certainly possible (sorry for answering 6 years later, but hopefully this will help anyone in need later on). I had a similar problem and this is how I solved it.
I have an MHT-P80F thermal printer. I figured out from its settings that it supports a protocol called TSPL. These are the instructions you send to the printer to tell it to do raw text printing, or even print bitmaps.
All you need to do is construct the correct byte stream (mostly human-readable ASCII) and send it to /dev/usb/lp0. I have not tested it via Bluetooth, but I assume it should be similar.
For example, if you want to print out a "Hello World", these instructions will suffice:
CLS
SIZE 80mm,50mm
GAP 5mm,0mm
HOME
TEXT 0,0,"0",0,1,1,"Hello World"
PRINT 1
Each line is separated by a "\n".
Explanations (more could be found by searching TSPL):
CLS Tell the printer to clear all previous staged commands.
SIZE Tell the printer the size of each label (width, height).
GAP Between each label there's 5mm space without paper.
HOME (Re-)locate the paper roll for a new print.
PRINT Start printing; the argument is the number of copies (1 here).
Note that these instructions are for discrete labels. For a continuous paper roll it might be different. TSPL implementations on different printers may differ, so you might need to experiment a bit.
Generally, if you can print a bitmap, then you can print virtually any document (e.g. using PIL in Python or Jimp in Node.js to generate an image beforehand). So here's the most useful BITMAP command:
BITMAP 16,24,40,256,0,<BYTE STREAM>
where
16: the starting (left most) X coordinate for your bitmap
24: the starting (top most) Y coordinate for your bitmap
40: the width of bitmap, in BYTES (see below)
256: the height of bitmap, in DOTS
0: the print mode, 0 meaning overwrite anything in that region
and <BYTE STREAM> is the binary (black/white) data of this image, from left to right and from top to bottom.
The bitmap width is given in bytes, so each byte represents 8 continuous horizontal dots in the image, with bit 7 (the highest) being the leftmost dot and bit 0 (the lowest) the rightmost. So if, as in the example, we write 40 for this parameter, the image would be 40x8 = 320 dots wide.
The bitmap height, by contrast, is given in dots.
Most such thermal printers have a DPI of 203. This is an interesting number to look at: 203/25.4 = 7.99, or roughly 8, so for the printer each 8 dots equal 1mm. In the example above, X=16 and Y=24 (both in dots) correspond to a starting location of X=2mm and Y=3mm.
And finally, you generally do not need to invert the colours of the image. In the BITMAP command, a 1 bit means a white (non-printed) dot, and a 0 bit means a black (heated) dot.
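To make that concrete, here is a rough Python sketch (not tested against this particular printer; the file name, label size and coordinates are placeholders) that uses PIL to pack an image into the byte stream described above and sends it to /dev/usb/lp0:

from PIL import Image

def tspl_bitmap(path, x=16, y=24, threshold=128):
    # Convert an image file into a TSPL BITMAP command (rough sketch).
    img = Image.open(path).convert("L")
    width_bytes = (img.width + 7) // 8            # BITMAP width is given in bytes
    # Pad the width up to a whole number of bytes (8 dots per byte).
    padded = Image.new("L", (width_bytes * 8, img.height), 255)
    padded.paste(img, (0, 0))
    # Threshold to 1-bit. In PIL mode "1", white pixels become 1 bits,
    # which matches TSPL (1 = white/unprinted, 0 = black/heated),
    # so no inversion is needed.
    bw = padded.point(lambda p: 255 if p >= threshold else 0, mode="1")
    header = "BITMAP {},{},{},{},0,".format(x, y, width_bytes, img.height)
    return header.encode("ascii") + bw.tobytes() + b"\n"

# Hypothetical usage: label size and file name are placeholders.
with open("/dev/usb/lp0", "wb") as printer:
    printer.write(b"CLS\nSIZE 80mm,50mm\nGAP 5mm,0mm\nHOME\n")
    printer.write(tspl_bitmap("label.png"))
    printer.write(b"PRINT 1\n")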
I am not sure about Bluetooth, but for USB printing you can use the CUPS library (libcups) and its APIs to do the printing. It uses the IPP protocol. Usually CUPS uses a .ppd file specific to the printer (which contains the details about the printer) to install it. For newer language versions such as PCL 5, 5e, 6, etc., there are generic PPD files that can be used to install any printer that uses the respective language.
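If Python is an option, a minimal sketch using the pycups bindings looks roughly like this (the printer name and file path are placeholders, and the printer must already be installed in CUPS):

import cups

conn = cups.Connection()              # connects to the local CUPS daemon (IPP under the hood)
print(list(conn.getPrinters()))       # names of the printers installed in CUPS
# Queue a file on a printer that was installed with its PPD file.
conn.printFile("my_label_printer", "/tmp/label.pdf", "test job", {})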
I want to be able to make my printer (HP DeskJet 1280 on USB) print out all raster data I have sent to it so far, without ejecting the page. I am sending only plain raster graphics and cursor positioning commands — no vector graphics, no text.
More precisely, I have two questions:
1) After sending some raster data to the printer (with Transfer Raster Data ("\033*b%dW")), how to make it print it out right away and stop, without ejecting the page?
2) After sending a vertical cursor positioning command with a positive argument (e.g., Vertical Cursor Positioning (Decipoints) ("\033&a%+dV")), how to make the printer advance the paper to the new position right away and stop there?
(Note that even solving only (1) would be almost sufficient, because advancing the cursor could be done indirectly by sending a blank raster of the appropriate height.)
Since PCL is a page description language, it could be actually impossible to do things at such a low level. But after an extensive search in the PCL documentation and the Internet, I have not yet found a definite negative answer either.
It seems that the printer has some kind of internal buffer to store its data, and that it flushes (i.e., prints out) that buffer when it grows large enough. If there were a command to tell the printer to flush that buffer immediately without doing anything else, everything would be fine. But I have not found such a command. Even "\033*rC" (End Raster Graphics) has no immediate effect.
I am using CUPS' USB backend to communicate with the printer, and have verified (using usbmon) that the backend actually sends all my commands to the printer as soon as it sees them, so it cannot be the issue of data getting stuck in the driver.
Commands that print out partial pages include, for example, "\033E" (Printer Reset), "\033%%-12345X" (Universal Exit Language), "\033&r1F" (Flush All Pages (including partial pages)) — but all of them also eject the partial page.
Can somebody suggest a clever way to do what I want, or confirm my impression that it is an absolute impossibility?
I am trying to display a JPEG image as it downloads, using part of the data, similar to what many web browsers or the Facebook app do.
First display a low-quality version of the image (using just part of the data) and then display the full image in full quality.
This is best shown in the VIDEO HERE.
I followed this SO question:
How do I display a progressive JPEG in an UIImageView while it is being downloaded?
but all I got was an imageview that renders as data keeps coming in - no low-quality version first, no true progressive download and render.
Can anyone share a code snippet or point me to where I can find more info on how this can be implemented in an iOS app?
I tried this link, for example, which shows JPEG info; it identifies the image as progressive:
http://www.webpagetest.org/jpeginfo/jpeginfo.php?url=http://cetus.sakura.ne.jp/softlab/software/spibench/pic_22p.jpg
and I used the correct code sequence:
-(void)connection:(NSURLConnection*)connection didReceiveData:(NSData*)data
{
    /// Append the data
    [_dataTemp appendData:data];
    /// Get the total bytes downloaded
    const NSUInteger totalSize = [_dataTemp length];
    /// Update the data source, we must pass ALL the data, not just the new bytes
    CGImageSourceUpdateData(_imageSource, (CFDataRef)_dataTemp, (totalSize == _expectedSize) ? true : false);
    /// We know the expected size of the image
    if (_fullHeight > 0 && _fullWidth > 0)
    {
        /// Create an image from the data decoded so far
        CGImageRef image = CGImageSourceCreateImageAtIndex(_imageSource, 0, NULL);
        if (image)
        {
            [_imageView setImage:[UIImage imageWithCGImage:image]];
            CGImageRelease(image);
        }
    }
}
but the code only shows the image when it has finished loading. With other images it will show the image as it downloads, but only top to bottom; there is no low-quality version first that progressively gains detail the way browsers do.
DEMO PROJECT HERE
This is a topic I've had some interest in for a while: there appears to be no way to do what you want using Apple's APIs, but if you can invest time in this you can probably make it work.
First, you are going to need a JPEG decoding library: libjpeg or libjpeg-turbo. You will then need to integrate it into something you can use with Objective-C. There is an open source project that uses this library, PhotoScrollerNetwork, which leverages the turbo library to decode very large JPEGs "on the fly" as they download, so they can be panned and zoomed (PhotoScroller is an Apple project that does the panning and zooming, but it requires pre-tiled images).
While the above project is not exactly what you want, you should be able to lift much of the libjpeg-turbo interface to decode progressive images and return the low quality images as they are received. It would appear that your images are quite large, otherwise there would be little need for progressive images, so you may find the panning/zooming capability of the above project of use as well.
Some users of PhotoScrollerNetwork have requested support for progressive images, but it seems there is very little general use of them on the web.
EDIT: A second idea: if it's your own site that you would use to vend progressive images (and I assume this, since there are so few to be found normally), you could take a completely different tack.
In this case, you would construct a binary file of your own design - one that had, say, 4 images inside it. The first four bytes would provide the length of the data following it (and each subsequent image would use the same 4-byte prefix). Then, on the iOS side, as the download starts, once you got the full bytes of the first image, you could use those to build a small low-res UIImage and show it while the next image was being received. When the next one fully arrives, you would update the low-res image with the newer, higher-res image. It's possible you could use a zip container and do on-the-fly decompression - not 100% sure. In any case, the above is a standard solution to your problem, and would provide near-identical performance to libjpeg with much, much less work.
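To illustrate the container idea, here is a rough Python sketch of how the server side might pack, say, four renditions of the same image into one file with 4-byte length prefixes (the file names and the big-endian choice are arbitrary assumptions, not a standard format):

import struct

# Hypothetical renditions of one photo, smallest first.
renditions = ["photo_tiny.jpg", "photo_small.jpg", "photo_medium.jpg", "photo_full.jpg"]

with open("photo.bundle", "wb") as out:
    for name in renditions:
        with open(name, "rb") as f:
            data = f.read()
        out.write(struct.pack(">I", len(data)))   # 4-byte big-endian length prefix
        out.write(data)                            # followed by the JPEG bytes

On the iOS side, as soon as the first length-prefixed chunk has fully arrived you can build a UIImage from just those bytes, show it, and replace it as each larger rendition completes.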
I have implemented a progressive loading solution for an app I am currently working on. It does not use progressive JPEG, as I needed more flexibility loading different-res versions, but I get the same result (and it works really well, definitely worth implementing).
It's a camera app working in tandem with a server. So the images originate with the iPhone's camera and are stored remotely. When the server gets the image, it gets processed (using imageMagick, but could be any suitable library) and stored in 3 sizes - small thumb (~160 x 120), large thumb (~400x300) and full-size (~ double retina screensize). Target devices are retina iPhones.
I have an ImageStore class which is responsible for loading images asynchronously from wherever they happen to be, trying the fastest location first (live cache, local filesystem cache, asset library, network server).
typedef void (^RetrieveImage)(UIImage *image);
- (void) fullsizeImageFromPath:(NSString*)path
completion:(RetrieveImage)completionBlock;
- (void)largeThumbImageFromPath:(NSString*)path
completion:(RetrieveImage)completionBlock;
- (void)smallThumbImageFromPath:(NSString*)path
completion:(RetrieveImage)completionBlock;
Each of these methods will also attempt to load lower-res versions. The completion block actually loads the image into its imageView.
Thus fullsizeImageFromPath will get the full-size version, and also call largeThumbImageFromPath;
largeThumbImageFromPath will get the large thumb, and also call smallThumbImageFromPath;
smallThumbImageFromPath will just get the small thumb.
These methods invoke calls that are wrapped in cancellable NSOperations. If a larger-res version arrives before any of its lower-res siblings, those respective lower-res calls are cancelled. The net result is that fullsizeImageFromPath may end up applying the small thumb, then the large thumb, and finally the full-res image to a single imageView, depending on which arrives first. The result is really smooth.
Here is a gist showing the basic idea
This may not suit you as you may not be in control of the server side of the process. Before I had implemented this, I was pursuing the solution that David H describes. This would have been a lot more work, and less useful once I realised I also needed access to lower-res images in their own right.
Another approach which might be closer to your requirements is explained here
This has evolved into NYXProgressiveImageView, a subclass of UIImageView which is distributed as part of NYXImagesKit
Finally ... for a really hacky solution you could use a UIWebView to display progressive PNGs (progressive JPEGs do not appear to be supported).
update
After recommending NYXProgressiveImageView, I realised that this is what you have been using. Unfortunately you did not mention this in your original post, so I feel I have been on a bit of a runaround. In fact, reading your post again, I feel you have been a little dishonest. From the text of your post, it looks as if the "DEMO" is a project that you created. In fact you didn't create it, you copied it from here:
http://cocoaintheshell.com/2011/05/progressive-images-download-imageio/ProgressiveImageDownload.zip
which accompanies this blog entry from cocoaintheshell
The only changes you have made are one NSLog line and the JPG test URL.
The code snippet that you posted isn't yours, it is copied from this project without attribution. If you had mentioned this in your post it would have saved me a whole heap of time.
Anyway, returning to the post... as you are using this code, you should probably be using the current version, which is on github:
https://github.com/Nyx0uf/NYXImagesKit
see also this blog entry
To keep your life simple, you only need these files from the project:
NYXProgressiveImageView.h
NYXProgressiveImageView.m
NYXImagesHelper.h
NYXImagesHelper.m
Next, you need to be sure you are testing with GOOD images.
For example, this PNG works well:
http://www.libpng.org/pub/png/img_png/pnglogo-grr.png
You also need to pay attention to this cryptic comment:
/// Note: Progressive JPEG are not supported see #32
There seems to be an issue with JPEG tempImage rendering which I haven't been able to work out - maybe you can. That is the reason why your "Demo" is not working correctly, anyway.
update 2
added gist
I believe this is what you are looking for:
https://github.com/contentful-labs/Concorde
A framework for downloading and decoding progressive JPEGs on iOS and OS X, which uses libjpeg-turbo as the underlying JPEG implementation.
Try these; maybe they're useful for you:
https://github.com/path/FastImageCache
https://github.com/rs/SDWebImage
I had the same problem, then I found something tricky. It's not a proper solution, but it works.
You have to load the low-resolution/thumbnail image first, and once that has loaded, load the actual image.
This is an example for Android; I hope you can transform it into an iOS version.
I am performing processing on the camera output on the iPhone that I can only perform on the device. I now need to store each processed frame together with some additional information that has been provided by the processing step. Ultimately I need to open the data in Matlab to analyze it further.
Currently I'm using AVAssetWriter to write a mov file while simultaneously building up a separate data.txt file, which I email to myself at the end of each recording.
This process is unpleasant because it means I have to download the data.txt file every time I record something. Then I have to import the movie file from the phone (I obviously can't email the movie file because it's too big for email), and then I have to give them both a common name.
Is there a video format supported by iOS that I can open in Matlab and that would allow me to get per-frame data AND metadata? Is there a better method than what I'm currently doing? I saw there is an AVMediaTypeText, but I haven't found sample code for it and I doubt I can interleave it into a mov file...
It seems to me that you should create a "metafile" that includes the information about the name and location of the movie - then pass Matlab the name of the metafile, from which it accesses all the metadata and figures out which movie file to open. In other words, turn the problem around: instead of adding metadata to the mov, add mov information to the metafile.
I think that would solve 90% of the headache you are referencing?
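For example, the metafile could be a small JSON sidecar written next to the recording (a sketch; the field names are made up, and on the device you would produce the same thing with NSJSONSerialization):

import json

# Hypothetical sidecar describing one recording session.
meta = {
    "movie_file": "capture_001.mov",
    "data_file": "capture_001.txt",
    "frame_rate": 30,
    "notes": "one line of processing output per frame in data_file",
}

with open("capture_001.meta.json", "w") as f:
    json.dump(meta, f, indent=2)

Matlab then only needs the metafile name to locate and load both the movie and the per-frame data.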