I have a C++ server application that receives .png images from a client Android application. The connection uses sockets; the image data arrives as chars and is stored in a vector:
std::vector<char> vec;
The application then needs to display the images on screen using DirectX 10. I know you can fill textures manually, but that would mean parsing the PNG file myself. My question is: is there any other way of doing this?
D3DX10CreateTextureFromMemory. It can decode the PNG directly from your in-memory buffer, so no manual parsing is needed.
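A minimal sketch of how this might look, assuming device is your existing ID3D10Device* and you link against the D3DX10 library (the wrapper name CreateTextureFromPng is only illustrative):

#include <d3d10.h>
#include <d3dx10.h>   // link against d3dx10.lib
#include <vector>

// device: the existing ID3D10Device*; vec: the raw PNG bytes from the socket.
// D3DX10 detects the file format from the buffer contents, so the PNG never
// has to be parsed by hand.
ID3D10Resource* CreateTextureFromPng(ID3D10Device* device,
                                     const std::vector<char>& vec)
{
    ID3D10Resource* texture = nullptr;
    HRESULT hr = D3DX10CreateTextureFromMemory(
        device,
        vec.data(),   // pointer to the in-memory PNG
        vec.size(),   // buffer size in bytes
        nullptr,      // D3DX10_IMAGE_LOAD_INFO: defaults are fine here
        nullptr,      // no ID3DX10ThreadPump, load synchronously
        &texture,
        nullptr);
    return SUCCEEDED(hr) ? texture : nullptr;
}

From the returned ID3D10Resource you can query an ID3D10Texture2D, or use D3DX10CreateShaderResourceViewFromMemory instead if you want a shader resource view for rendering in one step.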
I am trying to upload my image to a server by converting it to NSData, but the conversion takes too much time. Can we upload an image to a server without converting it to NSData?
At first I tried sending the image directly to the server. But when I tried to send it without converting it to data or a string, it could not be sent; it simply isn't possible. Anything we send to the server must be in a serialized (data/string) format. Sending the image object directly also takes a lot of space, and on the server side the image needs to end up as a URL or file path anyway.
It depends upon the source of these images. If the images are from the photos library, for example, then, yes, you can retrieve the original asset, avoiding the inefficient process of converting a UIImage to NSData (which can make the asset larger, introduce quality loss, lose metadata, etc.).
The key take-home message in this scenario is not "how to avoid converting to NSData", but rather "how to avoid round-tripping the image through a UIImage and then re-converting it back to NSData": just retrieve the original asset.
But this only applies to images for which you have the original raw asset (e.g. images in the photos library, images in the documents folder, etc.). In those cases, just grab the raw NSData and you're done.
But if this is an image that you generated programmatically (e.g. a snapshot of a UIKit view), then it will have to be converted to NSData. Even then, if you can avoid round-tripping it through a UIImage in the first place, you eliminate that overhead.
You can use the BLOB data type to store the data, though it is not highly recommended; see the documentation on the BLOB data type and BLOB storage services. Don't put large BLOBs in a database: put them in the filesystem next to your database file and write the filenames or a URL into the database instead. This is much faster, uses the database much more efficiently, and minimizes I/O.
I am developing an app in which I have to upload a file from the device to a server. I know how to send a file to the server using NSMutableURLRequest, but is it possible to store a file on the server without converting it to NSData?
I have seen the question "How upload image in to asp.net server without converting into NSData in iPhone", but didn't find any solution there.
Networks are analogue and computers and devices are digital; they don't understand what the content is. To them, everything is just digital data, so I guess you can see it is simply not possible. If you are having a specific problem converting your file into data and getting it back, you can go ahead and ask about that.
Data on a network always flows as bytes, irrespective of the platform. NSData will ultimately be converted to bytes before being sent over the network. At a higher level your data is in NSData form, but at a lower level it is bytes. So in iOS it will be either raw bytes (if you are using NSInputStream to upload the data) or NSData (which is converted to bytes again before being sent over the network).
I am developing an application that requires editing an image by drawing it onto a canvas and then saving the changes as an image. My question is: can this data URL be passed around the application and saved to the browser's local storage, to be produced back as an image when the application is loaded?
Yes, why not.
The only problem is that canvas creates very large file sizes and local storage has a size limit. For production, limit the use of local storage to about 2.5 MB; that should cover most browsers.
A better solution would be to post the data URL to a server and retrieve it later.
Check it out yourself: compare the file size of your canvas-created PNG to the file size of an optimized PNG's data URL.
I'm grabbing frames from a video stream. As a first step of my application, I captured one frame of the stream from a video port so I could transmit the raw UYVY video data. After running, these data are stored in a .dat file.
Meanwhile, before transmitting the raw data, I am looking for a way to display the decoded information stored in the .dat file. In other words, is there software that would convert the UYVY raw data into a picture?
Thank you for your assistance
Each of your .dat files is a UYVY 4:2:2 image that can be displayed.
Without any details about your platform and your runtime context, I would recommend the FFmpeg suite.
Using ffplay you could display your image with
ffplay -video_size WIDTHxHEIGHT -pixel_format uyvy422 filename.dat
As you can see, the only thing you need to know is the image size!
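If you prefer to write an ordinary image file rather than view it with ffplay, here is a rough sketch of the conversion. The width, height and file names are assumptions you must adjust; it treats the .dat file as a single packed UYVY 4:2:2 frame (the same layout ffplay's -pixel_format uyvy422 expects) and writes a PPM using the standard BT.601 YUV-to-RGB formulas:

#include <algorithm>
#include <cstdint>
#include <fstream>
#include <vector>

static uint8_t clamp8(int v) { return static_cast<uint8_t>(std::min(255, std::max(0, v))); }

int main()
{
    const int WIDTH = 640, HEIGHT = 480;   // assumption: set to your real frame size
    std::ifstream in("filename.dat", std::ios::binary);
    std::vector<uint8_t> uyvy(static_cast<size_t>(WIDTH) * HEIGHT * 2);   // 2 bytes per pixel
    in.read(reinterpret_cast<char*>(uyvy.data()), static_cast<std::streamsize>(uyvy.size()));

    std::ofstream out("frame.ppm", std::ios::binary);
    out << "P6\n" << WIDTH << " " << HEIGHT << "\n255\n";

    // UYVY 4:2:2: each 4-byte group U0 Y0 V0 Y1 encodes two horizontally
    // adjacent pixels that share one pair of chroma samples.
    for (size_t i = 0; i + 3 < uyvy.size(); i += 4) {
        int u  = uyvy[i]     - 128;
        int y0 = uyvy[i + 1];
        int v  = uyvy[i + 2] - 128;
        int y1 = uyvy[i + 3];
        const int ys[2] = { y0, y1 };
        for (int y : ys) {   // BT.601 YUV -> RGB
            uint8_t rgb[3] = { clamp8(y + static_cast<int>(1.402 * v)),
                               clamp8(y - static_cast<int>(0.344 * u + 0.714 * v)),
                               clamp8(y + static_cast<int>(1.772 * u)) };
            out.write(reinterpret_cast<const char*>(rgb), 3);
        }
    }
    return 0;
}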
I'm working on an embedded home surveillance system. I want to interface a couple of serial-enabled JPEG capture cameras, maybe a couple of door sensors, etc. Problem is, I can't for the life of me figure out how to interface a camera to a microcontroller. Stills, streaming video, it doesn't matter - I can't find any how-to documentation on this.
I understand serial communications, and most of the camera documentation I've found out there describes the protocol necessary to instruct the camera to send the datastream down to the uC for capture. What they don't show is what you're supposed to do with the data once you get it.
Here's an example.
They show a great little video, and the datasheet describes which bytes must be sent to the camera to retrieve the image. What I need is an example or tutorial of some sort that will explain what to do with the stream of bytes that make up the image itself. How do I arrange those bytes into an image and save it as a file?
I've looked all over the place for a tutorial of some sort, but have come up dry. I'm not sure which processor I'll use for this project just yet, but this question isn't really processor-dependent. All I need is the algorithm, maybe a peek at a library, if one exists. I'll take that process and adapt it to my hardware, I just can't seem to find a place to get started.
Have any of you done this?
I think the details are pretty clear on page 10 of this document:
http://www.4dsystems.com.au/downloads/micro-CAM/Docs/uCAM-DS-rev4.pdf
First, one package is between 64 and 512 bytes, flexibly defined by the programmer. The image size is the size of the actual JPEG image itself, nothing more and nothing less: just the pure JPEG data. The equation to calculate the number of packages, based on image_size / package_size, is given on page 10.
Next, (package_size - 6) is used consistently everywhere, because 6 bytes of each package are used for non-data purposes, so (package_size - 6) bytes are the actual data, but you have to reassemble it yourself.
To assemble the data from the packages, you have to strip the 4-byte header and 2-byte trailer from each one and concatenate the payloads from all the packages sequentially, one after another.
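A minimal sketch of that arithmetic, assuming each package has already been read into its own buffer (the function names and container layout are placeholders, not part of the uCAM API):

#include <cstddef>
#include <cstdint>
#include <vector>

// Number of packages to request: the image is split into (package_size - 6)
// byte payloads, rounded up.
size_t packageCount(size_t image_size, size_t package_size)
{
    const size_t payload = package_size - 6;
    return (image_size + payload - 1) / payload;
}

// Strip the 4-byte header and 2-byte trailer from each package and
// concatenate the payloads in order; the result is the pure JPEG stream.
std::vector<uint8_t> assembleJpeg(const std::vector<std::vector<uint8_t>>& packages)
{
    std::vector<uint8_t> jpeg;
    for (const std::vector<uint8_t>& pkg : packages) {
        jpeg.insert(jpeg.end(), pkg.begin() + 4, pkg.end() - 2);
    }
    return jpeg;
}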
Other facts:
a. "Set Package Size" command must be sent from host to CAM - before "SNAPSHOT" command, which capture the image from the camera into the CAM memory buffer.
b. Next is to send "SNAPSHOT" command to capture the image into memory buffer.
c. Last is to send "GET PICTURE" command (only one time, but data will come back multiple times - see diagram in page 15) to extract out all the images....and it will come back in the form of "package" as we have defined the size earlier in "set package size". Since u have calculate the formula u will know when to stop asking for the next package. And there is a verification byte - u have to used that to make sure data is correct.
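Putting a, b and c together, here is a rough sketch of the host-side sequence. The serial helpers are hypothetical placeholders for your own UART driver; the actual command bytes, ACK layout, and the way the camera reports the image size must all be taken from the datasheet:

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical serial helpers - not part of the uCAM API, substitute your own.
void sendSetPackageSize(size_t package_size);
void sendSnapshot();
void sendGetPicture();
void requestNextPackage(uint16_t package_id);        // host ACK asking for package N
size_t readPackage(uint8_t* buf, size_t max_len);    // blocks until one package arrives

std::vector<uint8_t> captureJpeg(size_t package_size, size_t image_size)
{
    sendSetPackageSize(package_size);                 // (a) must precede SNAPSHOT
    sendSnapshot();                                   // (b) capture into the CAM buffer
    sendGetPicture();                                 // (c) sent once; the image size is
                                                      //     reported back by the camera
    const size_t payload = package_size - 6;          // 6 bytes of overhead per package
    const size_t count = (image_size + payload - 1) / payload;

    std::vector<uint8_t> jpeg;
    std::vector<uint8_t> pkg(package_size);
    for (size_t i = 0; i < count; ++i) {              // the formula tells us when to stop
        requestNextPackage(static_cast<uint16_t>(i));
        size_t n = readPackage(pkg.data(), pkg.size());
        // TODO: check the verification byte here before trusting the payload.
        jpeg.insert(jpeg.end(), pkg.begin() + 4, pkg.begin() + (n - 2));  // strip header/trailer
    }
    return jpeg;                                      // pure JPEG, ready to write to a file
}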
I have not used this camera, but it looks like it works exactly the same as a camera (the C328) I have used. Send an image resolution/colour command. When you want an image, send an image capture command. The camera responds by sending a binary file over the serial link.