Kivy: Pixel density aware media

The native Android SDK supports differentiating media files with drawable-x directories, where x is one of ldpi (low), mdpi (medium), hdpi (high), or xhdpi (extra high).
Is there a way to control which media should be used for which kind of pixel-density?

You can access the dpi, density, resolution, etc. through the kivy.metrics module, which on Android uses the values reported by the system. Once you have that, you can easily choose a different image source depending on its value, though I don't think there is a standard property or widget for this.
I'm not really familiar with the mechanism and advantages of the usual Java method here, but it would probably be quite easy to make something very similar in Kivy. For instance, you could make your own Image widget subclass that chooses a specific image of a certain size depending on the pixel density.
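Here is a minimal sketch of that subclass idea, assuming asset files that follow an Android-style naming scheme such as icon-mdpi.png / icon-hdpi.png (the suffixes, bucket thresholds, and the base_source argument are illustrative assumptions, not part of Kivy's API):

```python
# Minimal sketch: pick an image variant based on Kivy's reported density.
# The density buckets and the "-ldpi"/"-mdpi"/... suffixes are assumptions.
from kivy.metrics import Metrics
from kivy.uix.image import Image


def density_bucket():
    """Map Metrics.density to an Android-style bucket name."""
    d = Metrics.density  # roughly 1.0 = mdpi, 1.5 = hdpi, 2.0 = xhdpi
    if d <= 0.75:
        return 'ldpi'
    if d <= 1.0:
        return 'mdpi'
    if d <= 1.5:
        return 'hdpi'
    return 'xhdpi'


class DensityImage(Image):
    """Image that resolves 'icon.png' to e.g. 'icon-hdpi.png' at runtime."""

    def __init__(self, base_source=None, **kwargs):
        super().__init__(**kwargs)
        if base_source:
            name, ext = base_source.rsplit('.', 1)
            self.source = f"{name}-{density_bucket()}.{ext}"
```

Usage would then be something like DensityImage(base_source='icon.png'), with one file per density bucket shipped alongside the app.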

Related

Display exr/hdr luminance in a viewer

I want to know if it's possible to display an EXR/HDR image, or even just RGB colors with different luminance levels, in any language.
I tried SDL, Python, C++, OpenGL and also Qt, but none of them has much documentation about it.
I also read about io and oiio; all of them apply tone mapping when displaying the image.
What I would like is something like the DisplayHDR Test from VESA, which shows test patterns at different luminance levels: a 100-nit monitor would show no difference between a 100-nit and a 200-nit patch, but a monitor capable of more than 200 nits would.
I am not sure, but I think they use video rather than still images.

How to overlay the RTDOSE and Image data in the correct position?

I'm currently an MS student in Medical Physics and I have a great need to be able to overlay an isodose distribution from an RTDOSE file onto a CT image from a .dcm file set.
I've managed to extract the image and the dose pixel arrays myself using pydicom and dicom_numpy, but the two arrays are not the same size! So, if I overlay the two, the dose will not be in the correct position relative to what the Elekta Gamma Plan software exported.
I've played around with dicompyler and 3DSlicer and they are obviously able to do this even though the arrays are not the same size. However, I don't think I can export the numerical data from these programs; I can only scroll through and view it as an image. How can I overlay the RTDOSE onto a CT image?
Thank you
For what you want, it sounds like you should use SimpleITK (or equivalent; my experience is with SITK) to do the DICOM handling, not pydicom.
DICOM has a complete built-in system for 3D point and location specification for all the pixel data in patient coordinates. This uses a bunch of attributes in the DICOM files in the Image Plane Module set of tags. See here for a good overview.
The SimpleITK library fully understands and uses the full 3D Image Plane tags to identify and locate any images in patient coordinates by default, irrespective of things such as the specific pixel spacing, slice thickness, etc.
So - in your case - if you use SITK to open your studies, then you should be able to overlay them correctly "out of the box", because SITK will do all the work to parse the Image Plane Module tags and locate the data in patient coordinates - just like you get with 3DSlicer.
Pydicom, in contrast, doesn't itself try to use any of that information at all. It only gives you the raw pixel arrays (for images).
Note I use both pydicom and SITK. This isn't something bad about pydicom, but more a question of right tool for the job. In fact, for many (most?) things I use pydicom, but for any true 3D type work, SITK is the easier toolkit to use.
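A minimal sketch of that workflow with SimpleITK, assuming a directory of CT slices and a single RTDOSE file (the paths are placeholders): read both, then resample the dose onto the CT grid so the two arrays line up in patient coordinates.

```python
# Sketch: resample an RTDOSE volume onto the CT grid with SimpleITK so the
# two arrays line up in patient coordinates. Paths are placeholders.
import SimpleITK as sitk

# Read the CT series (a directory of .dcm slices).
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("path/to/ct_series"))
ct = reader.Execute()

# Read the RTDOSE file (a single multi-frame DICOM).
dose = sitk.ReadImage("path/to/rtdose.dcm")

# Resample the dose onto the CT geometry. SimpleITK uses the Image Plane
# Module information (origin, spacing, direction) of both volumes here.
dose_on_ct = sitk.Resample(dose, ct, sitk.Transform(),
                           sitk.sitkLinear, 0.0, sitk.sitkFloat64)

ct_array = sitk.GetArrayFromImage(ct)            # shape: (slices, rows, cols)
dose_array = sitk.GetArrayFromImage(dose_on_ct)  # same shape, overlayable
```

Note that the RTDOSE pixel values generally still need to be multiplied by the file's DoseGridScaling attribute to get dose in Gy; that attribute is easy to read with pydicom.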

How to detect text in a photo

I am researching the best way to detect text in a photo using open source libraries.
I think the standard way is as follows (note: steps 1-4 all use OpenCV):
1) Detect the outline of the document
2) Transform the document so it's flat and cropped, using said outline
3) Make the background of the document white, using a filter
4) Feed the resulting image to Tesseract
Is this the optimum process, or is there a better way, or better tools?
Also, what happens if the photo doesn't have a document outline (in which case steps 1 & 2 would be redundant)?
Is there any way to automatically detect the document orientation (i.e. portrait/landscape)?
I think your process is fine. I've used a similar process for an Android project.
I think that the only way you can discover whether a document is portrait or landscape is to reason about the lengths of the sides of the bounding box of your outline.
I don't think there's a fully automatic way to do this. For the outline itself, you could find the most external contour that can be approximated with a 4-segment polyline (all doable in OpenCV). To get this you'll have to work with the contour hierarchy and contour approximation (see cv2.approxPolyDP).
This is how I would go about automatic outline detection; a rough sketch follows below. As I said, the rest of your algorithm seems just fine to me.
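Here is a rough sketch of that idea in Python with OpenCV, assuming the OpenCV 4 findContours signature; the Canny thresholds and the 0.02 epsilon factor are just common starting values, not anything definitive:

```python
# Sketch: find the most external 4-sided contour and guess the orientation
# from its bounding box. Thresholds are assumptions, tune for your photos.
import cv2


def find_document_outline(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 75, 200)

    # RETR_EXTERNAL keeps only the outermost contours of the hierarchy.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        if len(approx) == 4:  # a 4-segment polyline: probably the page
            return approx
    return None


def orientation(outline):
    # Reason about the sides of the outline's bounding box.
    _, _, w, h = cv2.boundingRect(outline)
    return "portrait" if h >= w else "landscape"
```

The outline returned here is what you would feed into the perspective transform in step 2.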
PS. I'll leave my Android project GitHub link. I don't know if it will be useful to you, but in it I specify the outline by dragging some handles, then transform the image and feed it to Tesseract, using Java and OpenCV. Yes, it's a very bad idea to do that in the main thread of an Android app, and yes, the app is not finished. I just wanted to experiment with OCR, so I didn't care much about performance or usability, since it was not intended for real use, just for studying.
Look up the uniform width transform.
What this does is detect edges which have more or less the same width with respect to their opposite edge. It picks up things like drainpipes (which can be eliminated in a later pass), but also the majority of text. Whilst conceptually it's similar to a distance transform, the published method uses rather ad hoc normal-projection methods and Canny edge detection.

Reducing Flash file size in KB?

I have a Flash file that I need to reduce the size of.
The reason that I need to reduce its size is that I will need to convert this into an iPhone app.
Currently it only has 2 buttons and 2 TLF text fields on the stage (scene one, layer one), and the size of the file is 355 KB.
I have also placed the code on layer 2.
Is there any way to reduce its size so I won't have problems when publishing and submitting it to the App Store?
Thanks
The biggest portion of that file size will be related to TLF. TLF (Text Layout Framework) is huge and is generally not recommended on mobile (as it has pretty high CPU usage).
If you're not using any TLF specific features, then it would be wise to change your text fields to use classic text instead (DF3).
Beyond TLF, make sure you're using vector objects instead of bitmaps wherever you can as that will drastically reduce file size. If you are using bitmaps, you can play around with the compression settings to optimize file size further. You can do this globally in the Publish Settings (JPEG Quality) or individually on a graphics properties menu.
One note on vector graphics and mobile: simple vectors will run OK, but complex vectors will run terribly. Make sure to set cacheAsBitmap = true; on any complex (or even all) vectors to improve performance. Or, in Flash Pro, click on a MovieClip, and in the Properties panel go to the "Display" twirl-down and set Cache as Bitmap in the Render setting.

Programmatically determine available iPhone camera resolutions

It looks like when I shoot video with UIImagePickerControllerQualityTypeMedium, on an iPod Touch it comes out 480x360, but on an iPhone 4 it's something higher (can't say just what as I don't have one handy at the moment) and on an iPad 2 presumably the same as the 4, if not something different again.
I'd like to shoot the same quality on all devices -- I have to add some frames and titles, and it'll make my life a lot easier if I just have to code that for one resolution. Is there any way to determine what the different UIImagePickerControllerQualityType values correspond to at run time? (Apart from shooting video with each and then examining the result, that is.)
Or is my only choice to use UIImagePickerControllerQualityType640x480?
If you need more customization/power on iOS than you get with the higher-level objects, such as UIImagePickerController, it is recommended to work at the next lower level: the AV Foundation framework. Apple has some excellent documentation on AV Foundation programming that should come in handy for that purpose.
Unfortunately, even there you are limited to capturing at 640x480 if you want it standard across all devices. There is, however, a great chart available at the same link (but anchors are broken in the docs, so Ctrl+F to "Capturing Still Images") that lists all the resolutions for various devices under certain quality directives.
Your most solid bet, assuming 640x480 is too small, is to work out some sort of scaling algorithm that would allow you to scale according to the overall resolution.
