Image over an image - BlackBerry

I want to put a big image over a small image. The condition is that the image on top has a specific rectangular area where the second image will be displayed. I want the small image to be displayed inside the big image, not over the big image. I don't know whether this is possible; if it is, can anyone give me guidance, or provide a sample code or link?
Thanks a lot.

Here is a solution: why don't you put the small image on top of the big image? Will that work? That way the illusion is the same, as if the small image were inside. Otherwise you have to play around with alpha transparency.
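Purely to illustrate the layering idea (draw the big image first, then draw the small one into a rectangle that lies inside it), here is a rough sketch using Core Graphics on iOS rather than the BlackBerry API; the rectangle and image names are made-up placeholders, not code from this thread:

```swift
import UIKit

// Rough sketch of the layering idea (not BlackBerry API code): draw the big
// image first, then draw the small image into a rectangle that lies inside it.
// The rectangle is a made-up placeholder.
func composite(big: UIImage, small: UIImage, insideRect: CGRect) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: big.size)
    return renderer.image { _ in
        // Bottom layer: the big image fills the whole canvas.
        big.draw(in: CGRect(origin: .zero, size: big.size))
        // Top layer: the small image is drawn only inside the given rectangle,
        // so it looks as if it sits "inside" the big image.
        small.draw(in: insideRect)
    }
}

// Example (placeholder coordinates):
// let framed = composite(big: bigImage, small: smallImage,
//                        insideRect: CGRect(x: 40, y: 60, width: 120, height: 80))
```

On BlackBerry the equivalent idea would be two drawBitmap calls in a field's paint method, with the small bitmap drawn into the inner rectangle.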
PS. Rupesh, you should also go back to the 13 questions that you asked prior to this one and accept at least some answers. Otherwise chances are you will not get many answers to any of your new questions later on, because you are not rewarding the people who take the time to answer your questions with positive karma.

Related

Different background images for different scenes

I need some help on a trivia app that I am currently building. My question is: I would like each trivia question to have a different background image that corresponds to it.
I sort of know how to do it, but I am still really confused; I only know how to make an image a permanent background, not how to have it change for each question.
You can associate an image with each question if you want different, unique images for all questions.
But if you want to use random images and repetition is allowed, then you can use an array of images and randomly choose one from it.
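For example, a minimal UIKit sketch of both approaches, assuming you have a `backgroundImageView` in your question screen; the asset names here are placeholders:

```swift
import UIKit

// One background image per question (asset names are placeholders).
let backgroundNames = ["question1_bg", "question2_bg", "question3_bg"]

// Fixed mapping: question index decides the background.
func showQuestion(at index: Int, in backgroundImageView: UIImageView) {
    let name = backgroundNames[index % backgroundNames.count]
    backgroundImageView.image = UIImage(named: name)
}

// Or, if repetition is allowed, just pick a random one each time.
func showRandomBackground(in backgroundImageView: UIImageView) {
    if let name = backgroundNames.randomElement() {
        backgroundImageView.image = UIImage(named: name)
    }
}
```

Call one of these every time you move to the next question, before setting the question text.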

How do I Display Select Pixels of an Image Using Swift?

I'm trying to define pixels of an image based on their color code, and display them on command. I've been researching this and can't seem to find information on the subject, I've mostly been pointed towards "Animations" and "Core Image Filters" which isn't quite what I'm looking for. Any information on the subject is much appreciated.
https://github.com/jjxtra/DRColorPicker
Try this sample code for a color picker. One of the views is a plane of continuous colors and is probably what you want. Apple has good sample code, and the Swift book is great for avoiding bad questions. You have to watch it as a beginner: ask too many simple questions on this site and they will block you from asking more.
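For the "define pixels by their color code" part itself, here is a hedged Swift sketch (assuming an RGBA8 bitmap; the default tolerance is an arbitrary placeholder) that keeps only the pixels close to a target color and makes everything else transparent, so the result can be shown on command:

```swift
import UIKit

// Hedged sketch: keep only the pixels whose color is close to `target` and
// make everything else transparent. Assumes an RGBA8 bitmap.
func maskPixels(of image: UIImage,
                matching target: (r: UInt8, g: UInt8, b: UInt8),
                tolerance: Int = 30) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: height * bytesPerRow)

    let masked: CGImage? = pixels.withUnsafeMutableBytes { buffer in
        // Redraw the image into a buffer with a known RGBA8 layout.
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

        // Walk the pixels four bytes (R, G, B, A) at a time.
        for offset in stride(from: 0, to: buffer.count, by: 4) {
            let dr = abs(Int(buffer[offset])     - Int(target.r))
            let dg = abs(Int(buffer[offset + 1]) - Int(target.g))
            let db = abs(Int(buffer[offset + 2]) - Int(target.b))
            if dr > tolerance || dg > tolerance || db > tolerance {
                // Clear non-matching pixels so only the chosen color remains visible.
                buffer[offset] = 0; buffer[offset + 1] = 0
                buffer[offset + 2] = 0; buffer[offset + 3] = 0
            }
        }
        return context.makeImage()
    }
    return masked.map { UIImage(cgImage: $0) }
}
```

Put the returned image in a UIImageView (on top of the original, if you like) whenever the user asks to see those pixels.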

Is it possible for an iOS app to take an image and then analyze the colors present in said image?

For example after taking the image, the app would tell you the relative amount of red, blue, green, and yellow present in the picture and how intense each color is.
That's super specific, I know, but I would really like to know if it's possible and if anyone has any idea how to go about that.
Thanks!
Sure, it's possible. You'd have to load the image into a UIImage, then get the underlying CGImage, and get a pointer to the pixel data. If you average the RGB values of all the pixels you're likely to get a pretty muddy result, though, unless you're sampling an image with large areas of strong primary colors.
Erica Sadun's excellent iOS Developer Cookbook series has a section on sampling pixel image data that shows how it's done. In recent versions there is a "core" and an "extended" volume. I think it's in the Core iOS volume. My copy of Mac iBooks is crashing repeatedly right now, so I can't find it for you. Sorry about that.
EDIT:
I got it to open on my iPad finally. It is in the Core volume, in recipe 1-6, "Testing Touches Against Bitmap Alpha Levels." As the title implies, that recipe looks at an image's alpha levels to figure out if you've tapped on an opaque image pixel or missed the image by tapping on a transparent pixel. You'll need to adapt that code to come up with the average color for an image, but Erica's code shows the hard part - getting and interpreting the bytes of image data. That book is all in Objective-C. Post a comment if you have trouble figuring it out.
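Not Erica's code, but here is a rough Swift sketch of the averaging approach described above, assuming the image can be redrawn into an RGBA8 bitmap:

```swift
import UIKit

// Rough sketch: average the red, green and blue of every pixel in a UIImage.
// Assumes an RGBA8 redraw of the image; not production code.
func averageColor(of image: UIImage) -> (red: Double, green: Double, blue: Double)? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: height * bytesPerRow)

    // Redraw the image into a buffer whose byte layout (R, G, B, A) we control.
    let drawn: Bool = pixels.withUnsafeMutableBytes { buffer in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn else { return nil }

    // Sum each channel, then normalise to 0...1.
    var red = 0.0, green = 0.0, blue = 0.0
    for offset in stride(from: 0, to: pixels.count, by: 4) {
        red   += Double(pixels[offset])
        green += Double(pixels[offset + 1])
        blue  += Double(pixels[offset + 2])
    }
    let divisor = Double(width * height) * 255.0
    return (red / divisor, green / divisor, blue / divisor)
}
```

As the answer warns, a plain average tends to be muddy; counting pixels into red, green, blue and yellow buckets instead would be closer to the "relative amount of each color" the question asks for.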

Create custom UIImagePickerController to set aspect ratio of camera

Stack Overflow has maybe 1000 threads addressing and re-addressing cropping images on iOS, many of which contain answers claiming to work just like Instagram (my guess is they never opened Instagram). So instead of simply using the word Instagram, let me describe the functionality of what I am trying to do:
I want to create a custom UIImagePickerController such that:
CAMERA: I can either set the exact dimensions of the image that the camera takes, or, on the very next screen (i.e. retake/use image), have a custom rectangle that the user can move to crop the image just taken.
GALLERY: (same as above) set the dimensions of the frame that the user will use to crop the image.
So far on SO one answer comes close: the answer points to https://github.com/gekitz/GKImagePicker.
But the crucial problem with that project is that it only works with the gallery; it does not work with the camera.
So lastly, if you look at the Instagram app for iOS 7, it takes complete control of the picture-taking experience. How do I do that? I don't want to have my users go through the whole standard iOS UIImagePickerController experience and then, on top of that, go through my own cropper just to load an image. That's simply a terrible user experience. How many steps or screens should it take to capture or load a picture? Right? I know I can't be the only developer who would love to solve this problem. I don't mind being the one to find the solution and then share it with the world, but presently I don't even know where to start.
Does anyone have any ideas where I might start?
BTW: the following are not answers:
how to change the UIImagePickerController crop frame
Set dimensions for UIImagePickerController "move and scale" cropbox
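One hedged starting point (a sketch of one possible approach, not a confirmed recipe) is to skip UIImagePickerController entirely for the camera side and build your own capture screen with AVFoundation, which gives full control over the preview, shutter and cropping. The sketch below uses the current AVCapturePhotoOutput API rather than the iOS 7-era AVCaptureStillImageOutput, and the square preview and preset are assumptions:

```swift
import UIKit
import AVFoundation

// Hedged sketch: a custom camera screen built on AVFoundation instead of
// UIImagePickerController, so there is no extra "move and scale" step.
// Requires NSCameraUsageDescription in Info.plist.
final class SquareCameraViewController: UIViewController, AVCapturePhotoCaptureDelegate {
    private let session = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()
    private var previewLayer: AVCaptureVideoPreviewLayer!

    override func viewDidLoad() {
        super.viewDidLoad()
        session.sessionPreset = .photo

        // Wire the back camera into the session.
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input), session.canAddOutput(photoOutput)
        else { return }
        session.addInput(input)
        session.addOutput(photoOutput)

        // Square preview: the layer's frame decides what the user sees;
        // the captured photo is cropped to match afterwards.
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.videoGravity = .resizeAspectFill
        previewLayer.frame = CGRect(x: 0, y: 80,
                                    width: view.bounds.width, height: view.bounds.width)
        view.layer.addSublayer(previewLayer)

        session.startRunning()
    }

    // Wire this to your own shutter button.
    @objc func shutterTapped() {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else { return }
        // Crop `image` to the same square the preview showed, then hand it to
        // the rest of the app; no second crop screen needed.
    }
}
```

For the gallery side you would need a similar custom grid (for example via the Photos framework) to avoid the stock picker there as well.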

Reading, zooming and moving images such as remote sensing images (e.g. JPG, TIFF, IMG) with managed DX

Excuse me, my English is poor, but I will try to describe my question clearly.
First, I want to operate on (read, zoom, move, zoom to a rectangle) images in formats like JPG, TIFF and IMG.
I have tried to do this with GDAL, using RasterIO to zoom and move, but the result is quite strange: it is slower than doing it with GDI+. I have asked other people, and the answer may be that RasterIO reads the image directly from the hard disk while GDI+ works in RAM; also, the images I operate on are small, smaller than 4000*3000.
So now I operate on images with GDI+, but could I do the same things in DirectX?
I mean, use DirectX instead of GDI+, because I think it would be faster.
And because I can only use C#, I would appreciate suggestions about managed DX or XNA.
Thanks!
There is already a fast image viewer called TuiView that is simple to install and use.
Documentation is here: http://tuiview.org
If I understood your question, you are trying to build a simple image viewer.
If so, you can easily do it with XNA and it will work very fast.
All you need to do is load the image and display it on the screen; pan and zoom are also very simple.
Read this tutorial:
http://rbwhitaker.wikidot.com/spritebatch-basics
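The pan and zoom part is just a scale plus an offset applied when you draw. Here the idea is sketched in Swift/Core Graphics rather than XNA (SpriteBatch's position and scale parameters play the same role); the field names are made up:

```swift
import CoreGraphics

// Generic pan/zoom bookkeeping for an image viewer.
struct Viewport {
    var pan: CGPoint = .zero     // image point currently at the view's top-left
    var scale: CGFloat = 1.0     // zoom factor: screen pixels per image pixel

    // Rectangle (in screen coordinates) at which to draw the full image.
    func drawRect(imageSize: CGSize) -> CGRect {
        CGRect(x: -pan.x * scale, y: -pan.y * scale,
               width: imageSize.width * scale, height: imageSize.height * scale)
    }

    // Zoom in on a rubber-band rectangle selected in screen coordinates.
    mutating func zoom(toScreenRect rect: CGRect, viewSize: CGSize) {
        pan = CGPoint(x: pan.x + rect.minX / scale, y: pan.y + rect.minY / scale)
        scale *= min(viewSize.width / rect.width, viewSize.height / rect.height)
    }
}
```

Each frame you draw the whole image at `drawRect(imageSize:)`; dragging changes `pan`, and the rubber-band zoom updates both fields.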
