I get detections in the form of a point (range index, doppler index). If I have multiple targets, I will get multiple points, one for each target. So how can the radar distinguish which point corresponds to which target?
There is a concept called "range resolution" that addresses this.
Range resolution is the ability of a radar system to distinguish between two or more targets on the same bearing but at different ranges. The degree of range resolution depends on the width of the transmitted pulse, the types and sizes of targets, and the efficiency of the receiver and indicator.
Pulse width is the primary factor in range resolution. A well-designed radar system, with all other factors at maximum efficiency, should be able to distinguish targets separated by one-half of the pulse width τ. Therefore, the theoretical range resolution cell of a radar system can be calculated from the following equation: d >= cτ/2, where c is the speed of light and τ is the pulse width.
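As a quick worked example (the 1 µs pulse width below is just an assumed value):

c = 3.0e8        # speed of light in m/s
tau = 1.0e-6     # assumed pulse width of 1 microsecond
d = c * tau / 2
print(d)         # 150.0 m: two targets on the same bearing closer than this merge into one detection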
You can read more about it here: http://www.radartutorial.eu/01.basics/Range%20Resolution.en.html
Background:
Assume there are two shots of the same scene taken from two different perspectives. Applying a registration algorithm to them will result in a homography matrix that represents the relation between them. Warping one of them using this homography matrix will (theoretically) result in two identical images (if the non-shared area is ignored).
Since nothing is perfect, the two images may not be absolutely identical; we may find some differences between them, and these differences become obvious when the images are subtracted.
Furthermore, differing lighting conditions may result in a huge difference when subtracting.
Problem:
I am looking for a metric with which I can evaluate the accuracy of the registration process. This metric should be:
Normalized: a 0-to-1 measurement that does not depend on the image type (natural scene, text, human, ...). For example, if two totally different registration processes on totally different pairs of photos yield the same confidence, say 0.5, this means the registration was equally good (or bad) in both cases. This should hold even if one pair consists of very detail-rich photos and the other of a white background with "Hello" written in black.
Distinguishing between misregistration error and different lighting conditions: although there are many ways to eliminate this difference and make the two images look approximately the same, I am looking for a measurement that simply does not count lighting differences, rather than fixing them (for performance reasons).
One of the first things that came to mind is to sum the absolute differences of the two images. However, this results in a number that represents the error in absolute terms. Such a number is meaningless when you want to compare it to another registration process, because another pair of images with better registration but more detail may give a larger error rather than a smaller one.
Sorry for the long post. I am glad to provide any further information and to collaborate in finding a solution.
P.S. Using OpenCV is acceptable and preferable.
You can always use invariant (lighting/scale/rotation) features in both images, for example SIFT features.
When you match these using the typical ratio test (distance to the nearest vs. the next-nearest neighbour), you'll have a large set of matches. You can calculate the homography using your own method, or using RANSAC on these matches.
In any case, for any homography candidate, you can calculate the number of feature matches (out of all of them) that agree with the model.
That number divided by the total number of matches gives you a 0-to-1 metric for the quality of the model.
If you use RANSAC using the matches to calculate the homography, the quality metric is already built in.
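Since OpenCV is acceptable, here is a minimal sketch of that inlier-ratio metric (the file names, the 0.75 ratio and the 5-pixel RANSAC threshold are placeholder choices, and at least 4 good matches are assumed):

import cv2
import numpy as np

# Placeholder file names for the two shots of the same scene.
img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT features and keep matches that pass the ratio test.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
knn = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC homography; the inlier mask is the built-in quality information.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
quality = float(mask.sum()) / len(good)   # fraction of matches agreeing with H, in [0, 1]
print(quality)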
The problem here is: given two images, decide how misaligned they are.
That's why we did the registration in the first place. The registration approach cannot itself tell how bad a job it did, because if it knew, it would have corrected it.
Only in the absolutely correct case do we know the result: 0.
You want a deterministic answer? Then add deterministic input:
for example, a red square at a known fixed position, whose rotation, translation and scale after warping can be measured directly. Under lab conditions this can be achieved.
Possible Duplicate:
Getting displacement from accelerometer data with Core Motion
Android accelerometer accuracy (Inertial navigation)
I am trying to use Core Motion's user acceleration values and double integrating them to derive the distance covered. I move my iPhone linearly along its Y axis, against a 30 cm long ruler, on the table. First, I let the device rest for 10 seconds, and I calculate my offsets along the three axes by averaging the respective user acceleration values.
The X, Y and Z offsets are subtracted from the acceleration values when I try calculating the distance covered. After offset subtraction, these values are passed through a low-pass filter and a median filter, separately of course. The filters are linear filters, and the cut-off frequency is specified by the number of neighbouring values whose mean is taken in the low-pass filter, and whose median is taken in the median filter. I have experimented with varying this number from 1 to 100. In the end, these filtered values are double integrated using the trapezoidal rule to get distances. But the calculated distance is nowhere close to 30 cm. The closest value I got was about -22 cm (I am wondering why I am getting negative values even though I move the device in the positive Y direction). I also came across this:
http://ajnaware.wordpress.com/2008/09/05/accelerating-iphones/
It's an old post about the same thing, which says that the accelerometer readings returned appeared to come in quanta of about 0.18 m/s^2 (i.e. about 0.018 g), resulting in a large cumulative error very quickly. Going by that, for this error to really not matter, one would have to accelerate the device by almost 1.8 m/s^2, which is practically impossible for distance/length measurement purposes. For small movements, it does not look like there is a possibility of calculating distances using an optimal filter and a higher-order numerical integration method without an impractical velocity/acceleration constraint like that. Is it possible?
How about using my acceleration vs. timestamp data to interpolate a polynomial that grows over time as I get more and more motion updates, representing an approximate acceleration vs. time curve? Double integration of this polynomial would be a piece of cake. But for small distances, the polynomial will have a big error component. Using a predictable, known motion that my device will be subjected to, I wish to take a huge number of snapshots (calculated distance vs. actual known distance) to estimate my error polynomial in a similar way, and then subtract it from the first polynomial. Can this work?
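For concreteness, here is a minimal sketch of the offset-subtraction, filtering (moving average only) and trapezoidal double-integration pipeline I described; the sample rate, window size and synthetic motion are made-up example values, not my real data:

import numpy as np

fs = 100.0                                   # assumed sample rate in Hz
# Stand-in for recorded userAcceleration values along Y (m/s^2):
# 10 s at rest, 1 s accelerating at 0.3 m/s^2, 1 s decelerating.
t = np.arange(0, 12, 1 / fs)
acc = np.where((t >= 10) & (t < 11), 0.3, 0.0) + np.where(t >= 11, -0.3, 0.0)

# Offset: average over the initial rest period, then subtract it.
offset = acc[: int(10 * fs)].mean()
acc = acc - offset

# Simple moving-average low-pass filter over N neighbouring samples.
N = 15
acc = np.convolve(acc, np.ones(N) / N, mode="same")

# Double integration with the trapezoidal rule.
dt = 1.0 / fs
vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) * dt / 2)))
pos = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) * dt / 2)))
print(pos[-1])                               # roughly 0.30 m for this synthetic move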
Although this does not fit StackOverflow, because it's not a question but a discussion, I'll try to sum up my thoughts about it.
As already said, the accelerometer is very inaccurate, and you would need very good accuracy for this kind of task, especially if you are trying to measure such short distances. Accelerometers also differ from device to device: you will get different results for the same movement with different devices. And on top of that there is a very large random error.
My guess is that you can get rid of a huge part of the randomness/error by calibrating the device and making the "measurement move" several times, say 10 times. After that you have enough data to get an average that might come close to the real value.
Calibration is a key part here; you have to think of a clever way to calibrate, for example letting the user move the device over different distances at different speeds.
But all this is just theory. I would really like to see your results, but I doubt you will get it working well enough even using the best possible filters/algorithms, since there is just too much noise.
I'm trying to detect characteristics of the time series for a very big region composed of many smaller subregions (in my case pixels). I don't know much about this, so the only approach I can come up with is an averaged time series for the entire region, although I know this would definitely conceal many features through averaging.
I'm just wondering if there are any widely used techniques that can detect the common features of a suite of time series, like pattern recognition or time series classification?
Any ideas/suggestions are much appreciated!
Thanks!
Some extra explanation: I'm dealing with remote sensing images spanning several years, with a time step of 7 days. So each pixel has an associated time series, with values extracted from that pixel on different dates. If I define a region consisting of many pixels, is there a way to detect or extract common features characterizing all or most of the time series of the pixels within this region? Such as the shape of the time series, or a date around which there's an obvious increase in the values?
You could compute the correlation matrix for the pixels. This would simply be:
import numpy as np  # data: an (npix, ntime) array holding one time series per pixel

corr = np.zeros((npix, npix))
for i in range(npix):
    for j in range(npix):
        corr[i, j] = np.sum(data[i, :] * data[j, :]) / np.sqrt(np.sum(data[i, :] ** 2) * np.sum(data[j, :] ** 2))
If you want more information, you can compute this as a function of time, i.e. divide your time series into blocks (say minutes) and compute the correlation for each of them. Then you can see how the correlation changes over time.
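For example, a minimal sketch of that per-block idea (the array sizes, random data and block length are arbitrary stand-ins for your per-pixel time series):

import numpy as np

npix, ntime, block = 50, 120, 10            # arbitrary example sizes
data = np.random.rand(npix, ntime)          # stand-in for the per-pixel time series
nblocks = ntime // block
corr_t = np.zeros((nblocks, npix, npix))
for b in range(nblocks):
    seg = data[:, b * block:(b + 1) * block]
    norm = np.sqrt(np.sum(seg ** 2, axis=1))
    corr_t[b] = (seg @ seg.T) / np.outer(norm, norm)   # correlation matrix for this block only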
If the correlation changes a lot, you may be more interested in the cross-power spectrum of the pixels. This is defined as
cpow[i, j, :] = np.fft.fft(data[i, :]) * np.conj(np.fft.fft(data[j, :]))
This will tell you how much pixel i and j tend to change together on various time-scales. For example, they could be moving in unison in time-scales of a second (1 Hz), but also have changes on a time-scale of, say, 10 seconds which are not correlated with each other.
It all depends on what you need, really.
I work with a lot of histograms. In particular, these histograms are of basecalls along segments on the human genome.
Each point along the x-axis is one of the four nitrogenous bases (A, C, T, G) that compose DNA, and the y-axis represents how many times a base was able to be "called" (or recognized by the sequencer machine, so as to sequence the genome, which is simply determining the identity of each base along the genome).
Many of these histograms display roughly linear dropoffs (when the machines aren't able to get sufficient read depth) that fall to 0 (or almost 0) from plateau-like regions. When the score drops to zero, it means the sequencer isn't able to determine the identity of the base. If you've seen the double helix before, it means the sequencer can't figure out the identity of one half of a rung of the helix. Certain regions of the genome are more difficult to characterize than others. Bases (or x data points) with high numbers of basecalls, on the order of >=100, can be definitively identified. For example, if there were a total of 250 calls for one base, and we had 248 T's called, 1 G called, and 1 A called, we would call that base a T. Regions with 0 basecalls are of concern, because then we have to infer from neighboring regions what the identity of the low-read region could be.
Is there a straightforward algorithm for assigning these plots a score that reflects this tendency? See box.net/shared/nbygq2x03u for an example histogram.
You could just use the count of base numbers where read depth was 0... The slope of that line could also be a useful indicator (steep negative slope = drop from plateau).
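A minimal sketch of both suggestions (the basecall counts are a made-up example array):

import numpy as np

depth = np.array([250, 240, 230, 180, 120, 60, 10, 0, 0, 0])   # made-up per-base call counts
zero_count = int(np.sum(depth == 0))                            # bases with no calls at all
slope = np.polyfit(np.arange(len(depth)), depth, 1)[0]          # overall drop-off slope
print(zero_count, slope)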
I want to create a game with a level structure similar to iCopter or Canabalt, where each level has a randomized floor (and roof), but the height of the floor is never impossible to reach from the previous one. I am also unsure how to continually increase the difficulty. I have searched far and wide for a tutorial or something like that, but I couldn't find anything. Can anyone help?
It sounds like far too specific a need to be the subject of a tutorial, to be honest. I've played Canabalt but not iCopter so I'll talk about a game like the former.
There are all sorts of calculus equations you could use to calculate acceleration and gravity to work out precisely where a platform would have to be in order to be reachable, but I suspect you will do just as well with a simpler approximation. If all your platforms are of a minimum length, then you can make an assumption on the speed that it's reasonable to expect a player to be able to reach by the time they get to the end. That, in combination with however long your jump algorithm keeps someone in the air, dictates the maximum distance that another platform of the same height could possibly be and still be reachable.
The highest platform you can reach is usually dictated by your jump algorithm - that could be a constant height, or it could be proportional to the speed, but either way you can easily estimate the highest reasonable jump you can make at the end of any given platform. This gives you a maximum relative height that you can reach from there.
Assuming your physics are fairly realistic and you apply a constant downwards force while the player is in the air, the apex of the jump will be at around the half-way point. So a platform that is the maximum attainable height relative to the player needs to be half as distant as one on the same level would be. And to find reasonable relative height and distance combinations in between, you can linearly interpolate.
Platforms below you are obviously more lenient - they can be further away than one on the same level, again by a distance roughly proportional to the speed you're travelling.
A simple algorithm then would be to pick, at each stage, either a higher or lower platform, pick a relative height from within the attainable bounds, then find the distance it needs to be at.
To adjust difficulty, you can start with these relative height and distance values above, which represent the extremes of what is possible, and reduce them by a proportion to make the jumps easier to complete. I might start with 50% reductions, +/-10% (randomised) to provide a few tougher jumps. Then as the game progresses, I'd slowly ramp that 50% down towards 0, so the player has less and less margin for error.
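A rough sketch of that algorithm (the speeds, jump parameters and difficulty numbers below are placeholder assumptions, not tuned values):

import random

def next_platform(speed, jump_time, max_jump_height, difficulty):
    # difficulty in [0, 1]: 0 = very forgiving, 1 = at the edge of what is reachable.
    max_dx_level = speed * jump_time                          # farthest reachable gap at the same height
    dy = random.uniform(-max_jump_height, max_jump_height)    # relative height (the drop limit is arbitrary)
    if dy >= 0:
        # Higher platforms must be closer; the apex is near the half-way point, so
        # interpolate linearly down to half the distance at the maximum height.
        max_dx = max_dx_level * (1.0 - 0.5 * dy / max_jump_height)
    else:
        # Lower platforms are more lenient and can be somewhat farther away.
        max_dx = max_dx_level * (1.0 + 0.5 * -dy / max_jump_height)
    margin = 0.5 * (1.0 - difficulty)                         # 50% easier at difficulty 0, none at 1
    return max_dx * (1.0 - margin), dy

# Example: ramp difficulty from 0 towards 1 over 100 platforms.
for i in range(100):
    dx, dy = next_platform(speed=8.0, jump_time=1.0, max_jump_height=3.0, difficulty=i / 100)
    # place the next platform dx ahead of, and dy above, the end of the current one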
EDIT: Since I posted this answer, I found another interesting source which may be of use: A Probabilistic Multi-Pass Level Generator. Although the game in question is different (one of the Mario games I don't recognise) many of the principles are similar, in terms of placing platforms at reasonable heights and distances. Java code is provided.
I'm not sure how comfortable you are with math, physics, etc. but, in my opinion, this is a pretty simple solution:
Using a formula to determine if a launched ball will clear a fence in the distance is a reasonable way to find an arc defining the possible farthest points the next platform could be. It's a standard formula you learn in physics when studying projectile motion. There's a fairly interactive example here that includes the equation.
I'd recommend determining the position for your next platform like this (a rough sketch follows these steps):
Randomly choose a horizontal distance X from the end of the current platform to the beginning of the next platform (determine a reasonable range for X however you want).
Use the fence problem to find a maximum value for height Y to make the platform reachable.
You may need to subtract a small amount from the maximum height to ensure the platform can be reached depending on how you have things implemented.
Choose a height Y that is no higher than the maximum (remember that you can and should allow negative values for Y).
Place the next platform past the current one at distance X and height Y.
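A minimal sketch of those steps (the run speed, jump velocity and ranges are assumed example values):

import random

def max_reachable_height(x, vx, vy, g=9.8):
    # Fence-problem check: height of the jump arc once the player has travelled
    # a horizontal distance x after leaving the platform at velocity (vx, vy).
    t = x / vx
    return vy * t - 0.5 * g * t * t

vx, vy = 8.0, 6.0                            # assumed run speed and jump velocity
x = random.uniform(2.0, 10.0)                # step 1: random horizontal gap
y_max = max_reachable_height(x, vx, vy)      # step 2: highest reachable platform at that gap
y = random.uniform(-3.0, y_max - 0.2)        # steps 3-4: small safety margin, negative Y allowed
print(x, y)                                  # step 5: place the next platform at (x, y)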