OpenLayers 3 fitExtent capture replay not working

I'm creating a 'bookmarking' feature on my map, recording the extent of the current view via ol.View.calculateExtent(). Once I've grabbed this extent I persist it (no loss of precision, in 'EPSG:900913').
The problem is that if I feed this extent back into ol.View.fitExtent() I don't get exactly the same view; I get a slightly 'zoomed out' one.
The coordinates are exactly the same, as are the map size (ol.Map.getSize()) and even the resolution (ol.View.getResolution()), yet each time I replay a recorded 'view' it comes back further out than the one I recorded.
Any ideas how I can exactly record the current 'view' and replay it accurately? Is this rounding? Should I not be using fitExtent?
N.B. This doesn't ALWAYS happen! At high zooms it can sometimes accurately record and return me to the same view - views recorded at resolutions 2.388657133911758, 1.194328566955879 and 305.748113140705 do not seem to exhibit this behaviour.

It has been replaced by ol.View.fit in v3.7.0:
Replace ol.View.fitExtent() and ol.View.fitGeometry() with
ol.View.fit() ... This combines two previously distinct functions into
one more flexible call which takes either a geometry or an extent.
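A minimal sketch (not from the original answer) of an alternative that round-trips exactly: persist the view's center and resolution instead of fitting an extent, since fit/fitExtent may snap the result to a constrained resolution. It assumes an ol.Map instance named map:

// Sketch: persist the raw view state instead of an extent.
function saveView(map) {
  var view = map.getView();
  return { center: view.getCenter(), resolution: view.getResolution() };
}
function restoreView(map, saved) {
  var view = map.getView();
  view.setCenter(saved.center);
  view.setResolution(saved.resolution);
}

Because nothing is re-fitted, setCenter()/setResolution() reproduce the recorded view exactly at any zoom level.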

Related

Swift AVCaptureSession keep original image after zooming

Question: Is there a way to access both the original and the zoomed image after using the ramp function?
Use case: .ramp(..) is called every frame in captureOutput. Its zoom factor parameter is calculated by processing data from the original, non-zoomed image.
Issue: I can't seem to access both images at the same time. Inside captureOutput the sample buffers only contain the zoomed image.
Thoughts: I was initially thinking of researching a CAReplicatorLayer or Metal implementation of a backup layer, but I guess that would also hold a copy of the zoomed image. AVMultiCamSession would have been an easy solution if only it allowed adding the same device multiple times, so that ramp could be performed on one AVCaptureDeviceInput while the other remained unaltered. However, that is not permitted; you can add a device only once.

How to restrict findTransformEcc to a partial affine transform with scale but without shear?

I built a stereoscopic camera mobile app which performs automatic alignment using findTransformEcc, and the app is working pretty well with it. I know I should probably be using stereoRectifyUncalibrated preceded by keypoint and descriptor matching etc., but I get bad results from that despite many different approaches, and I'm super frustrated. So instead, I'm sticking with findTransformEcc (at least for now). At the moment I'm using MotionType.Euclidean (restricted to translations and rotations), but I would like to change that.
So far, the app has worked by having the user take one picture and then move to the side to capture the next (the 'cha-cha' method). Now I'm adding the ability to have two phones capture simultaneously. The problem is that the focal length and sensor size (angular field of view) may differ between the two cameras, so in order to align the two pictures I need to allow scaling/zooming. However, with findTransformEcc I can only step up from Euclidean to Affine; there seems to be nothing in between. That is, I apparently cannot allow scaling without also allowing shearing, and I don't want shearing.
Another way to explain this: I'd like the type of transform you get from estimateRigidTransform(array, array, false) (a partial affine), but rather than using keypoints as that function does, I want to use findTransformEcc, because from my experimentation it just seems to be more reliable.
(https://github.com/KRA2008/crosscam/blob/develop/AutoAlignment/OpenCV.cs is the auto-alignment code if that helps at all)
Take a look at a Fourier-Mellin transform based approach: https://github.com/Smorodov/LogPolarFFTTemplateMatcher
It will give you offset, scale and rotation parameters, nothing more.
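If you stay with the ECC approach instead, one possible workaround (a sketch, not from the answer above) is to estimate a full affine warp and then project its 2x2 linear part onto the nearest similarity transform, which keeps translation, rotation and uniform scale but discards shear. For a linear part [[a, b], [c, d]], the least-squares similarity is [[p, -q], [q, p]] with p = (a + d) / 2 and q = (c - b) / 2:

// Sketch: strip shear from a 2x3 affine warp [[a, b, tx], [c, d, ty]],
// keeping only the closest rotation + uniform scale + translation.
function toPartialAffine(warp) {
  var a = warp[0][0], b = warp[0][1], tx = warp[0][2];
  var c = warp[1][0], d = warp[1][1], ty = warp[1][2];
  var p = (a + d) / 2;  // s * cos(theta)
  var q = (c - b) / 2;  // s * sin(theta)
  return [[p, -q, tx],
          [q,  p, ty]];
}

The projected matrix can then be used as the warp for the actual alignment, or fed back in as the initial guess for another ECC refinement pass.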

How do you make Media Source work with timestampOffset lower than appendWindowStart?

I want to use appendBuffer and append only a piece of the media I have.
To cut the piece from the end, I use appendWindowEnd and it works.
To cut it from the beginning I have to set timestampOffset lower than appendWindowStart. I have seen shaka-player doing something similar.
var appendWindowStart = Math.max(0, currentPeriod.startTime - windowFudge);
var appendWindowEnd = followingPeriod ? followingPeriod.startTime : duration;
...
var timestampOffset = currentPeriod.startTime - mediaState.stream.presentationTimeOffset;
From my tests, it works when timestampOffset is:
the same as appendWindowStart
1/10 of a second lower
It doesn't work when timestampOffset is lower than that; the segment doesn't get added. Does that have something to do with my media, or does the spec/implementation not allow it?
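(For reference, a minimal sketch of the setup being tested; the codec string, times, mediaSource and segmentData below are placeholders:)

// Sketch: trim an appended segment on both ends (mediaSource is an open MediaSource).
var sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f"');
sourceBuffer.timestampOffset = 10;       // shift every frame's timestamp by +10 s
sourceBuffer.appendWindowStart = 10.1;   // drop frames whose shifted timestamp is below this
sourceBuffer.appendWindowEnd = 20;       // drop frames ending after this
sourceBuffer.appendBuffer(segmentData);  // segmentData: ArrayBuffer holding the media segment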
From MDN web docs:
The appendWindowStart property of the SourceBuffer interface controls the timestamp for the start of the append window, a timestamp range that can be used to filter what media data is appended to the SourceBuffer. Coded media frames with timestamps within this range will be appended, whereas those outside the range will be filtered out.
Just found this in the specification, so I am updating the question:
If presentation timestamp is less than appendWindowStart, then set the need random access point flag to true, drop the coded frame, and jump to the top of the loop to start processing the next coded frame.
Some implementations may choose to collect some of these coded frames with presentation timestamp less than appendWindowStart and use them to generate a splice at the first coded frame that has a presentation timestamp greater than or equal to appendWindowStart even if that frame is not a random access point. Supporting this requires multiple decoders or faster than real-time decoding so for now this behavior will not be a normative requirement.
If frame end timestamp is greater than appendWindowEnd, then set the need random access point flag to true, drop the coded frame, and jump to the top of the loop to start processing the next coded frame.
Some implementations may choose to collect coded frames with presentation timestamp less than appendWindowEnd and frame end timestamp greater than appendWindowEnd and use them to generate a splice across the portion of the collected coded frames within the append window at time of collection, and the beginning portion of later processed frames which only partially overlap the end of the collected coded frames. Supporting this requires multiple decoders or faster than real-time decoding so for now this behavior will not be a normative requirement. In conjunction with collecting coded frames that span appendWindowStart, implementations may thus support gapless audio splicing.
If the need random access point flag on track buffer equals true, then run the following steps:
If the coded frame is not a random access point, then drop the coded frame and jump to the top of the loop to start processing the next coded frame.
Set the need random access point flag on track buffer to false.
and
Random Access Point
A position in a media segment where decoding and continuous playback can begin without relying on any previous data in the segment. For video this tends to be the location of I-frames. In the case of audio, most audio frames can be treated as a random access point. Since video tracks tend to have a more sparse distribution of random access points, the location of these points are usually considered the random access points for multiplexed streams.
Does that mean that, for video, I have to choose a timestampOffset which lands on an I-frame?
The use of timestampOffset doesn't require an I-frame. It just shifts the timestamp of each frame by that value. That shift calculation is performed before anything else (before appendWindowStart gets involved).
It's the use of appendWindowStart that is affected by where your I-frames are.
appendWindowStart and appendWindowEnd act as an AND over the data you're adding.
MSE doesn't reprocess your data; by setting appendWindowStart you're telling the source buffer that any data prior to that time is to be excluded.
Also, MSE works at the fundamental level of a GOP (group of pictures): from one I-frame to the next.
So let's imagine a stream made of 16-frame GOPs, each frame having a duration of 1 s:
.IPPPPPPPPPPPPPPP IPPPPPPPPPPPPPPP IPPPPPPPPPPPPPPP IPPPPPPPPPPPPPPP
Say now you set appendWindowStart to 10
In the ideal world you would have:
. PPPPPPP IPPPPPPPPPPPPPPP IPPPPPPPPPPPPPPP IPPPPPPPPPPPPPPP
All of the previous 9 frames, whose timestamps start prior to appendWindowStart, have been dropped.
However, those P-frames now can't be decoded, hence the spec has MSE set the "need random access point" flag to true, so the next frame added to the source buffer can only be an I-frame,
and so you end up in your source buffer with:
. IPPPPPPPPPPPPPPP IPPPPPPPPPPPPPPP IPPPPPPPPPPPPPPP
Being able to add the frames between appendWindowStart and the next I-frame would be incredibly hard and computationally expensive.
It would require decoding all frames before adding them to the source buffer, and storing them either as raw YUV data or, if hardware accelerated, as GPU-backed images.
A source buffer could contain over a minute of video at any given time. Imagine if it had to deal with decompressed data rather than compressed data.
Now, if you wanted to preserve the same memory constraint as today (around 100 MiB of data maximum per source buffer), you would have to recompress the content on the fly before adding it to the source buffer.
not gonna happen.
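A hedged sketch of the practical takeaway: if you know where a keyframe sits inside the segment (keyframeTime below is a placeholder you would have to get from your own demuxing), you can pick timestampOffset so that the keyframe's shifted timestamp lands exactly on appendWindowStart, and then nothing decodable gets thrown away:

// Sketch: align the append window with a keyframe so no GOP gets truncated.
function alignToKeyframe(sourceBuffer, keyframeTime, targetStart) {
  // Shift the segment so the keyframe lands exactly on the desired start time...
  sourceBuffer.timestampOffset = targetStart - keyframeTime;
  // ...and open the append window there: the first retained frame is the I-frame,
  // so every frame that survives the window is decodable.
  sourceBuffer.appendWindowStart = targetStart;
}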

OpenCV: goodFeaturesToTrack and calcOpticalFlowPyrLK for moving camera

I tried the code written here:
http://i-vizon.blogspot.ch/2013/03/optical-flow-using-opencv-library-on.html
It works pretty well, but it does not perform well with a moving camera, because all the features are lost on a scene change.
Fundamentally the code is composed as follows:
First frame: goodFeaturesToTrack(grayFrames,points1,MAX_COUNT,0.01,5,Mat(),3,0,0.04);
Other frames:
calcOpticalFlowPyrLK(prevGrayFrame,grayFrames,points2,points1,status,err,winSize,3,termcrit,0,0.001);
goodFeaturesToTrack(grayFrames,points1,MAX_COUNT,0.01,10,Mat(),3,0,0.04);
followed by swapping the points and copying the current frame into the previous one.
Problem:
When I use it with a handheld camera and the scene changes from the first frames, no optical flow is produced, I suppose because the initial features are no longer contained in the new frames.
How can I refresh the feature points in this code so it keeps working?
What is a good refresh condition? For example, one based on the number of features?
Thank you very much.
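One common approach (not from this thread; just a sketch) is to re-run goodFeaturesToTrack whenever the number of successfully tracked points drops below a threshold. In opencv.js syntax, assuming prevGray/gray are the previous and current grayscale frames and prevPts holds the current feature set:

// Sketch (opencv.js): refresh the features when too few survive tracking.
var MIN_FEATURES = 50;    // refresh threshold - tune for your scenes
var MAX_FEATURES = 500;
function trackAndMaybeRefresh(prevGray, gray, prevPts) {
  var nextPts = new cv.Mat(), status = new cv.Mat(), err = new cv.Mat();
  var winSize = new cv.Size(21, 21);
  var criteria = new cv.TermCriteria(cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 30, 0.01);
  cv.calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err, winSize, 3, criteria);
  // Count how many points were actually found again in the new frame.
  var tracked = 0;
  for (var i = 0; i < status.rows; i++) {
    if (status.data[i] === 1) tracked++;
  }
  // Refresh condition: too few survivors (e.g. after a scene change) -> redetect.
  if (tracked < MIN_FEATURES) {
    nextPts.delete();
    nextPts = new cv.Mat();
    var mask = new cv.Mat();
    cv.goodFeaturesToTrack(gray, nextPts, MAX_FEATURES, 0.01, 10, mask, 3);
    mask.delete();
  }
  status.delete(); err.delete();
  return nextPts;  // becomes prevPts for the next frame (a real app would also drop points whose status is 0)
}

The same threshold-based refresh applies unchanged to the C++ calls quoted above.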

openCV: is it possible to time cvQueryFrame to synchronize with a projector?

When I capture camera images of projected patterns using openCV via 'cvQueryFrame', I often end up with an unintended artifact: the projector's scan line. That is, since I'm unable to precisely time when 'cvQueryFrame' captures an image, the image taken does not respect the constant 30Hz refresh of the projector. The result is that typical horizontal band familiar to those who have turned a video camera onto a TV screen.
Short of resorting to hardware sync, has anyone had some success with approximate (e.g., 'good enough') informal projector-camera sync in openCV?
Below are two solutions I'm considering, but I was hoping this is a common enough problem that an elegant solution might exist. My less-than-elegant thoughts are:
Add a slider control in the cvWindow displaying the video for the user to control a timing offset from 0 to 1/30th second, then set up a queue timer at this interval. Whenever a frame is needed, rather than calling 'cvQueryFrame' directly, I would request a callback to execute 'cvQueryFrame' at the next firing of the timer. In this way, theoretically the user would be able to use the slider to reduce the scan line artifact, provided that the timer resolution is sufficient.
After receiving a frame via 'cvQueryFrame', examine the frame for the tell-tale horizontal band by looking for a delta in HSV values for a vertical column of pixels. Naturally this would only work when the subject being photographed contains a fiducial strip of uniform color under smoothly varying lighting.
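A rough sketch of the second idea in plain JavaScript (the original code is OpenCV/C, but the check itself is just a brightness scan): read one pixel column from the frame and flag it if two adjacent rows differ by more than a threshold.

// Sketch: look for a large brightness step in one pixel column of an RGBA frame.
// pixels is a Uint8ClampedArray (e.g. from getImageData); width/height are in pixels.
function hasScanBand(pixels, width, height, column, threshold) {
  var prev = null;
  for (var y = 0; y < height; y++) {
    var i = (y * width + column) * 4;
    var luma = 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];  // rough luma
    if (prev !== null && Math.abs(luma - prev) > threshold) {
      return true;  // tell-tale horizontal band detected at this row
    }
    prev = luma;
  }
  return false;
}

Frames flagged this way could simply be discarded and the capture retried.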
I've used several cameras with OpenCV, most recently a Canon SLR (7D).
I don't think that your proposed solution will work. cvQueryFrame basically copies the next available frame from the camera driver's buffer (or advances a pointer in a memory-mapped region, or whatever your particular driver implementation does).
In any case, the timing of the cvQueryFrame call has no effect on when the image was captured.
So as you suggested, hardware sync is really the only route, unless you have a special camera, like a Point Grey camera, which gives you explicit software control of the frame-integration start trigger.
I know this has nothing to do with synchronizing but, have you tried extending the exposure time? Or doing so by intentionally "blending" two or more images into one?
