I'm working with the Photos framework; specifically, I'd like to keep track of the current camera roll status, updating it every time assets are added, deleted, or modified (mainly when a picture is edited by the user, e.g. a filter is added or the image is cropped).
My first implementation looked something like the following:
private var lastAssetFetchResult : PHFetchResult<PHAsset>?
func photoLibraryDidChange(_ changeInstance: PHChange) {
guard let fetchResult = lastAssetFetchResult,
let details = changeInstance.changeDetails(for: fetchResult) else {return}
let modified = details.changedObjects
let removed = details.removedObjects
let added = details.insertedObjects
// update fetch result
lastAssetFetchResult = details.fetchResultAfterChanges
// do stuff with modified, removed, added
}
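For context, this is roughly the setup that snippet assumes (a sketch; the class wrapper is mine, and the fetch uses nil options):

import Photos

final class LibraryObserver: NSObject, PHPhotoLibraryChangeObserver {
    private var lastAssetFetchResult: PHFetchResult<PHAsset>?

    override init() {
        super.init()
        // Fetch the camera roll's current state once...
        lastAssetFetchResult = PHAsset.fetchAssets(with: nil)
        // ...then observe subsequent library changes.
        PHPhotoLibrary.shared().register(self)
    }

    deinit {
        PHPhotoLibrary.shared().unregisterChangeObserver(self)
    }

    func photoLibraryDidChange(_ changeInstance: PHChange) {
        // Called on a background queue; the body is the snippet above.
    }
}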
However, I soon found out that details.changedObjects does not contain only the assets modified by the user, so I moved to the following implementation:
let modified = modifiedAssets(changeInstance: changeInstance)
with:
func modifiedAssets(changeInstance: PHChange) -> [PHAsset] {
var modified : [PHAsset] = []
lastAssetFetchResult?.enumerateObjects({ (obj, _, _) in
if let detail = changeInstance.changeDetails(for: obj) {
if detail.assetContentChanged {
if let updatedObj = detail.objectAfterChanges {
modified.append(updatedObj)
}
}
}
})
return modified
}
So I'm relying on the PHObjectChangeDetails.assetContentChanged property which, as the documentation states, indicates whether the asset's photo or video content has changed.
This brought the results closer to the ones I was expecting, but I still don't entirely understand its behavior.
On some devices (e.g. iPad Mini 3) I get the expected result (assetContentChanged = true) in all the cases that I tested, whereas on others (e.g. iPhone 6s Plus, iPhone 7) it's hardly ever matching my expectation (assetContentChanged is false even for assets that I cropped or added filters to).
All the devices are running the same, latest iOS version (11.2).
Am I getting anything wrong?
Do you think I could achieve my goal some other way?
Thank you in advance.
Related
I'm looking for a fast way to compare two frames of video and decide if a lot has changed between them. This will be used to decide whether I should send a request to an image recognition service over REST, so that I don't keep sending frames until the results might actually differ. The Vuforia SDK does something similar. I'm starting with a frame buffer from ARKit, which I have scaled to 640×480 and converted to an RGB888 vImage buffer. The comparison could look at just a few points, but it needs to determine reliably whether the difference is significant.
I started by calculating the difference between a few points using vDSP functions, but this has a disadvantage: if I move the camera even very slightly to the left or right, the same points cover different portions of the image, and the calculated difference is high even if nothing has really changed.
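For reference, the kind of vDSP comparison I mean looks roughly like this (a sketch, assuming both frames have already been sampled into equal-length Float arrays):

import Accelerate

// Mean absolute difference between two equally sized sample vectors
// (e.g. luminance values picked at the same pixel positions in
// consecutive frames).
func meanAbsoluteDifference(_ current: [Float], _ previous: [Float]) -> Float {
    precondition(current.count == previous.count)
    var diff = [Float](repeating: 0, count: current.count)
    // diff = current - previous
    vDSP_vsub(previous, 1, current, 1, &diff, 1, vDSP_Length(diff.count))
    var mean: Float = 0
    // Mean of magnitudes: average |difference| per sampled point.
    vDSP_meamgv(diff, 1, &mean, vDSP_Length(diff.count))
    return mean
}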
I was thinking about using histograms, but I haven't tested this approach yet.
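The rough idea would be something like the following (an untested sketch; it assumes a Planar8 luminance plane rather than my RGB888 buffer), the appeal being that histograms are largely insensitive to small camera shifts:

import Accelerate

// 256-bin histogram of a Planar8 (grayscale) vImage buffer.
func luminanceHistogram(of buffer: vImage_Buffer) -> [vImagePixelCount] {
    var buffer = buffer
    var bins = [vImagePixelCount](repeating: 0, count: 256)
    vImageHistogramCalculation_Planar8(&buffer, &bins, vImage_Flags(kvImageNoFlags))
    return bins
}

// Normalized L1 distance between two histograms: 0 means identical distributions.
func histogramDistance(_ a: [vImagePixelCount], _ b: [vImagePixelCount]) -> Float {
    let totalPixels = Float(a.reduce(0, +))
    var sum: Float = 0
    for (x, y) in zip(a, b) { sum += abs(Float(x) - Float(y)) }
    return sum / totalPixels
}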
What would be the best solution for this? It needs to be fast; it can compare just a smaller version of the image, etc.
I have tested another approach using VNFeaturePrintObservation from Vision. This works a lot better, but I'm afraid it might be more CPU-demanding; I need to test it on some older devices. Anyway, this is the part of the code that works nicely. If someone could suggest a better approach to test, please let me know:
private var lastScanningImageFingerprint: VNFeaturePrintObservation?
// Returns true if these are different enough
private func compareScanningImages(current: VNFeaturePrintObservation, last: VNFeaturePrintObservation?) -> Bool {
guard let last = last else { return true }
var distance = Float(0)
try! last.computeDistance(&distance, to: current)
print(distance)
return distance > 10
}
// After scanning is done, subclass should prepare suggestedTargets array.
private func performScanningIfNeeded(_ sender: Timer) {
guard !scanningInProgress else { return } // Wait for previous scanning to finish
guard let vImageBuffer = delegate?.currentFrameScaledImage else { return }
guard let image = CGImage.create(from: vImageBuffer) else { return }
func featureprintObservationForImage(image: CGImage) -> VNFeaturePrintObservation? {
let requestHandler = VNImageRequestHandler(cgImage: image, options: [:])
let request = VNGenerateImageFeaturePrintRequest()
do {
try requestHandler.perform([request])
return request.results?.first as? VNFeaturePrintObservation
} catch {
print("Vision error: \(error)")
return nil
}
}
guard let imageFingerprint = featureprintObservationForImage(image: image) else { return }
guard compareScanningImages(current: imageFingerprint, last: lastScanningImageFingerprint) else { return }
print("SCANN \(Date())")
lastScanningImageFingerprint = featureprintObservationForImage(image: image)
executeScanning(on: image) { [weak self] in
self?.scanningInProgress = false
}
}
Tested on an older iPhone: as expected, this causes some frame drops in the camera preview, so I need a faster algorithm.
I'm developing a QLThumbnailProvider extension to display thumbnails for my document type. My extension does not appear to be called: my thumbnails are not appearing, and the logging I've added isn't showing up in any log files.
I have a UIDocumentBrowserViewController-based app that defines a new document type. It exports a UTI (com.latenightsw.Eureka.form). My app is able to browse, create and open documents, but the thumbnails are blank.
I've added a Thumbnail Extension target to my project. The code looks like this:
class ThumbnailProvider: QLThumbnailProvider {
override func provideThumbnail(for request: QLFileThumbnailRequest, _ handler: @escaping (QLThumbnailReply?, Error?) -> Void) {
// Third way: Set an image file URL.
print("provideThumbnail: \(request)")
handler(QLThumbnailReply(imageFileURL: Bundle.main.url(forResource: "EurekaForm", withExtension: "png")!), nil)
}
}
I've confirmed that EurekaForm.png is part of the target and being copied to the extension's bundle (as well as the host app's bundle).
And I've confirmed that my UTI is declared in the extension's Info.plist.
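For reference, an exported declaration of that shape looks roughly like this (a sketch; the description and filename extension here are illustrative, not my exact values):

<key>UTExportedTypeDeclarations</key>
<array>
    <dict>
        <key>UTTypeIdentifier</key>
        <string>com.latenightsw.Eureka.form</string>
        <key>UTTypeDescription</key>
        <string>Eureka Form</string>
        <key>UTTypeConformsTo</key>
        <array>
            <string>public.data</string>
        </array>
        <key>UTTypeTagSpecification</key>
        <dict>
            <key>public.filename-extension</key>
            <array>
                <string>form</string>
            </array>
        </dict>
    </dict>
</array>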
Does anyone have any suggestions?
It appears that logging and breakpoints sometimes do not work inside app extensions. Even fatalErrors occur silently.
In my project I could not get the initialiser QLThumbnailReply(imageFileURL:) to work. However the other initialisers seem to work better.
Drawing the image into a context
When using the context initialiser you have to use a context size which lies between request.minimumSize and request.maximumSize.
Below I've written some code which takes an image and draws it into the context while keeping the above conditions.
override func provideThumbnail(for request: QLFileThumbnailRequest, _ handler: @escaping (QLThumbnailReply?, Error?) -> Void) {
let imageURL: URL = ... // put your own code here
let image = UIImage(contentsOfFile: imageURL.path)!
// size calculations
let maximumSize = request.maximumSize
let imageSize = image.size
// calculate `newImageSize` and `contextSize` such that the image fits perfectly and respects the constraints
var newImageSize = maximumSize
var contextSize = maximumSize
let aspectRatio = imageSize.height / imageSize.width
let proposedHeight = aspectRatio * maximumSize.width
if proposedHeight <= maximumSize.height {
newImageSize.height = proposedHeight
contextSize.height = max(proposedHeight.rounded(.down), request.minimumSize.height)
} else {
newImageSize.width = maximumSize.height / aspectRatio
contextSize.width = max(newImageSize.width.rounded(.down), request.minimumSize.width)
}
handler(QLThumbnailReply(contextSize: contextSize, currentContextDrawing: { () -> Bool in
// Draw the thumbnail here.
// draw the image in the upper left corner
//image.draw(in: CGRect(origin: .zero, size: newImageSize))
// draw the image centered
image.draw(in: CGRect(x: contextSize.width/2 - newImageSize.width/2,
y: contextSize.height/2 - newImageSize.height/2,
width: newImageSize.width,
height: newImageSize.height))
// Return true if the thumbnail was successfully drawn inside this block.
return true
}), nil)
}
I've gotten the Thumbnail Extension rendering, but as far as I can tell it only displays its renders in the Files app (other apps use the app icon).
It is important to note this issue with debugging extensions: printing to the console and breakpoints may not fire even though the extension is running.
I see that you have QLSupportedContentTypes set with your UTI, but you may also want to change your UTI to something new, as that is when it started working for me. I think the UTI can get corrupted after some testing. Even while it was working, a breakpoint I had set was never hit.
In my case, the extension didn't work in the simulator (Xcode 11.1). Everything works as expected on a real device (iOS 13.1.2).
OK, I am new to URL querying and this whole aspect of Swift, and I need help. As it is, I have an iMessage app that contains an SKScene. For the users to take turns playing the game, I need to send the game back and forth in messages within one session, as I learned here: https://medium.com/lost-bananas/building-an-interactive-imessage-application-for-ios-10-in-swift-7da4a18bdeed.
So far I have my scene all working; however, I've pored over Apple's ice cream demo, where they send the continuously built ice cream back and forth, and I can't understand how to "query" everything in my SKScene so I can send the scene.
I'm unclear as to how URLQueryItems work, as the documentation does not relate to SpriteKit scenes.
Apple queries their "ice cream" in its current state like this:
init?(queryItems: [URLQueryItem]) {
var base: Base?
var scoops: Scoops?
var topping: Topping?
for queryItem in queryItems {
guard let value = queryItem.value else { continue }
if let decodedPart = Base(rawValue: value), queryItem.name == Base.queryItemKey {
base = decodedPart
}
if let decodedPart = Scoops(rawValue: value), queryItem.name == Scoops.queryItemKey {
scoops = decodedPart
}
if let decodedPart = Topping(rawValue: value), queryItem.name == Topping.queryItemKey {
topping = decodedPart
}
}
guard let decodedBase = base else { return nil }
self.base = decodedBase
self.scoops = scoops
self.topping = topping
}
fileprivate func composeMessage(with iceCream: IceCream, caption: String, session: MSSession? = nil) -> MSMessage {
var components = URLComponents()
components.queryItems = iceCream.queryItems
let layout = MSMessageTemplateLayout()
layout.image = iceCream.renderSticker(opaque: true)
layout.caption = caption
let message = MSMessage(session: session ?? MSSession())
message.url = components.url!
message.layout = layout
return message
}
But I can't find out how to "query" an SKScene. How can I "send" an SKScene back and forth? Is this possible?
You do not need to send an SKScene back and forth :) What you need to do is send the information relating to your game set-up, such as the number of turns, whose turn it is, and so on, as information that can be accessed by your app at the other end to build the scene.
Without knowing more about how your scene is set up and how it interacts with the information received from the other player's session, I can't tell you a lot in terms of specifics. But if you are using URLQueryItems to pass the information, what you need to do is simply retrieve the list of query items in your scene and set up the scene based on the received values, as sketched below.
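As an illustration only (GameState and its fields are made up; replace them with whatever your game actually needs):

import Foundation

// Hypothetical state for a turn-based game, encoded as URL query items.
struct GameState {
    var turnNumber: Int
    var lastMove: String

    // Encode: state -> query items (attach these to the MSMessage URL).
    var queryItems: [URLQueryItem] {
        return [
            URLQueryItem(name: "turnNumber", value: String(turnNumber)),
            URLQueryItem(name: "lastMove", value: lastMove)
        ]
    }

    // Decode: query items from a received message URL -> state.
    init?(queryItems: [URLQueryItem]) {
        var turnNumber: Int?
        var lastMove: String?
        for item in queryItems {
            switch item.name {
            case "turnNumber": turnNumber = item.value.flatMap { Int($0) }
            case "lastMove": lastMove = item.value
            default: break
            }
        }
        guard let turn = turnNumber, let move = lastMove else { return nil }
        self.turnNumber = turn
        self.lastMove = move
    }

    init(turnNumber: Int, lastMove: String) {
        self.turnNumber = turnNumber
        self.lastMove = lastMove
    }
}

On the receiving side, take the message's url, run it through URLComponents(url:resolvingAgainstBaseURL:), and pass components.queryItems to GameState(queryItems:), then configure your SKScene from that state.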
If you have specific questions about how this could be done, and you either share the full project or post the relevant bits of code showing where you send out a message from one player and how the other player receives the information and sets up the scene, I (or somebody else) should be able to help.
Also, if you look at composeMessage in the code you posted above, you will see how, in that particular example, the scene/game information was sent to the other user. At the other end of the process, the received message's URL would be decomposed to get the values for the various query items, and the scene would then be set up based on those values. Look at how that is done in order to figure out how your scene should be set up.
I recently upgraded my iOS project to Swift 3 and iOS 10. Since then I've been running into a weird problem with Realm.
Here is what I'm trying to do: I have a set of Positions which I want to update with server data. So for every existing Position, I want to update it if a newer version exists; if there is a server Position that doesn't exist locally, I add it.
Here is the code for that:
let newPositions = serializePositions(jsonResponse)
for newPosition in newPositions {
if let existingPositon = uiRealm.object(ofType: Position.self, forPrimaryKey: newPosition.id as String) {
if (progress.learningVersion < serverLearningVersion) {
try! uiRealm.write {
existingPositon.rank = newPosition.rank
existingPositon.starred = newPosition.starred
}
}
} else {
try! uiRealm.write {
progress.positions.append(newPosition)
}
}
}
If I run this, something weird happens:
For the first item in the loop (the first Position) it works correctly.
But for the following Positions, the lookup returns nil for existing Positions, even when they exist.
The primary key in the Position model is a String field, and I use MongoDB object IDs from the server.
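For reference, the model is essentially the following (a simplified sketch of what the code above implies, in Swift 3 era Realm syntax):

import RealmSwift

class Position: Object {
    dynamic var id = ""
    dynamic var starred = false
    dynamic var rank = 0

    // String primary key used by the object(ofType:forPrimaryKey:) lookup.
    override static func primaryKey() -> String? {
        return "id"
    }
}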
This is how the Positions are serialized from JSON:
func serializePositions(_ json: JSON) -> List<Position> {
    let positions = List<Position>()
    let serverPositions = json["positions"].arrayValue
    for serverPosition in serverPositions {
        let position = Position()
        position.id = serverPosition["id"].stringValue
        position.starred = serverPosition["starred"].boolValue
        position.rank = serverPosition["version"].intValue
        positions.append(position)
    }
    return positions
}
I'm pretty new to Realm and iOS, and I hope I'm just making a stupid little mistake here. Thanks in advance for any ideas.
Cheers,
Raffi
I discovered the issue: I had simply closed a for loop at the wrong position. As the issue was not related to Realm or JSON in the end, I'm not posting the changes; they really wouldn't help anybody :)
I'm probably missing something. I'm trying to change the filter on my GPUImageView. It actually works the first two times (sometimes only once), and then stops responding to changes. I couldn't find a way to remove the target from my GPUImageView.
Code
for x in filterOperations {
    x.filter.removeAllTargets()
}
let f = filterOperations[randomIntInRange].filter
let media = GPUImagePicture(image: self.largeImage)
media?.addTarget(f as! GPUImageInput)
f.addTarget(g_View)
media?.processImage()
Any suggestions? (I'm processing a still image from my library.)
UPDATE
Updated Code
//Global
var g_View: GPUImageView!
var media: GPUImagePicture!

override func viewDidLoad() {
    super.viewDidLoad()
    media = GPUImagePicture(image: largeImage)
}

func changeFilter(filterIndex: Int) {
    media.removeAllTargets()
    let f = returnFilter(filterIndex) // e.g. GPUImageSepiaFilter()
    media.addTarget(f as! GPUImageInput)
    f.addTarget(g_View)

    // second part: capture the filtered image
    f.useNextFrameForImageCapture()
    let sema = dispatch_semaphore_create(0)
    media.processImageWithCompletionHandler({
        dispatch_semaphore_signal(sema)
        return
    })
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER)
    if let img = f.imageFromCurrentFramebufferWithOrientation(largeImage.imageOrientation) {
        // Usable - update UI with img
    } else {
        // Something went wrong
    }
}
}
My primary suggestion would be to not create a new GPUImagePicture every time you want to change the filter or its options that you're applying to an image. This is an expensive operation, because it requires a pass through Core Graphics and a texture upload to the GPU.
Also, since you're not maintaining a reference to your GPUImagePicture beyond the above code, it is being deallocated as soon as you pass out of scope. That tears down the render chain and will lead to a black image or even crashes. processImage() is an asynchronous operation, so it may still be in action at the time you exit your above scope.
Instead, create and maintain a reference to a single GPUImagePicture for your image, swap out filters (or change the options for existing filters) on that, and target the result to your GPUImageView. This will be much faster, churn less memory, and won't leave you open to premature deallocation.
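A sketch of that pattern (the wrapper class is mine; the GPUImage calls are the same ones used in the question):

import GPUImage
import UIKit

// Keep one GPUImagePicture alive and rewire filters on it as needed.
final class FilteredImageView {
    private let picture: GPUImagePicture
    private let renderView: GPUImageView

    init(image: UIImage, renderView: GPUImageView) {
        self.picture = GPUImagePicture(image: image)
        self.renderView = renderView
    }

    func apply(_ filter: GPUImageFilter) {
        picture.removeAllTargets()   // detach the previous chain
        picture.addTarget(filter)    // picture -> filter
        filter.addTarget(renderView) // filter -> view
        picture.processImage()       // re-render with the new filter
    }
}

Calling apply(_:) repeatedly swaps the filter without re-uploading the source image, which avoids both the Core Graphics pass and the premature deallocation described above.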