In my iOS app I am trying to read in a JSON file and populate a table with its contents. Below is a sample of the JSON file.
Presently, I am reading the JSON into a mutable array called "items" and then populating the table cells like this:
// Populate the cell
if let showName = self.items[indexPath.row]["Show"] as? String {
    cell.textLabel!.text = showName
}
I would like to have the image for each JSON record appear in the cell as well and that is where I am getting tripped up.
I have the URL to the image but how do I get it into the table cell?
My entire approach may be wrong so I would appreciate being pointed in the right direction.
{
    "shows": [{
        "Day": "Sunday",
        "Time": "12am",
        "Show": "Show Name Here",
        "imgPath": "http://remoteserver/image.jpg"
    }, {
        "Day": "Sunday",
        "Time": "1am",
        "Show": "Show Name Here",
        "imgPath": "http://remoteserver/image.jpg"
    }, {
        "Day": "Sunday",
        "Time": "2am",
        "Show": "Show Name Here",
        "imgPath": "http://remoteserver/image.jpg"
    }]
}
I'd recommend using a library like SDWebImage for this. It provides methods for loading an image into an image view asynchronously, and lets you set a placeholder while the image is downloading.
You can set an image from a URL like this (copied from the SDWebImage docs):
[cell.imageView sd_setImageWithURL:[NSURL URLWithString:@"http://www.domain.com/path/to/image.jpg"]
                  placeholderImage:[UIImage imageNamed:@"placeholder.png"]];
More info here: https://github.com/rs/SDWebImage
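In Swift, pulling the URL out of your items array and handing it to SDWebImage inside tableView(_:cellForRowAt:) might look roughly like this ("ShowCell" and "placeholder" are assumed names, and sd_setImage(with:placeholderImage:) is the current Swift spelling of the method shown above):

import SDWebImage

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "ShowCell", for: indexPath)
    let item = items[indexPath.row]

    cell.textLabel?.text = item["Show"] as? String

    // SDWebImage downloads the image asynchronously, caches it,
    // and shows the placeholder until the real image arrives.
    if let imgPath = item["imgPath"] as? String, let url = URL(string: imgPath) {
        cell.imageView?.sd_setImage(with: url, placeholderImage: UIImage(named: "placeholder"))
    }
    return cell
}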
Apple has an example project for this using RSS: LazyTableImages. The code is not in Swift, but it shows exactly how Apple handled this.
The problem it sounds like you are having is that cells are being reused, and your image is either not being cleared or is being added to a reused cell.
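If you load the images yourself instead of using a library, the usual fix for the reuse problem is to reset the cell's image before starting a new download and to check that the cell still belongs to the same row when the download finishes. A minimal hand-rolled sketch, inside tableView(_:cellForRowAt:) (not taken from LazyTableImages; "placeholder" is an assumed asset name):

// Reset the reused cell first so it never shows the previous row's picture.
cell.imageView?.image = UIImage(named: "placeholder")

if let imgPath = items[indexPath.row]["imgPath"] as? String, let url = URL(string: imgPath) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data, let image = UIImage(data: data) else { return }
        DispatchQueue.main.async {
            // The cell may have been reused for another row while downloading,
            // so only assign the image if it is still on the same index path.
            if tableView.indexPath(for: cell) == indexPath {
                cell.imageView?.image = image
                cell.setNeedsLayout()
            }
        }
    }.resume()
}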
I am trying to programmatically create an annotation on a canvas in Mirador by calling a JavaScript function outside Mirador.
Basically, I have a textual representation of a scanned text that includes descriptions of certain phenomena on the scan, such as handwritten additions. If the user clicks on such a description, I want the phenomenon in question to be highlighted with a box on the specified canvas in an already opened Mirador instance mymirador that has several windows showing different scans.
For this, I try to pass the annotation as JSON using the receiveAnnotation action, but the annotation is not displayed at the correct place.
JavaScript (line breaks added for readability):
function openAnnotationInMirador() {
var item = '{"id": "url-to-canvas/annotation",
"type": "Annotation",
"motivation": "commenting",
"body": {
"type": "TextualBody",
"language": "en",
"value": "Some description"},
"target": "url-to-canvas/#xywh=100,100,200,200"}}';
mymirador.store.dispatch(Mirador.actions.receiveAnnotation('url-to-canvas/annotation', 'url-to-canvas', item));
}
The function is triggered by a simple HTML button with onclick: <button onclick="openAnnotationInMirador()">Mirador Test</button>.
I suspect the JSON to be incorrect, but I am quite at a loss here. Any suggestions are very much appreciated!
The third parameter to the receiveAnnotation action creator has to be a JavaScript object representing a Web Annotation AnnotationPage, with the actual annotation in its items array; in your case:
const annoPage = {
  id: 'dummy://my.annotation/page',
  type: 'AnnotationPage',
  items: [
    {
      "id": "url-to-canvas/annotation",
      "type": "Annotation",
      "motivation": "commenting",
      "body": {
        "type": "TextualBody",
        "language": "en",
        "value": "Some description"
      },
      "target": "url-to-canvas/#xywh=100,100,200,200"
    }
  ]
}
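With that page object in hand, the dispatch might look like this (the targetId, annotationId, annotationJson parameter order is assumed here; double-check it against the Mirador version you are using):

mymirador.store.dispatch(
  Mirador.actions.receiveAnnotation(
    'url-to-canvas',               // targetId: the canvas the annotation belongs to
    'dummy://my.annotation/page',  // annotationId: the id of the AnnotationPage above
    annoPage                       // the AnnotationPage object itself, not a JSON string
  )
);

Note that the third argument is a plain object, not a string, so there is no need to build the annotation as a quoted JSON literal.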
When working with the Redux actions in Mirador, it's very useful to install the Redux DevTools Browser Extension with which you can inspect the actions created by Mirador itself.
For different facets on the object page in a List Report, when I add a custom action and set the property "requiresSelection" to true, the action remains disabled.
I tried adding the code below in manifest.json:
"Sections": {
"to_PDL::com.sap.vocabularies.UI.v1.LineItem": {
"id": "to_PDL::com.sap.vocabularies.UI.v1.LineItem",
"Actions": {
"TestAction_Deactivate": {
"id": "TestAction_Deactivate",
"text": "Deactivate",
"press": "onDeactivate",
"requiresSelection" : true
}
}
}
}
The official SAP documentation says about this property:
"Property that indicates whether the action requires a selection of items (true) or not (false). The default value is true."
This means that you first have to select a row in the table; then the action becomes enabled.
Does it work for you this way?
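If, on the other hand, you want the action enabled without any selection, the documentation quoted above implies you would set the property to false explicitly, for example:

"TestAction_Deactivate": {
    "id": "TestAction_Deactivate",
    "text": "Deactivate",
    "press": "onDeactivate",
    "requiresSelection": false
}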
I'm trying to upload a post to Firebase which contains one thumbnail image and an unlimited number of subposts (the content behind the thumbnail). I have three steps for posting the pictures:
1: Handle the image upload task.
2: Create a "bulkUpload" function that creates image paths for each subpost.
3: Call "bulkUpload" (which also calls the image upload task).
The structure looks something like this:
(postid) {
    author: (author)
    likes: (number)
    pathToImage: (path)
    postId: (id)
    subposts {
        (id): (path)
        (id): (path)
        ...etc
    }
    userId: (userid)
}
Simple, and it mostly works. But not really: there is a strange problem with the subposts.
When I publish the post, everything works except the subposts. The first time I post a set of images, the subposts don't show at all.
Without adding or removing subposts, I tried posting a second time; this time the subposts do show, but at double the amount I selected in the picker and double what appears in the image views.
I will link the code from Pastebin since it's a little lengthy (and Stack Overflow doesn't like a lot of code).
I hope I can get this working.
https://pastebin.com/rpLZT6nm
@Matt is right; in your case you should update the data model that the collection view presents and then call reloadData() or reloadItems(at:):
self.updateModel()
self.collectionView.reloadData()
// or, to refresh only a single item:
self.collectionView.reloadItems(at: [IndexPath(item: 0, section: 0)])
But if you need direct access to visible cells, you can use another approach:
let image: UIImage? = nil
let path = IndexPath(item: 0, section: 0)
let visiblePaths = self.collectionView.indexPathsForVisibleItems
if visiblePaths.contains(path) {
    if let cell = self.collectionView.cellForItem(at: path) as? UploadSubPostCell {
        cell.previewStep.image = image
    }
}
You can use this approach to update only the visible cells, because invisible cells will be reused before they are shown on screen.
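Because of that reuse, it can also help to clear any stale image inside the cell class itself (UploadSubPostCell and previewStep come from the linked code; the override below is just a sketch):

class UploadSubPostCell: UICollectionViewCell {
    @IBOutlet weak var previewStep: UIImageView!

    override func prepareForReuse() {
        super.prepareForReuse()
        // Reset the image so a recycled cell never briefly shows the previous item's picture.
        previewStep.image = nil
    }
}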
I want to create a website-blocking app where parents can block any website they want by typing the link or tags into a UITextField. I can't work out how to have the user enter the links/tags into a UITextField (I only know how to add the links/tags manually by editing the .json file in the code).
Any help would be appreciated,
thx,
Noja
P.S. The code below is in the .json file:
[
    {
        "action": {
            "type": "block"
        },
        "trigger": {
            "url-filter": "example"
        }
    }
]
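For what it's worth, one way this could be wired up is to append a rule built from the text field, write the whole rule list to a file in a shared app group container, and ask Safari to reload the content blocker (the extension's beginRequest(with:) would then need to attach this file instead of the bundled one). This is only a sketch: the app group identifier, the extension bundle identifier, and the blockedRules property are placeholders, and the force-unwraps are for brevity.

import SafariServices

var blockedRules: [[String: Any]] = []   // assumed in-app store of all rules

// Called, for example, from an "Add" button next to the UITextField.
func addBlockedSite(from textField: UITextField) {
    guard let entry = textField.text, !entry.isEmpty else { return }

    // Build one rule per entered string, matching the JSON shown above.
    blockedRules.append([
        "action": ["type": "block"],
        "trigger": ["url-filter": entry]
    ])

    // Write the rules where the content blocker extension can read them (shared app group).
    let fileURL = FileManager.default
        .containerURL(forSecurityApplicationGroupIdentifier: "group.com.example.blocker")!  // placeholder
        .appendingPathComponent("blockerList.json")
    let data = try! JSONSerialization.data(withJSONObject: blockedRules)
    try! data.write(to: fileURL)

    // Ask Safari to reload the extension so the new rule takes effect.
    SFContentBlockerManager.reloadContentBlocker(withIdentifier: "com.example.app.blocker") { error in  // placeholder
        if let error = error { print("Reload failed: \(error)") }
    }
}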
I am trying to build a custom keyboard; it's like an emoji keyboard, but the keyboard's data comes from a JSON file. After parsing this JSON file and getting the data, how do I make the custom keyboard use it and show it in the keyboard view, like the built-in emoji keyboard? Right now I am following the App Extension Keyboard: Custom Keyboard guide, and there are only small bits of information there. Is there any tutorial or guide online about how to create a custom emoji keyboard? The code I am currently trying is below:
class KeyboardViewController: UIInputViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        var error: NSError?
        let yanFile = NSBundle.mainBundle().pathForResource("yan", ofType: "json")
        let yanData = NSData(contentsOfFile: yanFile) as NSData
        let yanDict = NSJSONSerialization.JSONObjectWithData(yanData, options: NSJSONReadingOptions.MutableContainers, error: &error) as NSDictionary
        println("dict: \(yanDict)") // prints nothing in the console

        // Perform custom UI setup here
        self.nextKeyboardButton = UIButton.buttonWithType(.System) as UIButton
        self.nextKeyboardButton.setTitle(NSLocalizedString("Next Keyboard", comment: "Title for 'Next Keyboard' button"), forState: .Normal)
    }
}
The JSON looks like this:
{
"list":
[
{
"tag": "laugh",
"yan":
[
"o(*≧▽≦)ツ┏━┓",
"(/≥▽≤/)",
"ヾ(o◕∀◕)ノ"
]
},
{
"tag": "wanna",
"yan":
[
"✪ω✪",
"╰(*°▽°*)╯",
"≖‿≖✧",
">ㅂ<",
"ˋ▽ˊ",
"✪ε✪",
"✪υ✪",
"ヾ (o ° ω ° O ) ノ゙",
"(。◕ˇ∀ˇ◕)",
"(¯﹃¯)"
]
}
]
}
You can build a xib file by clicking New File -> View.
1) Inside the xib file, create a UIView sized 320x216; you can drag and drop whatever controls you want into it.
2) Then you can load the nib into your keyboard's inputView like this:
// Perform custom UI setup here
UIView *layout = [[[NSBundle mainBundle] loadNibNamed:@"keyboardXib" owner:self options:nil] objectAtIndex:0];
[self.inputView addSubview:layout];
3) I think it would be amazing if you built a JSON-to-keyboard API: you send a JSON description of the keyboard map to your app, and the app knows how to arrange the keys on the inputView accordingly. Let us know if you build this project!
EDIT:
4) Most of what you need to do is parse the JSON and display the content you want in UIButtons, and also decide what text they insert into the text field; see the sketch after the link below.
check out this question: How to parse a JSON file in swift?
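As a rough illustration of that last step, written in current Swift (the question's code predates Swift 2, so the API names differ), the parsed JSON could drive plain UIButtons that type the emoticon via textDocumentProxy:

import UIKit

class KeyboardViewController: UIInputViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // Parse the bundled "yan" JSON (structure as shown in the question).
        guard let url = Bundle.main.url(forResource: "yan", withExtension: "json"),
              let data = try? Data(contentsOf: url),
              let dict = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let list = dict["list"] as? [[String: Any]] else { return }

        // One button per emoticon; a real keyboard would lay these out in a paged grid.
        let stack = UIStackView()
        stack.axis = .vertical
        stack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stack)
        NSLayoutConstraint.activate([
            stack.topAnchor.constraint(equalTo: view.topAnchor, constant: 8),
            stack.centerXAnchor.constraint(equalTo: view.centerXAnchor)
        ])

        for group in list {
            for yan in (group["yan"] as? [String]) ?? [] {
                let button = UIButton(type: .system)
                button.setTitle(yan, for: .normal)
                button.addTarget(self, action: #selector(yanTapped(_:)), for: .touchUpInside)
                stack.addArrangedSubview(button)
            }
        }
    }

    // A keyboard extension can only insert text, so each tap types the emoticon string.
    @objc private func yanTapped(_ sender: UIButton) {
        if let text = sender.currentTitle {
            textDocumentProxy.insertText(text)
        }
    }
}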
Good luck!
First of all, you need to create your keyboard UI in your KeyboardViewController. It's up to you how you customize it: add buttons, views, gestures, etc. (By the way, the height of the view is limited to the standard keyboard height, so don't try to make it taller; it won't draw.) The template that is generated is just a sample showing how you can put a single button in it. After you set up your UI, make sure you have a Next Keyboard button, since it's required.
Regarding emoji: they are not real images, just Unicode characters that are later replaced with images by the system. So you can't pass images; the only input you can provide is an NSString:
[self.textDocumentProxy insertText:@"hello "]; // Inserts the string "hello " at the insertion point
More details can be found here https://developer.apple.com/library/prerelease/ios/documentation/General/Conceptual/ExtensibilityPG/Keyboard.html.