Not quite sure if this is on topic, so when in doubt feel free to close.
We have a client who is missing tracking data for a large segment of his visitors in his report suite. However, the complete set of data is available in a data warehouse. We are now investigating whether it is possible to import that data as a data source. I only have experience with enriching data via classifications; the goal here, however, is to create views (sessions, etc.) for a past timeframe from scratch.
According to the documentation this should be possible. However there is one caveat specifically mentioned in the FAQ:
"Adobe recommends you select new, unused variables to import data
using Data Sources. If you are uncertain about the configuration of
your data file, or want to better understand the risks of re-using
variables, contact Customer Care.“
I take that to mean that I should not import data into props, eVars, events, etc. that have already been used when data was collected via the tracker, which would pretty much defeat our purpose (basically we want to merge the data from the data warehouse with the existing data). Since I have to go through some intermediaries to reach Customer Care, and this takes a long time, I wonder if somebody here can explain what the dangers of re-using variables are (and maybe even whether there is still a way to do this).
DISCLAIMER: I'm not familiar with Adobe Analytics, but the problem here is pretty universal. If someone with actual experience/knowledge specific to the product comes along, pay more attention to them than me :)
As a rule, variable reuse in any system runs the risk of data corruption. I'm not familiar with Adobe Analytics, but a brief read through some blogs implies that this is what they're worried about in terms of variable reuse: if you have a variable that is being used in one section, and you import data into it in another section within the same scope, you overwrite the data that the other section was using.
Now, one of those blogs states that, provided you have your data structure set up in a specific way, you can reuse variables/properties without issue, and in fact encourages it, hence the statement in your quote, "If you are uncertain about the configuration of your data file...". They're probably warning you that if you know what you're doing and know that there won't be any overwriting, fine, go ahead and reuse; but if you don't, or you aren't sure whether something else might be using the original content, then it's unsafe.
Regarding your specific case: you want to merge the two pieces of data together, not overwrite, and reusing your existing variables would overwrite the existing data. It sounds like you will need to import into a second (new) set of variables, and then compare/merge between them within the system, rather than trying to import and merge in one go.
We have since received an answer from Adobe Customer Care.
According to them, the issue is that hits created via data imports are indistinguishable from hits created via server calls, so they cannot be removed or corrected after they have been imported. They recommend using different variables so that the original data remains recognizable and salvageable in case you import faulty data.
This was already mentioned in the online documentation, but apparently Adobe thinks this is important enough to issue the extra warning.
Currently, there are a lot of old tables in framework model A in the Cognos content store. We need to rename these old tables to new tables (the old tables' structure is the same as the new tables', but their names are different) automatically, not manually and not through Framework Manager. Does anyone have an idea how to do this? Could you share it with me? Many thanks.
It would help very much if you clarify what the situation is. Your terminology is confusing.
The Cognos content store is the data base which stores everything which appears in the UI of Cognos Analytics. This includes the data source connection definitions, reports, and administration functionality like the memberships of groups and roles.
It is controlled by Cognos and you should very much not touch or alter it.
The objects in a Framework Manager model are not stored in the content store. When you publish a package, the information in the package is written into the content store. Where exactly is not something you should need to know.
My understanding about your question is this:
Some tables in a data base which is used in your FM model have had their name changed. You want to alter the model so that the model uses the new tables rather than the old ones.
There is extensive functionality in the FM UI to deal with cases such as this, for example the remap-to-new-source functionality.
It should be possible to programmatically alter the model.
The nature of the change of the tables is important.
1. If the renaming consisted of changing the case of the table names (from all upper case, all lower case, or mixed case to all upper case or all lower case), and you are on 11.1.7, then you can use this utility:
https://www.ibm.com/support/knowledgecenter/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ug_fm.doc/c_fm_model_update_util.html
The utility does not have provision to specify a subset of tables to have their case changed. It will do the action on all of the data source query subjects. This might be a problem in your case.
2. I have never needed to do this and, consequently, do not know if it would really work, but you could edit the action logs. This would entail looking through the metadata import actions and the modelling actions which refer to the tables that were renamed and editing them to refer to the new tables.
You would really need to understand the action logs.
For example, if you happened to have expanded the node in the UI for any of the tables, views, or synonyms when you did the metadata import, the action log will have decided to do an import of the list of the tables (or views, etc.) of the database. This happens even if you closed the node again. If you didn't expand it, then the metadata import as captured in the action log will import every table (or view, etc.) generically. This means that if a table has been added, the action log playback will import it, and if a table has been deleted, the action log playback will not fail because it could not find the table. Why such a state could ever happen is beyond me, but that is what happens.
Do this on a copy. That should not be necessary to mention but you never know.
You would need to be absolutely sure that no-one decided that they were clever and manually edited the model.xml file. Manual editing does not get captured in the action logs.
You would need to be absolutely sure that no-one decided that they were clever and deleted some or all of the log files in the Cognos project.
You would need to be really really really sure that this is better than remapping. Frankly, the time invested in importing the new tables and remapping your model to use them would probably be less than the time invested in editing and then running the action logs, and then wading through the model to make sure you have not mucked things up.
Also, I don't know if editing the action logs is allowed in the licence so you could end up getting IBM mad at you.
I assume from your vocabulary that you are very new to Cognos...
The appropriate solution depends on how you define "a lot". Your Framework Manager model can be edited manually or by a program you write in C# or Java that leverages the Cognos Framework Manager SDK. Cognos SDKs are not trivial. There is a steep learning curve. In my experience, if "a lot" means less than a few hundred, you should do this manually. An experienced Framework Manager modeler could probably get through a hundred tables in an afternoon.
I've looked through a lot of answers on stack overflow but haven't found anything that answers this question.
I have an app where you download data in a downloadVC (there are many other VCs). I want to be able to access the currentUser and the downloaded data whenever I go to the downloadVC, without re-downloading the data.
The options I've looked at so far are:
Making a singleton with data that I could access at all points of the app (feels like the easiest way but people have warned me singletons aren't good).
Making a class var of download VC. (This is the solution I understand the least, what is the scope of a class var? Is it just the same as making a singleton?)
Passing it around (This wouldn't work for me as the app is too big to have it passed between every VC)
Saving it to UserDefaults (how quick is accessing something in UserDefaults?)
Please tell me if this question doesn't fit the Stack Overflow rules.
Singletons are usually implemented using a static variable, so your first and second options are quite similar.
There is a "singletons are evil" sect that's very vocal in the development community lately.
Personally, I think they have their place and can sometimes clean up your design. I recently worked on a project that had been designed by a member of the "Singletons are evil" cult who then went to absurd lengths to pass a data manager object around to every other object in the project, resulting in a lot of overhead and more than a few bugs when the object got dropped.
There is no one answer. You need to weigh the pros and cons of different approaches for your app.
I would caution against over-using UserDefaults though. That's intended for saving small bits of data like user preference settings, not big data objects.
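For illustration, a minimal sketch of the kind of small, preference-style values UserDefaults is meant for (the keys here are made up):

import Foundation

// Fine: small preference-style values.
UserDefaults.standard.set(true, forKey: "prefersMetricUnits")            // hypothetical key
let prefersMetric = UserDefaults.standard.bool(forKey: "prefersMetricUnits")

// Questionable: stuffing a large downloaded payload into defaults.
// UserDefaults.standard.set(hugeDownloadedData, forKey: "allClientRecords")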
Basically, adding any networking logic inside the view controller is a first big mistake that you could make. So move that to another class that is responsible only for sending network requests and handling the responses.
Then, when you have the data downloaded, you'll probably need something to manage the cache - no matter whether it is a set of big files or a couple of small strings, the logic would be the same - to have all this cache controlled by one object. This cache manager could save the objects to local files, use CoreData or simply keep those objects in memory - this is for you to decide, depending on what kind of app are you creating.
Now, the cache manager could basically be your endpoint for view controllers, since it will either download the data and return it to view controller after the request's success, or return it immediately. The view controllers don't really have to know about any networking at all.
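A minimal sketch of that idea, with invented names (DataClient, CacheManager and Record are placeholders, and the in-memory dictionary stands in for whatever storage you choose):

import Foundation

struct Record { let id: String; let payload: Data }       // placeholder model

// Only this class knows about networking.
final class DataClient {
    func fetchRecords(completion: @escaping ([Record]) -> Void) {
        // URLSession request would go here; stubbed out in this sketch.
        completion([])
    }
}

// View controllers only ever talk to this.
final class CacheManager {
    private let client: DataClient
    private var cache: [String: Record] = [:]              // could be files or Core Data instead

    init(client: DataClient) { self.client = client }

    func records(completion: @escaping ([Record]) -> Void) {
        if !cache.isEmpty {                                 // already downloaded: return immediately
            completion(Array(cache.values))
            return
        }
        client.fetchRecords { [weak self] records in        // otherwise download, then cache
            records.forEach { self?.cache[$0.id] = $0 }
            completion(records)
        }
    }
}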
But how will the view controllers find out about the cache manager?
You could pass around a reference to it, but you've already said that this is impossible in your app. So basically another solution would be to use the hated singleton pattern. I too personally think this is a bad pattern; however, you cannot create any app without it, since you always have at least one singleton, the AppDelegate.
Anyway, a good idea might be to create a singleton class (or maybe even use the AppDelegate for that purpose) that is responsible for handling the dependencies between the classes responsible for networking, the cache manager, and any other classes you might need for the logic behind your app. This is actually a pattern called Dependency Injection Container. It allows you to easily access your model classes through this container, without either having a ton of singletons that will eventually confuse you, or creating some ridiculous logic for passing the model objects around.
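A rough sketch of such a container, reusing the hypothetical DataClient and CacheManager from the earlier sketch (the names and the shared instance are assumptions, not an established API):

// A tiny dependency injection container: it owns the long-lived model objects
// and hands them out to whoever asks.
final class AppContainer {
    static let shared = AppContainer()          // or hang this off the AppDelegate instead
    let cacheManager: CacheManager

    private init() {
        let client = DataClient()
        cacheManager = CacheManager(client: client)
    }
}

A view controller would then just ask AppContainer.shared.cacheManager for its data, without knowing anything about the networking underneath.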
The simple answer is, yes, you use a singleton.
That's exactly correct.
Note that whether you use
SQLite.swift,
core data,
realm.io,
write to a text file,
a baas such as Firebase or Back4app (aka Parse),
or literally just "keep it in an array" ("in memory"),
Yes, the answer to your question is that you'll have a singleton, which is where you "keep and access" that stuff. Exactly correct.
That seems to be what you are asking here.
Given that ...
... if you are then additionally asking "what's the easiest / best / most modern way ("in my data singleton") to store data locally in my app", the real-world answer in 2017 is
realm.io
Previously you would have used Apple's Core Data, which is (a) spectacular and (b) extremely difficult. Importantly though: realm.io and SQLite are identically available on both Android and iOS; in very many cases, on real-world projects today, this eliminates Core Data from consideration.
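For a sense of what that looks like, a minimal sketch using the classic Realm Swift API (StockItem is a made-up model, and error handling is skipped):

import RealmSwift

// Hypothetical persisted model.
class StockItem: Object {
    @objc dynamic var name = ""
    @objc dynamic var quantity = 0
}

let realm = try! Realm()                                    // opens the default local store
try! realm.write {
    realm.add(StockItem(value: ["name": "ladders", "quantity": 3]))
}
let inStock = realm.objects(StockItem.self).filter("quantity > 0")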
However. All of this is somewhat moot. Don't forget ...
We now live in a "baas world"
You can't, e.g., "get a job programming iOS or Android" any more. You get a job because of how good you are / your specific expertise in Firebase, Parse, PubNub, Cloudbase, or whatever. For better or worse, being able to move buttons and tables around in Xcode or Studio is not really a thing anymore. (There are a minuscule number of hyper-specialities, like GPU programming or dynamic mesh, that are exceptions to this.) Similarly, say you're a hobbyist making your own app or startup: again, in that case it's entirely and totally about which backend you work with. (And that applies equally to games and business/social apps.) There are simply no "local, static" apps any more. (Again, this applies equally to games and business/social apps.)
(Note indeed that realm.io, which is "the" simple, obvious way to keep data on apps these days, is itself moving to become an OCC cloud BaaS/PaaS-whatever service. We'll probably all just use realm.io instead of Firebase in a year.)
So in a sense the answer to your question is something like Firebase or back4app. But: then within your app, you centralize that to a singleton, indeed (the first part of your question).
But note....................
it is extremely unlikely, at the beginner level, that any of this will be relevant: Just keep the data in an array! That's all there is to it. OK, once in a billion years when a user happens to literally restart the device or your app, the app will reload data. So what?
Note that a common name for your "get data here" Singleton is
Local
So,
import Foundation
// often also: import SQLite, Realm, Firebase or whatever

public let local = Local.shared

open class Local {
    static let shared = Local()
    fileprivate init() {
        print("'Local' singleton running AOK")
    }

    // could be this simple..
    var clients: [String: [Blah]] = [:]
    var stock: [String: String] = [:]
    func loadStockInfoFromWWW() { ... }
    func calculateQuantityInfoOrWhatever() { ... }

    // or it could be more like this..
    func reloadClientsFromParse() { ... }
    func sendNewDataToFirebaseParse() { ... }
    // .... etc
}
you then just access it from anywhere in your app like
local.updateFromWeb()
height = local.stock["ladders"][idNumber].size.h
and so on.
That's all there is to it.
(A word on singleton code style.)
It depends on file complexity and size. If you don't need to parse the contents, or they're simple to parse, but the file is large, then I'd recommend using the Documents folder and retrieving it from there. If it's small, or involves complex processing, then a singleton acting as a manager would be my approach, as it feels appropriate.
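For the Documents-folder route, a minimal sketch with FileManager (the file name and the dummy payload are just examples):

import Foundation

// Persist a downloaded blob under Documents and read it back later.
let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let fileURL = documents.appendingPathComponent("downloadedData.json")

let downloaded = Data("{\"example\": true}".utf8)           // stands in for the real download
try? downloaded.write(to: fileURL, options: .atomic)

if let cached = try? Data(contentsOf: fileURL) {
    print("Loaded \(cached.count) bytes from Documents")
}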
I would like to know if there are better ways to initialize a large collection of same-type instances. This is not a problem only limited to Swift, but I am using Swift in this case.
Take, for example, a large list of API endpoints. Suppose I have 100 endpoints in this API and each of them share some common functionality, such as headers, parameter lists, parsing formats, etc... albeit with different values for each of these "options".
I could think of a few different ways to express 100 endpoints:
Create a resource file with all of the values and read them in from the file on app launch. The problem with this is that it becomes stringly typed and there is potential for typos and/or lots of copy/paste key values. This would include plist files, JSON files, SQLite tables, CSV files, etc. It centralizes and condenses the data, but it doesn't seem maintenance-friendly or particularly Swifty. Furthermore, resource files seem harder to obfuscate should the details be somewhat private.
Create a giant enum-ish function with all of the API endpoint instance initialization code blobbed all in the same area/function/file. This would be equivalent of doing a giant switch statement or making a collection literal with all the instantiation happening in one spot. The advantage here is that it can be strongly typed and it is also contained to one area, similar to what a resource file would do. However, it will be a BIG file with lots of scrolling. Maybe too big?
Create a separate file/module/instance/subtype for each endpoint and, more or less, hardcode computed properties inside the instance. This would be maybe creating an extension and/or subclass for each endpoint and putting them in a separate swift file. This limits the visual scope for each endpoint, but it also just turns your project files into the blob of data instead.
I'm wondering if there are philosophical arguments for either of these options. Or, are there other options I have not thought of. Is it preference? Are there best practices when initializing a large collection of what seems like a bunch of complex literals?
If you have lots of this static data, or machine-generated classes, consider the advice in WWDC 2016's Optimizing App Startup Time. It's a great talk. The loader has to initialize and fix up all your static object instances and classes; if you have a lot, your app load time will be adversely affected.
For static data, one piece of advice is to use Swift, which you've already done, as Swift knows to defer the instantiations until run time.
Swift doesn't help with mass-produced classes; though you can switch to structs instead.
Even ignoring the startup time issue, I'd err on the side of being data driven: option 1. Less code to maintain. IMHO there's nothing wrong with stringly typed here; this code is unlikely to change much, and adding endpoints will be trivial. It's cool to see new functionality when you didn't even write new code!
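As a rough illustration of the data-driven option, assuming a bundled endpoints.json resource and a made-up Endpoint struct:

import Foundation

// Hypothetical shape of one entry in the bundled endpoints.json resource.
struct Endpoint: Decodable {
    let name: String
    let path: String
    let method: String
    let headers: [String: String]
}

func loadEndpoints() -> [String: Endpoint] {
    guard let url = Bundle.main.url(forResource: "endpoints", withExtension: "json"),
          let data = try? Data(contentsOf: url),
          let endpoints = try? JSONDecoder().decode([Endpoint].self, from: data) else {
        return [:]
    }
    // Index by name so call sites can look endpoints up without a giant switch.
    return Dictionary(uniqueKeysWithValues: endpoints.map { ($0.name, $0) })
}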
I'm developing an iOS application using Core Data. I want to have the persistent store located in a shared location, such as a network drive, so that multiple users can work on the data (at different times i.e. concurrency is not part of the question).
But I also want to offer the ability to work on the data "offline", i.e. by keeping a local persistent store on the iPad. So far, I read that I could do this to some degree by using the persistent store coordinator's migration function, but this seems to imply the old store is then invalidated. Furthermore, I don't necessarily want to move the complete store "offline", but just a part of it: going with the simple "company department" example that Apple offers, I want users to be able to check out one department, along with all the employees associated with that department (and all the attributes associated with each employee). Then, the users can work on the department data locally on their iPad and, some time later, synchronize those changes back to the server's persistent store.
So, what I need is to copy a core data object from one store to another, along with all objects referenced through relationships. And this copy process needs to also ensure that if an object already exists in the target persistent store, that it's overwritten rather than a new object added to the store (I am already giving each object a UID for another reason, so I might be able to re-use the UID).
From all I've seen so far, it looks like there is no simple way to synchronize or copy Core Data persistent stores, is that a fair assessment?
So would I really need to write a piece of code that does the following:
retrieve object "A" through a MOC
retrieve all objects, across all entities, that have a relationship to object "A"
instantiate a new MOC for the target persistent store
for each object retrieved, check the target store if the object exists
if the object exists, overwrite it with the attributes from the object retrieved in steps 1 & 2
if the object doesn't exist, create it and set all attributes as per object retrieved in steps 1 & 2
While it's not the most complicated thing in the world to do, I would've still thought that this requirement for "online / offline editing" is common enough for some standard functionality to be available for synchronizing parts of persistent stores?
Your points of view are greatly appreciated,
thanks,
da_h-man
I was just half-kidding with the comment above. You really are describing a pretty hard problem - it's very difficult to nail this sort of synchronization, and there's seldom, in any development environment, going to be a turn-key solution that will "just work". I think your pseudo-code description above is a pretty accurate description of what you'll need to do. Although some of the work of traversing the relationships and checking for existing objects can be generalized, you're talking about some potentially complicated exception handling situations - for example, if you're updating an object and only 1 out of 5 related objects is somehow out of date, do you throw away the update or apply part of it? You say "concurrency" is not part of the question, but if multiple users can "check out" objects at the same time, then unless you plan to have a locking mechanism on those, you will start having conflicts when trying to make updates.
Something to check into are the new features in Core Data for leveraging iCloud - I doubt that's going to help with your problem, but it's generally related.
Since you want to be out on the network with your data, another thing to consider is whether Core Data is the right fit to your problem in general. Since Core Data is very much a technology designed to support the UI and MVC pattern in general, if your data needs are not especially bound to the UI, you might consider another type of DB solution.
If you are in fact leveraging Core Data in significant ways beyond just modeling, in terms of driving your UI, and you want to stick with it, I think you are correct in your analysis: you're going to have to roll your own solution. I think it will be a non-trivial thing to build and test.
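To give a flavour of what "rolling your own" might look like, here is a very reduced sketch of the copy-by-UID step, assuming every entity has a uid attribute as you describe (the attribute name and the function are invented for illustration; relationship traversal and conflict handling are left out):

import CoreData

// Copy one object (already fetched from the source store) into the target context,
// overwriting an existing object with the same uid if there is one.
func copy(_ source: NSManagedObject, into target: NSManagedObjectContext) throws -> NSManagedObject {
    let entityName = source.entity.name!
    let uid = source.value(forKey: "uid") as! String        // assumed UID attribute

    let request = NSFetchRequest<NSManagedObject>(entityName: entityName)
    request.predicate = NSPredicate(format: "uid == %@", uid)

    let destination = try target.fetch(request).first
        ?? NSEntityDescription.insertNewObject(forEntityName: entityName, into: target)

    // Copy plain attributes; relationships would need the same treatment, recursively.
    for (name, _) in source.entity.attributesByName {
        destination.setValue(source.value(forKey: name), forKey: name)
    }
    return destination
}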
An option to consider is CouchDB and an iOS implementation called TouchDB. It would mean adopting more of a document-oriented (JSON) approach to your problem, which may in fact be suitable, based on what you've described.
From what I've seen so far, I reckon the best approach is RestKit. It offers a Core Data wrapper that uses JSON to move data between remote and local stores. I haven't fully tried it yet, but from what the documentation says, it sounds quite powerful and ideally suited to my needs.
You definitely should check out these things:
Parse.com - cloud based data store
PFIncrementalStore https://github.com/sbonami/PFIncrementalStore - subclass of NSIncrementalStore which allows your Persistent Store Coordinator to store data both locally and remotely (on Parse Cloud) at the same time
All of this is well documented. Also, Parse.com is going to release an iOS local datastore SDK http://blog.parse.com/2014/04/30/take-your-app-offline-with-parse-local-datastore/ which is going to help keep your data synced.
I wonder what sort of things you look for when you start working on an existing, but new to you, system? Let's say that the system is quite big (whatever it means to you).
Some of the things that were identified are:
Where is a particular subroutine or procedure invoked?
What are the arguments, results and predicates of a particular function?
How does the flow of control reach a particular location?
Where is a particular variable set, used or queried?
Where is a particular variable declared?
Where is a particular data object accessed, i.e. created, read, updated or deleted?
What are the inputs and outputs of a particular module?
But if you are looking for something more specific, or any of the above questions is particularly important to you, please share it with us :)
I'm particularly interested in something that could be extracted in dynamic analysis/execution.
I like to use a "use case" approach:
First, I ask myself "what's this software's purpose?": I try to identify how users are going to interact with the application;
Once I have some use cases, I try to understand which objects are most involved and how they interact with other objects.
Once I've done this, I draw a UML-type diagram that describes what I've just learned, for further reference. What happens after that depends on the task I've been assigned, i.e. modify the code, document the code, etc.
There is the question of what motivation do I have for learning the new system:
Bug fix/minor enhancement - In this case, I may focus solely on the portion of the system that performs the specific function that needs to be altered. This is a way to break down a huge system, and also a way to identify whether the issue is something I can fix or something I have to hand off to the off-the-shelf company whose software we are using; e.g., a CRM, CMS, or ERP system can be a customized off-the-shelf system, so there are many pieces to it.
Project work - This would be the other case, where I'd probably try to build myself a view from 30,000 feet or so, to know what the high-level components are and which areas of the system the project impacts. An example of this is where I join a company and work off an existing code base, but don't have the luxury of the small focus of the previous case. Part of that view is to look for patterns in the code in terms of naming conventions, project structure, etc., as this may be useful once I start changing code in the system. I'd probably do some tracing through the system and try to see where the uglier parts of the code are. By uglier I mean those parts that are kludge-like and may have some spaghetti code, because they were rushed when first written and are now being reworked heavily.
To my mind, another way to view this is the question of whether I'm going to spend days or weeks wrapping my head around a system, as in the second case, or whether it should hopefully take only a few hours, optimistically speaking, to get my footing and make the necessary changes.