OK, I know layers don't exist once compiled, and that duplicated movieclips cannot be duplicated to new levels, but I need some kind of workaround here.
I cannot use the library, as the movieclip I am duplicating is dynamically generated by ActionScript (a graph based on user input over time) and thus cannot be made by me beforehand, since it varies.
I need to somehow make a duplicate of this on a layer above the one where the original was made. Does anyone know how this is possible?
Sort of irrelevant now: I had an array of coords from my original construction of the movieclip, and just decided to use that to create another one within the higher-level mc.
Regarding the question asked here:
Suppose we have ProductCreated and ProductRenamed events, both of which contain the title of the product. Now we want to query EventStoreDB for all events of type ProductCreated and ProductRenamed with a given title. I want all these events in order to check whether any product in the system has been created with, or renamed to, the given title, so that I can throw a duplicate-title exception in the domain.
I am using MongoDB to create UI reports from all the published events, and everything is fine there. But for checking some invariants, like uniqueness of values, I have to query the event store for certain events matching some criteria and, by iterating over them, decide whether there is a product that was created with the same title and not renamed since, or a product that was renamed to the same title.
For such queries, the only mechanism the event store provides is creating a one-time projection with the appropriate JavaScript code, which filters and emits the required events to a new stream. Then all I have to do is fetch events from the newly generated stream that the projection fills.
Now the odd thing is, projections are great for subscriptions and generating new streams, but they seem ill-suited for real-time queries. Immediately after I create a projection with the HTTP API, I check the resulting stream for the query result, but it seems the workers have not yet had a chance to produce it and I get a 404 response. After waiting a few seconds, the new stream pops up and gets filled with the result.
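To make the timing issue concrete, the flow being described boils down to roughly the following. This is a hedged Java sketch against the HTTP API, where the projection name, the query body, and the endpoint details are illustrative assumptions rather than anything from the original post:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ProjectionPollDemo {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // 1. Create a one-time projection that links matching events
            //    into a new stream (projection body assumed for illustration).
            String projectionJs =
                "fromAll().when({ ProductCreated: function(s, e) {" +
                "  if (e.data.title === 'some-title') linkTo('by-title-result', e); } })";
            HttpRequest create = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:2113/projections/onetime" +
                        "?name=by-title&type=js&enabled=true&emit=true"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(projectionJs))
                .build();
            client.send(create, HttpResponse.BodyHandlers.ofString());

            // 2. Poll the result stream: it responds 404 until the
            //    projection worker has created and populated it.
            HttpRequest read = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:2113/streams/by-title-result"))
                .header("Accept", "application/vnd.eventstore.atom+json")
                .build();
            HttpResponse<String> response;
            do {
                Thread.sleep(500); // wait for the worker to catch up
                response = client.send(read, HttpResponse.BodyHandlers.ofString());
            } while (response.statusCode() == 404);
            System.out.println(response.body());
        }
    }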
There are too many things wrong with this approach:
First, it seems that if the event store is filled with millions of events across many streams, it won't be able to process and filter all of them into the resulting stream immediately. It doesn't even create the stream immediately, let alone populate it, so I have to wait for some time and keep checking for the result, hoping the projection is done.
Second, I have to fetch multiple times and issue multiple HTTP GET requests, which seems slow. The new JVM client is not ready yet.
Third, I have to delete the resulting stream after I'm done with the result; failing to do so will leave the event store with millions of orphaned query-result streams.
I wish I could pass the JavaScript to some API and get the result page by page, like querying MongoDB, without worrying about the projection, new streams, and timing issues.
I have seen a Query section in the Admin UI, but I don't know what it's for, and unfortunately the documentation doesn't help much.
Am I expecting the event store to do something that is impossible?
Do I have to create a read model inside the bounded context for doing such checks?
I am using my events to rehydrate the aggregates, and I would like to use the same events for such simple queries without bringing in other techniques.
I believe it would not be a separate bounded context since the check you want to perform belongs to the same bounded context where your Product aggregate lives. So, the projection that is solely used to prevent duplicate product names would be a part of the same context.
You can indeed use a custom projection to check it, but I believe the complexity of such a solution would be higher than having a simple read model in MongoDB.
It is also fine to use an existing projection, if you have one, to do the check. It might not be what you would otherwise prefer if the aim of the existing projection is to show things in the UI.
For the collection that you could use for duplicates check, you can have the document schema limited to the id only (string), which would be the product title. Since collections are automatically indexed by the id, you won't need any additional indexes to support the duplicate check query. When the product gets renamed, you'd need to delete the document for the old title and add a new one.
Again, you will get a small time window during which a duplicate can slip in. It's then up to the business to decide whether the concern is real (most of the time it isn't) and what the consequences would be if it happens one day. You'd quite easily be able to find a duplicate when projecting events, and decide what to do when it happens.
Practically, when you have such a projection, all it takes is building a simple domain service such as bool ProductTitleAlreadyExists.
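For illustration, such a service can be a thin wrapper over the read model. A minimal sketch with the MongoDB Java driver, where the database name, collection name, and document shape are assumptions following the answer above:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    import static com.mongodb.client.model.Filters.eq;

    // Hypothetical domain service backed by the duplicate-check read model.
    public class ProductTitleService {
        private final MongoCollection<Document> titles;

        public ProductTitleService(MongoClient client) {
            // One document per product title, with the title itself as _id.
            this.titles = client.getDatabase("readmodels")
                                .getCollection("product_titles");
        }

        /** True if any product has been created with, or renamed to, this title. */
        public boolean productTitleAlreadyExists(String title) {
            // _id is indexed automatically, so no extra index is needed.
            return titles.countDocuments(eq("_id", title)) > 0;
        }

        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                System.out.println(
                    new ProductTitleService(client).productTitleAlreadyExists("Widget"));
            }
        }
    }

The projection feeding this collection inserts a document on ProductCreated and, on ProductRenamed, deletes the document for the old title and inserts one for the new title, as described above.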
I want to create a traceability matrix in a new module that shows the object ID & text at the top level, then, in columns moving to the right, the object ID & text for the source of the first in-link, and then its in-links to the right, etc. If there is more than one in-link, the next source object will be shown on the next line (new object), with the higher-level object ID & text simply repeated in the columns to the left. Basically, it is the recursive trace analysis layout DXL, but I want to spread the information out over separate columns.
My question is about the best-practice approach. Is it best to create a new module and write several DXL layout scripts, one for each column, pulling info from all of the various modules, and then later converting them to text (so the module isn't too heavy)? Or is it necessary (or easier) to actually create DXL attributes within each requirements module, and then pull the information from there into my RTM module?
I'm likely over-complicating it, but any tips would be appreciated!
Well, one of our assets contains something that looks like your approach:
A script creates new modules that contain only trace information: a "report" module, which does not contain any links to the original "data" modules.
There are two or three columns for each requirements level (high-level reqs at the left, low-level reqs at the right).
The advantage of this approach is that one can easily use standard DOORS filtering mechanisms to find "holes" in the matrix (requirements which have not been implemented, design elements without a requirement, etc.). Plus, as every report run creates a new report module with the date/time in its name, project progress can be made visible over time, and exports to Excel can be produced.
On the other hand the implementation took several weeks. So, I don't know if this approach would be feasible for you.
I hope this isn't an inappropriate post, but I wanted to make sure my first steps implementing Parse as my backend are in the right direction, to save some time. I'm new to both iOS programming and the Parse SDK, so please bear with me!
In my app, users are able to create various polygon shape overlays on a Google Maps mapView, stored as a GMSMutablePath, which is basically a list of coordinates. Users will have at least one group of paths, each with at least one path. There will also be some information stored with each group, stored as strings or numbers. This information is specific to a single group of paths.
I'm trying to figure out the best way to store this data. My first basic question is: 1) Can I store the GMSMutablePath as a whole in the Object data type? Or does the Object data type refer to a class that is created through Parse? This link (https://www.parse.com/questions/what-is-data-type-of-object-in-data-browser) is the 'best' explanation I found of the Object data type, and it isn't very clear to me.
My gut instinct is no, I can't store the GMSMutablePath object, and that Object refers to a Parse object. Which leads me to 2) How should I store this data, then? I can get the individual lat/long values of the coordinates that make up each path, and I can store those as numbers, and use the numbers to recreate the paths elsewhere. None of the paths should use too many coordinates, and there shouldn't be too many paths in each group.
Playing around a little bit in the data browser, I see that I can store arrays, but I'm not sure how those are formatted, as I'd need an array (of groups) of arrays (of paths) of arrays (of lat/long values). A little bit of googling tells me it can be done, but doesn't show me how. Can any data type be stored in any array, or is a data type specified? I'm used to C++ programming, so I'm used to an array containing a single type of element.

What I'm thinking is that I'd need an array of objects, which would be the groups of paths. Each one of those objects would have the string/number information associated with the group, as well as an array for the paths within the group. Each of those paths would then have to be either an array or an object. Since for a path I just need the coordinate lat/long values, I think I can get away with each path being an array of numbers, writing my program to use one array with odd indexes being lat values and even indexes being long values.

That all being said, I'm not sure how to create all of that. I'm not looking for somebody to write my implementation for me, but all of the examples I can find are much simpler... if anybody could point me in the right direction to do this, or has a better idea of how to do it, I'd love some pointers.
Each user is going to have their own groups, but that data is going to be shared with others at some point. The data will be associated with the user it belongs to. With that in mind, my last question is: 3) Should I store all of this information specific to a user and their groups on the User class, or make it a separate class entirely? My guess is that I should add an Object to the User class and store the groups within that Object. I just want to make sure I have that right, with future scalability in mind.

For example, when I pull the group data, am I going to have to pull another user's entire User data, and if so, is that going to slow things down significantly? I'm thinking that I do have to send the entire user data, but I don't know if that poses any security risks. Would it be best to have a separate class for the groups, and store the user ID associated with the groups? If I do this, should I also store the groups as an object on the User class?
Sorry for the wall of text, but thank you for any guidance you can provide!
If you need any clarification, let me know.
Thanks,
Jake
Creating a class to hold all the objects turned out to be unnecessary. It only had a few extra details, which were just as convenient to add to the user object directly, along with an array of objects on the user.
Some of the main things I learned: use addObject to append to an array on a PFObject/PFUser, rather than setObject, which sets a single object.
Parse fetching/saving happens on background threads, so if you're loading the data to do something specific with it, make sure the code using the data runs inside the block passed to the [PFObject fetchInBackgroundWithBlock:] method.
Also, as you make changes to the structure of your data on a Parse user/object, make sure you sign out of the current user and create a new one in your app, or you may run into lots of undefined behaviour that could crash your app.
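The points above are iOS-specific, but the same shape is easy to see in Parse's Android (Java) SDK, whose API mirrors the iOS one. A hedged sketch, in which the PathGroup class, the "groups" and "paths" field names, and the interleaved lat/long encoding are illustrative assumptions:

    import com.parse.ParseException;
    import com.parse.ParseObject;
    import com.parse.ParseUser;
    import com.parse.SaveCallback;

    import java.util.Arrays;
    import java.util.List;

    public class PathStorageSketch {
        public static void saveGroup() {
            // One group of polygon paths; each path is a flat array of
            // interleaved lat/long values, as the question suggests.
            ParseObject group = new ParseObject("PathGroup"); // hypothetical class
            group.put("name", "Field A");
            List<Double> path = Arrays.asList(37.42, -122.08, 37.43, -122.09);
            group.addAll("paths", Arrays.asList(path)); // array of paths, each an array of numbers

            // Append to the user's array field rather than overwriting it
            // (the addObject-vs-setObject point above; add vs put in Java).
            ParseUser user = ParseUser.getCurrentUser();
            user.add("groups", group);
            user.saveInBackground(new SaveCallback() {
                @Override
                public void done(ParseException e) {
                    // Saving runs on a background thread; anything that depends
                    // on the saved data belongs in this callback.
                    if (e != null) {
                        e.printStackTrace();
                    }
                }
            });
        }
    }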
I am looking for a template of sorts for merging two linked chains that have already been sorted. I'm still fairly new to Java, and this seems to be a pretty challenging task to accomplish with the limited knowledge I have. I have an understanding of how to merge sort an array, but when it comes to linked lists I seem to be drawing blanks. Any help you all could give me, be it actual code or simply advice on where to start, would be greatly appreciated.
Thank you for your time!
If the two linked lists are already sorted, then it is easy to merge them. I'm going to describe the algorithm, but you need to write the code yourself since this seems like a school project. First, make a new linked list and assign its head to be the smaller of list1Head and list2Head. Then walk the two lists, each time picking the smaller of the two current nodes, appending it to the new list, and advancing the current pointer (.next) of whichever list it was picked from. When one of the lists has no more nodes, append the rest of the other list directly to the new list. Done.
Can't you look at the first element in each list and take the smaller? This is the start of the new list. Remove it from the front of whichever list it came from. Now look at the first elements again, take the smaller, and make it the second element of the new list. Then just repeat this process, zipping the two lists together.
If you want to avoid creating a new list, find the smaller head, then compare the node it points at with the head of the other list and see which is smaller. If it is not already pointing at the smaller one, update the pointer so that it is. Then rinse and repeat.
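Since the question welcomes actual code, here is a minimal Java sketch of the merge both answers describe, assuming a bare-bones singly linked Node class (the question doesn't show one):

    public class MergeSortedLists {
        // Assumed stand-in for the question's linked-chain node type.
        static class Node {
            int value;
            Node next;
            Node(int value) { this.value = value; }
        }

        /** Merges two already-sorted chains into one sorted chain. */
        static Node merge(Node a, Node b) {
            Node dummy = new Node(0); // placeholder head; simplifies appending
            Node tail = dummy;
            while (a != null && b != null) {
                if (a.value <= b.value) { // pick the smaller current node
                    tail.next = a;
                    a = a.next;
                } else {
                    tail.next = b;
                    b = b.next;
                }
                tail = tail.next;
            }
            tail.next = (a != null) ? a : b; // append whatever is left
            return dummy.next;
        }

        public static void main(String[] args) {
            Node a = new Node(1); a.next = new Node(3); a.next.next = new Node(5);
            Node b = new Node(2); b.next = new Node(4);
            for (Node n = merge(a, b); n != null; n = n.next) {
                System.out.print(n.value + " "); // prints: 1 2 3 4 5
            }
        }
    }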
I have defined a Delphi TTable object with calculated fields, and it is used in a grid on a form. I would like to make a copy of the TTable object, including the calculated fields, open that copy, make some changes to the data in the copy, close the copy, and then refresh the original and thus the grid view. Is there an easy way to get a copy of a TTable object to be used in such a way?
The ideal answer would be one that solves the problem as generically as possible, i.e., a way of getting something like this:
newTable:=getACopyOf(existingTable);
You can use the TBatchMove component to copy a table and its structure.
Set the Mode property to specify the desired operation. The Source and Destination properties indicate the datasets whose records are added, deleted, or copied. The online help has additional details.
(Although I reckon you should investigate a TClientDataSet approach - it's certainly more scalable and faster).
Let me propose several things:
Let us suppose that you want to make changes programmatically. You could then use the DisableControls and EnableControls methods of the TTable to suppress screen updates during that time.
If you want to have two screens with the same data (e.g. to compare data during online changes), you could actually create the same screen twice, with the TTable object sitting on the screen itself. It will have the exact same configuration (it will not carry over changes already made on the first screen, but will read the data from the database). Changes made on one screen will not be automatically refreshed on the other.
Another way: try using a TDataSetProvider with the TTable as its Dataset (source), feeding a TClientDataSet. ApplyUpdates would feed the changes back to the TTable. Since the calculated fields are read-only, they are not affected. (Untested, but it should work.)
I believe that the second approach (TClientDataSet) is probably the best method to use in this scenario. An alternative would be to use a memory table (kbmMemTable, for instance). Either way, you would clone your original table and then, after making your changes, loop through the memory version of your dataset and update your original table.
You should be able to select the table on the form, copy it using Ctrl-C, then paste it into any text editor. You will get the text version of the object's properties which you can then edit as needed. When you are done, select all the text again and you can copy it to the clipboard and paste it back onto a form.