How to set layout properties of a data flow using Biml? (SSIS 2012)

Everything is linear in my package, which makes it look very thin and tall with so many tasks in it. Is it possible to set the layout of tasks (width, height, and position) from Biml, so I can arrange the package into a better-looking shape?
I can logically separate tasks into smaller groups using packages and containers, but the smallest grouping I end up with still averages about 20 DFTs.

Nope.
Biml describes the elements that exist within the SSIS object model. That model has no entries for the spatial layout of tasks and components, nor does it contain anything for annotations. Since neither of those exists in the object model, the emitted SSIS packages won't have them either.
The same would be true if you were using EzAPI or the base Integration Services library to construct your SSIS packages.
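For illustration, here is a minimal C# sketch against the base library (Microsoft.SqlServer.Dts.Runtime, assuming the ManagedDTS assembly is referenced). It shows the gap: you can create and name a task, but the object model offers no width, height or position members to assign.

using Microsoft.SqlServer.Dts.Runtime;

class LayoutDemo
{
    static void Main()
    {
        var package = new Package();
        // "STOCK:SEQUENCE" is the creation moniker for a Sequence container.
        Executable exec = package.Executables.Add("STOCK:SEQUENCE");
        var container = (DtsContainer)exec;
        container.Name = "SEQ Do Work";
        // There is no container.Width, .Height or .Position to set here;
        // spatial layout is a designer artifact, not part of the object model.
        new Application().SaveToXml(@"C:\temp\NoLayout.dtsx", package, null);
    }
}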

Related

CVAT: How to create multiple jobs in one task?

According to the CVAT introduction, a task can contain many jobs, but I can't find any button for creating more jobs. How can I do it?
Please specify the segment size when you create a task (e.g. 1000). It is not possible to add more jobs after a task is created at the moment.
https://github.com/opencv/cvat/blob/master/cvat/apps/documentation/user_guide.md
Segment size. Use this option to divide a huge dataset into a few smaller segments. For example, one job cannot be annotated by several labelers (it isn't supported). Thus, using "segment size" you can create several jobs for the same annotation task. It will help you to parallelize the data annotation process.
You split it by specifying a segment size.
Jobs are then added automatically when you upload the data.
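The resulting job count is simple ceiling division; a hypothetical sketch (the numbers are illustrative, not from the post):

class SegmentMath
{
    static void Main()
    {
        int totalImages = 4500;   // example dataset size, not from the original post
        int segmentSize = 1000;   // chosen when creating the CVAT task
        int jobs = (totalImages + segmentSize - 1) / segmentSize; // ceiling division
        System.Console.WriteLine(jobs); // 5 jobs, each covering one segment
    }
}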

Strategy for Requirements Traceability Matrix in DOORS

I want to create a traceability matrix in a new module that shows the object ID & text at the top level, then in columns moving to the right the object ID & text for the source of the first in-link, and then its in-links to the right, etc. If there is more than one in-link, the next source object will be shown on the next line (new object), with the higher-level object ID & text simply repeated in the columns to the left. Basically, it is the recursive trace analysis layout DXL, but I want to spread out the information over separate columns.
My question is about the best-practice approach. Is it best to create a new module and write several DXL layout scripts, one for each column, pulling info from all of the various modules, and then later converting the columns to text (so the module isn't too heavy)? Or is it necessary (or easier) to create DXL attributes within each requirements module and then pull the information from there into my RTM module?
I'm likely over-complicating it, but any tips would be appreciated!
Well, one of our assets contains something that looks like your approach:
The script creates new modules that contain only trace information: a "report" module, which does not contain any links to the original "data" modules.
There are two or three columns for each requirements level (high-level reqs at the left, low-level reqs at the right).
The advantage of this approach is that one can easily use standard DOORS filtering mechanisms to find "holes" in the matrix (requirements which have not been implemented, design elements without a requirement, etc.). Plus, as every report run creates a new report module with the date/time in its name, project progress can be made visible over time, and exports to Excel can be made.
On the other hand, the implementation took several weeks, so I don't know whether this approach would be feasible for you.

Is it normal that nodes with the same label have different properties?

In modeling, instances of the same label, e.g. Student, have the same set of properties. However, is it normal for instances of the same label to have different sets of properties? For example, I have a Product node:
(p:Product)-[:HAS_ATTRIBUTE]->(a:Attributes)
Different instances of Product result in different instances of Attributes. In this case, different Attributes nodes have very different properties.
Is this modeling normal? Different categories of products can have very different attributes.
It's very useful to have different properties. For instance, I have a Y-DNA project with single nucleotide polymorphism (SNP) nodes. Some are on the known haplotree and some are not, so I set a property InHGTree to Y or blank to reflect this. Now I can more readily create queries using the haplotree branching.
BTW, relationships of the same type can also have different properties. DNA results from an individual are in a "kit." The kit is related to numerous SNPs, and you want to be able to determine whether the kit is positive or negative for each SNP. It is most logical to put this fact on the relationship between the kit and the SNP.
It's certainly allowed, as there is no table schema, as in relational DBs, to enforce homogeneous properties.
While this provides great flexibility, it may introduce complexity. It's up to the modelers and administrators of the database to provide any guidelines or implement restrictions, if needed.
While that would usually be done by convention, APOC triggers (or kernel extensions, if you want to implement this yourself) could be used to enforce that nodes of a given label carry only certain properties.
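As a hedged sketch only (the trigger body, the property whitelist, and the connection details are all assumptions, not from the answer), such a rule could be registered from C# with the Neo4j.Driver package and apoc.trigger.add:

using Neo4j.Driver;
using System.Threading.Tasks;

class TriggerSetup
{
    static async Task Main()
    {
        using var driver = GraphDatabase.Driver("bolt://localhost:7687",
            AuthTokens.Basic("neo4j", "password"));   // placeholder credentials
        await using var session = driver.AsyncSession();

        // apoc.trigger.add(name, statement, selector): the statement runs on each
        // transaction and aborts it when a new :Product node carries a property
        // outside the (hypothetical) whitelist.
        await session.RunAsync(@"
          CALL apoc.trigger.add(
            'productPropertyWhitelist',
            'UNWIND $createdNodes AS n
             WITH n WHERE n:Product
             UNWIND keys(n) AS k
             WITH k WHERE NOT k IN [""sku"", ""name"", ""price""]
             CALL apoc.util.validate(true, ""illegal property: "" + k, [])
             RETURN null',
            {phase: 'before'})");
    }
}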

Gave up DDD, but need some of its benefits

I'm giving up traditional DDD, which is often a massive time-waster and forces me to do endless mapping: data layer <--> domain layer <--> presentation layer.
For even a small change I must change data models, domain models, presentation models / viewmodels, then the repositories, manager/service classes, and of course the AutoMapper maps, and then test the whole thing! Each call requires calling a layer which calls a layer which calls the underlying code. And I don't get anything in return other than "you might need it in the future". Meh.
My current approach is more pragmatic:
I don't worry about the difference between the "data layer" and "domain layer" any longer, as there's no point - the terms are interchangeable. I let EF do its thing, and add interfaces and repositories on top when needed.
I've merged my "data" and "domain" projects (into "core", boring name, I know), and I could almost swear that Visual Studio is actually running faster.
I allow EF entities to go up and down the stack, but I still map them to presentation models / viewmodels as usual.
For simple operations I call repositories directly from controllers, for complex operations I use domain managers/services as usual; the repositories never expose IQueryable.
I define entities/POCOs as partial classes, so I can add domain behavior separately in corresponding partial classes.
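For example, a minimal sketch of that split (class and member names are illustrative):

// Generated/persistence half: plain EF entity data.
public partial class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public System.Collections.Generic.ICollection<OrderItem> Items { get; set; }
}

// Hand-written half: domain behavior lives in a matching partial class,
// so it survives regeneration of the entity.
public partial class Order
{
    public bool CanShip() => Items != null && Items.Count > 0 && Total > 0;
}

public class OrderItem { public int Id { get; set; } }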
The problem: I now use the entities all over the place, so client code can see their navigation properties. And the models are always materialized after they leave a repository, so those navigation properties are often null.
Possible solutions:
1. Live with it. It's ugly but preferable to the problems explained above.
2. For each entity, define an interface which hides the navigation properties, and make client code use the interfaces. But ironically, this means another layer (albeit thin and manageable). See the sketch after this list.
3. What else?
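A rough sketch of option 2 (names are illustrative): the interface exposes scalars only, so client code written against it never sees the navigation properties.

public interface ICustomerData
{
    int Id { get; }
    string Name { get; }
    string City { get; }
}

public partial class Customer : ICustomerData
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
    // The navigation property stays off the interface; callers holding an
    // ICustomerData cannot touch it, materialized or not.
    public System.Collections.Generic.ICollection<Order> Orders { get; set; }
}

public class Order { public int Id { get; set; } }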
I'm not used to this sort of fast-and-loose programming style, so maybe I'm missing some obvious tricks. Is there anything else I should take into account? I'm sure there are other problems I will encounter soon.
EDIT:
This question is not about DDD. And note that many struggle with a traditional DDD approach -- Seemann appears to arrive at the same conclusion, Rahien speaks about the "Useless Abstraction For The Sake Of Abstraction Anti Pattern", and Evans himself said DDD is only truly useful in 5% of cases. Also see this thread. Some of the comments/answers are predictably about how I'm doing DDD wrong, or how I can tweak my system to do it right. However, I'm not asking about DDD or bashing it for the cases where it is suitable; rather, I'd like to know what others are doing in line with the thinking I've described above. It's not as if DDD is a panacea for all design ills; every decade a new process comes out (RUP anyone? XP, Agile, Booch, blah...). DDD is just the shiniest new one, and the most well known and used. But pragmatism should come first, as I'm trying to build salable products that ship on time and are easy to maintain. The most useful programming axiom I've learned, by far, is YAGNI. What I want is to change my system to a sort of "DDD-lite", where I get its strong design/OOP/pattern philosophy, but without the fat.
A typical persistence approach with DDD is to map the domain model directly to corresponding tables. Technically, the mappings are still there (and are usually declared in code), but there is no explicit data model, as pointed out by lazyberezovsky.
The problem with navigation properties can be resolved in a few different ways, regardless of whether you are employing DDD or not. I dislike approach 1 because it makes it more difficult to reason about your code - you never know which properties will be set and which won't. Approach 2 is much better in theory, because it makes it very explicit what a given query requires, and making things explicit is a good practice in general.
A similar, but simpler and less brittle, approach is to use read-models, which are just objects designed to fulfill the requirements of a given query or set of queries. Within the context of DDD, they allow you to decouple behavior-rich entities from queries, which are quite often at odds. Now proponents of DRY may scream heresy and come at you with torches and pitchforks, but in practice it is often much easier to maintain a read-model and an entity than to try to coerce entities to fulfill query requirements by way of interfaces or complex mapping strategies. Additionally, the responsibilities of a read-model and a behavior model are quite different, therefore DRY isn't applicable.
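A minimal read-model sketch under assumed names (MyDbContext, the Customer entity, and the Shipped flag are illustrative, not from the answer):

using System.Collections.Generic;
using System.Linq;

// Flat object shaped for one list view; no entity leaves the query layer.
public class CustomerListItem
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int OpenOrderCount { get; set; }
}

public class CustomerQueries
{
    private readonly MyDbContext _db;   // assumed EF context
    public CustomerQueries(MyDbContext db) { _db = db; }

    public List<CustomerListItem> GetCustomerList() =>
        _db.Customers
           .Select(c => new CustomerListItem
           {
               Id = c.Id,
               Name = c.Name,
               OpenOrderCount = c.Orders.Count(o => !o.Shipped) // assumed flag
           })
           .ToList();
}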
This is not to say that DDD is applicable in your scenario. It is often a wise decision to avoid full-fledged DDD, especially in scenarios that are mostly CRUD. You are correct to be cautious; this is a good example of KISS and YAGNI. DDD reaps benefits when your domain consists of complex behavior, not just data. At any rate, the read-model pattern still applies.
UPDATE
For implementations that don't employ a read-model, take a look at Fetching Strategy Design, where the notion of a fetching strategy allows the specification of exactly what is needed from the database, which mitigates issues with navigation properties. The material referenced in the linked post is also of interest. Overall, this attempts to avoid the layer of indirection present in other approaches. However, in my opinion, using the proposed fetching strategy is more complex than using a read-model, while the net result is the same.
Some thoughts about this point:
... the repositories never expose IQueryable ... the models are always materialized after they leave a repository ...
Your question is tagged with "asp.net-mvc", so you have a web application in mind. 90% or more of all requests will be GET requests that are supposed to fetch some data from the database and show those data in a web view. How often are those needed data really entities rather than only bags of properties (a selection of properties of an entity type or perhaps composed of properties from multiple entities)?
Say, your application has 100 views. Only a minority of these will show complete entities:
50 of them are list views that show selected data (a customer with ID and address, but without the customer's contact person, phone number and sales volume)
20 of them contain autocomplete text boxes to select a reference (the customer for an order; only the customer's name and city are shown in the autocomplete list, not the rest of the address, the contact person, the phone number or the sales volume, and only the first 5 hits are displayed)
1 is an edit view for a customer that shows everything, but not the sales volume
1 is a details view for a customer with his last five orders
1 is a details view for an order including order items including product of each item but without the product's supplier name
1 is the same view, but specialized for the purchasing department, which wants to see the supplier for each item's product along with the supplier's average lead time over the last three months.
1 is a view for the service department that shows the order with only the order items of product category "repair service"
1 view for the Human Resources department shows employees including a photo stored as a big blob
1 view for personnel planning department shows a short version of the employee without photo
etc., etc.
As a UI programmer I would have all kinds of data requirements to render a view with the examples above:
I need only a selection of properties
I need even different selections of the same entity's properties for different views
I need an order including all items but without a reference to a product
I need an order including all items (but not all properties of the items) and including a reference to a product and to a supplier (but not all supplier's properties)
I need an order including only a filtered list of order items
I need a customer including the last five orders, not all 3000 orders he ever had
I need an employee but please without the big blob image
etc., etc.
How to fulfill these requirements as a data access/repository/service developer?
I only provide a handful of methods and materialize entities: load order header; load order header with items; load order header with items and product; load order header with items, product and supplier; load customer header (throw 15 of the 20 properties away, dear UI developer, if you only need five); load customer header with all 3000 orders (throw 2995 away, dear UI developer, if you only need five); etc. I return interfaces from the repositories that hide the navigation properties that weren't loaded.
I care about every detail that the UI needs: I create repository/service methods like GetFiveCustomerPropertiesForAutoComplete, GetCustomerWithLastFiveOrders, etc. etc. I return interfaces from the repositories that hide the properties (also scalar) I haven't loaded. Or I return "DTOs" that contain the requested properties. I change the repository/services and create new DTOs every day when a UI developer calls with a data requirement for the next view.
I return IQueryable<TEntity> from the repositories and tell the UI developer "create the LINQ query yourself to fetch the data you need for your views". (Next morning the DBA is complaining about hundreds of terrible performing database queries.)
I return "prepared" IQueryable<TEntity>s from the repositories/services that cover - for example - security concerns like applying Where clauses for the user's access rights or append a Where clause for a search term or apply a NoTracking option to the query. I tell the UI developer: "You are allowed to extend the query with a) projections (Select), b) paging (Take and Skip) and perhaps c) sorting (OrderBy) because I consider those three query parts as UI concerns. All other query requirements (filtering, joining, grouping, etc.) have to be implemented in the repository/service layer and are forbidden in the UI layer." The most important piece here are projections that materialize ViewModels directly through the LINQ/SQL query without intermediate mapping layer and without the overhead to load more than the needed columns/properties.
These are only some thoughts; every approach has its benefits and downsides. Working in small teams where at least one or a few developers have an overview of what is happening in both the repository/service and the UI/"projection" layer, the last option works fine for me, in my experience, although it doesn't always work with the strict rules described (for example, the filter by product category for included order items of an order requires applying a Where clause inside of the projection, i.e. in the UI layer). For POST requests and data modifications I would use DTOs that send the data collected from a view back to a service to be processed there.
For stricter separation of "query layer" and UI layer I would probably prefer something close to the second option, maybe not with an interface/DTO for every UI requirement, but somehow reduced to a set of DTOs for the most common requirements (with the price of a little overhead of sometimes unnecessarily loaded properties). However, I expect that to be more work than the last option due to the larger amount of necessary repository/service methods, the additional maintenance of (perhaps many) DTOs and the intermediate mapping between DTOs and ViewModels.
Personally I am concerned about materializing full entities, especially complex object graphs, when I don't need them 90% of the time. But my concern is not verified by extensive performance measurements proving that this approach is really a problem for a "normal" application that doesn't have special high performance needs.
How can anyone give you sound advice when we have no clue as to what it is you are building? In the grand scheme of things, you might be building the wrong solution (not saying you are). So do realize all we can relate to is technical design issues and similar past experiences.
Many people face your problem, indeed. The mapping is loose coupling tax in the land of static typing. Maybe a more dynamic language could solve some of your pain. Or maybe you might find virtue in automating more (DSL, MDA). You could also switch to client server instead.
Interfaces are not layers, rather abstractions. Use them wisely.
Personally, I'd never take these shortcuts. I've been bitten too many times trying to skip steps; logic starts popping up in odd places. If I have a data-driven app to develop, simple datasets come to mind, EF as well. But I don't call the objects aggregate or entity in the DDD sense, just entity in the ERD sense. Transaction Script might be a better fit than sprinkling behavior across partial classes. As for read-model objects, these are not layers of indirection.
Overall, I get the feeling (and it is just that) that you're making a mess of things because you fight the mapping friction by taking a dependency on objects that don't reveal the required shape (navigation properties that are null), thereby causing problems in a different area.
I'll just try to be short - we went for method 2, i.e., adding a layer of interfaces that you use on the client. You can have EF generate them for you with just a little tweak of the .tt templates.
Yes, it creates (yet) another layer, but it's logic-free and adds no complexity. Of course, if your client needs to deserialize entities, you have to add (yet) another layer that will handle deserialization and reference both the entities definitions and the interfaces that he'll return to the client. But it's also thin, so we learned to live with it, because it turned out to work just fine, and the client really stays clean...
The problem: I now use the entities all over the place, so client code can see their navigation properties.
I don't quite get why this is a problem, or how it's related to EF entities in particular. By "client code" do you mean presentation-layer code, or any code consuming your entities?
For UI code a simple solution is to define ViewModels that just don't expose these navigation properties (or only expose a few of them depending on the object graph depth your GUIs need).
For other code it's only normal to be able to see the navigation properties of entities. They are public for a reason. You can end up breaking the Law of Demeter if you abuse them, but it's a matter of developer discipline not to fall into that trap.
An entity contains its own contract - all code that has access to the entity is supposed to be able to use any part of this contract. If you feel like your entities are exposing too much and that you need to put interfaces on top of them to restrain access to certain parts, maybe it's just a different entity.
I don't worry about the difference between the "data layer" and "domain layer" any longer, as there's no point - the terms are interchangeable. I let EF do its thing, and add interfaces and repositories on top when needed.
I've merged my "data" and "domain" projects (into "core", boring name, I know), and I could almost swear that Visual Studio is actually running faster.
I allow EF entities to go up and down the stack, but I still map them to presentation models / viewmodels as usual.
For simple operations I call repositories directly from controllers, for complex operations I use domain managers/services as usual; the repositories never expose IQueryable.
I define entities/POCOs as partial classes, so I can add domain behavior separately in corresponding partial classes.
I define entities/POCOs as partial classes, so I can add domain behavior separately in corresponding partial classes.
None of these things seems to be fundamentally anti-DDD to me, except data/domain separation.
Especially if you do database-first EF: DDD is clearly a domain-centric approach, and you shouldn't define your tables before defining your entities. It's also not clear whether some of your domain entities talk to the database or EF directly (not DDD compliant, and more generally not layered-architecture compliant), or whether you systematically have data access objects in between (DDD compliant).

SPROC to update record: how to handle unchanged values

I'm calling an update SPROC from my DAL, passing in all(!) fields of the table as parameters. For the biggest table this is a total of 78 parameters.
I pass all these parameters, even if maybe just one value changed.
This seems rather inefficient to me, and I wondered how to do it better.
I could define all parameters as optional and only pass the ones that changed, but my DAL does not know which values changed, because I'm just passing it the model object.
I could do a select on the table before updating and compare the values to find out which ones changed, but that is probably far too much overhead as well(?).
I'm kinda stuck here ... I'm very interested what you think of this.
edit: forgot to mention: I'm using C# (Express Edition) with SQL 2008 (also Express). The DAL I wrote "myself" (using this article).
It's maybe not the latest state-of-the-art way of doing it (since it's from 2006, "pre-LINQ" so to say, but LINQ works only against local SQL instances in Express anyway), but my main goal was learning C#, so I guess this isn't too bad.
If you can change the DAL (without the changes being discarded once the layer is "regenerated" from the new schema), I would recommend passing a structure containing the changed columns with their values, and a structure containing the key columns and values for the update.
This can be done using hashtables, and if the schema is known, it should be fairly easy to handle this in the "new" update function.
If this is an automatically generated DAL, this kind of inflexibility is one of the drawbacks of using generated DALs.
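A hypothetical sketch of that structure-based update (assuming System.Data.SqlClient; the helper and its names are illustrative, not from the answer). Table and column names come from the known schema here, so only the data values are parameterized:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

static class UpdateBuilder
{
    // changed: column -> new value; keys: key column -> value
    public static SqlCommand Build(string table,
                                   IDictionary<string, object> changed,
                                   IDictionary<string, object> keys)
    {
        var set = string.Join(", ", changed.Keys.Select(c => "[" + c + "] = @s_" + c));
        var where = string.Join(" AND ", keys.Keys.Select(c => "[" + c + "] = @k_" + c));
        var cmd = new SqlCommand("UPDATE [" + table + "] SET " + set + " WHERE " + where);
        foreach (var kv in changed) cmd.Parameters.AddWithValue("@s_" + kv.Key, kv.Value);
        foreach (var kv in keys) cmd.Parameters.AddWithValue("@k_" + kv.Key, kv.Value);
        return cmd;   // caller assigns a connection and executes it
    }
}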
You could implement journalized change tracking in your model objects. This way you keep track of any changes in your objects by saving the previous value of a property every time a new value is set. This information could be stored in one of two ways:
As part of each object's own private state
Centrally in a "manager" class.
In the first solution, you could easily implement this functionality in a base class and have it run in all model objects through inheritance.
In the second solution, you need to create some kind of container class that keeps a reference and a unique identifier for every model object that is created, and records all changes to their state in a central store. This is similar to the way many ORM (Object-Relational Mapping) frameworks achieve this kind of functionality.
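A minimal sketch of the first solution (names are illustrative): a base class whose setter helper records the original value the first time a property changes.

using System.Collections.Generic;

public abstract class TrackedModel
{
    private readonly Dictionary<string, object> _originals = new Dictionary<string, object>();

    protected void Set<T>(ref T field, T value, string name)
    {
        if (EqualityComparer<T>.Default.Equals(field, value)) return;
        if (!_originals.ContainsKey(name)) _originals[name] = field; // first old value wins
        field = value;
    }

    // The DAL can ask each object which columns actually need updating.
    public IEnumerable<string> ChangedProperties { get { return _originals.Keys; } }
}

public class Customer : TrackedModel
{
    private string _name;
    public string Name
    {
        get { return _name; }
        set { Set(ref _name, value, "Name"); }
    }
}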
There are off the shelf ORMs that support these kinds of scenarios relatively well. Writing your own ORM will leave you without many features like this.
I find the "object.Save()" pattern leads to this kind of behavior, but there is no reason you need to follow that pattern (while I'm not personally a fan of object.Save(), I feel like I'm in the minority).
There are multiple ways your data layer can know what changed, and most of them are supported by off-the-shelf ORMs. You could also potentially make the UI and/or business layers smart enough to pass that knowledge into the data layer.
Two options that I prefer:
1. Generating or hand-coding update methods that only take the set of parameters that tend to change.
2. Generating the update statements completely on the fly.
