I am using MVC 3, ASP.NET 4.5, and C#.
There are a number of choices when precompiling a web application:
1. Do not merge.
2. Do not merge; create a separate assembly for each page and control.
3. Merge all outputs to a single assembly.
4. Merge each individual folder output to its own assembly.
5. Merge all pages and control outputs to a single assembly.
I am deploying to Azure websites.
I have currently opted for option 3, which creates a 2.5 MB assembly.
I realise that only IL is generated at this point, ready for the JIT compiler to produce native code at runtime, and therefore performance should be identical. However, I was wondering whether there is still a performance difference between these options. I chose option 3 because it seemed tidier.
Thanks.
There may be a performance impact depending on your chosen merge option. How big? Uncertain without testing, but intuitively, the more assemblies and files your application has to locate and load, the longer that should take compared with loading a single assembly. My guess is the difference is minuscule. The code inside the assemblies should not run any differently; you would only pay slightly at run-time in an un-merged scenario because there are more files to process.
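For what it's worth, the same choices can be scripted outside Visual Studio with aspnet_compiler.exe and aspnet_merge.exe. A minimal sketch, with hypothetical paths and assembly name:

    rem Precompile the site (by default this produces multiple assemblies)
    aspnet_compiler -v /MyApp -p C:\Source\MyApp C:\Precompiled\MyApp

    rem Option 3: merge all outputs into a single assembly
    aspnet_merge C:\Precompiled\MyApp -o MyApp.Merged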
Additional resources that might help:
Additional Precompile Settings Dialog Box
ASP.NET Merge Tool, Compilation and Merge Scenarios
When I got to this project, there were Cucumber tests in "features/enhanced", which ran with JavaScript, and a few in "features/plain", which did not require JS. With the later development of the per-scenario @javascript tag, this split no longer makes sense. And as the number of feature files we have grows and grows, it would be awesome if this stayed tidy.
So, in best-practice land:
1) How long should .feature files be? I try to keep each narrow and specific, with one or two Scenarios.
2) What folder/file structure should one keep them in?
2a) How might one group similar features?
1) Once you've written them for a few months, you'll soon find what works best for you. My advice is to keep them smallish. We have often split our earlier features down into smaller chunks, but have never ended up combining them. Keeping them small is handy for making use of Backgrounds, etc.
2) We had a big problem with this and spent ages doing it one way, then another. In the end we opted to group them by the services our company provides, e.g. payments, customer registration, stock management.
Inconveniently, features don't always conform to a hierarchical tree view of the world, so make liberal use of tagging; then your primary grouping of features matters less.
Have you tried yard? There's an example here. We've just built it into our CI; it lets you pull together sets of scenarios based on tags, and you can do unions, intersections, etc. Well worth it :)
I would keep the JavaScript and non-JavaScript versions of a scenario together, since they should be very similar.
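For instance, a hypothetical feature file keeping both variants side by side with the per-scenario @javascript tag might look like this:

    # features/customer_registration.feature (hypothetical example)
    Feature: Customer registration

      Scenario: Register with valid details
        When I sign up as "alice@example.com"
        Then I should see a welcome message

      @javascript
      Scenario: Register with inline validation
        When I sign up as "alice@example.com"
        Then I should see instant feedback on each field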
Anything more than 8 scenarios in a feature file is probably too much.
A useful approach is to have a folder to represent each high-level feature (sometimes called an epic or theme), and separate feature files within those folders for the different aspects of the behaviour.
For example, you may have a feature "Employee Directory" which would have separate feature files containing scenarios for a photograph, office location, job title, etc.
Depending on the size and complexity of your app, you could group those folders into other folders.
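As an illustration, such a layout might look like this (folder and file names are hypothetical):

    features/
      employee_directory/
        photograph.feature
        office_location.feature
        job_title.feature
      payments/
        refunds.feature
        invoicing.feature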
(Note that none of the above is specific to Rails apps).
I want to know best practices for creating features.
Normally, the Visual Studio extension creates a feature for each web part.
Is that good practice, or should we create one feature for multiple web parts in one WSP?
I don't know of any best practice, but I can see two ways of looking at it:
When you separate your web parts into several features, you can activate/deactivate the different web parts at will. If one web part has an error, you can just deactivate it. When one web part fails to compile, you still have the others running smoothly.
The downside is that you "clutter" the SharePoint interface, because you have to manage several features instead of one. That goes for activating/deactivating as well as deploying/retracting.
If you have one feature, it is all of the above, just in reverse. You only have one feature to activate/deactivate, which makes it faster to manage. But if that one feature fails in some way (or any of the web parts within it), you can only deactivate the whole thing. The same goes for deployment/retraction: when one web part within your feature fails, you have to retract the whole thing.
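To make the trade-off concrete, here is a minimal sketch of what a single feature bundling several web parts could look like in Feature.xml; the GUID, title and manifest paths are placeholders:

    <Feature Id="00000000-0000-0000-0000-000000000000"
             Title="MyCompany Web Parts"
             Scope="Site"
             xmlns="http://schemas.microsoft.com/sharepoint/">
      <ElementManifests>
        <!-- Each web part gets its own element manifest, but they can only
             be activated, deactivated and retracted as one unit. -->
        <ElementManifest Location="WebPartA\Elements.xml" />
        <ElementManifest Location="WebPartB\Elements.xml" />
      </ElementManifests>
    </Feature>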
Whether development is easier or harder depends on your preference. One might say it is harder to keep a consistent configuration in one huge feature deploying a multitude of web parts, workflows and master pages (where was the entry for that workflow again? Ah yes, in line 1112). On the other hand, you have everything in one place and don't have to search across several features.
I would really leave it up to your personal preference. When you are deploying a solution to a customer, the customer is certainly happier to click/install/deploy the "MyCompany Super Solution Feature" instead of several smaller ones; in the end, you don't install MS Word with several setup.exes (and then again, you can choose which features of Word to install...).
It basically depends upon your requirements.
By the way, this problem is resolved in the VS 2010 extension.
I am currently the single BI developer for a corporate data warehouse and cube. I use SQL Server 2008, SSAS, and SSIS as my basic toolkit. I use Visual Studio + BIDS and TFS for my IDE and source control. I am about to take on multiple projects with an offshore vendor, and I am worried about managing change. My major concern is managing merges and changes between me and the offshore team. Merging and managing changes to SQL and XML for just one person is bad enough, but with multiple developers it seems like a nightmare. Any thoughts on how best to structure development, knowing that sometimes there is no way to avoid multiple individuals making changes to the same file?
SSIS, SSAS and SSRS files are not merge-friendly. They are stored in XML files that change drastically - even with minor edits (such as changing a property) - so merging becomes practically impossible.
So stop thinking about parallel development on one file, and start thinking about how to ensure people never need to develop the same file in parallel. Start by disabling multiple checkout of files. You might even want to enable the option to get the latest version on checkout.
Then start thinking about how people can work independently. This is more about the way you structure the work and the files:
Give people their own area to work on. One SSIS package is only developed by person X at any given moment in time.
Make smaller files, so the chance that two people need to work in the same file is small.
I have given feedback to the product team about the incompatibility of BIDS files with merging. It is a known issue, but it will be hard to tackle. They don't know when it will be possible to really do parallel development on these files. Until then, keep away from parallel development.
As Ewald Hofman mentioned, SSAS and SSIS are not merge-friendly.
One environment I worked in solved the problem as follows:
only use SSIS when you have to (a fuzzy-matching algorithm or something similar). Replace SSIS packages with SQL code wherever you can (see linked servers for data synchronisation and the MERGE command for building dimension/fact tables, for instance).
build your data warehouse structure as follows:
build 2 databases, one for the "raw source data" from the source systems and one (the "stage" database) for the dimension and fact views and tables
use procedures that can deploy the whole "stage" database
put the structure for the "stage" database into your Repository
build a C# application that builds your dimensions and cubes via the AMO API (I know, that's a tough job at the beginning, but it is worth it - think about what you gain, and look at the pros below; there is a sketch of the idea after this list)
add the stage database and the C# application to your Repository (TFS/Git etc.)
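To give a rough idea of what such an application involves, here is a minimal AMO sketch; the server, database and dimension names are hypothetical and error handling is omitted:

    // Minimal AMO sketch (hypothetical names, no error handling)
    using Microsoft.AnalysisServices;

    class CubeDeployer
    {
        static void Main()
        {
            var server = new Server();
            server.Connect("Data Source=localhost"); // your SSAS instance

            // Find or create the target OLAP database
            Database db = server.Databases.FindByName("StageDW")
                          ?? server.Databases.Add("StageDW");

            // Find or create a dimension; in a real generator the attributes
            // and key columns would be derived from the stage metadata here
            Dimension dim = db.Dimensions.FindByName("Customer")
                            ?? db.Dimensions.Add("Customer");

            // Push the definitions (including any new partitions) to the server
            db.Update(UpdateOptions.ExpandFull);
            server.Disconnect();
        }
    }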
Pros of that structure:
you have a merge-able structure you can put in your Repository
you are using the AMO API, with which:
you can automate the generation of new partitions
you can use procedures to automate and clone measure groups across different cubes (which I think is sometimes a big benefit!)
you can outsource your translations and import them easily (the cube designer is probably not the best translator)
Cons:
the vendor would probably not adopt that structure
you have to pay more (because of either higher skill requirements or the cost of teaching the vendor your individual structure)
you probably need knowledge of a new language, C#, if you don't have it already
Conclusion:
there are ways to get a merge-friendly environment
you will lose some nice click-and-run tools, e.g. BIDS, but will gain a highly automated process
outsourcing may become unprofitable because of the high degree of customisation
As long as both teams are using BIDS and TFS, this should not be a problem.
Assuming that your T-SQL code is checked into source control with a single file per object, merging T-SQL code is straightforward since it is text-based. I have found that VSTS database projects help with this.
Merging the XML-based source files of SSIS and SSAS can be cumbersome, as you indicate. To alleviate some of the pain, I find that keeping each package limited to a single data flow or logical unit of work helps reduce developer contention on packages. I then call these packages from one or more master packages. I also try to externalise all of my T-SQL source queries using sprocs, views or UDFs so that the need to edit the package is further reduced. Using configuration files and variables also helps to a smaller extent.
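For example, a package's source query can live in the database as a view, so editing the query never touches the package XML (the object names below are hypothetical):

    -- Keep the extract query in a view; the SSIS data flow source then
    -- just selects from dbo.vw_SalesExtract and rarely needs editing
    CREATE VIEW dbo.vw_SalesExtract
    AS
    SELECT OrderId, CustomerId, Amount
    FROM dbo.Orders
    WHERE Status = 'Shipped';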
SSAS cubes are a little bit tougher. My best suggestion is to look into a third-party XML differencing tool. I have been able to successfully merge small changes using the standard text-based tools, but it can be a daunting task.
Using the Grails YUI plugin, I've noticed that my GUI tags are replaced with some JavaScript code that is inserted in the HTML page.
Does this behavior contradict the Yahoo rule of making JavaScript and CSS external?
In other words, how do I separate the script code from the HTML page in order to allow external JavaScript script caching?
Should I use the Grails UI performance plugin for that matter? Is there another way to do it?
Everything in software design is a trade-off.
It depends on whether the benefit in performance outweighs the importance of having well-segregated and maintainable code.
In your case, I wouldn't mind having some extra JavaScript code automatically added in order to dramatically improve performance.
Complete code and UI separation always comes at a price. More levels of abstraction and intermediate code often translates into slower performance, but better maintainability.
Sometimes, the only way to reach maximum efficiency is to throw away all those abstractions and write optimised code for your platform: minimising the number of functions and function calls, trying to do as much work as possible in one loop instead of two more meaningful loops, etc. (which is usually characterised as ugly code).
Well, this is one of the features of the UI Performance plugin, amongst other things:
The UI Performance Plugin addresses some of the 14 rules from Steve Souders and the Yahoo performance team.
[...]
Features
minifies and gzips .js and .css files
configures .js, .css, and image files (including favicon.ico) for caching by renaming with an increasing build number and setting a far-future expires header
[...]
So I'd use it indeed.
What are common reasons to split a development project (e.g. ASP.NET MVC application) into multiple projects? Code organization can be done via folders just as well. Multiple projects tend to generate circular reference conflicts and increase complexity by having to manage/resolve those.
So, why?
Some reasons are
Encapsulation - By packaging a set of routines into a separate library, either as a static library or a set of DLLs, it becomes a black box. For it to be a good black box, all you need to do is make sure you give the right inputs and get the right outputs. That helps when you re-use the library. It also enforces certain rules and prevents programming by hacks ('hmm... I'll just make that member function public for now'); see the sketch after this list.
Reduces compile time - The library is already compiled; you don't have to rebuild it at compile time, just link to it (assuming you are doing C++).
Decoupling - By encasing your classes in standalone libraries, you reduce coupling and can reuse the library for other purposes. Likewise, as long as the interface of the library does not change, you can make changes to the library all you like, and others who link to it or refer to it do not need to change their code at all. DLLs are useful in this respect in that no re-compilation is required, but they can be tricky to work with if many applications install different versions of the same DLLs. You can update libraries without impacting the client's code. While you can do the same with just folders, there is no explicit mechanism to force this behaviour.
Also, by practising the discipline of having different libraries, you can make sure what you have written is generic and decoupled from implementation.
Licensing/Commercialization - Well, I think this is quite obvious.
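As a small illustration of the encapsulation point above, consider C#'s internal keyword; the library and class names here are hypothetical:

    // SharedLib: a separate class library project
    namespace SharedLib
    {
        public class PriceCalculator
        {
            public decimal Total(decimal net)
            {
                return net + Tax(net);
            }

            // internal members are invisible outside this assembly. In a
            // folders-only layout nothing stops someone from "temporarily"
            // making this public and calling it directly; a project boundary
            // lets the compiler enforce the black box.
            internal decimal Tax(decimal net)
            {
                return net * 0.2m;
            }
        }
    }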
One possibility is to have a system that a given group (or single developer) can work on independently of the rest of the code. Another is to factor out common utility code that the rest of the system needs -- things like error handling, logging, and common utilities come to mind.
Of course, just when thinking about what goes in a particular function / class / file, where the boundaries are is a matter of art, not science.
One example I can think of is that you might find in developing one project that you end up developing a library which may be of more general use and which deserves to be its own project. For instance maybe you're working on a video game, and you end up writing an audio library that's in no way tied specifically to the game project.
Code reuse. Let's say you have project A and you start a new project B which has many of the same functions as project A. It makes sense to pull out the shared parts of A and make them into a library which can be used by both A and B. This allows you to have the code in both without having to maintain the same code in two places.
Code reuse, inverted. Let's say you have a project which works on one platform. Now you want it to work on two platforms. If you can separate out the platform-dependent code, you can start different projects for each platform-dependent library and then compile your central codebase with different libraries for different platforms.
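A sketch of that platform split in C# (the type and project names are hypothetical):

    // Core project: platform-neutral contract used by the shared codebase
    public interface IFileStore
    {
        void Save(string path, byte[] data);
    }

    // One small project per platform implements the contract; the central
    // codebase is compiled once and linked against whichever platform
    // library the build targets.
    public class WindowsFileStore : IFileStore
    {
        public void Save(string path, byte[] data)
        {
            // platform-specific I/O would go here; trivialised for illustration
            System.IO.File.WriteAllBytes(path, data);
        }
    }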
Some tips about splitting your project into multiple projects:
One reason for separating a project into multiple class libraries is re-usability. I've yet to see the BLL or DAL part of an application re-used in another application. This is what textbooks from the 90s used to tell us! But most, if not all, modern applications are too specific, and even in the same enterprise I've never seen the same BLL or DAL parts re-used across multiple applications. Most of the time, what you have in those class libraries is purely there to serve what the user sees in that particular application, and it's not something that can easily be re-used (if at all).
Another reason for separating a project into multiple class libraries is about deployability. If you want to independently version and deploy these pieces, it does make sense to go down this path. But this is often a use case for frameworks, not enterprise applications. Entity Framework is a good example. It’s composed of multiple assemblies each focusing on different areas of functionality. We have one core assembly which includes the main artifacts, we have another assembly for talking to a SQL Server database, another one for SQLite and so on. With this modular architecture, we can reference and download only the parts that we need.
Imagine if Entity Framework was only one assembly! It would be one gigantic assembly with lots of code that we won’t need. Also, every time the support team added a new feature or fixed a bug, the entire monolithic assembly would have to be compiled and deployed. This would make this assembly very fragile. If we’re using Entity Framework on top of SQL Server, why should an upgrade because of a bug fix for SQLite impact our application? It shouldn’t! That’s why it’s designed in a modular way.
In most web applications out there, we version and deploy all these assemblies (Web, BLL and DAL) together. So separating a project into three projects does not add any value.
Layers are conceptual. They don't have a physical representation in code. Having a folder or an assembly called BLL or DAL doesn't mean you have properly layered your application, neither does it mean you have improved maintainability. Maintainability is about clean code, small methods, small classes each having a single responsibility, and limited coupling between these classes. Splitting a project with fat classes and fat methods into BLL/DAL projects doesn't improve the maintainability of your software. Assemblies are units of versioning and deployment. Split a project into multiple projects if you want to re-use certain parts of it in other projects, or if you want to independently version and deploy each project.
Source: https://programmingwithmosh.com/csharp/should-you-split-your-asp-net-mvc-project-into-multiple-projects/
Ownership, for one thing. If you have developers responsible for different parts of the code base, then splitting the project up is the natural thing to do. One would also split projects by functionality. This reduces conflicts and complexity. If complexity increases instead, that just signals a lack of communication, and you are simply doing it wrong.
Instead of questioning the value of code in multiple assemblies, question the value of clumping all of your code in one place.
Would you put everything in your kitchen in a single cabinet?
Circular references are circular references, whether they happen between assemblies or within them. The design of the offending components is most likely sub-optimal; eschewing organization via assemblies ironically prevents the compiler from detecting the situation for you.
I don't understand the statement that you can organize code just as well with folders as with projects. If that were true, our operating systems wouldn't have the concept of separate drives; they would just have one giant folder structure. Higher-order organizational patterns express a different kind of intent than simple folders.
Projects say "These concepts are closely related, and only peripherally related to other concepts."
There are some good answers here so I'll try not to repeat.
One benefit of splitting code out into its own project is to reuse the assembly across multiple applications.
I liked the functional approach mentioned as well (e.g. Inventory, Shipping, etc. could each get their own projects). Another idea is to consider the deployment model. Code shared between layers, tiers, or servers should probably be in its own common project (or set of projects, if finer control is desired). Code earmarked for a certain tier may be in its own project. For example, if you had a separate web server and application server, you wouldn't want to deploy the UI code to the application server.
Another reason to split may be to allow small, incremental deploys once the application is in production. Let's say you get an emergency production bug that needs to be fixed. If the small change requires a rebuild of the entire (one-project) application, you might have a hard time justifying a small test cycle to QA. You might have an easier sell if you were deploying only one assembly with a smaller set of functionality.