I am trying to implement a multi-dimensional data cube in C#. Could somebody point me in the direction of resources that would serve as a starting point? I am primarily interested in the data structures needed to implement the cube.
I'm not sure if this is helpful, but there is a SourceForge project, Mondrian (http://mondrian.pentaho.com/). There is a community edition which is open source, so you may be able to get some ideas about the underlying data structures even though it is not a C# implementation.
There are also a few useful references for MDX in this Stack Overflow post.
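To give a rough feel for the kind of data structure involved, here is a minimal, hypothetical sketch in C#: a single-measure cube whose cells are keyed by one member per dimension. All the names are mine, not from any library, and a real cube would add dimension hierarchies, sparse storage and pre-computed aggregates.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Naive in-memory cube: one numeric measure, cells keyed by one member per dimension.
    public class DataCube
    {
        private readonly string[] dimensions;
        private readonly Dictionary<string, double> cells = new Dictionary<string, double>();

        public DataCube(params string[] dimensions)
        {
            this.dimensions = dimensions;
        }

        // Record a fact at a fully qualified coordinate (one member per dimension).
        public void Add(double value, params string[] coordinates)
        {
            if (coordinates.Length != dimensions.Length)
                throw new ArgumentException("One member per dimension is required.");

            string key = string.Join("|", coordinates);
            double current;
            cells.TryGetValue(key, out current);
            cells[key] = current + value;
        }

        // Aggregate all cells matching the given coordinates; null means "all members" for that dimension.
        public double Slice(params string[] coordinates)
        {
            return cells
                .Where(cell =>
                {
                    string[] parts = cell.Key.Split('|');
                    return !coordinates.Where((want, i) => want != null && parts[i] != want).Any();
                })
                .Sum(cell => cell.Value);
        }
    }

Used like this:

    var cube = new DataCube("Year", "Region", "Product");
    cube.Add(120, "2011", "EMEA", "Bikes");
    cube.Add(80,  "2011", "APAC", "Bikes");
    double bikes2011 = cube.Slice("2011", null, "Bikes");   // 200

Mondrian layers dimension hierarchies and an aggregation cache over essentially this idea, so its source is a good place to see how the simple version scales up.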
We need to write some RESTful services to access/create data on Neo4j. I have found many examples using the Traverser framework, but I would like to explore the Java Core API, since it is mentioned that its performance is far better than the Traverser's, as per this link.
Is it true that the Java Core API is better than the Traverser? Can someone point me to useful tutorials on the Java Core API for Neo4j?
Consider asking a different question here.
I don't dispute the performance finding that the Traverser API is slower than the Core API, but keep in mind that it only holds for the kinds of things they were trying to do in that test.
Which API you should use depends on what you're trying to do. Without more information on that, we can't suggest which will be the fastest for you.
Here are your trade-off options: if you use the Core API, you can perform exactly the low-level operations on the graph that you want. On the flip side, you have to do all of the work. If the operations you're trying to do are complex, far-reaching, or order-sensitive, you'll find yourself writing so much code that you'll re-implement a buggy version of the Traversal API on your own. Avoid this at all costs! The performance of the Traversal API is almost certainly better than what you'll write on your own.
On the other hand, if the operations you're performing are very simple (look up a node, grab its immediate neighbors by some edge type, then return them), the Core API is an excellent choice. In this (very simple) case, you don't need all the bells and whistles that the Traversal API gives you.
Bigger than just your question though: in general it's good to avoid "premature optimization". If a library or framework gives you a technique like the Traversal API, as a starting point it's a good bet to learn that abstraction and use it, because the developers gave it to you to make your life easier, not to make your code slower. If it turns out to be more than you need, or performance is really lagging -- then consider using the core API.
In the long run, if you're going to write RESTful services on top of Neo4j, you'll probably end up knowing both APIs. Bottom line: it's not a matter of choosing which one you should use, it's a matter of understanding their differences and which situations play to their strengths.
I am using MVC3 with ViewModels and EF model-first on my project.
The view I'm working on right now is a page where users should see statistics displayed with charts.
Any tips or help on how to do this kind of thing well would be appreciated.
Any plugins or packages, perhaps?
Thanks in advance!
As I understand it, your real need is not getting the statistics data (you can do that with a plain SQL query or with LINQ; it's your choice), just displaying it. There are many ways to do that, but using a JavaScript library like Highcharts or Flot will probably be much easier than a full-blown reporting service if you don't need advanced features. You don't need advanced JavaScript knowledge; most of them are documented well enough to use right away. There are also some complementary libraries that are simply wrappers around those charting libraries, so you can use them more easily in ASP.NET or ASP.NET MVC projects. Some examples (with a minimal data-feeding action sketched after this list):
Flot.Net - http://flotdotnet.codeplex.com/
Highcharts.net - http://highcharts.codeplex.com/
DotNet.Highcharts - http://dotnethighcharts.codeplex.com/
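Whichever charting library you pick, the MVC side usually boils down to an action that returns JSON for the chart's AJAX call. Here is a rough sketch; SalesController, MyDbContext, Orders, OrderDate and Amount are hypothetical names standing in for your own model, not anything from the question.

    using System.Linq;
    using System.Web.Mvc;

    public class SalesController : Controller
    {
        private readonly MyDbContext db = new MyDbContext();   // your EF context (hypothetical name)

        // GET /Sales/MonthlyTotals?year=2011
        // Returns [{ "Month": 1, "Total": 1234.5 }, ...] for the chart to consume.
        public JsonResult MonthlyTotals(int year)
        {
            var data = db.Orders
                .Where(o => o.OrderDate.Year == year)
                .GroupBy(o => o.OrderDate.Month)
                .Select(g => new { Month = g.Key, Total = g.Sum(o => o.Amount) })
                .OrderBy(x => x.Month)
                .ToList();

            return Json(data, JsonRequestBehavior.AllowGet);
        }
    }

On the client you would fetch this with $.getJSON and hand the result to Highcharts' series option or Flot's data array.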
I would use direct SQL or a stored procedure for data retrieval, or even SQL Server Reporting Services (which offers charts) for any kind of reporting.
I'm not sure I really understand your SQL diagram without knowing more about your application, so for the moment I'm skipping that part.
Displaying statistics as charts can be done quite quickly, and without needing to know much JavaScript, using one of the many JavaScript libraries that you can find, for example, here. In order to use these libraries you may need to integrate some AJAX functionality into your MVC application (if that's not already the case).
I use Highcharts for my personal projects and I think it's very well done and easy to use, though if you are using it for a commercial purpose you need a license.
I've been trying to make use of the GPU as part of a project of mine. I've looked into both CUDA and OpenCL, but the lack of information showing you how to introduce these into a project is shocking. Even their dedicated forum groups are dead. So now, I'm looking into DirectCompute.
From what I can tell, it's simply a new type of shader file that makes use of HLSL. My question is this: does my program (aside from being DirectX 10/11) need its structure changed?
I mean, is it simply a case of creating the compute shader (CS) file, setting it up in the project like I would any other shader, and watching the magic happen?
Any information on this would be appreciated.
Yes, a compute shader fits into the usual DirectX programming structure. It works in a similar way to CUDA/OpenCL. Here is a good, simple example:
http://openvidia.sourceforge.net/index.php/DirectCompute
Personally, I would suggest using CUDA/OpenCL rather than going the DirectCompute route if your project does not involve graphics. I think CUDA/OpenCL are better for general-purpose computing. It can be a little difficult to find documentation, but these are the main aspects of GPU programming:
Setting up data on the CPU to pass to the GPU.
Understanding how many warps/threads need to be started on the GPU, how threads might need to communicate, etc.
Computing on the GPU, reading data back on the CPU
Another option is C++ AMP - please follow links from here for more info and feel free to post questions as you have them: http://blogs.msdn.com/b/nativeconcurrency/archive/2011/09/13/c-amp-in-a-nutshell.aspx
The easiest way is to make a project which uses compute shaders with C# and SlimDX.
And here is a good site with the basics of how to use compute shaders from within C# code.
Later on you can move to full-scale compute shader exploration with C++ and DirectX 11.
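To make the SlimDX route a little more concrete, here is a rough sketch of compiling and dispatching a compute shader. It is an outline only: buffer/UAV setup and result readback are omitted, and "MyShader.hlsl" / "CSMain" are placeholder names, not files from your project.

    using SlimDX.D3DCompiler;
    using SlimDX.Direct3D11;

    class ComputeSketch
    {
        static void Main()
        {
            // A Direct3D 11 device; no swap chain is needed for pure compute work.
            var device = new Device(DriverType.Hardware, DeviceCreationFlags.None);

            // Compile the compute shader entry point from an HLSL file (placeholder names).
            using (var bytecode = ShaderBytecode.CompileFromFile(
                "MyShader.hlsl", "CSMain", "cs_5_0", ShaderFlags.None, EffectFlags.None))
            using (var shader = new ComputeShader(device, bytecode))
            {
                DeviceContext context = device.ImmediateContext;

                // Create and bind input/output buffers (SRVs/UAVs) here -- omitted for brevity.

                context.ComputeShader.Set(shader);
                context.Dispatch(64, 1, 1);   // launch 64 thread groups along X
            }
        }
    }

The same flow (compile, bind resources, dispatch, read back) applies when you later move to C++ and raw DirectX 11.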
This is closely related to another question I asked: Is there functionality that is NOT exposed in the Open XML SDK v2?
I am currently working with Open XML files manually. I recently had a look at the SDK and was surprised to find that it looked pretty low level, quite similar in fact to the helper classes I have created myself. My question is what exactly does the SDK v2 take care of that you would have to do manually when coding by hand with an XML library?
For example, would it automatically patch the _rels files when deleting a PowerPoint slide?
In addition to Otaku's links, this shows an example (near the bottom) of navigating an OpenXML document using the IO.Packaging namespace versus the SDK.
Just like Microsoft states on the download page for the SDK:
The Open XML SDK 2.0 for Microsoft Office is built on top of the System.IO.Packaging API and provides strongly typed part classes to manipulate Open XML documents. The SDK also uses the .NET Framework Language-Integrated Query (LINQ) technology to provide strongly typed object access to the XML content inside the parts of Open XML documents.
The Open XML SDK 2.0 simplifies the task of manipulating Open XML packages and the underlying Open XML schema elements within a package. The Open XML Application Programming Interface (API) encapsulates many common tasks that developers perform on Open XML packages, so you can perform complex operations with just a few lines of code.
I've worked pretty much only with the SDK, but for example, it's nice to be able to grab a table out of a Word document by just using:
Table table = wordprocessingDocument.MainDocumentPart.Document.Body.Elements<Table>().First();
(I mean, assuming it's the first table)
I'd say the SDK does exactly what it seeks to do by providing a sort of intuitive object-based way to work with documents.
As far as automatically patching the relationships -- no, it doesn't do that. And looking back at how you actually state the question, I guess I might even say (and I'm fairly new to Open XML, so this isn't gospel by any means) that the SDK 2.0 doesn't necessarily offer any extra functionality, so much as it offers a more convenient way to achieve the same functionality. For example, you still need to know about those relationships when you delete an element, but it's a lot easier to deal with them; see the sketch below.
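To illustrate that last point: deleting a PowerPoint slide with the SDK still means removing the slide's entry from the slide ID list yourself, but DeletePart then removes the part and its relationship entry for you. A rough sketch (taking the first slide purely for illustration):

    using System.Linq;
    using DocumentFormat.OpenXml.Packaging;
    using DocumentFormat.OpenXml.Presentation;

    static class SlideHelper
    {
        // Deletes the first slide of a presentation (first slide chosen purely for illustration).
        public static void DeleteFirstSlide(string path)
        {
            using (PresentationDocument doc = PresentationDocument.Open(path, true))
            {
                PresentationPart presentationPart = doc.PresentationPart;
                SlideIdList slideIdList = presentationPart.Presentation.SlideIdList;

                SlideId slideId = slideIdList.Elements<SlideId>().First();
                SlidePart slidePart = (SlidePart)presentationPart.GetPartById(slideId.RelationshipId);

                slideId.Remove();                        // you still maintain the slide list yourself
                presentationPart.DeletePart(slidePart);  // the SDK drops the part and its relationship entry
                presentationPart.Presentation.Save();
            }
        }
    }

Compare that with hand-editing presentation.xml, the [Content_Types].xml entries and the _rels file: the knowledge you need is the same, but the bookkeeping is far smaller.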
Also, there have been some efforts on top of the SDK to add even more abstraction -- see, for example, ExtremeML (an Excel library only; I've never used it, but I think it does get into things like patching relationships).
So I'm sorry if I've rambled a bit too much here. But I guess my short answer is: there's probably no extra functionality, but there's a nice level of abstraction that makes achieving certain functionality a lot easier to handle -- and if you've been doing it by hand up until now, you'll certainly have enough understanding of the OPC to know exactly what is being abstracted.
As a starting point, read this from the Brian Jones & Zeyad Rajabi blog.
I don't know of a side-by-side comparison, but the following articles/videos do discuss the two:
Using the Open XML SDK 2.0 Classes Versus Using .NET XML Services is a good place to start comparing the two.
Open XML and the Open XML SDK is a deep-dive video which discusses both.
Finally, this is a What's New for 2.0 - it can be assumed that neither 1.0 nor hand-coding has these benefits.
It is very hard to find good samples for F# on the web. Some samples show a simple web crawler for downloading stock data from Yahoo, or only code snippets of bigger ideas.
I'm searching for a real-world example outside the financial world. What about AdventureWorks? That sample database is the basis of many C# samples out there.
Why is there no F# sample?
I don't want a sample where F# just puts a GUI on top of a table! Isn't there any processing that F# is ideal for? I'd like a sample where I can see the power of F#, a sample which shows me why I should learn the language instead of simply using (more code in) C#.
Is there any sample for AdventureWorks online?
Are there any real-world processes based on the sample database (functional programming)?
Best regards, michl
IntelliFactory have some nice samples of how you might build a web UI using F#; the examples are based on their WebSharper product. It's not free, but if you're serious about building a LOB app in F# then it's definitely worth looking at, as, in my humble opinion, it's really good:
http://www.intellifactory.com/products/wsp/Tutorial.aspx#
For the data side, I'm told Entity Framework V2 works reasonably well with F#, and there's always my FunctionalNHibernate project, which is in its infancy but really quite cute:
http://bitbucket.org/robertpi/functionalnhibernate/
Have you seen this?
http://code.msdn.microsoft.com/fsharpsamples
I'm sorry, but I know of no AdventureWorks sample in F#; however, this might interest you anyway: wcfstorm. It is a tool for testing WCF services, written in F#.