I don't know how to normalize a CSV file using Encog C#. My CSV data file:
7.7,3.8,6.7,2.2,\n
7.7,2.6,6.9,2.3,\n
6,2.2,5,1.5,6.9,\n
There's an answer to your question at http://www.heatonresearch.com/comment/2690#comment-2690. I hope it helps.
Here's a nice function to do it; of course, you first need to create an analyst.
private EncogAnalyst _analyst;
private bool _useHeaders;

public void NormalizeFile(FileInfo SourceDataFile, FileInfo NormalizedDataFile)
{
    var wizard = new AnalystWizard(_analyst);
    wizard.Wizard(SourceDataFile, _useHeaders, AnalystFileFormat.DecpntComma);
    var norm = new AnalystNormalizeCSV();
    norm.Analyze(SourceDataFile, _useHeaders, CSVFormat.English, _analyst);
    norm.ProduceOutputHeaders = _useHeaders;
    norm.Normalize(NormalizedDataFile);
}
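A minimal usage sketch, assuming the fields above live in the same class; the file names are placeholders:
_analyst = new EncogAnalyst();
_useHeaders = false; // the sample CSV above has no header row
NormalizeFile(new FileInfo("iris.csv"), new FileInfo("iris_normalized.csv"));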
I use Lucene.Net for my site and it indexes some words fine and correctly, but it doesn't index some words like "الله"!
I have looked at the index with Luke and it shows that "الله" is not indexed.
I have used ArabicAnalyzer for indexing.
You can see my site at www.qoranic.com; if you search for "مریم" it works fine, but if you search for "الله" it shows nothing.
Any ideas are appreciated.
The ArabicAnalyzer does some transformation to that input; it will transform the input الله to له. This is due to the use of the ArabicStemFilter (and ArabicStemmer), which is documented as follows:
Stemming is defined as:
Removal of attached definite article, conjunction, and prepositions.
Stemming of common suffixes.
This shouldn't be an issue, since you should be parsing the user-provided query through the same analyzer when searching, producing the same tokens.
Here's the sample code I used to see what terms an analyzer produced from a given input.
using System;
using Lucene.Net.Analysis.AR;
using Lucene.Net.Analysis.Tokenattributes;
using System.IO;
namespace ConsoleApplication {
public static class Program {
public static void Main() {
var luceneVersion = Lucene.Net.Util.Version.LUCENE_30;
var input = "الله";
var analyzer = new ArabicAnalyzer(luceneVersion);
var inputReader = new StringReader(input);
var stream = analyzer.TokenStream("fieldName", inputReader);
var termAttribute = stream.GetAttribute<ITermAttribute>();
while(stream.IncrementToken()) {
Console.WriteLine("Term: {0}", termAttribute.Term);
}
Console.WriteLine("Done.");
Console.ReadLine();
}
}
}
You can overcome this behavior (remove the stemming) by writing a custom Analyzer which uses the ArabicNormalizationFilter, just as ArabicAnalyzer does, but without the call to ArabicStemFilter.
using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.AR;

public class CustomAnalyzer : Analyzer {
    public override TokenStream TokenStream(String fieldName, TextReader reader) {
        TokenStream result = new ArabicLetterTokenizer(reader);
        result = new LowerCaseFilter(result);
        result = new ArabicNormalizationFilter(result);
        return result;
    }
}
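To show the effect end to end, here is a hedged sketch, assuming the Lucene.Net 3.0.x API, that indexes a document and parses the query with the same CustomAnalyzer so both sides produce the same tokens:
using System;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.QueryParsers;
using Lucene.Net.Search;
using Lucene.Net.Store;
public static class AnalyzerDemo {
    public static void Main() {
        var directory = new RAMDirectory();
        var analyzer = new CustomAnalyzer();
        // Index one document with the custom (non-stemming) analyzer.
        using (var writer = new IndexWriter(directory, analyzer, IndexWriter.MaxFieldLength.UNLIMITED)) {
            var doc = new Document();
            doc.Add(new Field("fieldName", "الله", Field.Store.YES, Field.Index.ANALYZED));
            writer.AddDocument(doc);
        }
        // Parse the query with the same analyzer so the query tokens match the indexed tokens.
        var parser = new QueryParser(Lucene.Net.Util.Version.LUCENE_30, "fieldName", analyzer);
        var query = parser.Parse("الله");
        using (var searcher = new IndexSearcher(directory, true)) {
            var hits = searcher.Search(query, 10);
            Console.WriteLine("Hits: {0}", hits.TotalHits);
        }
    }
}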
I am trying to run KnnItemBasedRecommender on the sample data "intro.csv" using the code below; however, I am getting an empty set as the result.
public static void main(String[] args) throws Exception {
DataModel model = NeuvidisData.convertToDataModel();
//RecommenderEvaluator evaluator = new AverageAbsoluteDifferenceRecommenderEvaluator();
RecommenderBuilder recommenderBuilder = new RecommenderBuilder() {
@Override
public Recommender buildRecommender(DataModel model) {
ItemSimilarity similarity = new LogLikelihoodSimilarity(model);
Optimizer optimizer = new ConjugateGradientOptimizer();
return new KnnItemBasedRecommender(model, similarity, optimizer, 2);
}
};
Recommender rec = recommenderBuilder.buildRecommender(model);
List<RecommendedItem> rcList = rec.recommend(1, 2);
for(RecommendedItem item:rcList)
{
System.out.println("item:");
System.out.println(item);
}
}
Can anybody help me?
Presumably because your data is too small or sparse to make recommendations for user 1 using this algorithm. Without the data it's hard to say.
The following code worked for me.
ItemSimilarity similarity = new PearsonCorrelationSimilarity(dataModel);
Optimizer optimizer = new ConjugateGradientOptimizer();
Recommender recommender = new KnnItemBasedRecommender(dataModel, similarity, optimizer, 5);
I used PearsonCorrelationSimilarity instead of LogLikelihoodSimilarity. This may only work for a specific kind of data, so whether it helps depends on your data set.
I am uploading a file using the following code
[HttpPost]
public ActionResult ImportDeleteCourse(ImportFromExcel model)
{
var excelFile = model.ExcelFile;
if (ModelState.IsValid)
{
OrganisationServices services = new OrganisationServices();
string filePath = Path.Combine(HttpContext.Server.MapPath("../Uploads"),
Path.GetFileName(excelFile.FileName));
excelFile.SaveAs(filePath);
// ... snipped //
}
}
I do not really need to store the uploaded Excel file. Is there any way I can process it without saving it?
Note: The ImportFromExcel class is nothing but a model, which is basically:
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Web;

public class ImportFromExcel
{
    [Required(ErrorMessage = "Please select an Excel file to upload.")]
    [DisplayName("Excel File")]
    public HttpPostedFileWrapper ExcelFile { get; set; }
}
The most interesting part is that it wraps an HttpPostedFileWrapper.
Sure you can. As Patko suggested, the InputStream property can be used to feed another stream. For example, I did this with an uploaded XML document to use it with LINQ to XML:
XDocument XmlDoc = XDocument.Load(new StreamReader(viewmodel.FileUpload.InputStream));
Cheers,
Chris
The HttpPostedFileBase.InputStream property looks promising. You should be able to use that and save the data to whichever other stream you need to.
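If your Excel library can read from a Stream, a minimal sketch of the same action reworked to avoid the temporary file might look like this (the final reader call and the return value are placeholders):
[HttpPost]
public ActionResult ImportDeleteCourse(ImportFromExcel model)
{
    if (ModelState.IsValid)
    {
        // Copy the upload into memory instead of writing it to the Uploads folder.
        using (var memory = new MemoryStream())
        {
            model.ExcelFile.InputStream.CopyTo(memory);
            memory.Position = 0;
            // Hand 'memory' to your Excel reader here.
        }
    }
    return View(model); // illustrative return
}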
What is the best method to deep clone objects in ActionScript?
The best method to do this is by using a ByteArray with the writeObject method, like this:
function clone(source:Object):* {
var copier:ByteArray = new ByteArray();
copier.writeObject(source);
copier.position = 0;
return(copier.readObject());
}
More information about this here: http://www.kirupa.com/forum/showpost.php?p=1897368&postcount=77
If you are trying to deep clone a display object, this is the only way that worked for me:
public static function clone(target:DisplayObject ):DisplayObject {
var bitmapClone:Bitmap = null;
var bitmapData:BitmapData = new BitmapData(target.width,target.height,true,0x00000000);
bitmapData.draw(target);
bitmapClone = new Bitmap(bitmapData);
bitmapClone.smoothing = true;
return bitmapClone;
}
Note that this only copies the object visually; it will not copy methods or properties.
I used this when I loaded external images and used them in multiple places.
I know this question has been asked and answered in several ways, but none of them get to the crux of the matter that I need to understand. In WebForms, we 'subvert' the rendering process and write straight to the Response's output stream. How does one achieve that using a Controller Action, to write CSV to a file for Excel?
Just to elaborate on Omu's FileHelpers answer, I was able to combine @shamp00's ideas here with this answer here in order to render a CSV to a FileContentResult via a stream on the fly.
Given a FileHelpers DTO Model like so:
[DelimitedRecord(",")]
public class Foo
{
public string Name { get; set; }
public int Id { get; set; }
}
And a controller action:
public FileContentResult DownloadFoosCSV()
{
var foos = GetFoos(); // IEnumerable<Foo>
var fileHelper = new FileHelperEngine<Foo>();
using (var stream = new MemoryStream())
using (var streamWriter = new StreamWriter(stream, Encoding.UTF8))
{
fileHelper.WriteStream(streamWriter, foos);
streamWriter.Flush();
return File(stream.ToArray(), "application/csv", "NewFoos.csv");
}
}
You can try CsvActionResult described at http://develoq.net/2011/export-csv-actionresult-for-asp-net-mvc/
The same way you'd write any other file: use FileResult and its descendants.
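For instance, here is a minimal sketch (the column names and values are illustrative) that builds the CSV in memory and returns it through the File() helper, which yields a FileContentResult:
public FileContentResult ExportCsv()
{
    var sb = new StringBuilder();
    sb.AppendLine("Id,Name");
    sb.AppendLine("1,Foo");
    // text/csv is the standard MIME type for CSV downloads.
    return File(Encoding.UTF8.GetBytes(sb.ToString()), "text/csv", "export.csv");
}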
I've been using http://www.filehelpers.net/ in an ASP.NET MVC application. Look at the getting started guide; you should get what you need from there.