How to use Slickgrid Formatters with MVC - asp.net-mvc

I am working on my first SlickGrid MVC application, where the column definitions and formats are to be stored in a database. I can retrieve the list of columns quite happily and populate them, until I ran into an issue with the formatting of dates. No problem: for each date (or time) column I can store a formatter name in the database so this can be retrieved as well. I'm using the following code, which works OK:
CLOP_ViewColumnsDataContext columnDB = new CLOP_ViewColumnsDataContext();
var results = from u in columnDB.CLOP_VIEW_COLUMNs
              select u;
List<dynColumns> newColumns = new List<dynColumns>();
foreach (CLOP_VIEW_COLUMN column in results)
{
    newColumns.Add(new dynColumns
    {
        id = column.COLUMN_NUMBER.ToString(),
        name = column.HEADING.Trim(),
        field = column.VIEW_FIELD.Trim(),
        width = column.WIDTH,
        formatter = column.FORMATTER.Trim()
    });
}
var gridColumns = new JavaScriptSerializer().Serialize(newColumns);
This is all fine apart from the formatter. An example of the variable gridColumns is:
[{"id":"1","name":"Date","field":"SCHEDULED_DATE","width":100,"formatter":"Slick.Formatters.Date"},{"id":"2","name":"Carrier","field":"CARRIER","width":50,"formatter":null}]
This doesn't look too bad; however, the application then fails with the error "Microsoft JScript runtime error: Function expected" in the slick.grid.js script.
Any help much appreciated - even if there is a better way of doing this!

You are assigning a string to the formatter property, which is expected to be a function.
Try:
window["Slick"]["Formatters"]["Date"];
But I really think you should reconsider doing it this way, and instead store your values in the database and define your columns in code.
It will be easier to maintain and is less error-prone.
What if you decide to use custom editors and formatters, which you later rename?
Then your code will break or you'll have to rename all entries in the db as well as in code.

Related

Illegal sap.ui.model.odata.type.DateTimeOffset

Posting for posterity. I had zero hits from Google.
I'm writing an SAP Business One Web Client extension. I'm using the worklist template and have the b1s/v2/ Service Layer metadata (auto-generated while setting up the template).
Running the sandbox version ("npm run start-local"), it automatically generates fake data based on the metadata. My data included an Edm.DateTimeOffset, which is resolved by Fiori using the sap.ui.model.odata.type.DateTimeOffset type.
Here is an example response object from the test data proxy (all autogenerated)
{
    DocEntry: 1,
    U_CardCode: "U_CardCode_0",
    U_CardName: "U_CardName_0",
    U_DocCurrency: "U_DocCurrency_0",
    U_DocTotal: "",
    U_DueDate: "2017-04-13T14:50:39.000Z",
    U_Status: "U_Status_0",
    U_SupplierInvNo: "U_SupplierInvNo_0",
}
A perfectly normal U_DueDate value that, according to all the documentation I could find, is an accepted format, doubly confirmed by the fact that it's an SAP-generated value.
This produces an error on screen
Illegal sap.ui.model.odata.type.DateTimeOffset
Adding a formatter doesn't work. If it's unable to parse the value then it won't pass it on to a formatter.
It turns out there is a way to override the expected type in the binding itself. I couldn't find much documentation on it, but I changed the element from
text="{'U_DueDate'}" />
to
text="{
path: 'U_DueDate',
targetType: 'any',
formatter: '.formatter.date'
}" />
The date is now accepted as a string so I can use a custom formatter.
Here is the date function in the formatter:
date: function (sValue) {
    if (!sValue) {
        return "";
    }
    var date = new Date(sValue);
    var asString = date.toLocaleDateString("en-GB");
    return asString;
}
You don't have to hard-code the localization, but my use case is that niche.
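If the hard-coded locale ever becomes a limitation, one locale-aware variant is to format through sap.ui.core.format.DateFormat, which follows the current UI5 formatting locale. This is only a sketch, assuming the formatter lives in a standard formatter.js AMD module (not part of the original answer):
sap.ui.define([
    "sap/ui/core/format/DateFormat"
], function (DateFormat) {
    "use strict";
    return {
        date: function (sValue) {
            if (!sValue) {
                return "";
            }
            // Formats according to the UI5 formatting locale instead of a fixed "en-GB".
            var oDateFormat = DateFormat.getDateInstance({ style: "medium" });
            return oDateFormat.format(new Date(sValue));
        }
    };
});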

Different text used for sorting and filtering

In an app, I use jQuery tablesorter and the filter widget: https://mottie.github.io/tablesorter/docs/example-widget-filter.html
I have two main features:
- filtering (the widget)
- sorting (default feature)
Both of these features use the textExtraction() function:
https://mottie.github.io/tablesorter/docs/#textextraction
My problem is the following:
For sorting, I would like to use the computer form of a date, i.e. "2020-04-01".
For filtering, I would like to use the human form (in French: "1er avril 2020").
How can I deal with this?
You might need to use a date library like Sugar or Date.js - check out this demo: https://mottie.github.io/tablesorter/docs/example-parsers-dates.html. What it does is use the parser to convert the filter text into a normalized date that will match the date in the column. You would also need to add a filter-parsed class name to the column (ref).
I have found a solution: I need to use a hook which modifies the value parsed for filtering.
$.tablesorter.filter.types.start = function (config, data) {
    // Filter against the visible cell text instead of the extracted (sortable) value.
    data.exact = data.$cells[data.index];
    data.exact = data.exact.innerText;
    data.iExact = data.exact.toLowerCase();
    return null;
};
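The other half of the requirement (sorting on "2020-04-01" while the cell shows "1er avril 2020") can then come from textExtraction. A sketch, assuming each date cell carries the ISO date in a data-date attribute (that attribute name and the table selector are mine, not from the question); the hook above keeps filtering on the visible French text:
$("#myTable").tablesorter({
    widgets: ["filter"],
    // Sorting uses the machine-readable date stored on the cell,
    // e.g. <td data-date="2020-04-01">1er avril 2020</td>.
    textExtraction: function (node) {
        return node.getAttribute("data-date") || node.innerText;
    }
});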

Snowflake Tasks causing timezone errors in queries

I am running a simple insert query inside a stored procedure with to_timestamp_ntz("column value") along with other columns.
This works fine when I am running it from the Snowflake UI, logged in with my account.
This also works fine when I am calling it using Python scripts from my Visual Studio instance.
The same stored procedure fails when it is being called by a scheduled task.
I am wondering whether it has something to do with the user's timezone of 'System' vs my time zone.
Execution error in store procedure LOAD_Data(): Failed to cast variant
value "2019-11-27T13:42:03.221Z" to TIMESTAMP_NTZ At
Statement.execute, line 24 position 57
I tried providing the timezone as a session parameter in the task and in the stored proc, but that does not seem to address the issue. Any ideas?
I'm guessing (since you didn't include the SQL statement that causes the error) that you are trying to bind a Date object when creating a Statement object. That won't work.
The only parameters you can bind are numbers, strings, null, and the special SfDate object that you can only get from a result set (to my knowledge). Most other parameters must be converted to a string using mydate.toJSON(), JSON.stringify(myobj), etc., before binding, e.g.:
var stmt = snowflake.createStatement(
    { sqlText: `SELECT :1::TIMESTAMP_LTZ NOW`, binds: [(new Date).toJSON()] }
);
Date object errors can be misleading, because Date objects causing an error can be converted and displayed as strings in the error message.
I found the issue:
my task was created from copy-pasted DDL similar to this:
CREATE TASK TASK_LOAD_an_sp
    WAREHOUSE = COMPUTE_WH
    TIMEZONE = 'US/Eastern'
    SCHEDULE = 'USING CRON 0/30 * * * * America/New_York'
    TIMESTAMP_INPUT_FORMAT = 'YYYY-MM-DD HH24'
AS
    Call LOAD_an_sp();
The TIMESTAMP_INPUT_FORMAT setting was causing this.
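If the format parameter can't simply be removed from the task, another option is to have the stored procedure pin the parsing behavior itself. This is only a sketch, and it assumes LOAD_Data() is a caller's-rights JavaScript procedure (owner's-rights procedures are not allowed to run ALTER SESSION):
// At the top of the procedure body: reset timestamp parsing to AUTO so a
// task-level TIMESTAMP_INPUT_FORMAT such as 'YYYY-MM-DD HH24' cannot break
// the cast of "2019-11-27T13:42:03.221Z" to TIMESTAMP_NTZ.
snowflake.execute({
    sqlText: "ALTER SESSION SET TIMESTAMP_INPUT_FORMAT = 'AUTO'"
});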

Entity Framework - Oracle Date Comparison

I have an Oracle database column of type DATE (displayed as dd-MMM-yy). When I try to retrieve an entity from the database using Entity Framework and a date comparison, I get back no records.
I use this method for the query:
var calendar = _repository.Single<Calendar>(c => DbFunctions.TruncateTime(c.CALNDR_DATE) == DbFunctions.TruncateTime(calendardate));
I've also tried it in various ways:
var calendar = _repository.Single<Calendar>(c => c.CALNDR_DATE == calendardate);
I thought this one would work for sure:
var calendar = _repository.Single<Calendar>(c => c.CALNDR_DATE.Year == calendardate.Year && c.CALNDR_DATE.Month == calendardate.Month && c.CALNDR_DATE.Day == calendardate.Day);
No matter what I've tried, I cannot get the method to bring back a calendar record, even though I have verified there is a record with the date I'm passing. The value in the calendardate variable is formatted as mm/dd/yyyy. If I query the database directly using SQL Developer, I get back a result when I search with this query: select * from table where CALNDR_DATE = '17-DEC-15'
But there is no result if I search for select * from braidss.tmmis98 where CALNDR_DATE = '12/17/2015', which is what I see when I check what is generated by the LINQ query.
My question is how can I handle this comparison so the dates match? I have tried several different ways but nothing has worked so far.

DBF Large Char Field

I have a database file that I believe was created with Clipper but can't say for sure (I have .ntx files for indexes, which I understand is what Clipper uses). I am trying to create a C# application that will read this database using the System.Data.OleDb namespace.
For the most part I can successfully read the contents of the tables, but there is one field that I cannot. This field, called CTRLNUMS, is defined as a CHAR(750). I have read various articles found through Google searches suggesting that fields larger than 255 chars have to be read through a different process than the normal assignment to a string variable, but so far none of the approaches I have found has worked for me.
The following is a sample code snippet I am using to read the table; it includes the two options I used to read the CTRLNUMS field. Both options resulted in 238 characters being returned even though there are 750 characters stored in the field.
Here is my connection string:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\datadir;Extended Properties=DBASE IV;
Can anyone tell me the secret to reading larger fields from a DBF file?
using (OleDbConnection conn = new OleDbConnection(connectionString))
{
    conn.Open();
    using (OleDbCommand cmd = new OleDbCommand())
    {
        cmd.Connection = conn;
        cmd.CommandType = CommandType.Text;
        cmd.CommandText = string.Format("SELECT ITEM,CTRLNUMS FROM STUFF WHERE ITEM = '{0}'", stuffId);
        using (OleDbDataReader dr = cmd.ExecuteReader())
        {
            if (dr.Read())
            {
                stuff.StuffId = dr["ITEM"].ToString();

                // OPTION 1
                string ctrlNums = dr["CTRLNUMS"].ToString();

                // OPTION 2
                char[] buffer = new char[750];
                int index = 0;
                int readSize = 5;
                while (index < 750)
                {
                    long charsRead = dr.GetChars(dr.GetOrdinal("CTRLNUMS"), index, buffer, index, readSize);
                    index += (int)charsRead;
                    if (charsRead < readSize)
                    {
                        break;
                    }
                }
            }
        }
    }
}
You can find a description of the DBF structure here: http://www.dbf2002.com/dbf-file-format.html
What I think Clipper used to do was modify the field structure so that, in character fields, the decimal-places byte held the high-order byte of the size, so character field sizes were really 256 * Decimals + Size. That would explain your numbers: Decimals = 2 and Size = 238 gives 2 * 256 + 238 = 750, matching both the declared CHAR(750) and the 238 characters you are getting back.
I may have a C# class that reads DBFs (natively, not through ADO/DAO); it could be modified to handle this case. Let me know if you're interested.
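If you end up parsing the file yourself, the size adjustment is a one-liner once you are walking the 32-byte field descriptors. A rough sketch in Node.js, purely for illustration (offsets follow the dBASE III layout from the link above; the file name is a placeholder):
const fs = require("fs");

// Reads the field descriptor array of a DBF file and applies the Clipper
// long-character convention: for 'C' fields the decimal-count byte is the
// high-order byte of the length (e.g. 2 * 256 + 238 = 750).
function readDbfFields(path) {
    const buf = fs.readFileSync(path);
    const fields = [];
    let offset = 32;                   // descriptors start after the 32-byte file header
    while (buf[offset] !== 0x0d) {     // 0x0D terminates the descriptor array
        const name = buf.toString("ascii", offset, offset + 11).replace(/\0.*$/, "");
        const type = String.fromCharCode(buf[offset + 11]);
        let length = buf[offset + 16];
        const decimals = buf[offset + 17];
        if (type === "C") {
            length = decimals * 256 + length;
        }
        fields.push({ name: name, type: type, length: length });
        offset += 32;
    }
    return fields;
}

console.log(readDbfFields("STUFF.DBF"));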
Are you still looking for an answer? Is this a one-off job or something that needs doing regularly?
I have a Python module that is primarily intended to extract data from all kinds of DBF files ... it doesn't yet handle the length_high_byte = decimal_places hack, but it's a trivial change. I'd be quite happy to (a) share this with you and/or (b) get a copy of such a DBF file for testing.
Added later: Extended-length feature added, and tested against files I've created myself. Offer to share code with anyone who would like to test it still stands. Still interested in getting some "real" files myself for testing.
3 suggestions that might be worth a shot...
1 - use Access to create a linked table to the DBF file, then use .NET to hit the table in the Access database instead of going directly to the DBF.
2 - try the FoxPro OLEDB provider
3 - parse the DBF file by hand. Example is here.
My guess is that #1 should work the easiest, and #3 will give you the opportunity to fine tune your cussing skills. :)
