Query SQLite database for a GUID in the form X'3D98F71F3CD9415BA978C010B1CEF941' - iOS

I have an iOS project and data is written into an SQLite database. For example, 'OBJECTROWID' in a table LDOCLINK stores info about a linked document.
OBJECTROWID starts off as a string with the format <3d98f71f 3cd9415b a978c010 b1cef941> but is cast to (NSData *) before being inserted into the database. The actual handling of the database insertion was written by a much more experienced programmer than myself. Anyway, as the image below shows, the database displays the OBJECTROWID column in the form X'3D98F71F3CD9415BA978C010B1CEF941'. I am a complete beginner with SQLite queries and cannot seem to return the correct row by using the WHERE clause with OBJECTROWID = or OBJECTROWID LIKE.
SELECT * FROM LDOCLINK WHERE OBJECTROWID like '%';
gives all the rows (obviously) but I want the row where OBJECTROWID equals <3d98f71f 3cd9415b a978c010 b1cef941>. I have tried the following and none of them work:
SELECT * FROM LDOCLINK WHERE OBJECTROWID = 'X''3d98f71f3cd9415ba978c010b1cef941' (no error; I thought I was escaping the single quote that appears after the X, but this didn't work)
SELECT * FROM LDOCLINK WHERE OBJECTROWID like '%<3d98f71f 3cd9415b a978c010 b1cef941>%'
I cannot even get a match for two adjacent characters such as the initial 3D:
SELECT * FROM LDOCLINK WHERE OBJECTROWID like '%3d%' (no error reported, but it doesn't return anything)
SELECT * FROM LDOCLINK WHERE OBJECTROWID like '%d%' (this is the strangest result, as it returns ONLY the two rows that DON'T include my <3d98f71f 3cd9415b a978c010 b1cef941>, seemingly arbitrarily)
SELECT * FROM LDOCLINK WHERE OBJECTTYPE = '0' (returns these same rows, just to illustrate that the interface, SQLite Manager, works)
I also checked out this question and this one but I still could not get the correct query.
Please help me to return the correct row (actually two rows in this case - the first and third).
EDIT:
The code that writes to the database involves many classes. The method shown below is, I think, the main part of the serialisation (case 8).
-(void)serializeValue:(NSObject*)value ToBuffer:(NSMutableData*)buffer
{
    switch (self.propertyTypeID) {
        case 0:
        {
            SInt32 length = 0;
            if ( (NSString*)value )
            {
                /*
                NSData* data = [((NSString*)value) dataUsingEncoding:NSUnicodeStringEncoding];
                // first 2 bytes are unicode prefix
                length = data.length - 2;
                [buffer appendBytes:&length length:sizeof(SInt32)];
                if ( length > 0 )
                    [buffer appendBytes:([data bytes]+2) length:length];
                */
                NSData* data = [((NSString*)value) dataUsingEncoding:NSUTF8StringEncoding];
                length = data.length;
                [buffer appendBytes:&length length:sizeof(SInt32)];
                if ( length > 0 )
                    [buffer appendBytes:([data bytes]) length:length];
            }
            else
                [buffer appendBytes:&length length:sizeof(SInt32)];
        }
        break;
        // depends on the realisation of DB serialisation
        case 1:
        {
            Byte b = 0;
            if ( (NSNumber*)value )
                b = [(NSNumber*)value boolValue] ? 1 : 0;
            [buffer appendBytes:&b length:1];
        }
        break;
        // ........
        case 8:
        {
            int length = 16;
            [buffer appendBytes:[(NSData*)value bytes] length:length];
        }
        break;
        default:
            break;
    }
}

So, as pointed out by Tom Kerr, this post answered my question. Almost. The syntax wasn't exactly right: the form SELECT * FROM LDOCLINK WHERE OBJECTROWID.Id = X'a8828ddfef224d36935a1c66ae86ebb3'; was suggested, but I actually had to drop the .Id part, making:
SELECT * FROM LDOCLINK WHERE OBJECTROWID = X'3d98f71f3cd9415ba978c010b1cef941';
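For anyone who wants to reproduce this behaviour outside SQLite Manager, here is a minimal sketch using Python's sqlite3 module with a hypothetical LDOCLINK table (the table and column names only mirror the question; the real schema may differ):

```python
import sqlite3

# Hypothetical in-memory table mirroring the question's BLOB column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LDOCLINK (OBJECTROWID BLOB, OBJECTTYPE INTEGER)")

guid = bytes.fromhex("3d98f71f3cd9415ba978c010b1cef941")
conn.execute("INSERT INTO LDOCLINK VALUES (?, ?)", (guid, 0))
conn.execute("INSERT INTO LDOCLINK VALUES (?, ?)", (b"\x00" * 16, 1))

# Option 1: a blob literal in the SQL text. Note there are no quotes
# around the X'...' form itself, and no spaces inside the hex digits.
rows = conn.execute(
    "SELECT OBJECTTYPE FROM LDOCLINK "
    "WHERE OBJECTROWID = X'3d98f71f3cd9415ba978c010b1cef941'"
).fetchall()

# Option 2: bind the raw bytes as a parameter instead of a literal.
rows2 = conn.execute(
    "SELECT OBJECTTYPE FROM LDOCLINK WHERE OBJECTROWID = ?", (guid,)
).fetchall()
```

Both queries match the same row, which is why LIKE against a string pattern fails: the column holds a BLOB, not the text "<3d98f71f ...>".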

Related

writing to flash memory dspic33e

I have some questions regarding the flash memory with a dspic33ep512mu810.
I'm aware of how it should be done:
set all the registers for address, latches, etc., then do the sequence to start the write procedure or call the builtin functions.
But I find that there are some small differences between what I'm experiencing and what is in the DOC.
When writing the flash in WORD mode, the DOC is pretty straightforward. Following is the example code from the DOC:
int varWord1L = 0xXXXX;
int varWord1H = 0x00XX;
int varWord2L = 0xXXXX;
int varWord2H = 0x00XX;
int TargetWriteAddressL; // bits<15:0>
int TargetWriteAddressH; // bits<22:16>
NVMCON = 0x4001; // Set WREN and word program mode
TBLPAG = 0xFA; // write latch upper address
NVMADR = TargetWriteAddressL; // set target write address
NVMADRU = TargetWriteAddressH;
__builtin_tblwtl(0,varWord1L); // load write latches
__builtin_tblwth(0,varWord1H);
__builtin_tblwtl(0x2,varWord2L);
__builtin_tblwth(0x2,varWord2H);
__builtin_disi(5); // Disable interrupts for NVM unlock sequence
__builtin_write_NVM(); // initiate write
while(NVMCONbits.WR == 1);
But that code doesn't work depending on the address where I want to write. I found a fix to write one WORD, but I can't write 2 WORDs where I want. I store everything in the aux memory, so the upper address (NVMADRU) is always 0x7F for me; NVMADR is the address I can change. What I'm seeing is that if the address where I want to write modulo 4 is not 0, then I have to put my value in the last two latches; otherwise I have to put the value in the first latches.
If the address modulo 4 is not zero, it doesn't work like the doc code (above): the value that ends up at the address is whatever is in the second set of latches.
I fixed it for writing only one word at a time like this:
if (Address % 4)
{
    __builtin_tblwtl(0, 0xFFFF);
    __builtin_tblwth(0, 0x00FF);
    __builtin_tblwtl(2, ValueL);
    __builtin_tblwth(2, ValueH);
}
else
{
    __builtin_tblwtl(0, ValueL);
    __builtin_tblwth(0, ValueH);
    __builtin_tblwtl(2, 0xFFFF);
    __builtin_tblwth(2, 0x00FF);
}
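To see the workaround's latch choice in isolation, here is a pure-Python illustration of the address % 4 decision (not dsPIC code; the function name and erased-word values are made up for the sketch):

```python
def latch_words(address, value_l, value_h, erased=(0xFFFF, 0x00FF)):
    """Pick which of the two write latches receives the value, based on
    address % 4, mirroring the if/else workaround above. Returns a list
    of (low word, high word) pairs for latch offsets 0 and 2; the unused
    latch pair is left in the erased state."""
    if address % 4:
        return [erased, (value_l, value_h)]
    return [(value_l, value_h), erased]

# 0xE7FA % 4 == 2, so the value goes to the second latch pair.
latch_words(0xE7FA, 0x1234, 0x0056)
# 0xE7F8 % 4 == 0, so the value goes to the first latch pair.
latch_words(0xE7F8, 0x1234, 0x0056)
```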
1) I want to know why I'm seeing this behavior.
2) I also want to write a full row.
That also doesn't seem to work for me, and I don't know why, because I'm doing what is in the DOC.
I tried a simple row-write and at the end I just read back the first 3 or 4 elements that I wrote to see if it worked:
NVMCON = 0x4002; //set for row programming
TBLPAG = 0x00FA; //set address for the write latches
NVMADRU = 0x007F; //upper address of the aux memory
NVMADR = 0xE7FA;
int latchoffset;
latchoffset = 0;
__builtin_tblwtl(latchoffset, 0);
__builtin_tblwth(latchoffset, 0); //current = 0, available = 1
latchoffset+=2;
__builtin_tblwtl(latchoffset, 1);
__builtin_tblwth(latchoffset, 1); //current = 0, available = 1
latchoffset+=2;
// ... all the way up to 127 (I know I could have done it in a loop) ...
__builtin_tblwtl(latchoffset, 127);
__builtin_tblwth(latchoffset, 127);
INTCON2bits.GIE = 0; //stop interrupt
__builtin_write_NVM();
while(NVMCONbits.WR == 1);
INTCON2bits.GIE = 1; //start interrupt
int testaddress;
testaddress = 0xE7FA;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
What I see is that the value stored at address 0xE7FA is 125, at 0xE7FC it is 126, and at 0xE7FE it is 127. The rest are all 0xFFFF.
Why is it taking only the last 3 latches and writing them to the first 3 addresses?
Thanks in advance for your help people.
The dsPIC33 program memory space is treated as 24 bits wide; it is more appropriate to think of each address of the program memory as a lower and upper word, with the upper byte of the upper word being unimplemented. (dsPIC33EPXXX datasheet)
There is a phantom byte every two program words.
Your code
if (Address % 4)
{
    __builtin_tblwtl(0, 0xFFFF);
    __builtin_tblwth(0, 0x00FF);
    __builtin_tblwtl(2, ValueL);
    __builtin_tblwth(2, ValueH);
}
else
{
    __builtin_tblwtl(0, ValueL);
    __builtin_tblwth(0, ValueH);
    __builtin_tblwtl(2, 0xFFFF);
    __builtin_tblwth(2, 0x00FF);
}
...will be fine for writing a bootloader that generates values from a valid Intel HEX file, but it doesn't make storing data structures simple, because the phantom byte is not taken into account.
If you create a uint32_t variable and look at the compiled HEX file, you'll notice that it in fact uses the least significant words of two 24-bit program words. I.e. the 32-bit value is placed into a 64-bit range, but only 48 of those 64 bits are programmable; the others are phantom bytes (or zeros). That leaves three actually-programmable bytes per group of four addresses.
What I tend to do if writing data is to keep everything 32-bit aligned and do the same as the compiler does.
Writing:
UINT32 value = ....;
:
__builtin_tblwtl(0, value.word.word_L); // least significant word of 32-bit value placed here
__builtin_tblwth(0, 0x00); // phantom byte + unused byte
__builtin_tblwtl(2, value.word.word_H); // most significant word of 32-bit value placed here
__builtin_tblwth(2, 0x00); // phantom byte + unused byte
Reading:
UINT32 *value;
:
value->word.word_L = __builtin_tblrdl(offset);
value->word.word_H = __builtin_tblrdl(offset+2);
UINT32 structure:
typedef union _UINT32 {
    uint32_t val32;
    struct {
        uint16_t word_L;
        uint16_t word_H;
    } word;
    uint8_t bytes[4];
} UINT32;
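To make the layout concrete, the way the UINT32 union splits a value across the two program words can be sketched in Python (an illustration of the bit arithmetic only, not compiler output):

```python
def split_uint32(value):
    """Split a 32-bit value the way the UINT32 union views it:
    word_L is the low 16 bits, word_H is the high 16 bits."""
    word_L = value & 0xFFFF
    word_H = (value >> 16) & 0xFFFF
    return word_L, word_H

def program_words(value):
    """Lay the two 16-bit halves out as the lower words of two 24-bit
    program words; the upper bytes stay 0 (phantom byte + unused byte),
    matching the tblwth(..., 0x00) calls above."""
    word_L, word_H = split_uint32(value)
    return [(word_L, 0x00), (word_H, 0x00)]

lo, hi = split_uint32(0x12345678)
# lo == 0x5678 (goes to latch offset 0), hi == 0x1234 (latch offset 2)
```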

Snowflake JSON to tabular

I was reading through the Snowflake documentation and haven't found a solution yet, so I come to you. I have a table in Snowflake which contains a variant column where I store JSON data. Do you know of a way to dynamically convert the results of a query on a variant column to a tabular format?
For example I have a query like
select json_data from database.schema.table limit 2
Which would return something like
JSON_DATA
{"EventName": "Test", "EventValue": 100}
{"EventName": "Test", "EventValue": 200}
Is there a way to return it as a table without having to reference the keys? I know I can do
select
json_data['EventName'] EventName,
json_data['EventValue'] EventValue
from
database.schema.table
But I am looking for something more dynamic like
select * from table(json_to_table(select json_data from database.schema.table)) limit 2
That could return
EventName | EventValue
----------|-----------
Test      | 100
Test      | 200
I'm looking for any internal solutions (like stored procedures, udf, snowflake functions I might have missed...anything except external functions)
While there's no way to create dynamic column lists currently, as described in the comment you can run a stored procedure to build (and rebuild) a view. This will avoid having to manually type and maintain a long list of columns.
After creating the SP at the bottom, you can use it like this:
create or replace table MY_TABLE(JSON_DATA variant);
insert into MY_TABLE select parse_json('{"EventName": "Test", "EventValue": 100}');
insert into MY_TABLE select parse_json('{"EventName": "Test", "EventValue": 200}');
call create_view_over_json('MY_TABLE', 'JSON_DATA', 'MY_VIEW');
select * from MY_VIEW;
Here is the stored procedure to create the view. Note that if the table is very large it will take Snowflake's TYPEOF() function quite a while to determine a column type. If it's known to be consistent, you can point it to a sample table or one created with a limit 1000.
create or replace procedure create_view_over_json (TABLE_NAME varchar, COL_NAME varchar, VIEW_NAME varchar)
returns varchar
language javascript
as
$$
/****************************************************************************************************************
* *
* CREATE_VIEW_OVER_JSON - Craig Warman, Alan Eldridge and Greg Pavlik Snowflake Computing, 2019, 2020, 2021 *
* *
* This stored procedure creates a view on a table that contains JSON data in a column. *
* of type VARIANT. It can be used for easily generating views that enable access to *
* this data for BI tools without the need for manual view creation based on the underlying *
* JSON document structure. *
* *
* Parameters: *
* TABLE_NAME - Name of table that contains the semi-structured data. *
* COL_NAME - Name of VARIANT column in the aforementioned table. *
* VIEW_NAME - Name of view to be created by this stored procedure. *
* *
* Usage Example: *
* call create_view_over_json('db.schema.semistruct_data', 'variant_col', 'db.schema.semistruct_data_vw'); *
* *
* Important notes: *
* - This is the "basic" version of a more sophisticated procedure. Its primary purpose *
* is to illustrate the view generation concept. *
* - This version of the procedure does not support: *
* - Column case preservation (all view column names will be case-insensitive). *
* - JSON document attributes that are SQL reserved words (like TYPE or NUMBER). *
* - "Exploding" arrays into separate view columns - instead, arrays are simply *
* materialized as view columns of type ARRAY. *
* - Execution of this procedure may take an extended period of time for very *
* large datasets, or for datasets with a wide variety of document attributes *
* (since the view will have a large number of columns). *
* *
* Attribution: *
* I leveraged code developed by Alan Eldridge as the basis for this stored procedure. *
* *
****************************************************************************************************************/
var currentActivity;
try{
currentActivity = "building the query for column types";
var elementQuery = GetElementQuery(TABLE_NAME, COL_NAME);
currentActivity = "running the query to get column names";
var elementRS = GetResultSet(elementQuery);
currentActivity = "building the column list";
var colList = GetColumnList(elementRS);
currentActivity = "building the view's DDL";
var viewDDL = GetViewDDL(VIEW_NAME, colList, TABLE_NAME);
currentActivity = "creating the view";
return ExecuteSingleValueQuery("status", viewDDL);
}
catch(err){
return "ERROR: Encountered an error while " + currentActivity + ".\n" + err.message;
}
/****************************************************************************************************************
* *
* End of main function. Helper functions below. *
* *
****************************************************************************************************************/
function GetElementQuery(tableName, columnName){
// Build a query that returns a list of elements which will be used to build the column list for the CREATE VIEW statement
sql =
`
SELECT DISTINCT regexp_replace(regexp_replace(f.path,'\\\\[(.+)\\\\]'),'(\\\\w+)','\"\\\\1\"') AS path_name, -- This generates paths with levels enclosed by double quotes (ex: "path"."to"."element"). It also strips any bracket-enclosed array element references (like "[0]")
DECODE (substr(typeof(f.value),1,1),'A','ARRAY','B','BOOLEAN','I','FLOAT','D','FLOAT','STRING') AS attribute_type, -- This generates column datatypes of ARRAY, BOOLEAN, FLOAT, and STRING only
REGEXP_REPLACE(REGEXP_REPLACE(f.path, '\\\\[(.+)\\\\]'),'[^a-zA-Z0-9]','_') AS alias_name -- This generates column aliases based on the path
FROM
#~TABLE_NAME~#,
LATERAL FLATTEN(#~COL_NAME~#, RECURSIVE=>true) f
WHERE TYPEOF(f.value) != 'OBJECT'
AND NOT contains(f.path, '['); -- This prevents traversal down into arrays
`;
sql = sql.replace(/#~TABLE_NAME~#/g, tableName);
sql = sql.replace(/#~COL_NAME~#/g, columnName);
return sql;
}
function GetColumnList(elementRS){
/*
Add elements and datatypes to the column list
They will look something like this when added:
col_name:"name"."first"::STRING as name_first,
col_name:"name"."last"::STRING as name_last
*/
var col_list = "";
while (elementRS.next()) {
if (col_list != "") {
col_list += ", \n";
}
col_list += COL_NAME + ":" + elementRS.getColumnValue("PATH_NAME"); // Start with the element path name
col_list += "::" + elementRS.getColumnValue("ATTRIBUTE_TYPE"); // Add the datatype
col_list += " as " + elementRS.getColumnValue("ALIAS_NAME"); // And finally the element alias
}
return col_list;
}
function GetViewDDL(viewName, columnList, tableName){
sql =
`
create or replace view #~VIEW_NAME~# as
select
#~COLUMN_LIST~#
from #~TABLE_NAME~#;
`;
sql = sql.replace(/#~VIEW_NAME~#/g, viewName);
sql = sql.replace(/#~COLUMN_LIST~#/g, columnList);
sql = sql.replace(/#~TABLE_NAME~#/g, tableName);
return sql;
}
/****************************************************************************************************************
* *
* Library functions *
* *
****************************************************************************************************************/
function ExecuteSingleValueQuery(columnName, queryString) {
var out;
cmd1 = {sqlText: queryString};
stmt = snowflake.createStatement(cmd1);
var rs;
try{
rs = stmt.execute();
rs.next();
return rs.getColumnValue(columnName);
}
catch(err) {
throw err;
}
return out;
}
function GetResultSet(sql){
try{
cmd1 = {sqlText: sql};
stmt = snowflake.createStatement(cmd1);
var rs;
rs = stmt.execute();
return rs;
}
catch(err) {
throw err;
}
}
$$;
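The core idea of the procedure (discover the distinct paths in the JSON, then build a column list from them) can be sketched outside Snowflake. This Python illustration handles flat objects only and omits the type casts; it is not part of the stored procedure:

```python
import json

# Two sample rows, matching the question's JSON_DATA values.
rows = [
    '{"EventName": "Test", "EventValue": 100}',
    '{"EventName": "Test", "EventValue": 200}',
]

def flatten(obj, path=()):
    """Yield (path, value) for every leaf, roughly what
    LATERAL FLATTEN(..., RECURSIVE=>true) does for nested objects."""
    if isinstance(obj, dict):
        for key, val in obj.items():
            yield from flatten(val, path + (key,))
    else:
        yield path, obj

# Discover the distinct paths across all rows (the SP's SELECT DISTINCT step).
columns = []
for row in rows:
    for path, _ in flatten(json.loads(row)):
        name = "_".join(path)
        if name not in columns:
            columns.append(name)

# Build the projection the generated view would select (casts omitted).
col_list = ", ".join('json_data:"{0}" as {0}'.format(c) for c in columns)
# col_list == 'json_data:"EventName" as EventName, json_data:"EventValue" as EventValue'
```

The stored procedure does the same discovery in SQL, then wraps the column list in a CREATE OR REPLACE VIEW statement.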

Single array in the hdf5 file

Image of my dataset:
I am using HDF5DotNet with C# and I can only read the full dataset, as shown in the attached image. The hdf5 file is too big, up to nearly 10GB, and if I load the whole array into memory then it runs out of memory.
I would like to read all the data from rows 5 and 7 in the attached image. Is there any way to read only these 2 rows of data into memory at a time, without having to load all the data into memory first?
private static void OpenH5File()
{
    var h5FileId = H5F.open(@"D:\Sandbox\Flood Modeller\Document\xmdf_results\FMA_T1_10ft_001.xmdf", H5F.OpenMode.ACC_RDONLY);
    string dataSetName = "/FMA_T1_10ft_001/Temporal/Depth/Values";
    var dataset = H5D.open(h5FileId, dataSetName);
    var space = H5D.getSpace(dataset);
    var dataType = H5D.getType(dataset);
    long[] offset = new long[2];
    long[] count = new long[2];
    long[] stride = new long[2];
    long[] block = new long[2];
    offset[0] = 1; // start at row 5
    offset[1] = 2; // start at column 0
    count[0] = 2; // read 2 rows
    count[0] = 165701; // read all columns
    stride[0] = 0; // don't skip anything
    stride[1] = 0;
    block[0] = 1; // blocks are single elements
    block[1] = 1;
    // Dataspace associated with the dataset in the file
    // Select a hyperslab from the file dataspace
    H5S.selectHyperslab(space, H5S.SelectOperator.SET, offset, count, block);
    // Dimensions of the file dataspace
    var dims = H5S.getSimpleExtentDims(space);
    // We also need a memory dataspace which is the same size as the file dataspace
    var memspace = H5S.create_simple(2, dims);
    double[,] dataArray = new double[1, dims[1]]; // just get one array
    var wrapArray = new H5Array<double>(dataArray);
    // Now we can read the hyperslab
    H5D.read(dataset, dataType, memspace, space,
        new H5PropertyListId(H5P.Template.DEFAULT), wrapArray);
}
You need to select a hyperslab which has the correct offset, count, stride, and block for the subset of the dataset that you wish to read. These are all arrays which have the same number of dimensions as your dataset.
The block is the size of the element block in each dimension to read, i.e. 1 is a single element.
The offset is the number of blocks from the start of the dataset to start reading, and count is the number of blocks to read.
You can select non-contiguous regions by using stride, which again counts in blocks.
I'm afraid I don't know C#, so the following is in C. In your example, you would have:
hsize_t offset[2], count[2], stride[2], block[2];
offset[0] = 5; // start at row 5
offset[1] = 0; // start at column 0
count[0] = 2; // read 2 rows
count[1] = 165702; // read all columns
stride[0] = 1; // don't skip anything
stride[1] = 1;
block[0] = 1; // blocks are single elements
block[1] = 1;
// This assumes you already have an open dataspace with ID dataspace_id
H5Sselect_hyperslab(dataspace_id, H5S_SELECT_SET, offset, stride, count, block)
You can find more information on reading/writing hyperslabs in the HDF5 tutorial.
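In one dimension, the selection semantics described above can be sketched in plain Python (an illustration of offset/stride/count/block, not the HDF5 API itself):

```python
def hyperslab_indices(offset, stride, count, block):
    """Return the 1-D indices an HDF5-style hyperslab selects:
    `count` blocks of `block` contiguous elements each, the block
    starts spaced `stride` apart, beginning at `offset`."""
    indices = []
    for i in range(count):
        start = offset + i * stride
        indices.extend(range(start, start + block))
    return indices

# Rows 5 and 6 (2 contiguous rows): offset=5, stride=1, count=2, block=1
hyperslab_indices(5, 1, 2, 1)  # [5, 6]
# Rows 5 and 7 (non-contiguous, via stride): offset=5, stride=2, count=2, block=1
hyperslab_indices(5, 2, 2, 1)  # [5, 7]
```

The same logic applies per dimension in the 2-D case; the row dimension uses one (offset, stride, count, block) tuple and the column dimension another.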
It seems there are two forms of H5D.read in C#, you want the second form:
H5D.read(Type) Method (H5DataSetId, H5DataTypeId, H5DataSpaceId,
H5DataSpaceId, H5PropertyListId, H5Array(Type))
This allows you specify the memory and file dataspaces. Essentially, you need one dataspace which has information about the size, stride, offset, etc. of the variable in memory that you want to read into; and one dataspace for the dataset in the file that you want to read from. This lets you do things like read from a non-contiguous region in a file to a contiguous region in an array in memory.
You want something like
// Dataspace associated with the dataset in the file
var dataspace = H5D.get_space(dataset);
// Select a hyperslab from the file dataspace
H5S.selectHyperslab(dataspace, H5S.SelectOperator.SET, offset, count);
// Dimensions of the file dataspace
var dims = H5S.getSimpleExtentDims(dataspace);
// We also need a memory dataspace which is the same size as the file dataspace
var memspace = H5S.create_simple(rank, dims);
// Now we can read the hyperslab
H5D.read(dataset, datatype, memspace, dataspace,
new H5PropertyListId(H5P.Template.DEFAULT), wrapArray);
From your posted code, I think I've spotted the problem. First you do this:
var space = H5D.getSpace(dataset);
then you do
var dataspace = H5D.getSpace(dataset);
These two calls do the same thing, but create two different variables
You call H5S.selectHyperslab with space, but H5D.read uses dataspace.
You need to make sure you are using the correct variables consistently. If you remove the second call to H5D.getSpace, and change dataspace -> space, it should work.
Maybe you want to have a look at HDFql, as it abstracts you from the low-level details of HDF5. Using HDFql in C#, you can read rows #5 and #7 of dataset Values using a hyperslab selection like this:
float[,] data = new float[2, 165702];
HDFql.Execute("SELECT FROM Values(5:2:2:1) INTO MEMORY " + HDFql.VariableTransientRegister(data));
Afterwards, you can access these rows through variable data. Example:
for (int x = 0; x < 2; x++)
{
    for (int y = 0; y < 165702; y++)
    {
        System.Console.WriteLine(data[x, y]);
    }
}

LINQ, Skip and Take against Azure SQL Database Not Working

I'm pulling a paged dataset in an ASP.NET MVC3 application which uses jQuery to get data for endless-scroll paging via an $.ajax call. The backend is an Azure SQL database. Here is the code:
[Authorize]
[OutputCache(Duration = 0, NoStore = true)]
public PartialViewResult Search(int page = 1)
{
    int batch = 10;
    int fromRecord = 1;
    int toRecord = batch;
    if (page != 1)
    {
        // note these are correctly passed and set
        toRecord = (batch * page);
        fromRecord = (toRecord - (batch - 1));
    }
    IQueryable<TheTable> query;
    query = context.TheTable.Where(m => m.Username == HttpContext.User.Identity.Name)
        .OrderByDescending(d => d.CreatedOn)
        .Skip(fromRecord).Take(toRecord);
    // this should always be the batch size (10)
    // but seems to concatenate the previous results ???
    int count = query.ToList().Count();
    // results
    // call #1, count = 10
    // call #2, count = 20
    // call #3, count = 30
    // etc...
    PartialViewResult newPartialView = PartialView("Dashboard/_DataList", query.ToList());
    return newPartialView;
}
The data returned from each jQuery $.ajax call continues to GROW on each subsequent call rather than returning only 10 records per call, so the results contain all of the earlier calls' data as well. Also, I've added 'cache=false' to the $.ajax call. Any ideas on what is going wrong here?
The values you're passing to Skip and Take are wrong.
The argument to Skip should be the number of records you want to skip, which should be 0 on the first page.
The argument to Take needs to be the number of records you want to return, which will always be equal to batch.
Your code needs to be:
int batch = 10;
int fromRecord = 0;
int toRecord = batch;
if (page != 1)
{
    fromRecord = batch * (page - 1);
}
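The corrected arithmetic is easy to sanity-check outside LINQ; here is a Python sketch of the intended Skip/Take behaviour, with list slicing standing in for the query:

```python
def page_slice(items, page, batch=10):
    """Return one page of results: skip batch * (page - 1) records,
    then take batch records, as the corrected code above does."""
    skip = batch * (page - 1)
    return items[skip:skip + batch]

records = list(range(35))  # 35 fake records, newest-first ordering assumed

page_slice(records, 1)  # first 10 records: 0..9
page_slice(records, 2)  # next 10 records: 10..19
page_slice(records, 4)  # last, partial page: 30..34
```

The original bug passed a growing toRecord (batch * page) to Take, so each page returned everything up to and including itself.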

Getting the index of an array element of EDT Dimension

I need to write a job where I could fetch the index of an array element of EDT Dimension.
e.g. In my EDT Dimension I have array elements A, B, C; when I click on them for properties I see the index for A as 1, B as 2 and C as 3. Now, with a job, I want to fetch the index value. Kindly assist.
I'm not sure if I understood the real problem. Some code sample could help.
The Dimensions Table has some useful methods like arrayIdx2Code.
Maybe the following code helps:
static void Job1(Args _args)
{
    Counter idx;
    Dimension dimension;
    DimensionCode dimensionCode;
    str name;
    ;
    for (idx = 1; idx <= dimof(dimension); idx++)
    {
        dimensionCode = Dimensions::arrayIdx2Code(idx);
        name = enum2str(dimensionCode);
        // if (name == 'B') ...
        info(strfmt("%1: %2", idx, name));
    }
}
I found a way, but I'm still looking to see if there is any other solution.
static void Job10(Args _args)
{
    DictType dicttype;
    counter i;
    str test;
    ;
    test = "Client";
    dicttype = new DictType(132); // 132 here is the id of the EDT Dimension
    for (i = 1; i <= dicttype.arraySize(); i++)
    {
        if (dicttype.label(i) == test)
        {
            break;
        }
    }
    print i;
    pause;
}
Array elements A, B, C from your example are nothing but simple labels; they cannot be used as identifiers. For user convenience the labels can be modified at any time, and even if they aren't, they differ between languages, and so on.
Overall your approach (querying DictType) is correct, but I cannot think of any scenario that would actually require such code.
If you clarified your business requirements someone could come up with a better solution.
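What Job10 boils down to (scan the labels until one matches and return its 1-based index) can be sketched like this (a Python illustration, not X++):

```python
def label_index(labels, target):
    """Return the 1-based index of the first label equal to target,
    mirroring the dicttype.label(i) loop above; 0 if not found."""
    for i, label in enumerate(labels, start=1):
        if label == target:
            return i
    return 0

label_index(["A", "B", "C"], "B")  # 2, matching the question's example
```

As noted above, this is fragile by design: matching on display labels breaks the moment a label is edited or the UI language changes.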
