I'm trying to use the column type with OrionMySQLSink. My agent has this code:
cygnusagent.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.OrionMySQLSink
cygnusagent.sinks.mysql-sink.channel = mysql-channel
cygnusagent.sinks.mysql-sink.enable_grouping = false
cygnusagent.sinks.mysql-sink.mysql_host = localhost
cygnusagent.sinks.mysql-sink.mysql_port = 3306
cygnusagent.sinks.mysql-sink.mysql_username = ********
cygnusagent.sinks.mysql-sink.mysql_password = *********
cygnusagent.sinks.mysql-sink.table_type = table-by-destination
cygnusagent.sinks.mysql-sink.attr_persistence = column
cygnusagent.sinks.mysql-sink.batch_size = 1
cygnusagent.sinks.mysql-sink.batch_timeout = 10
I'm getting this error when I use the sink:
2015-12-14 08:43:58,118 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - com.telefonica.iot.cygnus.sinks.OrionSink.process(OrionSink.java:187)] Persistence error (Unknown database 'fiw-serv')
I don't have any problem with row mode, only with column mode.
That's because the column persistence mode of OrionMySQLSink requires the table to be created in advance. A detailed explanation can be found here. Basically:
Please observe that the same number of attributes is not always notified; this depends on the subscription made to the NGSI-like sender. This is not a problem for the row persistence mode, since a fixed 8-field data row is inserted for each notified attribute. Nevertheless, the column mode may be affected by data rows of different lengths (in terms of fields). Thus, the column mode is only recommended if your subscription is designed to always send the same attributes, even if they were not updated since the last notification.
In addition, when running in column mode, because the number of notified attributes (and therefore the number of fields to be written within the datastore) is unknown to Cygnus, the table cannot be created automatically and must be provisioned prior to the Cygnus execution. That's not the case for row mode, since the number of fields to be written is always constant, regardless of the number of notified attributes.
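For example, a minimal provisioning sketch for the error above (hedged: the 'fiw-serv' database name comes from your error message, while the room1_room table name and the temperature attribute are illustrative, and the exact column set expected in column mode depends on your Cygnus version, so check its documentation):
CREATE DATABASE `fiw-serv`;
-- column mode expects one value column plus one metadata column per notified attribute
CREATE TABLE `fiw-serv`.`room1_room` (
    recvTime text,
    temperature text,
    temperature_md text
);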
Something similar occurs with OrionCKANSink, where the resource, datastore and view must be created in advance.
I'm trying to save an Entry, but Craft errors because of an invalid matrix field. The Entry includes a matrix field, but I haven't changed it; I'm trying to edit another field. When I save the entry manually from the admin panel, it saves fine without any errors.
I have researched this problem online, and a lot of people recommended providing the matrix's IDs when saving the entry. However, even then, I still get an error.
In the code below you'll see that I'm trying to save 3 fields:
Cover Image
Language
Tracks
Each of these fields required me to save them manually because they are relations. That's OK; however, as mentioned earlier, the matrix field (named tracks) errors.
Here's my code:
$criteria = craft()->elements->getCriteria(ElementType::Entry);
$criteria->section = "programmes";

$entry = $criteria->first([
    "slug" => $programme["slug"]
]);

if ($entry) {
    // Update Entry attributes
    $entry->getContent()->coverImage = $entry->coverImage->ids();
    $entry->getContent()->tracks = $entry->tracks->ids();
    $entry->getContent()->language = $entry->language->ids();

    // Save Entry
    if (!craft()->entries->saveEntry($entry)) {
        return $entry->getErrors();
    }
}
The error that comes back is the following:
Argument 1 passed to Craft\MatrixService::validateBlock() must be an instance of Craft\MatrixBlockModel, string given, called in craft/app/fieldtypes/MatrixFieldType.php on line 451
Including the matrix blocks themselves, rather than their IDs, worked. That matches the error above: Craft's MatrixService::validateBlock() expects MatrixBlockModel instances, so handing it ID strings is what triggers the "string given" complaint.
$entry->getContent()->tracks = $entry->tracks->all();
I have an iPhone app where I want to show how many people are currently viewing an item.
I'm doing that by running this transaction when people enter a view (RubyMotion code below, but it functions exactly like the Firebase iOS SDK):
listing_reference[:listings][self.id][:viewing_amount].transaction do |data|
  data.value = data.value.to_i + 1
  FTransactionResult.successWithValue(data)
end
And when they exit the view:
listing_reference[:listings][self.id][:viewing_amount].transaction do |data|
  data.value = data.value.to_i - 1
  FTransactionResult.successWithValue(data)
end
It works fine most of the time, but sometimes things go wrong: the app crashes, people lose connectivity, or similar things.
I've been looking at "onDisconnect" to solve this - https://firebase.google.com/docs/reference/ios/firebasedatabase/interface_f_i_r_database_reference#method-detail - but from what I can see, there's no "onDisconnectRunTransaction".
How can I make sure that the viewing amount on the listing gets decremented no matter what?
A Firebase Database transaction runs as a compare-and-set operation: given the current value of a node, your code specifies the new value. This requires at least one round trip between the client and server, which means that it is inherently unsuitable for onDisconnect() operations.
The onDisconnect() handler is instead a simple set() operation: when you attach the handler, you specify what write operation you want to happen when the server detects that the client has disconnected (either cleanly or, as in your problem case, involuntarily).
The solution is (as is often the case with NoSQL databases) to use a data model that deals with the situation gracefully. In your case, it seems most natural not to store the count of viewers, but instead the uid of each viewer:
itemViewers
  $itemId
    uid_1: true
    uid_2: true
    uid_3: true
Now you can get the number of viewers with a simple value listener:
ref.child('itemViewers').child(itemId).on('value', function(snapshot) {
  console.log(snapshot.numChildren());
});
And use the following onDisconnect() to clean up:
ref.child('itemViewers').child(itemId).child(authData.uid).onDisconnect().remove();
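For completeness, attaching that handler when the user starts viewing would look something like this (a minimal sketch; authData.uid is assumed to come from your authentication flow):
var viewerRef = ref.child('itemViewers').child(itemId).child(authData.uid);
// Queue the server-side cleanup first, then mark this user as viewing
viewerRef.onDisconnect().remove();
viewerRef.set(true);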
These code snippets are in JavaScript syntax, because I only noticed you're using RubyMotion after typing them.
Recently, a program that creates an Access db (a requirement of our downstream partner), adds a table with all memo columns, and then inserts a bunch of records stopped working. Oddly, there were no changes in the environment that I could see and nothing in any diffs that could have affected it. Furthermore, this repros on any machine I've tried, whether it has Office or not and if it has Office, whether it's 32- or 64-bit.
The problem is that when you open the db after the program runs, the destination table is empty and instead there's a MSysCompactError table with a bunch of rows.
Here's the distilled code:
var connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=corrupt.mdb;Jet OLEDB:Engine Type=5";

// create the db and make a table
var cat = new ADOX.Catalog();
try
{
    cat.Create(connectionString);
    var tbl = new ADOX.Table();
    try
    {
        tbl.Name = "tbl";
        tbl.Columns.Append("a", ADOX.DataTypeEnum.adLongVarWChar);
        cat.Tables.Append(tbl);
    }
    finally
    {
        Marshal.ReleaseComObject(tbl);
    }
}
finally
{
    cat.ActiveConnection.Close();
    Marshal.ReleaseComObject(cat);
}

using (var connection = new OleDbConnection(connectionString))
{
    connection.Open();

    // insert a value
    using (var cmd = new OleDbCommand("INSERT INTO [tbl] VALUES ( 'x' )", connection))
        cmd.ExecuteNonQuery();
}
Here are a couple of workarounds I've stumbled into:
If you insert a breakpoint between creating the table and inserting the value, and you open the mdb with Access and close it again, then when the app continues it will not corrupt the database.
Changing the engine type in the connection string from 5 to 4 will create an uncorrupted mdb. You end up with an obsolete mdb version, but the table has values and there's no MSysCompactError. Note that I've tried creating a database this way and then upgrading it to 5 programmatically at the end, with no luck; I end up with a corrupt db in the newest version.
If you change from memo to text fields by changing the adLongVarWChar to adVarWChar, then the database isn't corrupt. You end up with text fields in the db instead of memo, though.
A final note: in my travels, I've seen that MSysCompactError is related to compacting the database, but I'm not doing anything explicit to make the db compact.
Any ideas?
As I replied to HasUp:
According to MS support, creation of Jet databases programmatically is deprecated. I ended up checking in an empty model database and then copying it whenever I needed a new one. See http://support.microsoft.com/kb/318559 for more info.
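A minimal sketch of that model-database approach (the file names here are illustrative):
// Copy a pre-built, empty model database instead of creating one with ADOX
System.IO.File.Copy(@"templates\model.mdb", "output.mdb", false);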
I want to use Amazon DynamoDB with Rails, but I have not found a way to implement pagination.
I will use AWS::Record::HashModel as the ORM.
This ORM supports limits like this:
People.limit(10).each {|person| ... }
But I could not figure out how to implement the following MySQL query in DynamoDB:
SELECT *
FROM `People`
LIMIT 1 , 30
You issue queries using LIMIT. If the subset returned does not contain the full table, a LastEvaluatedKey value is returned. You use this value as the ExclusiveStartKey in the next query. And so on...
From the DynamoDB Developer Guide.
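Here's a minimal sketch of that loop with the AWS SDK for JavaScript (the People table name comes from the question; the rest is illustrative):
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

// Fetch one page of up to 30 items, resuming from the previous page's key
function fetchPage(lastEvaluatedKey, callback) {
  var params = { TableName: 'People', Limit: 30 };
  if (lastEvaluatedKey) {
    params.ExclusiveStartKey = lastEvaluatedKey;
  }
  docClient.scan(params, function(err, data) {
    if (err) { return callback(err); }
    // data.LastEvaluatedKey is absent once the final page has been read
    callback(null, data.Items, data.LastEvaluatedKey);
  });
}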
You can provide 'page-size' in your query to set the result set size.
The response from DynamoDB contains 'LastEvaluatedKey', which indicates the last key of the page. If the response doesn't contain 'LastEvaluatedKey', there are no results left to fetch.
Use the 'LastEvaluatedKey' as the 'ExclusiveStartKey' when fetching the next page.
I hope this helps.
DynamoDB Pagination
Here's a simple copy-paste-run proof of concept (Node.js) for stateless forward/reverse navigation with DynamoDB. In summary: each response includes the navigation history, allowing the user to explicitly and consistently request either the next or previous page (while next/prev params exist):
GET /accounts -> first page
GET /accounts?next=A3r0ijKJ8 -> next page
GET /accounts?prev=R4tY69kUI -> previous page
Considerations:
If your ids are large and/or users might do a lot of navigation, then the potential size of the next/prev params might become too large.
Yes, you do have to store the entire reverse path - if you only store the previous page marker (per some other answers), you will only be able to go back one page.
It won't handle changing pageSize midway; consider baking pageSize into the next/prev value.
Base64-encode the next/prev values, and you could also encrypt them.
Scans are inefficient; while this suited my current requirement, it won't suit all!
// demo.js
const mockTable = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]

const getPagedItems = (pageSize = 5, cursor = {}) => {
  // Parse cursor
  const keys = cursor.next || cursor.prev || [] // fwd first
  let key = keys[keys.length-1] || null // eg ddb's PK

  // Mock query (mimic dynamodb response)
  const Items = mockTable.slice(parseInt(key) || 0, pageSize+key)
  const LastEvaluatedKey = Items[Items.length-1] < mockTable.length
    ? Items[Items.length-1] : null

  // Build response
  const res = {items:Items}
  if (keys.length > 0) // add reverse nav keys (if any)
    res.prev = keys.slice(0, keys.length-1)
  if (LastEvaluatedKey) // add forward nav keys (if any)
    res.next = [...keys, LastEvaluatedKey]

  return res
}

// Run test ------------------------------------
const runTest = () => {
  const PAGE_SIZE = 6
  let x = {}, i = 0

  // Page to end
  while (i == 0 || x.next) {
    x = getPagedItems(PAGE_SIZE, {next:x.next})
    console.log(`Page ${++i}: `, x.items)
  }

  // Page back to start
  while (x.prev) {
    x = getPagedItems(PAGE_SIZE, {prev:x.prev})
    console.log(`Page ${--i}: `, x.items)
  }
}

runTest()
I faced a similar problem.
The generic pagination approach uses a "start index" or "start page" and a "page length". The "ExclusiveStartKey" and "LastEvaluatedKey" based approach is very DynamoDB-specific, and I feel this DynamoDB-specific implementation of pagination should be hidden from the API client/UI.
Also, if the application is serverless, using a service like Lambda, it will not be possible to maintain state on the server; the flip side is that the client implementation becomes very complex.
I came up with a different approach, which I think is generic (and not specific to DynamoDB); a sketch in Node.js follows the drawbacks listed below:
When the API client specifies the start index, fetch all the keys from the table and store them in an array.
Find the key for the start index, as specified by the client, in that array.
Make use of the ExclusiveStartKey and fetch the number of records specified by the page length.
If the start index parameter is not present, the above steps are not needed; we don't need to specify the ExclusiveStartKey in the scan operation.
This solution has some drawbacks:
We will need to fetch all the keys when the user needs pagination with a start index.
We will need additional memory to store the IDs and the indexes.
There are additional database scan operations (one or more, to fetch the keys).
But I feel this is a very easy approach for the clients using our APIs. Backward scans work seamlessly, and if the user wants to see the "nth" page, that is possible.
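A minimal sketch of this approach with the AWS SDK for JavaScript (the People table name and the id key attribute are assumptions for illustration):
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function getPage(startIndex, pageLength) {
  const params = { TableName: 'People', Limit: pageLength };
  if (startIndex > 0) {
    // Steps 1-2: fetch only the keys, then locate the key just before the start index
    const keys = [];
    let lastKey;
    do {
      const res = await docClient.scan({
        TableName: 'People',
        ProjectionExpression: 'id',
        ExclusiveStartKey: lastKey
      }).promise();
      keys.push(...res.Items);
      lastKey = res.LastEvaluatedKey;
    } while (lastKey);
    // Step 3: start the real scan right after that key
    params.ExclusiveStartKey = keys[startIndex - 1];
  }
  const page = await docClient.scan(params).promise();
  return page.Items;
}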
In fact, I faced the same problem, and I noticed that LastEvaluatedKey and ExclusiveStartKey were not working well, especially when using Scan, so I solved it like this:
GET /?page_no=1&page_size=10 =====> first page
The response will contain the count of records and the first 10 records.
Retry and increase the page number until all records come.
The code is below.
PS: I am using Python.
# Inside the Lambda handler: response comes from an earlier boto3 table.scan();
# page_no and page_size come from the request's query string parameters
first_index = (page_no - 1) * page_size
second_index = page_no * page_size
if second_index > len(response['Items']):
    second_index = len(response['Items'])

return {
    'statusCode': 200,
    'count': response['Count'],
    'response': response['Items'][first_index:second_index]
}
Summary
We are experiencing a problem where the systemDateGet() function is returning the AOS date where the local date is required when determining the date for posting purposes.
Details
We have branches around the world, both sides of the international date line, with
users connecting to local terminal servers with the appropriate time zone settings
for the user's branch.
Each branch is associated with a different company, which has been configured with
the correct time zone.
We run a single AOS with the time zone setting set for the head office. There is no option to introduce additional AOSs.
The SystemDateGet() function is returning the AOS date as the user is not changing
their Axapta session date.
A number of system fields in the database related to posting dates are DATE based and
not UTCDATETIME based.
We are running AX2009 SP1 RU7.
Kernel version 5.0.1500.4570
Application version 5.0.1500.4570
Localization version: Brazil, China, Japan, Thailand, India
I am aware that the SystemDateGet() function was designed to return the AOS date unless the user changes their session date, in which case that date is returned.
Each user has the appropriate time zone setting in their user profile.
Problem
One example of the problem is when a user attempts to post a journal involving financial transactions, where the ledger period is checked to see if it is open. For example, the user is in England attempting to post a journal at 3:00pm on the 30th of November, local time; the standard Axapta code uses the systemDateGet() function to determine the date to use in the validation (determining if the period is open). In our case, the AOS is based in Brisbane, Australia, and the systemDateGet() function returns the 1st of December (local time 1:00am on the 1st of December).
Another example of the problem is where an invoice is posted in the United States on a Friday
and the day of the week where the AOS is situated is a Saturday. We need the system to
record the local date.
Question
Besides rewriting all system code involving systemDateGet(), over 2000 entities, are there any other options that can be used to get around the problem of getting the correct local date?
Solution limitations
In the example given above of the ledger period being closed, it is not possible from a
business practices standpoint to open the next period before end of month processing
has been completed.
There is currently no option to purchase additional AOSs.
Create a function in the Global class:
static date systemDateGetLocal()
{
    return DateTimeUtil::date(DateTimeUtil::applyTimeZoneOffset(DateTimeUtil::utcNow(), DateTimeUtil::getUserPreferredTimeZone()));
}
Then in Info.watchDog() do a:
systemDateSet(systemDateGetLocal());
This may be only a partial solution, as the watchDog is executed on the client side only.
Here is a quick update. Let me know if there are any issues or situations that need to be addressed.
USING LOCAL TIME
Introduction:
The following is a work in progress update, as the solution has yet to be fully tested in all situations.
Problem:
If the user has a different time zone to the AOS, the timeNow() and systemDateGet() functions return the AOS details when the local date and time are required.
Clarifiers:
When the local date/time is changed within Axapta, the systemDateGet() function will return the local date; however, the timeNow() function still returns the AOS time.
Coding changes:
A number of coding changes have been made to handle a number of different situations. Comments will be added where an explanation may be required.
CLASS:GLOBAL
One requirement I was given was to allow the system to handle multiple sites within a company that may have different time zones. Presently this functionality is not required.
static server void setSessionDateTime(
    inventSiteId inventSiteId = '',
    utcDateTime reference = dateTimeUtil::utcNow())
{
    str sql;
    sqlStatementExecutePermission perm;
    connection conn = new UserConnection();
    timeZone timeZone;
    int ret;
    ;
    if (inventSiteId)
    {
        timeZone = inventSite::find(inventSiteId).Timezone;
    }
    else
    {
        timeZone = dateTimeUtil::getCompanyTimeZone();
    }

    // This is to get around the kernel validation of changing time zones
    // when the user has more than one session open.
    sql = strfmt("Update userInfo set preferredTimeZone = %1 where userInfo.id = '%2'", enum2int(timeZone), curUserId());
    perm = new SQLStatementExecutePermission(sql);
    perm.assert();
    ret = conn.createStatement().executeUpdate(sql);

    dateTimeUtil::setUserPreferredTimeZone(timeZone);
    dateTimeUtil::setSystemDateTime(reference);

    CodeAccessPermission::revertAssert();
}
static int localTime()
{
    utcDateTime tmp;
    ;
    setSessionDateTime();
    tmp = dateTimeUtil::applyTimeZoneOffset(dateTimeUtil::utcNow(), dateTimeUtil::getCompanyTimeZone());

    return dateTimeUtil::time(tmp);
}
The following method was implemented as a cross check to ensure that systemDateGet() returns the expected value.
static date localDate()
{
    utcDateTime tmp;
    ;
    setSessionDateTime();
    tmp = dateTimeUtil::applyTimeZoneOffset(dateTimeUtil::utcNow(), dateTimeUtil::getCompanyTimeZone());

    return dateTimeUtil::date(tmp);
}
CLASS:APPLICATION
Modify the method setDefaultCompany. Add the line setSessionDateTime(); directly after the super call. This allows the time to change when the user changes company (another requirement I was given).
CLASS:INFO
So that the system uses the correct date/time from the start of the session.
void startupPost()
{
    ;
    setSessionDateTime();
}
Modify the method canViewAlertInbox(), adding the line setSessionDateTime(); as the first line. This handles the case where the user has multiple forms open for different companies.
Localization dependent changes:
Depending on your service pack and localizations, you will need to change a number of objects to use the localTime() function, replacing timeNow(). IMPORTANT NOTE: Do not change the class BatchRun to use the new localTime function, as this will stop it working correctly.
In our system there were around 260 instances that could be changed. If you do not use all modules and localizations, the actual number of lines you need to change will be less.
Final note:
There are a number of today() calls in the code. I have not yet gone through each line to ensure it is coded correctly, i.e. using today() instead of systemDateGet().
Known issues:
I have come across a situation where the time zone change function did not work completely as expected: one session was doing a database synchronisation while another session was opened in a different company. As normal users will never be able to do this, I have not spent much time on a solution at this stage; it is something I intend to resolve.