Common way to execute a stored proc from both ColdFusion and Railo

I think I've built the simplest scenario. I just want to pass it by everyone for a sanity check. Here's the idea:
GetErrorCodes.cfm does the following:
<cfscript>
response = new ErrorCodes().WhereXXX(); // ACF or Railo, doesn't matter
</cfscript>
ErrorCodes.cfc:
function WhereXXX() {
    return new sproc().exec('app.GetErrorCodes'); // All my functions will do this instead of executing the sproc themselves.
}
sproc.cfc:
component {
    function exec(procedure) {
        local.result = {};
        if (server.ColdFusion.productname == 'Railo') {
            return new Railo().exec(arguments.procedure); // Has to be outside of sproc.cfc because ColdFusion throws a syntax error otherwise.
        }
        local.svc = new storedProc();
        local.svc.setProcedure(arguments.procedure);
        local.svc.addProcResult(name='qry');
        try {
            local.obj = local.svc.execute();
            local.result.Prefix = local.obj.getPrefix();
            local.result.qry = local.obj.getProcResultSets().qry;
        } catch(any Exception) {
            request.msg = Exception.Detail;
        }
        return local.result;
    }
}
Railo.cfc:
component {
    function exec(procedure) {
        local.result = {};
        try {
            storedproc procedure=arguments.procedure result="local.result.Prefix" returncode="yes" {
                procresult name="local.result.qry";
            }
        } catch(any Exception) {
            request.msg = Exception.Message;
        }
        return local.result;
    }
}
So I've been working on this all day, but tell me, is this a sane way to keep the source code the same if it's to be run on either a ColdFusion server or a Railo server?

Um... just use <cfstoredproc> instead of trying to use two different CFScript approaches that are mutually exclusive across the two CFML platforms.

Related

Android DataStore with Flow does not get updates after edit

I'm using DataStore with Flow, but I can't get any updates on the flow when editing the DataStore.
Store.kt
private class IStore(private val context: Context) : Store {
    val eventIDKey = stringPreferencesKey("EventID")

    override suspend fun setEventID(eventID: String) {
        context.dataStoreSettings.edit { settings ->
            settings[eventIDKey] = eventID
        }
    }

    override fun getEventID(): Flow<String> {
        return context.dataStoreSettings.data.map { settings -> settings[eventIDKey].orEmpty() }
    }
}
and combine getEventID() with data from the Room database in the event service:
EventService.kt
fun getSelectedEventLive() = store.getEventID()
    .onEach { Log.d("EventService", "income new event id $it") }
    .flatMapConcat { if (it.isNotBlank()) eventDao.get(it) else flowOf(null) }
onEach is called when I first collect the data, but when the value is updated it isn't called again, and I need to close and reopen the app to see the latest data.
MainViewModel.kt
val selectedEvent = eventService.getSelectedEventLive()
    .stateIn(viewModelScope, SharingStarted.Lazily, null)
and use it in Compose with this:
val currentEvent by mainViewModel.selectedEvent.collectAsState()
Maybe I'm doing something wrong, or maybe there is something I'm missing?
Usually, you want to use flow.collect {...}, since a Flow is cold and needs to know that it is being collected to start producing new values.
// MainViewModel.kt
private val _selectedEvent = MutableStateFlow<TypeOfYourEvent?>(null) // MutableStateFlow requires an initial value
val selectedEvent: StateFlow<TypeOfYourEvent?> = _selectedEvent

init {
    viewModelScope.launch {
        eventService.getSelectedEventLive().collect {
            _selectedEvent.value = it
        }
    }
}
This example should work fine with your composable's code; you can still collect selectedEvent as state.
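For completeness, a minimal sketch of that usage; the Text call and the title field are hypothetical, and since the StateFlow above starts with null, the composable has to handle the null case:
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue

@Composable
fun CurrentEventLabel(mainViewModel: MainViewModel) {
    // Recomposes whenever _selectedEvent.value changes.
    val currentEvent by mainViewModel.selectedEvent.collectAsState()
    Text(text = currentEvent?.title ?: "No event selected") // 'title' is a hypothetical field
}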
Yeah, I found the solution: it works if I change flatMapConcat to flatMapLatest in EventService.kt. (flatMapConcat waits for the current inner flow to complete before moving on to the next event ID, and the Room-backed flow from eventDao.get() never completes, so IDs after the first one were never processed; flatMapLatest cancels the previous inner flow instead.)
fun getSelectedEventLive() = store.getEventID()
    .filterNot { it.isBlank() }
    .flatMapLatest { eventDao.get(it) }
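To see the difference in isolation, here is a small, self-contained sketch; fakeDaoFlow and the hard-coded IDs are made-up stand-ins, the point being that a Room-backed flow like eventDao.get() never completes:
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// Hypothetical stand-in for eventDao.get(id): emits once, then stays open
// like a Room-backed Flow (it never completes on its own).
fun fakeDaoFlow(id: String): Flow<String> = flow {
    emit("event for id $id")
    awaitCancellation()
}

@OptIn(ExperimentalCoroutinesApi::class)
fun main() = runBlocking {
    val ids = flow {
        emit("1")
        delay(100)
        emit("2")
    }

    // flatMapConcat would stall on fakeDaoFlow("1"), because it waits for the
    // inner flow to complete before collecting the next one; flatMapLatest
    // cancels it as soon as "2" arrives, so both events are printed.
    withTimeoutOrNull(500) {
        ids.flatMapLatest { fakeDaoFlow(it) }.collect { println(it) }
    }
    println("done")
}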

Dart streams error with .listen().onError().onDone()

I have an issue with some code that looks like this. In this form I get the following error:
The expression here has a type of 'void', and therefore can't be used.
Try checking to see if you're using the correct API; there might be a function or call that returns void you didn't expect. Also check type parameters and variables which might also be void.dart(use_of_void_result).
If I remove the .onDone() the error goes away. Why? ELI5 please :-)
I was looking at https://api.dart.dev/stable/2.7.0/dart-async/Stream/listen.html but seem to still be misunderstanding something.
I also read https://api.dart.dev/stable/2.7.0/dart-async/StreamSubscription/onDone.html
serviceName.UploadThing(uploadRequest).listen((response) {
  uploadMessageOutput = response.message;
  if (response.uploadResult) {
    showSuccess();
  } else {
    showError();
  }
  getUploadFileList(event);
  isSaveInProgress = false;
}).onError((error) {
  isSaveInProgress = false;
  _handleFileUploadError(uploadRequest, error);
}).onDone(() {
  isSaveInProgress = false;
});
Your code is almost right and only requires a simple change to work correctly.
You would be seeing the same error if you swapped the ordering of onError and onDone, so the issue has nothing to do with your stream usage. However, you're attempting to chain together calls to onError and then onDone which won't work since both of these methods return void.
What you're looking for is cascade notation (..), which will allow for you to chain calls to the StreamSubscription returned by listen(). This is what your code should look like:
serviceName.UploadThing(uploadRequest).listen((response) {
  uploadMessageOutput = response.message;
  if (response.uploadResult) {
    showSuccess();
  } else {
    showError();
  }
  getUploadFileList(event);
  isSaveInProgress = false;
})..onError((error) { // Cascade
  isSaveInProgress = false;
  _handleFileUploadError(uploadRequest, error);
})..onDone(() { // Cascade
  isSaveInProgress = false;
});

How to create constraints, indexes and nodes in a single procedure/plugin call?

Similarly, there is code for creating indexes and millions of nodes in the respective methods. This is for creating a fresh DB from a JSON file.
I encounter the following error:
Exception: Cannot perform data updates in a transaction that has performed schema updates.
Does simply beginning a transaction and closing it not work? After some time the session crashes in the CreateNodes() method.
How exactly do we separate the schema creation from the data updates?
Also refer to the question I posted before, trying to get a similar answer, but with no success. (I tried both injecting GraphDatabaseService and using the Bolt driver; the result is the same.)
How to use neo4j bolt session/transaction in a procedure as plugin for neo4j server extension?
for (int command = 4; command < inputNeo4jCommands.size(); command++) {
    log.info(inputNeo4jCommands.get(command));
    NEO4JCOMMANDS cmnd = NEO4JCOMMANDS.valueOf(inputNeo4jCommands.get(command).toUpperCase());
    log.info(NEO4JCOMMANDS.valueOf(inputNeo4jCommands.get(command).toUpperCase()).toString());
    if (NEO4JCOMMANDS.CONSTRAINT.equals(cmnd)) {
        CreateConstraints1();
    }
    if (NEO4JCOMMANDS.INDEX.equals(cmnd)) {
        CreateIndexes();
    }
    if (NEO4JCOMMANDS.MERGE.equals(cmnd)) {
        log.info("started creating nodes........");
        CreateNodes();
    }
}
private void CreateIndexes1() {
    log.info("Adding indexes.....");
    log.info("into started adding index ......");
    try (Transaction tx = db.beginTx()) {
        log.info("got a transaction .....hence started adding index ......");
        Iterator<Indx> itIndex = json2neo4j.getIndexes().iterator();
        while (itIndex.hasNext()) {
            Indx indx = itIndex.next();
            Label lbl = Label.label(indx.getLabelname());
            Iterable<IndexDefinition> indexes = db.schema().getIndexes(lbl);
            if (indexes.iterator().hasNext()) {
                for (IndexDefinition index : indexes) {
                    for (String key : index.getPropertyKeys()) {
                        if (!key.equals(indx.getColName())) {
                            db.schema().indexFor(lbl).on(indx.getColName());
                        }
                    }
                }
            } else {
                db.schema().indexFor(lbl).on(indx.getColName());
            }
            tx.success();
            tx.close();
        }
        log.info("\nIndexes Created..................Retured the method call ");
    }
}
A lot of context is missing from your question and code examples, so it's hard to give a definite answer. Where is the exception thrown in the code example? There's no CreateNodes() method, so we can't find out why it's failing (Out Of Memory Error due to a transaction too large?).
However, there's an issue with your transaction management in the CreateIndexes1() method (not following the Java naming conventions, by the way):
try (Transaction tx = db.beginTx()) {
    // ...
    while (/* ... */) {
        // ...
        tx.success();
        tx.close();
    }
}
You're closing the transaction multiple times, when it's actually in a try-with-resources block where you don't need to close it yourself at all:
try (Transaction tx = db.beginTx()) {
    // ...
    while (/* ... */) {
        // ...
    }
    tx.success();
}
I guess json2neo4j is the deserialization of a JSON file describing the indices to create on labels. The logic is flawed: you try to create an index for a property as soon as you find an index for another property, when you should instead check whether an index for the current property already exists and only create it if it's missing:
for (Indx indx : json2neo4j.getIndexes()) {
    Label lbl = Label.label(indx.getLabelname());
    boolean indexExists = false;
    for (IndexDefinition index : db.schema().getIndexes(lbl)) {
        for (String property : index.getPropertyKeys()) {
            if (property.equals(indx.getColName())) {
                indexExists = true;
                break;
            }
        }
        if (indexExists) {
            break;
        }
    }
    if (!indexExists) {
        // Note: create() is required, otherwise the IndexCreator does nothing.
        db.schema().indexFor(lbl).on(indx.getColName()).create();
    }
}
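As for the original "Cannot perform data updates in a transaction that has performed schema updates" error: the schema work has to be committed in its own transaction before you open a transaction that creates nodes. A minimal sketch of that separation against the embedded 3.x API (written in Kotlin here; the calls are the same from Java, and the label/property names are made up):
import org.neo4j.graphdb.GraphDatabaseService
import org.neo4j.graphdb.Label

fun createSchemaThenData(db: GraphDatabaseService) {
    val person = Label.label("Person") // hypothetical label

    // 1) Schema changes in their own transaction, committed before any data work.
    db.beginTx().use { tx ->
        db.schema().indexFor(person).on("name").create()
        tx.success()
    }

    // 2) Data updates in a separate transaction. For millions of nodes, commit in
    //    batches (e.g. every few thousand nodes) to keep each transaction small.
    db.beginTx().use { tx ->
        val node = db.createNode(person)
        node.setProperty("name", "example")
        tx.success()
    }
}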

Is my implementation of unloaders proper?

I was re-reading this post here: https://stackoverflow.com/a/24473888/1828637
And got concerned about whether I did things correctly. This is how I do unloading:
So I set up some stuff per window and unload it on shutdown. (I don't unload on window close; I haven't found a need to yet, as when a window closes, everything I added to it goes away with the close [such as my mutation observers].)
All the code below is theoretical and the mutation stuff is just an example, so there might be typos or bugs in it. I was wondering whether the idea behind it is appropriate:
var unloadersOnShutdown = [];
var unloadersOnClose = [];

function startup() {
    let DOMWindows = Services.wm.getEnumerator(null);
    while (DOMWindows.hasMoreElements()) {
        let aDOMWindow = DOMWindows.getNext();
        var worker = new winWorker(aDOMWindow);
        unloadersOnShutdown.push({DOMWindow: aDOMWindow, fn: worker.destroy});
    }
}

function shutdown() {
    Array.forEach.call(unloadersOnShutdown, function(obj) {
        //should probably test if obj.DOMWindow exists/is open, but just put it in try-catch
        try {
            obj.fn();
        } catch(ex) {
            //window was probably closed
            console.warn('on shutdown unloader:', ex);
        }
    });
}

function winWorker(aDOMWindow) {
    this.DOMWindow = aDOMWindow;
    this.init();
}

winWorker.prototype = {
    init: function() {
        this.gMutationObserver = new this.DOMWindow.MutationObserver(gMutationFunc.bind(this));
        this.myElement = this.DOMWindow.querySelector('#myXulEl');
        this.gMutationObserver.observe(this.myElement, gMutationConfig);
        if (this.DOMWindow.gBrowser && this.DOMWindow.gBrowser.tabContainer) {
            this.onTabSelectBinded = this.onTabSelect.bind(this);
            this.gBrowser.tabContainer.addEventListener('TabSelect', this.onTabSelectBinded, false);
        }
    },
    destroy: function() {
        this.gMutationObserver.disconnect();
        if (this.onTabSelectBinded) {
            this.gBrowser.tabContainer.removeEventListener('TabSelect', this.onTabSelectBinded, false);
        }
    },
    onTabSelect: function() {
        console.log('tab selected = ', thisDOMWindow.gBrowser.selectedTab);
    }
};

var windowListener = {
    onOpenWindow: function (aXULWindow) {},
    onCloseWindow: function (aXULWindow) {
        var DOMWindow = aXULWindow.QueryInterface(Ci.nsIInterfaceRequestor).getInterface(Ci.nsIDOMWindowInternal || Ci.nsIDOMWindow);
        for (var i=0; i<unloadersOnClose.length; i++) {
            if (unloadersOnClose.DOMWindow == DOMWindow) {
                try {
                    unloadersOnClose.fn();
                } catch(ex) {
                    console.warn('on close unloader:', ex);
                }
                unloadersOnClose.splice(i, 1);
                i--;
            }
        }
    },
    onWindowTitleChange: function (aXULWindow, aNewTitle) {},
}
I think one problem is that I'm not using weak references to DOMWindows, but I'm not sure.
The idea around unloaders in general seems to be OK, but very limited (to windows only).
The implementation is lacking. E.g. there is a big, fat bug:
unloadersOnShutdown.push({DOMWindow: aDOMWindow, fn: worker.destroy});
// and
obj.fn();
// or
unloadersOnClose.fn();
This will call winWorker.prototype.destroy with the wrong this (you'd need something like worker.destroy.bind(worker) to keep the right one).
The i++/i-- loop also looks, um... "interesting"?!
Also, keep in mind that there can be subtle leaks, so you should watch out for and test against Zombie compartments.
Not only can a window leak parts of your add-on (e.g. bootstrap.js) but it is also possible to leak closed windows by keeping references in your add-on. And of course, it's not just windows you need to care about, but also e.g. observers, other types of (XPCOM) listeners etc.

How do I detect a first run in a Firefox add-on?

I would like to know the simplest way to detect a first run in a Firefox add-on. I prefer not to use the (SQLite) Storage API, as this seems way overkill for this simple use case.
I guess my question could also be: what is the simplest way to store a flag?
There you go: http://mike.kaply.com/2011/02/02/running-add-on-code-at-first-run-and-upgrade/
var firstrun = Services.prefs.getBoolPref("extensions.YOUREXT.firstrun");
var curVersion = "0.0.0";

if (firstrun) {
    Services.prefs.setBoolPref("extensions.YOUREXT.firstrun", false);
    Services.prefs.setCharPref("extensions.YOUREXT.installedVersion", curVersion);
    /* Code related to firstrun */
} else {
    try {
        var installedVersion = Services.prefs.getCharPref("extensions.YOUREXT.installedVersion");
        if (curVersion > installedVersion) {
            Services.prefs.setCharPref("extensions.YOUREXT.installedVersion", curVersion);
            /* Code related to upgrade */
        }
    } catch (ex) {
        /* Code related to a reinstall */
    }
}
Maybe a better solution would be:
/**
 * Check if this is the first run of the addon
 */
function checkFirstRun() {
    if (ss.storage.firstRun == undefined) {
        ss.storage.firstRun = false;
        return true;
    } else {
        return false;
    }
}
