Snowflake Rollback

Just wanted to confirm: were all the jobs that completed successfully after the 'Commit' (see image) and before the 'Rollback' in fact rolled back, so that those successful queries are not reflected in the DB, i.e. were retracted?
If so, is there any way to get those outputs back?

I take it you're looking in the history tab?
I tried to recreate your scenario and issued 2 inserts and then a rollback - but the history tab was still showing both inserts, so I doubt anything is disappearing from it.
Also, not sure you're aware, but Snowflake has autocommit set to true by default. Have you altered this setting before trying to roll back?
On top of that, any DDL statement in Snowflake also issues an implicit commit (or even two of them: one before the DDL runs and another after it completes).
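A toy sketch of the autocommit behaviour, using Python's sqlite3 in autocommit mode as a stand-in for Snowflake (the table and values are made up): because each statement commits the moment it runs, a later ROLLBACK has nothing to undo.

```python
import sqlite3

# Toy sqlite3 sketch (NOT Snowflake): with autocommit on, each statement
# commits immediately, so a later ROLLBACK has nothing to undo.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("INSERT INTO t VALUES (2)")

rollback_refused = False
try:
    conn.execute("ROLLBACK")  # no transaction is open, so nothing to undo
except sqlite3.OperationalError:
    rollback_refused = True   # "cannot rollback - no transaction is active"

rows = [r[0] for r in conn.execute("SELECT x FROM t ORDER BY x")]
print(rows)  # [1, 2] -- both inserts survived the "rollback"
```

The same logic applies in Snowflake with AUTOCOMMIT = TRUE: unless you explicitly open a transaction (or disable autocommit), there is no open transaction for ROLLBACK to act on.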

What is the difference when we change the "Execute in new LUW" property in a procedure from False to True?

I want to ask about this new procedure property; can anyone give me an example of it? I've read the information on the GeneXus wiki, but I don't understand it.
When it is set to Execute in new LUW, another connection to the database is opened and a new DBMS transaction is started in the database. You can commit and rollback on that object (or the ones called by it) and it won't affect the rest of your process.
It is used to perform commits on other tables without losing transactional integrity in the main process.
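The mechanism can be sketched outside GeneXus: the sketch below uses two sqlite3 connections to the same database file to stand in for the main process and the new-LUW procedure (the table and messages are invented). The point is that the second connection's commit survives the first connection's rollback.

```python
import os
import sqlite3
import tempfile

# Toy sketch (sqlite3, not GeneXus) of "Execute in new LUW": the called
# procedure gets its own connection and transaction, so its commit is
# independent of the caller's commit/rollback.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

main = sqlite3.connect(path)        # the main process' connection
main.execute("CREATE TABLE log (msg TEXT)")
main.commit()

# "New LUW": a separate connection with its own DBMS transaction.
new_luw = sqlite3.connect(path)
new_luw.execute("INSERT INTO log VALUES ('written in new LUW')")
new_luw.commit()                    # committed independently of the caller
new_luw.close()

main.execute("INSERT INTO log VALUES ('written in main LUW')")
main.rollback()                     # undoes only the main process' work

rows = [r[0] for r in main.execute("SELECT msg FROM log")]
print(rows)  # ['written in new LUW']
```

With the property set to False, both inserts would share one transaction and the rollback would discard both.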

Getting a 'Duplicate resource' error after creating 2 transport requests in CDS

I created two transport requests (TRs) for the same project while making changes to CDS views. After that, a 'Duplicate resource' error with code 400 appears and I'm unable to get any data into my UI5 table.
I transferred the changes that were locked in the new TR to the old TR, but it is still giving the same error.
HTTP request failed: 400, Bad Request, {"error":{"code":"/IWBEP/CM_MGW_RT/030","message":{"lang":"en","value":"Duplicate resource"},"innererror":{"application":...
First of all: double-check that there is actually no duplicate key by reading the underlying SQL view (annotated in the CDS definition via #AbapCatalog.sqlViewName) using transaction se16 (n/h).
If there really are no duplicates in the SQL view, the error can be caused by various bugs in the ABAP CDS framework. These bugs mostly occur after you have changed a CDS source/definition. Here are a few workarounds:
Open transaction segw and refresh the entity structure by right-clicking and choosing "Refresh all".
Afterwards, click the red-and-white beachball icon to regenerate the MPC/DPC classes.
What the red-and-white beachball actually does is merge the changed structure into the existing classes. To really re-generate all of the runtime objects, right-click on the project and choose "Generate runtime".
Sometimes there's a clean up button in the entities overview. Click it.
In transaction /iwfnd/gw_client, choose Metadata → Cleanup Cache → On both systems.
Cleaning the cache works quite well for OData views that have been manually created from ABAP types in segw but Core Data Services might still be cached. In case none of the above helped:
logout and login again
restart the transaction
wait for an hour or two
Try to manually test the failing OData request directly in /iwfnd/gw_client. You can activate logging in /iwfnd/traces to double check what the requests from your client actually look like.
Check your OData client. Does it maybe internally cache the $metadata?
Check that the transport request was successfully processed, using e.g. transaction se10. Transports/Imports to another system might be blocked by long running SADL queries. Kill them using sm50 if necessary.

MVC wizard how to save a temporary result

I'm developing this really important squirrel application.
There is a wizard where squirrels are added to the database.
So say there are three screens to this wizard:
1. Squirrel name details
2. Height and weight
3. Nut storage
What I'm wanting to do is save the results of the wizard when all details have been added at step 3.
The users, however, want a "Save to continue later" button. So on screens 1 and 2 they want to be able to save the data they've entered so far, then come back and complete it later.
The problem with this is that the squirrel's height and weight are mandatory fields, so I would have to make them nullable in the database to be able to save at step 1.
What would be the best way of dealing with this?
I could:
Make the fields nullable and have something like a pending-completion flag on the squirrel table in the database. I'm not such a big fan of this; it seems to go against best practices.
Somehow store the incomplete squirrels somewhere else until they are fully complete and ready to be saved to the database. I'm not sure where the incomplete squirrels could be stored.
There's bound to be other options too.
Anyone have any good suggestions?
The isValidated flag in the database seems like a good approach. You could enrich the record at each step, adding more and more columns, and at the last step, when the user finishes the wizard, set the flag to true to indicate that the user has finished editing this record. The height and weight columns might indeed have to be made nullable in the database because, until the record is fully complete, they can contain null values.
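A minimal sketch of the flag approach, using sqlite3 with made-up table and column names: the mandatory columns are nullable at the database level, and queries filter on the completion flag so unfinished drafts never leak into the application.

```python
import sqlite3

# Sketch of the pending-completion-flag approach (sqlite3; table and
# column names are invented for illustration).
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE squirrel (
        id          INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,   -- known from step 1
        height_cm   REAL,            -- nullable until step 2
        weight_g    REAL,            -- nullable until step 2
        nut_store   TEXT,            -- nullable until step 3
        is_complete INTEGER NOT NULL DEFAULT 0
    )""")

# Step 1: "Save to continue later" with only the name filled in.
db.execute("INSERT INTO squirrel (name) VALUES ('Hazel')")

# Step 2 (later session): the user fills in height and weight.
db.execute("UPDATE squirrel SET height_cm = 25.0, weight_g = 450 "
           "WHERE name = 'Hazel'")

# Step 3: final details; only now flip the completion flag.
db.execute("UPDATE squirrel SET nut_store = 'oak hollow', is_complete = 1 "
           "WHERE name = 'Hazel'")

# Application queries ignore unfinished drafts.
done = db.execute("SELECT name FROM squirrel WHERE is_complete = 1").fetchall()
print(done)  # [('Hazel',)]
```

If the nullable columns bother you, a CHECK constraint such as `is_complete = 0 OR height_cm IS NOT NULL` can still enforce the mandatory fields for completed rows.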
Depending on how big your data is going to be, you could use HTML5 storage. That would mean you only need to call the database when you're pushing your data up, which in turn should improve performance, as everything happens client-side.

How to get information about destroyed workitems?

I use TFS 2010 and I need to use the TFS API to retrieve information about work items that were deleted. There is a table [WorkItemsDestroyed] in the TFS DB that contains information about destroyed work items. Is there any way to get that information using the TFS API?
It depends on what information you want to retrieve. If you want to find out who deleted the work item, you can do that with SQL (see pantelif's comment).
If you want to retrieve information about the work item itself, I think there is no way to do that, either through the TFS API or an SQL command. As described in this post, you cannot recover deleted work items:
Deleting Work Item Action Is Not Recoverable
Actually, as long as the test plan has not been deleted, there should be full history of the actual test results, allowing you to recover from the deletion of a test suite... it may take a bit of time, but the process works.
Try this to re-create your test suites and associated results.
Recreate the suite.
Add tests if it is not a query-based suite.
From Test tab, select your suite within the hierarchy.
Create some initial results to allow you to see full history for each test. Within the test lists pane, mass-select all test results and set them to blocked.
Now when you open each test result, you will see full list of previous test results history associated to each test case at the bottom of the results window.
In other words, you need to trigger an initial result to see the full history.
For any test carrying only a single "Blocked" result, the test has not been executed (this is the first time a result has been recorded for it).
For tests that have additional results associated with them, identify the last known state (see the Created date column), then set it appropriately (Pass/Fail/Blocked).
NOTE: This will only work as long as the Test Plan has not been deleted. If it is simply a test suite, this should get you back up and running quickly.

Updating DBGrids in Master-Detail views when updating Cells in Delphi

I am using TADOConn and TADODataSet components to pull data, connected to TDataSources and TDBGrids. My DBGrids display information properly, and editing information in the detail view is accurately reflected in the backing database.
What I would like to do is have it so that an update to a field in the detail DBGrid causes a refresh on both data sets so that the most up-to-date data is always displayed.
I have tried putting the refresh calls in several event handlers at various levels of DB access but they all seem to have a similar (but different) issue of reentry.
The best I've been able to come up with so far is getting the master view updated by calling refresh in the detail DBGrid's OnColExit event.
If I leave the refresh calls out altogether, the updated information isn't displayed until the next time the application is run.
Any ideas of how to achieve this? Am I going about it the wrong way? Thanks in advance.
You imply that the changes you make in the DBGrid are posted to the database but are not displayed in the grid or maintained in its dataset, and that you must get them back from the database. All the dataset components I have used maintain their own copy of the data, including all the changes that passed through them to the database. If you expect the data to be changed by triggers or another process, you may need to refresh the data. Then you will have to deal with the possibility that the current record position is lost, e.g. because the current record was deleted in the database.
I would try using the Dataset.AfterPost event to initiate the refresh, and I would consider using a Timer to delay the refresh if strange things happen.
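The re-entry issue the asker describes is usually handled with a guard flag: skip the refresh if one is already in progress. A language-agnostic sketch in Python (the Delphi version would be a Boolean field checked in the event handler; all names here are invented):

```python
# Guarding a refresh triggered from a data-change event so it cannot
# re-enter itself. Event and method names are invented for illustration.
class DetailView:
    def __init__(self):
        self.refreshing = False
        self.refresh_count = 0

    def on_data_changed(self):
        # Event fired both by user edits and by the refresh itself
        # (this is the source of the re-entry).
        self.refresh()

    def refresh(self):
        if self.refreshing:
            return                  # already refreshing: swallow re-entry
        self.refreshing = True
        try:
            self.refresh_count += 1
            self.on_data_changed()  # refreshing re-fires the event
        finally:
            self.refreshing = False

view = DetailView()
view.on_data_changed()              # simulate a user edit being posted
print(view.refresh_count)           # 1 -- refreshed once, re-entry ignored
```

The try/finally ensures the flag is cleared even if the refresh raises, so a failed refresh does not permanently block future ones.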
