Dremio provides a really nice GUI to download and save the data generated after your query runs.
However, I want to save my query (rather than the query result) in Dremio, so that I can refer back to the query I wrote at any time in the future. Is there a way to achieve this?
Really appreciate the help!
Although this is an old post, I thought it might be helpful to provide a solution. What you are describing can be solved with a key piece of Dremio functionality. Instead of going through the difficulty of searching for your old query, I would suggest creating a VDS (Virtual Data Set) through the UI: after a successful run of your query, you can save it as a VDS using the Save Dataset button.
After selecting the save button you will be asked where you wish to save it; you can either save it to your default directory or to a named Space you created previously.
You can query against this new VDS as though it were an actual table. Any changes made to the VDS are saved in a history; using the breadcrumb trail on the right side of the UI, you can navigate to prior versions.
You can now further accelerate this query through creation of a Dremio Reflection...but that goes beyond the scope of your question ;)
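If you ever want to script this instead of clicking through the UI, the same save can also be done over Dremio's REST API. Here is a rough TypeScript sketch rather than a definitive recipe: the server URL, credentials, Space name, and VDS name are all placeholder assumptions (and it needs Node 18+ for global fetch).

const DREMIO = "http://localhost:9047"; // placeholder server URL

async function saveQueryAsVds(sql: string): Promise<void> {
  // Log in first; the login endpoint returns a token for the v3 API.
  const login = await fetch(`${DREMIO}/apiv2/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userName: "admin", password: "secret" }), // placeholders
  });
  const { token } = await login.json();

  // Create the VDS inside a Space via the v3 catalog endpoint.
  await fetch(`${DREMIO}/api/v3/catalog`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `_dremio${token}`,
    },
    body: JSON.stringify({
      entityType: "dataset",
      type: "VIRTUAL_DATASET",
      path: ["MySpace", "my_saved_query"], // Space + VDS name, both assumptions
      sql, // the query text you want to keep
    }),
  });
}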
In the upper left corner, you should look for the Jobs menu, which keeps a history of the queries you have run.
I'm developing this really important squirrel application.
There is a wizard where squirrels are added to the database.
So say there are three screens to this wizard:
1. Squirrel name details
2. Height and weight
3. Nut storage
What I want to do is save the results of the wizard once all details have been added at step 3.
The users, however, want a "Save to continue later" button. So on screens 1 and 2 they want to be able to save the data they've entered so far and come back and complete it later.
The problem with this is that the squirrel's height and weight are mandatory fields, so I would have to make them nullable in the database to be able to save at step 1.
What would be the best way of dealing with this?
I could:
- Make the fields nullable and have something like a pending-completion flag on the squirrel table in the database. I'm not such a big fan of this; it seems to go against best practices.
- Somehow store the incomplete squirrels somewhere else until they are fully complete and ready to be saved to the database. I'm not sure where the incomplete squirrels could be stored.
There's bound to be other options too.
Anyone have any good suggestions?
The isValidated flag in the database seems a good approach. You could enrich the record at each step, adding more and more columns, and at the last step, when the user finishes the wizard, set the flag to true to indicate that the user has finished editing this record. The height and weight columns might indeed have to be made nullable in the database, because until the wizard is fully complete they can contain null values.
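If it helps to see the idea concretely, here is a minimal TypeScript sketch of the "pending completion" shape; all of the type and field names are made up for illustration:

// A draft holds whatever has been entered so far; height and weight
// stay optional until the wizard reaches step 3.
interface SquirrelDraft {
  isComplete: false;
  name?: string;
  heightCm?: number;
  weightGrams?: number;
  nutStorage?: string;
}

// A completed squirrel has every mandatory field filled in.
interface CompletedSquirrel {
  isComplete: true;
  name: string;
  heightCm: number;
  weightGrams: number;
  nutStorage: string;
}

type Squirrel = SquirrelDraft | CompletedSquirrel;

// Promote a draft only once the mandatory fields are present.
function finalize(draft: SquirrelDraft): CompletedSquirrel {
  const { name, heightCm, weightGrams, nutStorage } = draft;
  if (name === undefined || heightCm === undefined ||
      weightGrams === undefined || nutStorage === undefined) {
    throw new Error("Wizard is not complete yet");
  }
  return { isComplete: true, name, heightCm, weightGrams, nutStorage };
}

The flag in the database then simply mirrors which of the two shapes the row currently represents.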
Depending on how big your data is going to be, you could use HTML5 Storage. That would mean you only need to call the database when you're pushing your data up, which in turn should improve performance, as everything else happens client-side.
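For example, a small TypeScript sketch of that approach (the storage key, field names, and endpoint are assumptions, and localStorage only holds strings, hence the JSON round-trip):

// Shape of the partially filled wizard; everything optional until step 3.
interface WizardDraft {
  name?: string;
  heightCm?: number;
  weightGrams?: number;
  nutStorage?: string;
}

const DRAFT_KEY = "squirrel-wizard-draft"; // hypothetical storage key

// Persist whatever the user has entered so far after each step.
function saveDraft(draft: WizardDraft): void {
  localStorage.setItem(DRAFT_KEY, JSON.stringify(draft));
}

// Restore the draft when the user comes back to the wizard.
function loadDraft(): WizardDraft | null {
  const raw = localStorage.getItem(DRAFT_KEY);
  return raw === null ? null : JSON.parse(raw);
}

// Once step 3 completes, send the finished record to the server
// and clear the local copy.
async function submitSquirrel(finished: WizardDraft): Promise<void> {
  await fetch("/api/squirrels", { // endpoint is an assumption
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(finished),
  });
  localStorage.removeItem(DRAFT_KEY);
}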
I am looking for a simple way to get data displayed in an issue as just plain text. Basically, I want to be able to type in a lookup id when creating the issue, and then once the issue is created, have it call one of our web services to retrieve the data connected with that ID.
This wouldn't be coming from another issue tracker, but rather straight from one of my databases.
What would be the easiest way of accomplishing this? I would like the workflow to be: Enter id #, hit save, see the data with that ID displayed in the ticket (Doesn't need to be editable, just displayed in the ticket view).
The easiest way is to create a workflow function that is triggered on the Create transition to do the job. There your code can query the information from the database and replicate it into standard and custom JIRA fields on the issue itself.
Then you can prevent editing of the replicated fields by tuning the Edit screen for your issues.
You can also use your function to update the field content from time to time, either on a transition or in a trigger.
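If writing a Java workflow function is heavier than you need, a similar effect can be had from outside JIRA with a webhook plus the REST API. Here is a rough TypeScript sketch of that alternative, where the lookup-service URL, credentials, and both custom field ids are assumptions:

// Minimal webhook handler: when an issue is created, look up the id
// the reporter typed and write the result back into the issue.
import express from "express";

const app = express();
app.use(express.json());

const JIRA_BASE = "https://jira.example.com"; // assumption
const AUTH = "Basic " + Buffer.from("bot:secret").toString("base64"); // placeholder credentials

app.post("/jira-webhook", async (req, res) => {
  const issueKey: string = req.body.issue.key;
  const lookupId: string = req.body.issue.fields.customfield_10000; // the "lookup id" field (assumed id)

  // Fetch the related record from your own web service.
  const record = await (
    await fetch(`https://internal.example.com/records/${lookupId}`)
  ).json();

  // Copy it into a display-only field on the issue.
  await fetch(`${JIRA_BASE}/rest/api/2/issue/${issueKey}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json", Authorization: AUTH },
    body: JSON.stringify({
      fields: { customfield_10001: JSON.stringify(record) }, // assumed field id
    }),
  });

  res.sendStatus(204);
});

app.listen(3000);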
Another option is to create some read-only custom fields that query each piece of information from the database. That prevents data replication, but it will probably be slow, and it does not apply to default fields.
I'm building a web app for bookmark storage with a directory system.
I've already got these collections set up:
Path(s)
---> Directories (embedded documents)
---> Links (embedded documents)
User(s)
So, performance-wise, should I:
- add the user id to the created path
- embed the whole Paths collection into the specific user
I want to pick option 2, but yeah, I dunno...
EDIT:
I was also thinking about making the whole interface ajaxified. So, that means I'll load the directories and links from a specific path (from the logged in user) through ajax. That way, it's faster and I don't have to touch the user collection. Maybe that changes things?
Like I've said in the comments, 1 huge collection in the whole database seems kinda strange. Right?
Well, one of the main design points of MongoDB is to support redundant (denormalized) data. I would recommend the second option, because in your scenario, if you embed the Paths collection into the specific user, then with only a single query you can get all the data about the user as well as everything related to their paths.
If you follow the first option, you have to fire two separate queries to get all the data, which increases your work somewhat.
Since MongoDB keeps its working set in RAM, after getting the data from one collection you can hold it in a cursor and use that cursor's data to fetch the matching data from the other collection. So, performance-wise, I don't think it will hurt a lot.
RE: the edit. If you are going to store everything in a single doc and use embedded docs, then when you make your queries, make sure you select just the data you need; otherwise you will load the whole doc, including the embedded docs.
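To illustrate that with the Node.js driver (collection and field names are guesses based on the description above), a positional projection pulls back only the one matching embedded path rather than the whole user document:

import { MongoClient } from "mongodb";

interface UserDoc {
  _id: string; // assumed shape
  paths: { name: string; links: { title: string; url: string }[] }[];
}

const client = new MongoClient("mongodb://localhost:27017");
const users = client.db("bookmarks").collection<UserDoc>("users");

// Option 2: paths embedded in the user document. Project only the one
// matching path so the rest of the document never leaves the server.
async function getPath(userId: string, pathName: string) {
  return users.findOne(
    { _id: userId, "paths.name": pathName },
    { projection: { "paths.$": 1 } } // positional projection: first matching element
  );
}

This fits the ajaxified interface nicely: each ajax call fetches just one path's directories and links.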
I'm developing a webapp that allows the editing of records. There is a possibility that two users could be working on the same screen at a time, and I want to minimise the damage done if they both click save.
Say User1 requests the page and then makes changes to the Address, Telephone and Contact Details; before he clicks Save, User2 requests the same page.
User1 then clicks Save, and the whole model is updated using TryUpdateModel(). If User2 simply appends some detail to the Notes field, then when he saves, the TryUpdateModel() call will overwrite the new details User1 saved with the old ones.
I've considered storing the original values of all the model's properties in a hidden form field, and then writing a custom TryUpdateModel to update only the properties that have changed, but this feels a little too much like the ViewState we've all been more than happy to leave behind by moving to MVC.
Is there a pattern for dealing with this problem that I'm not aware of?
How would you handle it?
Update: In answer to the comments below, I'm using Entity Framework.
Anthony
Unless you have particular requirements for what happens in this case (e.g. locking the record, which of course requires some functionality to undo the lock in the event that the user decides not to make a change), I'd suggest the normal approach is an optimistic lock:
Each update you perform should check that the record hasn't changed in the meantime.
So:
- Put an integer "version" property or a guid / rowversion on the record.
- Ensure this is contained in a hidden field in the HTML and is therefore returned with any submit.
- When you perform the update, ensure that the (database) record's version/guid/rowversion still matches the value that was in the hidden field [and add 1 to the "version" integer when you do the update if you've decided to go with that manual approach]; see the sketch after this answer.
A similar approach is obviously to use a date/time stamp on the record, but don't do that; it is only reliable to within the accuracy of your system clock, which makes it flawed.
[I suggest you'll find fuller explanations of the whole approach elsewhere. Certainly if you were to google for information on NHibernate's Version functionality...]
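Here is a rough sketch of that version check in TypeScript with node-postgres, just to make the pattern concrete; the table and column names are assumptions, and in Entity Framework you would more likely lean on its built-in concurrency-token / rowversion support instead:

import { Pool } from "pg";

const pool = new Pool({ connectionString: "postgres://localhost/app" }); // assumption

// The update succeeds only if nobody else bumped the version
// since this user read the record.
async function saveContact(
  id: number,
  expectedVersion: number,
  address: string,
  telephone: string
): Promise<void> {
  const result = await pool.query(
    `UPDATE contacts
        SET address = $1, telephone = $2, version = version + 1
      WHERE id = $3 AND version = $4`,
    [address, telephone, id, expectedVersion]
  );
  if (result.rowCount === 0) {
    // Someone saved first: reject the edit and let the user re-merge.
    throw new Error("The record was modified by another user.");
  }
}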
Locking modification of a page while one user is working on it is an option. This is done in some wiki software, like DokuWiki. In that case it usually uses some JavaScript to free the lock after 5-10 minutes of inactivity so others can update the page.
Another option might be storing all revisions in a database so when two users submit, both copies are saved and still exist. From there on, all you'd need to do is merge the two.
You usually don't handle this. If two users happen to edit a document at the same time and commit their updates, one of them wins and the other loses.
Resource lockout can be done with stateful desktop applications, but with web applications any lockout scheme you try to implement may only minimize the damage, not prevent it.
Don't try to write an absolutely perfect and secure application. It's already good as it is. Just use it, probably the situation won't come up at all.
If you use LINQ to SQL as your ORM it can handle the issues around changed values using the conflicts collection. However, essentially I'd agree with Mastermind's comment.
I'm looking for some ideas about saving a snapshot of some (different) records at the time of an event, for example a user getting a document from my application, so that the document can be regenerated later. What strategies do you recommend? Should I use the same table as the one with the current values, or a separate historical table? Do you know of any plugins that could help me with this task? Please share your thoughts and solutions.
There are several plugins for this.
acts_as_audited
acts_as_audited creates a single table for all of the auditable objects and requires no changes to your existing tables. You get one entry per change, with the changes stored as a hash in a memo field, along with the type of change (CRUD). It is very easy to set up, with just a single statement in the application controller specifying which models you want audited.
Rolling back is up to you, but the information is there. Because the information stored is just the changes, building the whole object may be difficult due to subsequent changes.
acts_as_versioned
A bit more complicated to set up: you need a separate table for each object you want to version, and you have to add a version id to your existing table. Rollback is very easy. There are forks on GitHub that provide a hash of changes since the last version so you can easily highlight the differences (it's what I use). My guess is that this is the most popular solution.
One I have no experience with: acts_as_revisable. I'll probably give it a go next time I need versioning, as it looks much more sophisticated.
I did this a while back. We created a new table with a very similar structure to the table we wanted to log, and whenever we needed to log something, we did something like this:

attrs = object_to_log.attributes
# Remove things like created_at, updated_at, and other unneeded columns.
attrs.delete("created_at")
attrs.delete("updated_at")
# Build and save the snapshot row from the remaining attributes.
log = MyLogger.new(attrs)
log.save
There's a very good chance there are plugins/gems to do stuff like this, though.
I have used acts_as_versioned for stuff like this.
The OP is a year old, but I thought I'd add vestal_versions to the mix. It uses a single table to track serialized hashes of each version. By traversing the record of changes, the models can be reverted to any point in time.
Seems to be the community favorite as of this post...