I have a Shiny app where I upload a CSV file and then render it in the UI using renderDataTable.
Now I take the uploaded file as a data frame and save it to an .RData file.
If the user interacts with the application again, I would like to load that saved data frame from the file system and display it with renderDataTable. To accomplish this I just use the load function and then call the data frame.
I would like to know if there is a recommended way to implement this kind of persistence in Shiny.
A few things are unclear here. When you say the user interacts with the application again, do you mean a reactive response (e.g. changing a value in a text box or dropdown and re-displaying the contents of your data frame), or a completely new session where the user wants to retrieve the previously viewed data?
If it is about a reactive response, then once the CSV is read you can access it through the input object (input$file1), or by reading the temporary location where the uploaded file gets saved (e.g. tempdir()).
I have run into a problem importing data in an iOS app.
The data is stored with Core Data. I am trying to import the data with a button: after it is clicked, the data, which is initially stored in a txt file in JSON format, is written into the SQLite file.
My question is this: importing that amount of data is very slow, and it is not user-friendly to make the user click a button and then wait for the initial data to import. Is there a better way to import the data?
Thanks.
It depends. For example, you could just import the data in -applicationDidFinishLaunching: or when the user touches a specific button.
In both cases I would import the data in the background. This allows you to avoid freezing the UI (if you have a large amount of data) and to display some sort of progress indicator. The user may well be happier knowing what is going on.
To import data in the background, you could use the new iOS 5 Core Data API or follow Marcus Zarra's tutorial, importing-and-displaying-large-data-sets-in-core-data/.
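To make the background idea concrete, here is a rough Swift sketch using the modern NSPersistentContainer API (which postdates the iOS 5 API mentioned above); the "seed.txt" file name, the "Record" entity and its attributes are made-up names for illustration:

```swift
import CoreData
import Foundation

// Sketch: import a bundled JSON file on a background context so the
// UI stays responsive. "Record" is a hypothetical entity with "name"
// and "value" attributes; adjust to your own model.
func importSeedData(into container: NSPersistentContainer,
                    progress: @escaping (Int) -> Void) {
    container.performBackgroundTask { context in
        guard let url = Bundle.main.url(forResource: "seed", withExtension: "txt"),
              let data = try? Data(contentsOf: url),
              let items = try? JSONSerialization.jsonObject(with: data) as? [[String: Any]]
        else { return }

        for (index, item) in items.enumerated() {
            let record = NSEntityDescription.insertNewObject(forEntityName: "Record",
                                                             into: context)
            record.setValue(item["name"] as? String, forKey: "name")
            record.setValue(item["value"] as? Int ?? 0, forKey: "value")

            // Save in batches so memory stays bounded and the progress
            // indicator can be updated on the main queue.
            if index % 500 == 499 {
                try? context.save()
                context.reset()
                DispatchQueue.main.async { progress(index + 1) }
            }
        }
        try? context.save()
        DispatchQueue.main.async { progress(items.count) }
    }
}
```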
Another way would be to start with a pre-populated db. Create a dummy project in which you populate that db (from your JSON file) and then use that db in your real application project.
Hope that helps.
Edit
It is not user friendly to import the data when the app begins.
Why not?
So I was trying to put the data (the db file) into the archive and submit it to the App Store. What I was wondering is whether I could take the db file produced during testing, once the import has finished and the initial data is acceptable, put that test db file into the archive, and publish it on the App Store. That way the user would not need to import the data first; they would just use a copy of the testing data.
I'm not sure I got the point here, so here is what I mean by preloading and importing existing data. You need to ship the db file with your app when you submit it to the App Store, for example within the application directory. You could also ship it within the bundle, but in that case pay attention: the bundle is read-only, so you need to copy the file somewhere else if you want to modify it.
I suggested creating a dummy project because that is my personal way of doing things when I need to create a pre-populated db; it keeps your real project cleaner. But you can also populate the db in your real project. If you follow the first approach, you simply copy the SQLite file into your app's application directory and tell Core Data to read it from there.
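As a rough illustration of that approach, here is a Swift sketch (using today's NSPersistentContainer; "Seed.sqlite" and the "Model" name are assumptions) that copies a bundled, pre-populated store into the Documents directory on first launch and points Core Data at it:

```swift
import CoreData
import Foundation

// Sketch: ship a pre-populated SQLite store in the app bundle, copy it
// to the Documents directory on first launch, and open it read-write.
func loadPrepopulatedStore() -> NSPersistentContainer {
    let fileManager = FileManager.default
    let documents = fileManager.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let storeURL = documents.appendingPathComponent("Seed.sqlite")

    // Copy the bundled, pre-populated database only once.
    // Note: if the seed store was created with WAL journaling, the
    // -wal and -shm sidecar files must be shipped and copied as well.
    if !fileManager.fileExists(atPath: storeURL.path),
       let seedURL = Bundle.main.url(forResource: "Seed", withExtension: "sqlite") {
        try? fileManager.copyItem(at: seedURL, to: storeURL)
    }

    let container = NSPersistentContainer(name: "Model")
    container.persistentStoreDescriptions = [NSPersistentStoreDescription(url: storeURL)]
    container.loadPersistentStores { _, error in
        if let error = error {
            fatalError("Failed to load store: \(error)")
        }
    }
    return container
}
```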
We want to save documents to individual OneDrive Folders.
Currently:
User "Tim" generates a customer overview (Last visits, Revenue etc.) in our ERP-Sytem from Customer "TomCompany" and it will be automatically saved in an FTP-Folder. He's now able to have a look on this file at customers site with Good Reader on his iPad.
Plan:
First step: The customer overview should be saved directly to OneDrive, instead of an FTP-Folder.
Second step: Every sales person has his own OneDrive account, so the file should be saved to his own account using user parameters etc. (which is not a problem to manage in our ERP API).
The question is: is it possible to connect to OneDrive from a different system such as an ERP ("SaveFileToOneDrive" with authentication)?
You can 'connect' to OneDrive through the provided API with JavaScript.
Here is an example: https://dev.onedrive.com/sdk/js-v7/js-picker-save.htm
You can then add the 'Save to OneDrive' button on every page where you need it.
In case you have not come across them yet, here are some samples for the API: https://dev.onedrive.com/sample-code.htm
Hope this helps you to solve your (to me still unknown) problem ;-)
I implemented my own Windows Live API because I ran into some problems with the standard Live API. It is based on the REST API, so there is a layer of objects (file, folder, etc.) and each object has its own methods (e.g. a file has methods to upload and download). A second layer handles communication with the server side: the object layer sends requests to the second layer, which forwards them to the server; the server responds and the second layer returns the response to the object layer.
I implemented the OneDrive functionality mainly because I was developing an application that uploads files to OneDrive.
It is very simple to use. I describe it on the project page: https://wlivefw.codeplex.com/
You sign in as the user whose OneDrive you want to use via a connection object. Then you need the ID of the folder where you want to create the new file. You create a file object with parent_id set to that folder ID, a name (required) and a description (optional). Then you call File.Create(), passing the file object you created, a Stream object with the data of the original file, an OverWriteOption (overwrite the file if it exists, do not overwrite, or create with a new name) and a progress handler (a delegate to the method you want invoked when the progress changes).
File uploading is implemented with the BITS protocol, so you can upload files larger than 60 MB. Files are uploaded in fragments, so if uploading a fragment fails you can easily send that fragment again: the exception thrown on failure contains a delegate to a continue method that resumes the upload from the last successful fragment.
I would like to keep improving this library, so it is free to use, as is the source code. If you extend the library, please send me your changes and I will build a new version. Thank you, and I hope it is useful.
I am trying to upload custom formatted data files from the UK climate site, e.g. this file. There are 5 lines of metadata and 1 header line.
1) Can CKAN preprocess the file according to a format I give it, so that only the data are picked up? Possibly saving the metadata in the description?
I would prefer a frontend option because I want users to be able to do this themselves.
2) Is it possible to have a dataset uploaded automatically once the URL is entered? I currently have to go to the manage -> datastore page and click "upload to datastore" to have the data populated.
3) Can the dataset be updated at a regular interval?
Thanks
1) Not currently. Doing ETL on incoming data is something that has been discussed a lot recently, so it may happen soon.
2) You shouldn't have to manually trigger a load into the DataStore. Is this when creating a new resource, or when editing an existing one? When editing a resource, I believe the load is only triggered if the URL changes.
3) You can use https://github.com/ckan/ckanext-harvest to have data pulled into CKAN on a regular schedule. There are harvesters for various different stores, so it depends on where the data is updated from.
I am currently trying to implement XLS file read/write in an iOS application.
The requirement is basically this: a big XLS file, containing many dropdowns, data and empty cells, lives on a server. The first time users open the app they download that XLS file, and a form is created in the app based on it; later the user can read from and write to that form even while the network is unavailable. Once the network is available again, all users sync their changes back to the server.
Now I have 2 options:
Option 1:
Create a CSV file from the XLS sheet on the server side and send it to the user. The user performs the read and write operations, all data is saved in a SQLite db, and when the network is available it is synced back to the server.
Option 2:
Create a web service based on that XLS file and send XML to the device. Based on the XML the app builds the form and performs read/write operations offline; when the network is available the app creates a new XML file and syncs it back to the server.
So between options 1 and 2, which one is better and why?
Is any web service available to do such an operation?
It depends on what type of data you are getting and how many columns your CSV file has. If the number of columns is small, i.e. 2-5, the first option would be better.
But if your data has many columns, you should use XML; XML is also very good for storing hierarchical data. If you go with CSV, the client-side parsing can stay very small, as in the sketch below.
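A rough Swift sketch of option 1's parsing step (assuming plain, unquoted CSV fields with no embedded commas or newlines) that turns the downloaded text into rows you could then write into SQLite:

```swift
import Foundation

// Sketch: convert a downloaded CSV string into an array of dictionaries
// keyed by the header row, ready to insert into a local store.
func parseCSV(_ text: String) -> [[String: String]] {
    var lines = text.split { $0.isNewline }.map { String($0) }
    guard !lines.isEmpty else { return [] }

    let header = lines.removeFirst().components(separatedBy: ",")
    return lines.map { line in
        let fields = line.components(separatedBy: ",")
        var row: [String: String] = [:]
        for (index, column) in header.enumerated() {
            row[column] = index < fields.count ? fields[index] : ""
        }
        return row
    }
}

// Example:
// let rows = parseCSV("name,revenue\nTomCompany,1200\nAcme,900")
// rows[0]["revenue"] == "1200"
```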
My app uses Core Data to store 6 attributes for each entry, as well as an image. I have several users asking for a feature where they can print out what they've entered. I can export the Core Data entries to a .csv file, but what about the images? CSV isn't pretty for the average user: they'd have to pull it into a spreadsheet app (if they have one) and play around with it to make it usable. And there's no way to export images into a .csv file.
What I'd really like is a way to push a button and have the app generate a report or a .pdf or something that they can email to themselves, or pull out of iTunes, formatted in rows with the entries, attributes and images.
Any ideas? Can anyone point me to something that takes my Core Data attributes (6 text strings and an image) and outputs them to a pretty .pdf or web page?
For this requirement, it should be fairly easy to create your own extremely simple HTML templating method.
Pick one sample entity and create an HTML page that formats it the way you want. Replace the actual data from the entity with placeholder text strings that you can search for at run time. Copy this template into your project. When the user presses the button, open the template, merge in the actual data, and save the result as a new HTML file or mail message.
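Here is a rough Swift sketch of that merge step (the "report-template.html" file and the {{TITLE}}, {{NOTES}} and {{IMAGE_SRC}} placeholders are all hypothetical names):

```swift
import UIKit

// Sketch: load a bundled HTML template, swap placeholders for the
// entity's values, embed the image inline, and write the result out.
func renderReport(title: String, notes: String, image: UIImage) -> URL? {
    guard let templateURL = Bundle.main.url(forResource: "report-template",
                                            withExtension: "html"),
          var html = try? String(contentsOf: templateURL, encoding: .utf8)
    else { return nil }

    // Merge the entity's attributes into the template.
    html = html.replacingOccurrences(of: "{{TITLE}}", with: title)
    html = html.replacingOccurrences(of: "{{NOTES}}", with: notes)

    // Embed the image as a data URI so the HTML stays a single,
    // self-contained file that can be attached to an email.
    if let jpeg = image.jpegData(compressionQuality: 0.8) {
        let dataURI = "data:image/jpeg;base64," + jpeg.base64EncodedString()
        html = html.replacingOccurrences(of: "{{IMAGE_SRC}}", with: dataURI)
    }

    let outputURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("report.html")
    try? html.write(to: outputURL, atomically: true, encoding: .utf8)
    return outputURL
}
```

The resulting file can then be attached to a mail message with MFMailComposeViewController, or loaded into a web view and printed to PDF if a PDF is preferred.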
If you want something more flexible, try searching GitHub for "HTML templating" or look at http://mattgemmell.com/2008/05/20/mgtemplateengine-templates-with-cocoa/