Do not replace existing file when uploading file with same name (Microsoft Graph API)

How can I prevent an existing file from being replaced when I upload a file with the same name to OneDrive?
I am using the PUT /me/drive/items/{parent-id}:/{filename}:/content endpoint from the docs.
Instead, I need it to keep indexing (test.jpg, test (1).jpg) or, like Google Drive does, allow two files with the same name.

You can control this behavior using Instance Attributes, specifically the @microsoft.graph.conflictBehavior query parameter. There are three supported conflict behaviors: fail, replace (the default), and rename.
The conflict resolution behavior for actions that create a new item. You can use the values fail, replace, or rename. The default for PUT is replace. An item will never be returned with this annotation. Write-only.
In order to have it automatically rename the file, you add @microsoft.graph.conflictBehavior=rename as a query parameter to your URI.
PUT /me/drive/items/{parent-id}:/{filename}:/content?@microsoft.graph.conflictBehavior=rename
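For illustration, here is a minimal sketch of such an upload in Python with the requests library (the access token and parent-id are placeholders you would supply yourself):

import requests

ACCESS_TOKEN = "..."   # placeholder: a valid Microsoft Graph access token
parent_id = "..."      # placeholder: ID of the destination folder
filename = "test.jpg"

# Upload the file; with conflictBehavior=rename, OneDrive keeps the existing
# file and stores the new one under an automatically adjusted name.
with open(filename, "rb") as f:
    resp = requests.put(
        f"https://graph.microsoft.com/v1.0/me/drive/items/{parent_id}:/{filename}:/content",
        params={"@microsoft.graph.conflictBehavior": "rename"},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data=f,
    )
resp.raise_for_status()
print(resp.json()["name"])  # the final, possibly renamed, file name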


Does apoc.import use MERGE or CREATE to add new data?

CALL apoc.import.csv(
  [{fileName: 'file:/persons.csv', labels: ['Person']}],   // node files
  [{fileName: 'file:/knows.csv', type: 'KNOWS'}],          // relationship files
  {delimiter: '|', arrayDelimiter: ',', stringIds: false}  // config
)
For this example, does the import internally use MERGE or CREATE to add nodes, relationships, and properties? In my tests, it seems to use CREATE, adding new rows even when the ID already exists. Is there a way to control this? Also, when should apoc.load be used versus apoc.import? It seems apoc.load is a lot more flexible, since users can choose exactly which Cypher commands to run for their purposes. Right?
From the source of CsvEntityLoader (which seems to be doing the work under the covers), nodes are blindly created rather than being merged.
While there's an ignoreDuplicateNodes configuration property you can set, it just ignores IDs duplicated within the incoming CSV (i.e. it's not de-duplicating the incoming records against your existing graph). You could protect yourself from creating duplicate nodes by creating an appropriate unique constraint on any uniquely-identifying properties, which would at least prevent you accidentally running the same import twice.
Personally I'd only use apoc.import.csv to do a one-off bulk load of data into a fresh graph (or to load a dump from another graph that was exported as a CSV by something like apoc.export.csv.*). And even then, you've got the batch import tool that'll do that job with higher performance for large datasets.
I tend to use either the built-in LOAD CSV command or apoc.load.csv for most things, as you can control exactly what you do with each record coming in from the file (such as performing a MERGE rather than a CREATE).
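To sketch that MERGE-based approach with the official Neo4j Python driver (the connection details and the id/name column names are assumptions, and persons.csv must sit in the database's import directory):

from neo4j import GraphDatabase

# Placeholder connection details.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# MERGE on the unique id, so re-running the import cannot create duplicates.
cypher = """
LOAD CSV WITH HEADERS FROM 'file:///persons.csv' AS row FIELDTERMINATOR '|'
MERGE (p:Person {id: row.id})
SET p.name = row.name
"""
with driver.session() as session:
    session.run(cypher)
driver.close()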
As indicated by @Pablissimo's answer, the ignoreDuplicateNodes config option (when explicitly set to true) does not actually check for duplicates in the DB - it just checks within the file. A request to address this hole was brought up before, but nothing has been done yet to address it. So, if this is a concern for your use case, then you should not use apoc.import.csv.
The rest of this answer applies iff your files never specify nodes that already exist in your DB.
If your node CSV file follows the neo4j-admin import command's import file header format and has a header that specifies the :ID field for the column containing the node's unique ID, then the apoc.import.csv procedure should, by default, fail when it encounters duplicate node IDs (within the same file). That is because the procedure's ignoreDuplicateNodes config value defaults to false (you can specify true to skip duplicate IDs instead of failing).
However, since your node imports are not failing but are generating duplicate nodes, that implies your node CSV file does not specify the :ID field as appropriate. To fix this, you need to add the :ID field and call the procedure with the config option ignoreDuplicateNodes:true. Or, you can modify those CSV files somehow to remove duplicate rows.
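For reference, a node CSV file for the '|' delimiter used in the question could look like this, with the :ID field declared in the header (the id and name columns are made-up examples):

id:ID|name
p1|Alice
p2|Bob

With such a header, a duplicated value in the id:ID column makes the import fail by default, or gets skipped when ignoreDuplicateNodes:true is set.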

Pre-/postprocessing of DSL edited with TMF-based editor

Given:
A DSL parsed with an Xtext parser and then edited by the user in a TMF-based editor.
When the user opens a file for editing, I first want to get access to the parse tree of the just-opened file, modify the loaded file content in some way, and then provide the modified source to the user for editing.
When the user wishes to save the file, I again want to process the text representation based on the actual parse tree and save the altered version.
Is there any Xtext/EMF API to implement such pre-/post- processing?
The goal is to add some content not present in the physical file, allow the user to edit this content, and remove it before saving to file. This extra content should be stored separately from the DSL source file.
If I understand your question correctly, you want to display additional information in the text editor itself (as opposed to adding additional information only to the EMF model and not to the text, for which IDerivedStateComputer could be used).
If the user is not supposed to edit the additional text, the "Code Mining" feature might be useful: https://www.eclipse.org/Xtext/documentation/310_eclipse_support.html#code-mining and https://blogs.itemis.com/en/code-mining-support-in-xtext
To answer the question itself:
Is there any Xtext/EMF API to implement such pre-/post- processing?
No, I am pretty sure there is no such Xtext API for pre-/post-processing files based on their own parse tree (EMF is irrelevant as you want to change the physical content). You could try to mess around with the XtextDocumentProvider (i.e. create your own subclass and register it in the UI module), but this is very likely to break the UI because the line numbers and offsets won't match.
You might have more luck implementing a custom Eclipse action that is executed on the original file and creates a temporary modified copy based on the parsed original file and then opens an editor for the temporary file. Then you could implement an IXtextBuilderParticipant that writes the result back to the original file on save (you have to register it using the org.eclipse.xtext.builder.participant extension point).
Another idea would be not to use an Eclipse action but a tabbed editor using MultiPageEditorPart, with the original as one of three tabs (the composite file and the 'additional info' file being the other two).
The goal is to add some content not present in the physical file, allow the user to edit this content, and remove it before saving to file. This extra content should be stored separately from the DSL source file.
Couldn't you present this information in another view, similar to the 'Properties' view of EMF? E.g. the user opens a file, the Xtext editor opens along with the 'Properties' view, which presents a way to edit this "extra" information. Upon save of either view, the Xtext save is called and your extra properties are serialized in their own model.

Generate URL of resources that are handled by Grails AssetPipeline

I need to access a local JSON file. Since Grails 2.4 includes the AssetPipeline plugin by default, I saved my local JSON file at:
/grails-app/assets/javascript/vendor/me/json/local.json
Now what I need is to generate a URL to this JSON file, to be used as a function parameter in my JavaScript's $.getJSON(). I've tried using:
URL.local = "${ raw(asset.assetPath(src: "local.json")) }";
but it generates an invalid link:
console.log(URL.local);
// prints /project/assets/local.json
// instead of /project/assets/vendor/me/json/local.json
I also encountered the same scenario with images handled by AssetPipeline 1.9.9 that are supposed to be inserted dynamically on the page. How can I generate the URL pointing to such a resource? I know I can always provide a static String for the URL, but it seems there should be a more proper solution.
EDIT
I was asked if I could move the local JSON file directly under the assets/javascript root directory, instead of placing it under a subdirectory, for an easier solution. I prefer not to, for organization purposes.
Have you tried asset.assetPath(src: "/me/json/local.json")?
The assets plugin looks in all of the immediate children of assets/. Your local.json file would need to be placed in /project/assets/foo/ for your current code to pick it up.
Check out the relevant documentation here which contains an example.
The first level deep within the assets folder is simply used for organization purposes and can contain folders of any name you wish. File types also don't need to be in any specific folder. These folders are omitted from the URL mappings and relative path calculations.

Delphi overwrite file and wrong modified date time

I'd like to get a file's last modified time in Delphi.
Normally something like FileAge() would do the trick; the problem is that if I overwrite File A with File B using CopyFile, File A's modified date is not updated with the current overwrite time as it should be(?)
I get that CopyFile also copies file attributes, but I really need a modified date that also works when a file is overwritten.
Is there such a function? My whole application relies on modification time to decide whether or not I should proceed with files!
EDIT Just to clarify: I'm only monitoring the files. It's not my application that's modifying them.
The documentation for CopyFile says:
File attributes for the existing file are copied to the new file.
Which means that you cannot base your program on the last modified attribute of the file, or indeed on any attribute of the file. Indeed there are all sorts of ways for the last modified attribute of a file to change. It can in fact go backwards in time.
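As a quick illustration of that pitfall (sketched in Python rather than Delphi, because it is easy to reproduce there): shutil.copy2 copies metadata much like CopyFile does, so the target's modified time ends up being the source's, not the time of the overwrite.

import os, shutil, time

with open("a.txt", "w") as f:
    f.write("old target")
with open("b.txt", "w") as f:
    f.write("new source")

# Pretend the source was last modified a year ago.
year_ago = time.time() - 365 * 24 * 3600
os.utime("b.txt", (year_ago, year_ago))

shutil.copy2("b.txt", "a.txt")  # copies contents *and* timestamps
# a.txt's modified time is now a year in the past, despite the fresh overwrite.
print(os.stat("a.txt").st_mtime, time.time())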
Instead I suggest that you use ReadDirectoryChangesW to keep track of modifications. That will allow you to receive notifications whenever a file is modified. You can write your program in an event based manner based on the ReadDirectoryChangesW API.
If you can't use ReadDirectoryChangesW, and the file attributes are out, then you'll have to base your decisions on the contents of the file.
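For what it's worth, here is a rough Python analogue of that event-based approach using the third-party watchdog package, which wraps ReadDirectoryChangesW on Windows (the watched path is a placeholder):

import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class ChangeHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # Notified as writes happen, with no reliance on timestamp attributes.
        print("modified:", event.src_path)

observer = Observer()
observer.schedule(ChangeHandler(), path=r"C:\watched\folder", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()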

Custom Action in Deployment Project - prompt user for values, and then extract them from custom actions?

I am building a Windows Service which will be deployed on four servers. My user wants to have the service read a configuration file from a common location, and load it OnStart.
I want the installation to prompt the user for the file path and file name to the configuration file when the service is installed, and then save that data in My.Settings.
I have figured out how to set the EDITA1 and EDITA2 variables in the Deployment project's UI, so that the user will be prompted for path and file name, but I don't know how to get those values out and into the settings of the service.
Help, please.
-Jennifer
Did you try passing it to the custom action using the CustomActionData property in the Custom Action's property window? The syntax is /param=[EDITA1]
Context.Parameters will then contain a dictionary with one entry, whose key is "param" (the key I gave it in the example above).
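For example, with two hypothetical parameter names configPath and configFile (EDITA1 and EDITA2 being the dialog properties from the question), the CustomActionData could look like this:

/configPath="[EDITA1]\" /configFile=[EDITA2]

Inside the installer class, the values are then available as Context.Parameters("configPath") and Context.Parameters("configFile"), and can be written into the service's configuration from there.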
I'm having a problem with passing in parameters which contain spaces. The guidelines say:
For custom actions that are installation components (ProjectInstaller classes), the CustomActionData property takes a format of /name=value. Multiple values must be separated by a single space: /name1=value1 /name2=value2.
If the value has a space in it, it must be surrounded by quotes: /name="a value".
Windows Installer properties can be passed using the bracketed syntax: /name=[PROPERTYNAME].
For Windows Installer properties such as [TARGETDIR] that return a directory, in addition to the brackets you must include quotes and a trailing backslash: /name="[TARGETDIR]\".
When I try "[EDITA1]\" for the file path I need, I get a 'FileNotFound' error for "C..\Microsoft...", even though my path didn't contain Microsoft.
