How to import a directory tree into reference management software and synchronize papers - Mendeley

I previously managed my papers locally in different folders. I'm trying to import the structure of my directory tree into Mendeley in the hope of synchronizing all the papers to Mendeley automatically. However, it failed.
I am wondering how I can import the directory tree, together with the papers, into any reference management software. I also hope that locally added papers can be automatically synchronized to the software.
Any suggestion would be appreciated. Many thanks!

I found a workaround to import local papers into reference management software and synchronize them online. The software I used is Zotero, and two add-ons are required: a) ZotFile and b) Folder Import for Zotero.
Folder Import for Zotero is responsible for importing all your papers and the structure of your repository into Zotero. Once you import all the folders and papers, ZotFile can help with renaming and synchronizing via cloud services such as Dropbox, OneDrive, etc. (cloud service setup is required).
New papers can be added either by using the Zotero Connector online or by importing them into Zotero locally by dragging papers into the app window. After clicking "Rename attachments" for each paper, these papers will be automatically renamed, moved to the cloud folder, and synchronized.
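ZotFile's renaming is configured through its own preferences rather than code, but as a rough stdlib-only Python sketch of the idea behind its default `{author}_{year}_{title}` pattern (the function names and the exact sanitising rules here are illustrative assumptions, not ZotFile's actual implementation):

```python
import re
from pathlib import Path

def zotfile_style_name(author: str, year: int, title: str) -> str:
    """Build an 'Author_Year_Title.pdf' file name, roughly in the
    spirit of ZotFile's default renaming pattern."""
    # Keep only filesystem-safe characters, then collapse whitespace.
    safe_title = re.sub(r"[^\w\s-]", "", title).strip()
    safe_title = re.sub(r"\s+", "_", safe_title)
    return f"{author}_{year}_{safe_title}.pdf"

def rename_into_cloud(pdf: Path, cloud_dir: Path,
                      author: str, year: int, title: str) -> Path:
    """Move a PDF into the synced cloud folder under its new name,
    which is essentially what 'Rename attachments' does for you."""
    target = cloud_dir / zotfile_style_name(author, year, title)
    pdf.rename(target)
    return target
```

In practice you would let ZotFile do all of this; the sketch just makes the renaming-then-moving step concrete.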

How can I sync the literature from a Mendeley group with Sharelatex?

I have a group in Mendeley with a bunch of articles, and I would like to sync it with Sharelatex. However, it only seems to sync with the articles in my personal Mendeley, and I'm having a hard time making Sharelatex sync with only my desired group.
Even testing it out with my personal Mendeley, I can't quite seem to get it to generate the Mendeley list in Sharelatex. Is there any special package that I need?
My solution for this is as follows:
Use the Dropbox integration for the project, and find the project folder in your Dropbox Apps folder.
Use Mendeley to save the whole library in that folder. I think you can only get the file name library.bib. Your group references will be there. As soon as it gets synced by Dropbox, Sharelatex will see it.
This solution works for me but seems a bit dirty.
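Since Mendeley only exports the whole library as library.bib, one rough post-processing idea is to filter the file down to your group's entries before Sharelatex reads it. This is only a sketch: it assumes you tag the group's papers with a keyword that ends up in each entry's `keywords` field, which you would need to verify in your own export.

```python
import re

def filter_bib_entries(bib_text: str, keyword: str) -> str:
    """Keep only BibTeX entries whose body mentions `keyword`.

    Assumes each entry starts with '@' at the beginning of a line,
    which is how typical library.bib exports are laid out.
    """
    # Split just before each line that starts a new '@type{...}' entry.
    entries = re.split(r"\n(?=@)", bib_text.strip())
    kept = [e for e in entries if keyword.lower() in e.lower()]
    return "\n".join(kept) + "\n"
```

You could run this on library.bib inside the Dropbox folder and write the result to a second .bib file that your Sharelatex project actually cites.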

How can I maintain a Tensorflow dependency in iOS and keep it portable?

I have questions about how to best integrate Tensorflow with my team's existing iOS application.
I am currently adding Tensorflow to an existing iOS project. I built the library for iOS following the instructions in the makefile README.md, and was able to compile successfully.
I have now been trying to follow the instructions here to get the library integrated. These instructions tell you to add search paths to a number of folders in your Tensorflow build, which will cause problems as I need this project to be easily sharable with my team in git. I would prefer to not have to force anyone else working on this to run a 20-plus minute build before they can get up and running.
The options I've considered are below:
Option 1: Embed the entire Tensorflow library
The problem here is that it would make the project enormous. In addition, the build time would be significant and unnecessary. How often would we rebuild to stay up to date? I would like to avoid this.
Option 2: Link to separate Tensorflow project
With this option we would statically link to a peer project and have the same structure on each developer's machine. This seems to imply we'd have to have each team member pull and build the Tensorflow library before being able to build our project at all. Is there a way with this option to copy over only the necessary output files when they're updated by someone who has built Tensorflow while at the same time not requiring developers to build Tensorflow on their machine?
I'm also curious if there is a 3rd option I haven't considered here.
The usual way I think about this is by treating TensorFlow as a framework. That means that you need to build a copy once, and can then share the resulting files as something you can distribute to other developers in your organization.
You should just be able to zip up the folder you've built TF in (the directory you cloned from GitHub) after you've built it, and then have other developers install it in a known location on their machines. I think this is what you're saying with option #2, but the difference is that other developers can avoid building it unless they want to.
This also has the advantage that you can make sure your team members are all on the same TensorFlow version, to make debugging a bit easier.
Does that help?
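As a rough stdlib-only sketch of that packaging step (the checkout path and archive name are placeholders, not anything the TensorFlow build itself prescribes):

```python
import shutil
from pathlib import Path

def package_tensorflow_build(tf_checkout: str,
                             archive_name: str = "tensorflow-ios-build") -> str:
    """Zip a built TensorFlow checkout so teammates can unzip it into
    a known location instead of running the 20-plus minute build.

    `tf_checkout` is the directory cloned from GitHub and built in.
    Returns the path of the created .zip archive.
    """
    src = Path(tf_checkout)
    if not src.is_dir():
        raise FileNotFoundError(f"no such build directory: {src}")
    # make_archive appends '.zip' and returns the archive's full path.
    return shutil.make_archive(archive_name, "zip", root_dir=src)
```

Teammates then unzip the archive to the agreed location and point their Xcode search paths at it, so only one person ever pays the build cost per TensorFlow version.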

Import and export JIRA requirements

Until now we based our development work on Word documents. Is there an effective way to import the contents of these documents as requirements / issues into JIRA and use JIRA for requirement management, only exporting them to documents when really necessary?
Provided your own docs are well structured, you will be able to import them via Bob Dalgleish's proposed solution.
You will need to convert your files to CSVs, map all the "keys" in your current structure to the ones described in the guide, and check your file's structure. Then you will be able to import them.
What's more, you can also import attachments. That way you can attach both the original document and the resulting CSV file to each newly created issue, leaving you with a nice backup of your old document system. If you are making such a transition between organizational workflows, use it as a read-only backup.
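As a hedged sketch of the conversion step: the column names `Summary`, `Description`, and `Issue Type` below are common choices, but the JIRA CSV importer lets you map whatever headers you use during import, and the input dict keys here are just assumptions about how you might extract requirements from the Word documents.

```python
import csv
import io

def requirements_to_csv(requirements):
    """Flatten a list of requirement dicts into CSV text whose columns
    can be mapped to JIRA fields by the CSV importer."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["Summary", "Description", "Issue Type"]
    )
    writer.writeheader()
    for req in requirements:
        writer.writerow({
            "Summary": req["title"],
            "Description": req.get("body", ""),
            "Issue Type": req.get("type", "Story"),
        })
    return buf.getvalue()
```

Using the csv module (rather than joining strings by hand) matters here because requirement descriptions pulled from Word tend to contain commas and quotes that need proper escaping.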

Is it possible to set up continuous integration for MS Dynamics CRM 2011?

We are just beginning development and implementation for Dynamics CRM 2011 on-premises. Is it possible to implement automation for code check-in to promote code from development to test systems? It looks like this would involve export/import of unmanaged solutions containing the development code that was checked in. I have not been able to find APIs around this functionality.
If that is not possible, how close can you get? It looks like there are APIs to automate the uploading of web resources and plug-ins (e.g. webresourceutility in the sdk), but the web resources still need to be manually linked to the form they are to be used on (in the case of javascript etc). Has anyone made progress in automating parts of their CRM environments?
For reference, we're using VS 2010 and TFS 2010, with MSBuild for our current continuous integration.
We have a few techniques that provide us with a very solid CI structure.
Plugins
All our plugins are CI-compiled on check-in.
All plugin code we write has self-registration details as part of the component.
We have written a tool which plays the plugins into the database, uninstalling the old ones first based on the self-registration details.
Solution
We have an unmanaged solution in a Customisation organisation which is clean and contains no data. Development is conducted out of this organisation. It has entities, forms, JScript, views, icons, roles, etc.
This Customisation database has all the solutions we've imported from 3rd parties, and customisations are made into our solution, which is the final import into a destination organisation.
The solution is exported as managed and unmanaged and saved into TFS.
We store the JScript and SSRS RDLs in TFS and have a custom tool which plays these into the Customisation database before it is exported.
We also have a SiteMap solution which is exported as unmanaged (to ensure we end up with the final resultant SiteMap we are after).
Deployment
We have a UI- and command-line-driven tool which does the following:
Targets a particular Organisation
Imports the Customisation managed solution into a selected environment. e.g. TEST. Additionally imports the unmanaged Sitemap.
Uninstalls the existing solution which was there (we update the solution.xml file giving it a name based on date/time when we import)
Installs/Uninstalls the Plugin Code
Installs any custom SQL scripts (for RDLs)
Re-enables Duplicate Detection Rules
Plays in certain meta-data we store under source control. e.g. Custom Report entity we built which has attachments and XML configuration.
It isn't entirely perfect, but via the command line we refresh TEST and all the developer PCs nightly. It takes about 1 hour per organisation to install the new solution and then uninstall the old one.
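The order of operations above can be sketched as plain orchestration code. The step names mirror the deployment list, but the `runner` is a placeholder for whatever actually does the work (CRM SDK calls, SQL, custom tooling); none of this is a real CRM API.

```python
# Only the ordering is fixed here; the concrete work behind each step
# is intentionally abstracted into a caller-supplied `runner`.

DEPLOYMENT_STEPS = (
    "import managed Customisation solution",
    "import unmanaged SiteMap solution",
    "uninstall previous solution (renamed by date/time)",
    "install/uninstall plugin code",
    "run custom SQL scripts for RDLs",
    "re-enable duplicate detection rules",
    "play in source-controlled meta-data",
)

def refresh_organisation(org_url, runner):
    """Run every deployment step against one organisation, in order.

    `runner(org_url, step_name)` performs the real work; the returned
    log records what ran, which is useful for the nightly refreshes.
    """
    log = []
    for step in DEPLOYMENT_STEPS:
        runner(org_url, step)
        log.append(step)
    return log
```

Keeping the step list as data makes it easy to drive the same sequence from both the UI and the command line, as the answer describes.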
We use CI extensively for Dynamics CRM. For managing solutions, I would recommend using a "clean" Dynamics CRM implementation which will be the master for your solutions and also for your "domain data". See http://msdn.microsoft.com/en-us/library/microsoft.crm.sdk.messages.importsolutionrequest.aspx for importing solutions. Also check out http://msdn.microsoft.com/en-us/library/hh547388.aspx.

TFS Best Practices Project Hierarchy

I've recently installed and started using TFS. I'm mainly using it as a source repository initially and will then get into using the Work Item features. I'm moving from using Vault as a repository and have some questions on best practices for setting up the project structure.
My current structure from Vault is:
Projects
- CustomerName1
-- Application1
-- Application2
- CustomerName2
-- Application1
-- Application2
Can I have a similar structure in TFS? Is there any good documentation that has real examples and instructions on how to set this up? What I've seen is all very basic, and the books I have don't have real-life repository examples that mimic the structure I have.
I have created a new Team Project called CustomerName1, then added other Team Projects, such as Application1, underneath CustomerName1. However, on Application1 I lose the separate sections like Work Items, Documents, Reports, and Builds.
So this doesn't appear to be set up correctly.
Thanks ...
A few questions for clarification.
Do you have any shared assemblies between Customer1 and Customer2? If so, create a single Team Project and then add sub-folders in Source Control Explorer for Customer1\App1, Customer1\App2, etc. Also add a shared-libraries folder or some such parallel to the CustomerX folders.
Do you have an existing branch/merge strategy?
You will have shared SharePoint sites, Builds, Work Items, Reports, and Documents by default for the Team Project. You will have shared TFS databases in SQL (affecting Work Item numbers) for Team Project Collections.
You can, however, set permissions for any user/group to folders via Source Control Explorer.
