Vimdiff equivalent for Select Lines

When used as a mergetool for Git, what is the equivalent in vimdiff to kdiff3's "Select Line(s) From A/B/C"? Is there a shortcut for that like Ctrl+1/2/3 in kdiff3?

Based on the Vim Reference Manual section for vimdiff, there are no built-in commands with the full functionality of Ctrl+1/2/3 in vimdiff. What I mean by "full functionality" is that in kdiff3 you could do the commands Ctrl+2, Ctrl+3, Ctrl+1 in that order, and in the merged version you end up with the diff lines from buffer B followed by the lines from buffer C followed by the lines from buffer A.
There is, however, a command for performing a more limited version of the functionality available in kdiff3. If you only want to use lines from one of your input files, then the command [count]do is available, where [count] is typically 1, 2, or 3 depending on which vim buffer you want to pull the lines from. (do stands for "diff obtain".)
For example, if you had the following merge situation:
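(a rough sketch, assuming the typical git mergetool layout of LOCAL, BASE, and REMOTE across the top and the merged file at the bottom)

+------------+------------+------------+
| 1: LOCAL   | 2: BASE    | 3: REMOTE  |
| monkey     | pig        | whale      |
+------------+------------+------------+
|      4: merged file (cursor here)    |
+--------------------------------------+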
then you could move your cursor to the merge conflict in the bottom buffer and type 1do if you wanted "monkey", 2do if you wanted "pig", or 3do if you wanted "whale".
If you do need to grab lines from multiple buffers when merging with vimdiff, then my recommendation would be to set the Git config option merge.conflictstyle to diff3 (git config merge.conflictstyle diff3) so that the common ancestor appears in the merged buffer of the file. Then just move the lines around to your liking using vim commands and delete the diff notations and any unused lines.
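With diff3 conflict style, the merged buffer shows the common ancestor between the ||||||| and ======= markers. Reusing the hypothetical example above, the conflict in the merged buffer would look something like this (branch name illustrative):

<<<<<<< HEAD
monkey
||||||| merged common ancestors
pig
=======
whale
>>>>>>> other-branch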

How can I export the data from the results table in the ImageJ Pendant Drop plugin?

I am using the Pendant Drop plugin (http://www.msc.univ-paris-diderot.fr/~daerr/misc/pendent_drop.html) to get the surface tension of droplets. It produces a table of results in a window called Results; however, it does not have the usual File, Save As, etc. options. Also, when I try the getResults and nResults commands in a macro, it doesn't give me any results and says the number of results is 0.
Do I need to edit the plugin to be able to output the results? My aim is to output the results as a CSV file.
Pendent Drop is an ImageJ2-style plugin that generates a SciJava Table. In an up-to-date Fiji installation, you can save such tables using File > Export > Table....
The macro functions getResults and nResults do not work on those tables, because they require an ImageJ1 ResultsTable window.
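If you want to save the table from a script rather than through the menu, a sketch along these lines should work in an up-to-date Fiji (this assumes the SciJava table I/O support that ships with Fiji; the output path is hypothetical):

#@ Table table
#@ IOService io
# Jython script: save the harvested SciJava table; the .csv extension
# selects the table writer.
io.save(table, "/path/to/Results.csv")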
See also this topic on the image.sc forum. In general, questions like this one are much better asked on https://forum.image.sc (see also the description of the imagej tag).

Question about SPSS Modeler (there is an obstacle to making the stream run automatically)

I have an SPSS Modeler stream that is used and updated every week to generate a certain dataset. The raw data for this stream is also renewed on a weekly basis.
In part of this stream, there is a chunk of nodes that has to be modified and updated manually every week. The sequence of this part is: Type Node => Restructure Node => Aggregate Node
To simplify the explanation of those nodes' roles, I drew an image of them as below.
Because the original raw data changes on a weekly basis, the range of the Unit value above always varies: sometimes more than 6 (maybe 100), sometimes less than 6 (maybe 3). That is why, until now, somebody has had to modify and update that chunk of nodes every week. (*The Unit value has a certain limit, 300 for now.)
However, we are now aiming to run this stream automatically, without any human intervention, so we need to customize that part to work perfectly and automatically. Please help; I will appreciate your efforts, thanks!
To automate this, I suggest trying Global nodes combined with CLEM scripts inside the execution (default script). I have a stream that calculates the first date and the last date, and those variables are used to rename files at the end of execution. I think you could use something similar, as explained here:
1) Create Derive nodes to bring in the unit values used in the weekly stream
2) Save this information in a table named 'count_variable'
3) Use a Global node named Global with a query similar to this:
@GLOBAL_MAX(variable created in (2)) (only to record the number of variables; step 2 created a table with only one value, so @GLOBAL_MAX will simply return the number of variables).
4) The query inside the execution tab will be similar to this:
execute count_variable                        # run the Table node created in step 2
var tabledata
set tabledata = count_variable.output         # take the Table node's output object
set count_variable = value tabledata at 1 1   # read the single cell (row 1, column 1)
execute Global                                # refresh the global values with that count
5) You can now use the number of variables through the already created 'count_variable' value.
It's not easy to explain just by typing, but I hope this has been helpful.
I think there is a better, simpler, and more effective (yet risky, due to the node's input data requirements) solution to your problem. It is called the Transpose node, and it does exactly that: pivots your table. It is only available from version 18.1 on, though. Here's an example:
https://developer.ibm.com/answers/questions/389161/how-does-new-feature-partial-transpose-work-in-sps/

Delete variables based on the number of observations

I have an SPSS file that contains about 1000 variables and I have to delete the ones having 0 valid values. I can think of a loop with an if statement but I can't find how to write it.
The simplest way would be to use the spssaux2.FindEmptyVars Python function like this:
begin program.
import spssaux2
spssaux2.FindEmptyVars(delete=True)
end program.
If you don't already have the spssaux2 module installed, you would need to get it from the SPSS Community website or the IBM Predictive Analytics site and save it in the python\lib\site-packages directory under your Statistics installation.
Otherwise, the VALIDATEDATA command, if you have it, will identify the variables violating such rules as a maximum percentage of missing values, but you would have to turn that output into a DELETE VARIABLES command. You could also look for variables with zero valid values using, say, DESCRIPTIVES, and select out the ones with N=0.
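For example, a minimal sketch (DESCRIPTIVES covers numeric variables only; the N column in its output is the count of valid cases for each variable):
DESCRIPTIVES VARIABLES=ALL.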
If you've never worked with python in SPSS, here's a way to get the job done without it (not as elegant, but should do the job):
This will count the valid cases in each variable, and select only those that have 0 valid cases. Then you'll manually copy the names of these variables into a syntax command that will delete them.
DATASET NAME Orig.
DATASET DECLARE VARLIST.
AGGREGATE /OUTFILE='VARLIST'/BREAK=
/**list_all_the_variable_names_here = NU(*FirstVarName to *LastVarName).
DATASET ACTIVATE VARLIST.
VARSTOCASES /MAKE NumValid FROM *FirstVarName to *LastVarName/INDEX=VarName(NumValid).
SELECT IF NumValid=0.
EXECUTE.
Pause here to copy the remaining names in the list and complete the syntax, then continue:
DATASET ACTIVATE Orig.
DELETE VARIABLES *paste_here_all_the_remaining_variable_names_from_varlist .
Notes:
* I put stars where you have to replace my text with your variable names.
** If the variables are neatly named like Q1, Q2, Q3 .... Q1000, you can use the "FirstVarName to LastVarName" form (Q1 to Q1000) instead of listing all the variable names.
BTW it is of course possible to do this completely automatically without manually copying those names (using only syntax, no Python), but the added complexity is not worth bothering with for a single use...

Find size contributed by each external library on iOS

I'm trying to reduce my App Store binary size and we have lots of external libs that might be contributing to the size of the final ipa. Is there any way to find out how much each external static lib takes up in the final binary (other than removing each one and measuring)?
All of this information is contained in the link map, if you have the patience for sifting through it (for large apps, it can be quite large). The link map has a listing of all the libraries, their object files, and all symbols that were packaged into your app, all in human-readable text. Normally, projects aren't configured to generate them by default, so you'll have to make a quick project file change.
From within Xcode:
Under 'Build Settings' for your target, search for "map"
In the results below, under the 'Linking' section, set 'Write Link Map File' to "Yes"
Make sure to make note of the full path and file name listed under 'Path to Link Map File'
The next time you build your app you'll get a link map dumped to that file path. Note that the path is relative to your app's location in the DerivedData folder (usually ~/Library/Developer/Xcode/DerivedData/<your-app-name>-<random-string-of-letters-and-numbers>/Build/Intermediates/..., but YMMV). Since it's just a text file, you can read it with any text editor.
The contents of the link map are divided into 3 sections, of which 2 will be relevant to what you're looking for:
Object Files: this section contains a listing of all of the object files included in your final app, including your own code and that of any third-party libraries you've included. Importantly, each object file also lists the library where it came from;
Sections: this section, not relevant to your question, contains a list of the processor segments and their sections;
Symbols: this section contains the raw data that you're interested in: a list of all symbols/methods with their absolute location (i.e. address in the processor's memory map), size, and most important of all, a cross-reference to their containing object module (under the 'File' column).
From this raw data, you have everything you need to do the required size calculation. From the Object Files section, you see that each library is made up of N constituent object modules; from the Symbols section, you see that each object module contains M symbols, each occupying some size. A given library's total contribution, then, is the sum of the sizes of all the symbols across all of its object modules. To perform the calculation itself, I'm sorry to say that I'm not aware of any existing tools that will do the requisite processing for you, but given that the link map is just a text file, with a little script magic and ingenuity you can construct a script to do the heavy lifting.
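For example, here is a minimal Python sketch of such a script; it assumes the usual three-part link map layout ("# Object files:", "# Sections:", "# Symbols:") and is meant as a starting point, not a robust tool:

#!/usr/bin/env python3
# Sum per-library symbol sizes from an Xcode link map.
# Usage: python3 linkmap_sizes.py TwitterInstant-LinkMap-normal-x86_64.txt
import re
import sys
from collections import defaultdict

# "[  8] /path/libFoo.a(bar.o)" in the "# Object files:" section
OBJ_RE = re.compile(r'^\[\s*(\d+)\]\s+(.*)$')
# "0x100087161  0x0000001B  [  8] symbol name" in the "# Symbols:" section
SYM_RE = re.compile(r'^0x[0-9A-Fa-f]+\s+(0x[0-9A-Fa-f]+)\s+\[\s*(\d+)\]')

def sizes_per_library(path):
    lib_of = {}                # object-file index -> library (or object) name
    totals = defaultdict(int)  # library name -> total symbol size in bytes
    section = None
    with open(path, errors='replace') as f:
        for line in f:
            if line.startswith('#'):
                if 'Object files' in line:
                    section = 'objects'
                elif 'Sections' in line:
                    section = 'sections'
                elif 'Symbols' in line:
                    section = 'symbols'
                continue
            if section == 'objects':
                m = OBJ_RE.match(line)
                if m:
                    # Reduce "/path/libFoo.a(bar.o)" to "libFoo.a"
                    lib = m.group(2).split('(')[0].rsplit('/', 1)[-1]
                    lib_of[int(m.group(1))] = lib
            elif section == 'symbols':
                m = SYM_RE.match(line)
                if m:
                    size = int(m.group(1), 16)
                    totals[lib_of.get(int(m.group(2)), '<unknown>')] += size
    return totals

if __name__ == '__main__':
    for lib, size in sorted(sizes_per_library(sys.argv[1]).items(),
                            key=lambda kv: -kv[1]):
        print('%10d  %s' % (size, lib))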
For example, I have a little sample project that links to the following library: https://github.com/ColinEberhardt/LinqToObjectiveC (the sample project itself is from a nice tutorial on ReactiveCocoa, here: http://www.raywenderlich.com/62699/reactivecocoa-tutorial-pt1), and I want to know how much space it occupies. I've generated a link map, TwitterInstant-LinkMap-normal-x86_64.txt (it runs in the simulator). In order to find all object modules included by the library, I do this:
$ grep -i "libLinqToObjectiveC.a" TwitterInstant-LinkMap-normal-x86_64.txt
which gives me this:
[ 8] /Users/XXX/Library/Developer/Xcode/DerivedData/TwitterInstant-ecppmzhbawtxkwctokwryodvgkur/Build/Products/Debug-iphonesimulator/libLinqToObjectiveC.a(LinqToObjectiveC-dummy.o)
[ 9] /Users/XXX/Library/Developer/Xcode/DerivedData/TwitterInstant-ecppmzhbawtxkwctokwryodvgkur/Build/Products/Debug-iphonesimulator/libLinqToObjectiveC.a(NSArray+LinqExtensions.o)
[ 10] /Users/XXX/Library/Developer/Xcode/DerivedData/TwitterInstant-ecppmzhbawtxkwctokwryodvgkur/Build/Products/Debug-iphonesimulator/libLinqToObjectiveC.a(NSDictionary+LinqExtensions.o)
The first column contains the cross-references to the symbol table that I need, so I can search for those:
$ cat TwitterInstant-LinkMap-normal-x86_64.txt | grep -e "\[ 8\]"
which gives me:
0x100087161 0x0000001B [ 8] literal string: PodsDummy_LinqToObjectiveC
0x1000920B8 0x00000008 [ 8] anon
0x100093658 0x00000048 [ 8] l_OBJC_METACLASS_RO_$_PodsDummy_LinqToObjectiveC
0x1000936A0 0x00000048 [ 8] l_OBJC_CLASS_RO_$_PodsDummy_LinqToObjectiveC
0x10009F0A8 0x00000028 [ 8] _OBJC_METACLASS_$_PodsDummy_LinqToObjectiveC
0x10009F0D0 0x00000028 [ 8] _OBJC_CLASS_$_PodsDummy_LinqToObjectiveC
The second column contains the size of the symbol in question (in hexadecimal), so if I add them all up, I get 0x103, or 259 bytes.
Even better, I can do a bit of stream hacking to whittle it down to the essential elements and do the addition for me:
$ cat TwitterInstant-LinkMap-normal-x86_64.txt | grep -e "\[ 8\]" | grep -e "0x" | awk '{print $2}' | xargs printf "%d\n" | paste -sd+ - | bc
which gives me the number straight up:
259
Doing the same for "\[ 9\]" (13016 bytes) and "\[ 10\]" (5503 bytes), and adding them to the previous 259 bytes, gives me 18778 bytes.
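To avoid repeating that pipeline for each index by hand, a small shell loop over the three object-module indices does the same job (a sketch; the same caveats about spacing and quoting apply):

for i in 8 9 10; do
  printf '[%s] ' "$i"
  grep -e "\[ $i\]" TwitterInstant-LinkMap-normal-x86_64.txt | grep -e "^0x" \
    | awk '{print $2}' | xargs printf "%d\n" | paste -sd+ - | bc
done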
You can certainly improve upon the stream hacking I've done here to make it a bit more robust (in this implementation, you have to make sure you get the exact number of spaces right and quote the brackets), but you at least get the idea.
Make a .ipa file of your app and save it on your system.
Then open the terminal and execute the following command:
unzip -lv /path/to/your/app.ipa
It will return a table of data about your .ipa file. The size column has the compressed size of each file within your .ipa file.
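To spot the biggest contributors quickly, you can sort that listing numerically by the size column (column 3 of unzip -lv output; note the header and footer lines will be mixed into the result):

unzip -lv /path/to/your/app.ipa | sort -k3 -rn | head -20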
I think you should be able to extract the information you need from this:
symbols -w -noSources YourFileHere
Ref: https://devforums.apple.com/message/926442#926442
IIRC, it isn't going to give you clear summary information on each lib, but you should find that the functions from each library should be clustered together, so with a bit of effort you can calculate the approximate contribution from each lib.
Also make sure that you set Generate Debug Symbols to NO in your build settings. This can reduce the size of your static library by about 30%.
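For reference, the underlying build setting is GCC_GENERATE_DEBUGGING_SYMBOLS, so in an .xcconfig for the library target this would be:

GCC_GENERATE_DEBUGGING_SYMBOLS = NO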
In case it's part of your concern, a static library is just the relevant .o files archived together plus some bookkeeping. So a 1.7 MB static library (even if the code within it is the entire 1.7 MB) won't usually add 1.7 MB to your product. The usual rules about dead code stripping will apply.
Beyond that you can reduce the built size of your code. The following probably isn't a comprehensive list.
In your target's build settings look for 'Optimization Level'. By switching that to 'Fastest, Smallest -Os' you'll permit the compiler to sacrifice some speed for size.
Make sure you're building for thumb, the more compact ARM code. Assuming you're using LLVM that means making sure you don't have -mno-thumb anywhere in your project settings.
Also consider which architectures you want to build for. Apple doesn't allow submission of an app that supports both ARMv6 and the iPhone 5 screen, and has dropped ARMv6 support entirely from the latest Xcode. So there's probably no point including it at this point.

TFS: Merging back into main branch

We have a Current branch where the main development happens. For a while I have been working on something kind of experimental in a separate branch. In other words, I branched what I needed from the Current branch into an Experimental branch. While working I have regularly merged Current into Experimental so that I have the changes others have made, and so that I am sure what I make works with their changes.
I now want to merge back into Current. First I merged Current into Experimental, compiled and made sure everything was working. So in my head, Experimental and Current should be "in sync". But when I try to merge Experimental back into Current, I get a whole bunch of conflicts. But I thought I had already kind of solved those when I merged Current into Experimental.
What is going on? Have I totally misunderstood something? How can I do this smoothly? Really don't want to go through all of those conflicts...
When you click Resolve on an individual conflict, what does the summary message say? If your merges from Current -> Experimental were completed without major manual work, it should be something like "X source, 0 target, Y both, 0 conflicting." In other words, there are no content blocks in the target (Current) file that aren't already in the source branch's copy (Experimental). You can safely use the AutoMerge All button.
Note: AutoMerge should be safe regardless. It's optimized to be conservative about early warnings, not for the ability to solve every case. But I recognize that many of us -- myself included -- like to fire up the merge tool when there's any question. In the scenario described, IMO, even the most skittish can rest easy.
Why is there a conflict at all? And what if the summary message isn't so cut and dried? Glad you asked :) Short answer: because the calculation that determines the common ancestor ("base") of related files depends heavily on how prior merge conflicts between them were resolved. Simple example:
1. Set up two branches, A and B.
2. Make edits to A\foo.cs and B\foo.cs in separate parts of the file.
3. Merge A -> B.
4. AutoMerge the conflict.
5. Merge B -> A.
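In tf.exe terms, that sequence looks roughly like this (a sketch; the server paths are hypothetical):

tf branch $/Project/A $/Project/B
rem ...edit A\foo.cs and B\foo.cs in separate parts, check in both...
tf merge $/Project/A $/Project/B /recursive
tf resolve /auto:AutoMerge
tf checkin
tf merge $/Project/B $/Project/A /recursive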
TFS must flag this sequence of events as conflicting. The closest common ancestor between B\foo.cs;4 and A\foo.cs;2 lies all the way back at step 1, and both sides have obviously changed since then.
It's tempting to say that A & B are in sync after step 4. (More precisely: that the common ancestor for step 5's merge is version #2). Surely a successful content merge implies that B\foo.cs contains all the changes made to date? Unfortunately there are a number of reasons you cannot assume this:
Generality: not all conflicts can be AutoMerged. You need criteria that apply to both scenarios.
Correctness: even when AutoMerge succeeds, it doesn't always generate valid code. A classic example arises when two people add the same field to different parts of a class definition.
Flexibility: every source control user has their own favorite merge tools. And they need the ability to continue development/testing between the initial Resolve decision ["need to merge the contents somehow, someday"] and the final Checkin ["here, this works"].
Architecture: in a centralized system like TFS, the server simply can't trust anything but its own database + the API's validation requirements. So long as the input meets spec, the server shouldn't try to distinguish how various types of content merges were performed. (If you think the scenarios so far are easily distinguished, consider: what if the AutoMerge engine has a bug? What if a rogue client calls the webservice directly with arbitrary file contents? Only scratching the surface here...servers have to be skeptical for a reason!) All it can safely calculate is you sent me a resulting file that doesn't match the source or target.
Putting these requirements together, you end up with a design that lumps our actions in step 4 into a fairly broad category that also includes manual merges resulting from overlapping edits, content merges [auto or not] provided by 3rd party tools, and files hand-edited after the fact. In TFS terminology this is an AcceptMerge resolution. Once recorded as such, the Rules of Merge(tm) have to assume the worst in pursuit of historical integrity and the safety of future operations. In the process your semantic intentions for Step 4 ("fully incorporate into B every change that was made to A in #2") were dumbed down to a few bytes of pure logic ("give B the following new contents + credit for handling #2"). While unfortunate, it's "just" a UX / education problem. People get far angrier when the Rules of Merge make bad assumptions that lead to broken code and data loss. By contrast, all you have to do is click a button.
FWIW, there are many other endings to this story. If you chose Copy From Source Branch [aka AcceptTheirs] in step 4, there would be no conflict in step 5. Ditto if you chose an AcceptMerge resolution but happened to commit a file with the same MD5 hash as A\foo.cs;2. If you chose Keep Target [aka AcceptYours] instead, the downstream consequences change yet again, though I can't remember the details right now. All of the above get quite complex when you add other changetypes (especially Rename), merge branches that are far more out of sync than in my example, cherry pick certain version ranges and deal with the orphans later, etc....
EDIT: as fate would have it, someone else just asked the exact same question on the MSDN forum. As tends to be my nature, I wrote them another long answer that came out completely different! (though obviously touching on the same key points) Hope this helps: http://social.msdn.microsoft.com/Forums/en-US/tfsversioncontrol/thread/e567b8ed-fc66-4b2b-a330-7c7d3a93cf1a
This has happened to me before. When TFS merges Experimental into Current, it does so using the workspaces on your hard drive. If your Current workspace is out of date on your local computer, TFS will get merge conflicts.
(Experimental on HD) != (Current in TFS) != (Old Current on HD)
Try doing a forced get of Current to refresh your local copy of Current, and try the merge again.
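For example (the server path is hypothetical):

tf get $/Project/Current /recursive /force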
You probably have lines like this before you start the merge...
Main branch - Contains code A, B, C
Current branch - Contains code A, B, C, D, E
Experimental branch - Contains code A, B, C, D, F, G, H
When you push from Current to Exp, you are merging feature E into the experimental branch.
When you then push from Exp to Current, you still have to merge F, G, and H. This is where your conflicts are likely rooted.
----Response to 1st comment----
Do you auto merge, or use the merge tool?
What is an example of something that is "in conflict"?
