VS2019 .editorconfig is not respected

We're struggling to set a team-standard indent style for JavaScript/TypeScript. By all indications, the settings in .editorconfig are not overriding user preferences, even though VS indicates at the bottom of the editor that the file is in effect when we open the solution.
For testing purposes, I've created the world's simplest .editorconfig with a ridiculous indent size value, and added it to the root folder of the solution:
# All files
[*]
indent_style = space
indent_size = 27
Then, under Tools | Options | Text Editor | JavaScript/TypeScript | Tabs, I've set a different indent size value.
So - if .editorconfig is really being used, any attempt to reformat a Typescript file should result in 27 spaces of indenting at each level. No dice.
I've tried moving the file to the same folder as the Typescript files I want to format. No dice.
I've verified that "Follow project coding conventions" is selected under Tools | Options | Text Editor | General. I've also turned it off. No dice.
It always formats to the indent size specified in Tools | Options.
Is there some magic sauce I'm missing?

Under Project Properties > Code Analysis, enable the "Run on live analysis" option.
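Separately, it can be worth scoping the settings to the file types in question instead of relying on a bare [*] glob, and marking the file as the top-level one so no .editorconfig higher in the directory tree wins. A sketch (the glob is an assumption; adjust to your layout):

```
# Place at the solution root; root = true stops the search here
root = true

[*.{js,ts}]
indent_style = space
indent_size = 4
```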


How to print Java code on A3 page avoiding line-wrapping

I have to print Java code that sometimes reaches 300 columns (characters per line) on A3 paper, and whatever editor I use (e.g. TextMate) wraps the lines in order to fit A4 paper.
Any suggestions?
cheers,
Asterios
Your editor undoubtedly has either a Page Setup dialog or a Preferences dialog as part of the Print dialog which will allow you to set the paper size to use for printing.
Even Notepad supports this
I finally managed to print it using enscript. Here is the command I used to convert Java code to PDF (and then used the PDF to print):
enscript -r -Ejava -M A3 -C -f "Courier8" input.java -o - | ps2pdf - output.pdf
where:
-r prints in landscape mode
-C prints the line numbers
-f changes the font and size
-M sets the output media to A3 (default is A4)
-Ejava adds syntax highlighting (you can also use --color if you need colors in the highlighting, but they don't print nicely in greyscale)
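Before committing to A3, it can help to measure how wide the code actually is; a quick sketch (input.java is a placeholder for your source file):

```shell
# Print the length of the longest line, to decide whether A3 is warranted
awk '{ if (length($0) > max) max = length($0) } END { print max }' input.java
```

If the longest line fits comfortably in ~130 columns, landscape A4 may be enough.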
It seems unlikely that every editor tries to format for A4. Which other editors have you tried? Does TextMate not have a page-size option? (Hmm... it seems not.)
Try a different editor that does let you set page size. Word, even.

Joe's own editor - how to change the tab size

I'm having trouble changing the tab size in Joe.
I have copied joerc to $HOME and have edited the -tab line to -tab 4, but this hasn't changed the option in Joe. Also, the number 4 is green instead of blue when I edit joerc, so I think it's reading it wrong.
The real solution is:
Create a file $HOME/.joerc (NOT .joe as at least the Debian joerc suggests!)
FIRST LINE must be :include /etc/joe/joerc
Then, a line containing just a * and a newline character
Then, -tab 4 and -istep 4, each on a single line.
Add a blank line at the end.
You may also add further options with other masks.
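Putting those steps together, the resulting $HOME/.joerc would look like this (a sketch; note the trailing blank line):

```
:include /etc/joe/joerc
*
-tab 4
-istep 4

```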
I've wasted about 20 mins trying to set tab size too. Here is the solution:
I. Open:
/etc/joe/joerc
II. Find row containing -tab nnn and change it to:
-tab 4
(I assume that you want to change tab size to 4. If you want different value, please replace all the 4s with your value)
III. Find -istep nnn and change it to:
-istep 4
IV. Save & exit
This will set tab size 4 for files WITHOUT extension. If you want to change tab size for files with common extensions like *.java:
I. open /etc/joe/ftyperc
II. Find your extension, for example *.java. Initially it looks like:
JAVA
*.java
-autoindent
-syntax java
-smarthome
-smartbacks
-purify
-cpara >#!;*/%
III. You have to comment out -autoindent (insert a tab before it) and add -istep 4 below -cpara. It should look like:
JAVA
*.java
	-autoindent
-syntax java
-smarthome
-smartbacks
-purify
-cpara >#!;*/%
-istep 4
In case anyone else runs into this, I am running an ancient version of joe on AIX and after some painful trial and error it turned out that -smartbacks was the problem for me. I commented that line out and tabs work, put it back and they go back to 2. Probably fixed in a later version, but hopefully this helps someone else with the same problem.
JAVA
*.java
-spaces
-tab 4
-istep 4
-indentc 32
-autoindent
-syntax java
-smarthome
-smartbacks
-purify
Each time the tab key is pressed (using joe 4.6) it inserts 4 spaces, after following these steps:
Execute sudo joe /etc/joe/joerc
Find the row containing -tab nnn Tab width, change it to -tab 4 Tab width and make sure that there is no whitespace at its left side.
A few lines down, find the row containing -spaces TAB inserts spaces instead of tabs and make sure that there is no whitespace at its left side.
Save and exit
This works here for files without extension, for .cpp files, for .java files, for .c files, for .txt files, etc.

Correct word-count of a LaTeX document

I'm currently searching for an application or a script that does a correct word count for a LaTeX document.
Up till now, I have only encountered scripts that work on a single file, but what I want is a script that can safely ignore LaTeX keywords and also traverse linked files, i.e. follow \include and \input links, to produce a correct word count for the whole document.
With vim, I currently use ggVG then g CTRL+G, but obviously that shows the count for the current file only and does not ignore LaTeX keywords.
Does anyone know of any script (or application) that can do this job?
I use texcount. The webpage has a Perl script to download (and a manual).
It will include tex files that are included (\input or \include) in the document (see -inc), supports macros, and has many other nice features.
When following included files you will get detail about each separate file as well as a total. For example here is the total output for a 12 page document of mine:
TOTAL COUNT
Files: 20
Words in text: 4188
Words in headers: 26
Words in float captions: 404
Number of headers: 12
Number of floats: 7
Number of math inlines: 85
Number of math displayed: 19
If you're only interested in the total, use the -total argument.
I went with icio's comment and did a word count on the PDF itself by piping the output of pdftotext to wc:
pdftotext file.pdf - | wc -w
latex file.tex
dvips -o - file.dvi | ps2ascii | wc -w
should give you a fairly accurate word count.
To add to @aioobe's answer:
If you use pdflatex, just do
pdftops file.pdf
ps2ascii file.ps|wc -w
I compared this count to the count in Microsoft Word for a 1599-word document (according to Word). pdftotext produced a text with 1700+ words. texcount did not include the references and produced 1088 words. ps2ascii returned 1603 words, 4 more than Word.
I'd say that's a pretty good count. I am not sure where the 4-word difference comes from, though. :)
In the Texmaker interface you can get the word count by right-clicking in the PDF preview.
Overleaf has a word count feature, in both Overleaf v1 and v2.
I use the following VIM script:
function! WC()
    let filename = expand("%")
    let cmd = "detex " . filename . " | wc -w | perl -pe 'chomp; s/ +//;'"
    let result = system(cmd)
    echo result . " words"
endfunction
… but it doesn’t follow links. This would basically entail parsing the TeX file to get all linked files, wouldn’t it?
The advantage over the other answers is that it doesn’t have to produce an output file (PDF or PS) to compute the word count so it’s potentially (depending on usage) much more efficient.
Although icio’s comment is theoretically correct, I found that the above method gives quite accurate estimates for the number of words. For most texts, it’s well within the 5% margin that is used in many assignments.
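A similar strip-and-count approximation works without detex at all, using a crude sed filter (a rough sketch, not TeX-aware: it drops % comments and bare \commands but keeps their brace-delimited arguments; main.tex is a placeholder):

```shell
# Strip % comments and backslash commands, then count the remaining words
sed -e 's/%.*//' -e 's/\\[a-zA-Z]*//g' main.tex | wc -w
```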
If the use of a vim plugin suits you, the vimtex plugin has integrated the texcount tool quite nicely.
Here is an excerpt from their documentation:
:VimtexCountLetters Shows the number of letters/characters or words in
:VimtexCountWords the current project or in the selected region. The
count is created with `texcount` through a call on
the main project file similar to: >
texcount -nosub -sum [-letter] -merge -q -1 FILE
<
Note: Default arguments may be controlled with
|g:vimtex_texcount_custom_arg|.
Note: One may access the information through the
function `vimtex#misc#wordcount(opts)`, where
`opts` is a dictionary with the following
keys (defaults indicated): >
'range' : [1, line('$')]
'count_letters' : 0/1
'detailed' : 0
<
If `detailed` is 0, then it only returns the
total count. This makes it possible to use for
e.g. statusline functions. If the `opts` dict
is not passed, then the defaults are assumed.
*VimtexCountLetters!*
*VimtexCountWords!*
:VimtexCountLetters! Similar to |VimtexCountLetters|/|VimtexCountWords|, but
:VimtexCountWords! show separate reports for included files. I.e.
presents the result of: >
texcount -nosub -sum [-letter] -inc FILE
<
The nice part about this is how extensible it is. On top of counting the number of words in your current file, you can make a visual selection (say two or three paragraphs) and then only apply the command to your selection.
For a very basic article class document I just look at the number of matches for a regex to find words. I use Sublime Text, so this method may not work for you in a different editor, but I just hit Ctrl+F (Command+F on Mac) and then, with regex enabled, search for
(^|\s+|"|((h|f|te){)|\()\w+
which should ignore text declaring a floating environment or captions on figures as well as most kinds of basic equations and \usepackage declarations, while including quotations and parentheticals. It also counts footnotes and \emphasized text and will count \hyperref links as one word. It's not perfect, but it's typically accurate to within a few dozen words or so. You could refine it to work for you, but a script is probably a better solution, since LaTeX source code isn't a regular language. Just thought I'd throw this up here.

tf diff: why doesn't the command line diff recognize a valid version of a file sometimes?

I'm using the TFS Power Toys with PowerShell to get the history of a file. Like so:
$fileName = "$/MyDir/MyFile.cs"
$results = @(Get-TfsItemHistory $fileName)
I get a nice result set that has many ChangesetId's. However, when I run tf diff (tf diff /version:C36826~C36680 "$/MyDir/MyFile.cs" /format:unified) for some of the ChangesetIds I get:
Item $/MyDir/MyFile.cs;C37400 was not found in source control.
However, I can use the compare tool from Visual Studio to compare those two versions of the file. Am I doing something wrong? It doesn't seem to have anything to do with the age of the file; there are instances where the command-line diff will show a changeset but not a changeset that happened earlier in the day. When I view those changesets with the GUI tool they have many lines that have changed; the changeset isn't empty.
What's up with this thing? Should I submit a bug report? This looks like a bug to me.
Maybe this has something to do with it: the last diff that works gives me "\ No newline at end of file".
I'll bet the file has been renamed. Luckily you are already using Powershell, so this is fairly straightforward to track down:
tfhistory "$/MyDir/MyFile.cs" -all | select changesetid, @{name="Path"; expression={$_.changes[0].item.serveritem}} | ft -auto
You'll then need to run diff using a slightly more verbose syntax:
tf diff "$/MyOtherDir/MyFile.old.cs;1234" "$/MyDir/MyFile.cs;5678"
[EDIT] The first command should print something like:
C:\workspaces\temp> tfhist rentest2 -all | select changesetid, @{name="Path"; expression={$_.changes[0].item.serveritem}} | ft -auto
ChangesetId Path
----------- ----
10725 $/Test-ConchangoV2/rentest2
10142 $/Test-ConchangoV2/rentest
As you can see, I personally have Get-TfsItemHistory aliased to 'tfhist' for even shorter typing. 'tfhistory' is what the PS console in the Power Tools uses, so that's what I put in my original instructions.

get a set of files that have been modified after a certain date

Does anyone have a handy powershell script that gets a set of files from TFS based on a modification date? I'd like to say "give me all the files in this folder (or subfolder) that were modified after X/Y/ZZZZ" and dump those files to a folder other than the folder they would normally go to. I know enough powershell to hack about and get this done, eventually, but I'm hoping to avoid that.
Make sure you have the Team Foundation 2015 Power Tools installed. It comes with a PowerShell snapin. You can run the PowerShell console file right from its startup group or you can execute Add-PSSnapin Microsoft.TeamFoundation.PowerShell. Then cd to your workspace and execute:
Get-TfsItemProperty . -r | Where {$_.CheckinDate -gt (Get-Date).AddDays(-30)} |
Format-Table CheckinDate,TargetServerItem -auto
CheckinDate TargetServerItem
----------- ----------------
9/14/2009 1:29:23 PM $/Foo/Trunk/Bar.sln
9/29/2009 5:08:26 PM $/Foo/Trunk/Baz.sln
To dump that info to a dir:
Get-TfsItemProperty . -r | Where {$_.CheckinDate -gt (Get-Date).AddDays(-30)} |
Select TargetServerItem > c:\recentlyChangedFiles.txt
To copy those files to another dir (this assumes you have them pulled down locally into a workfolder):
Get-TfsItemProperty . -r | Where {$_.CheckinDate -gt (Get-Date).AddDays(-30)} |
ForEach { Copy-Item -Path $_.LocalItem -Destination C:\SomeDir -WhatIf }
Note this copies files into a flat folder structure. If you want to maintain the dir structure it is a bit more involved.
Using Get-TfsItemProperty like Keith doesn't just require a workspace for the file copies. It's the wrapper for GetExtendedItems(), the server query for local info most commonly seen in Source Control Explorer. By relying on the version info it reports, you assume the files themselves were downloaded (more generally: synchronized, in the case of renames & deletes) in the last 30 days. If the workspace is not up to date, you'll miss some files / give them out-of-date names / etc. It's also quite expensive as informational commands go.
Some alternative examples:
# 1
Get-TfsChildItem $/FilesYouWant -R |
? { $_.CheckinDate -gt (Get-Date).AddDays(-30) } |
% { $_.DownloadFile((Join-Path C:\SomeDir (Split-Path $_.ServerItem -Leaf))) }
# 2
Get-TfsItemHistory $/FilesYouWant -R -All -Version "D$((Get-Date).AddDays(-30).ToString('d'))~" |
Select-TfsItem |
Select -Unique -Expand Path |
Sort |
Out-File c:\RecentlyChanged.txt
The first is a straightforward adaptation of Keith's code, using a cheaper query and eliminating the workspace dependency. It's the best option if you know a high % of the items under that dir were modified recently.
The second option queries the changeset history directly. By letting the Where clause be computed in SQL instead of on the client, this can be an order of magnitude more efficient if a low % of the items were changed recently (as is often the case). However, it will lag the item-based queries if there are lots of large changesets returned, making the server's JOIN to grab the item properties expensive and forcing our client-side duplicate removal to do a lot of work.
[Yes, I know that having -Version require a string is not very Powershell-esque; mea culpa. You could create a DateVersionSpec with new-object and call its ToString(), but that's even more work.]
I didn't show every combination of API call + desired task. Goes without saying you can use #1 to generate the file list and #2 to (re)download by modifying the latter half of the pipeline. You can even combine that copy technique with the efficiency of Get-TfsItemHistory:
# 2b, with local-to-local copying
Get-TfsItemHistory $/FilesYouWant -R -All -Version "D$((Get-Date).AddDays(-30).ToString('d'))~" |
Select-TfsItem |
Select -Unique -Expand Path |
Get-TfsItemProperty |
% { Copy-Item $_.LocalItem -Destination C:\SomeDir }
It's true this makes a 2nd roundtrip to the server, but thanks to the initial query the GetExtendedItems() call will be scoped to the precise set of items we're interested in. And of course we remove any chance that download time becomes the bottleneck. This is likely the best solution of all when the # of changesets is small and the concerns I raised about Keith's workspace synchronization aren't relevant for whatever reason.
Can I just say that having to use PowerShell to do this seems absurd.
FWIW, I've been involved with TFS from inside and outside MS for 4.5 years and have never seen this feature requested. If you could expand on the goal you're actually trying to accomplish, my guess is we could suggest a better way. Don't get me wrong, I wrote the PowerShell extensions precisely to handle oddball scenarios like this. But frequently it's a job for another tool entirely, e.g. Annotate, MSBuild, DB schema compare...
