FTP transfer adds white space after every line of code on upload

I use WinSCP to upload files to host servers over FTP. On some of them, though rarely, the transfer adds white space after every line of code (extra new lines in my case), so when I download a file back from the server, the number of code lines has doubled.

I found that some hosts use ASCII (text) as the default transfer mode, while others use binary. When I change the mode to match the one the server expects, there are no more extra new lines in the transferred files.
In WinSCP this setting is located under Options -> Transfer -> Edit -> Transfer Mode.
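
If you upload with WinSCP's scripting interface instead of the GUI, the mode can also be forced per session; a minimal sketch (the host, credentials and paths are placeholders):

option transfer binary
open ftp://user:password@ftp.example.com/
put index.php /public_html/index.php
exit

Save that as a script file and run it with winscp.com /script=upload.txt.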

Related

Make BitBucket Server Recognize Readme by Different Name

Using BitBucket Server, when viewing a folder in a repository that contains a README file, the contents of that file are automatically displayed on screen. It seems to automatically detect file names such as "README.txt" or "README.md". However, it appears to require that the filename match be exact (except for the file extension); for instance, "_README.txt" is not automatically displayed.
Is there a way to change the logic BitBucket Server uses to decide which file(s) are automatically detected and displayed? More specifically, I'd like it to detect and display files named either "README" or "_README".
The reason for adding the underscore is that it ensures that, when browsing the source code in Windows File Explorer or similar, sorting files alphabetically will (usually) show "_README" first, making it easier to notice/find.

Viewing a large production log file

I have a 22GB production.log file in my Ruby on Rails app. I want to browse/search its contents over SSH without downloading the file. Is this possible? Are there any tools?
22GB is a very large file, so opening the whole thing at once with most tools would be risky for your server. I'd recommend splitting the file into multiple parts and searching each part. For example, use this command to split the file into 1GB chunks:
split -b 1GB very_large_file small_file
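
Each chunk can then be searched in place without loading the whole log; for example, with grep (the pattern here is just an illustration, and small_file* matches the names split generates):

grep -n "RoutingError" small_file*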
Also, you should set up logrotate on your server to keep the log file from growing this big in the first place.
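
A minimal logrotate sketch for a Rails app might look like this (the path, schedule and retention count are assumptions to adapt; copytruncate lets the running app keep writing to the same file handle):

/var/www/myapp/log/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}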

File encoding seems to break edited PHP file

I have a WordPress site hosted with BlueHost. When I FTP onto the server, edit a WordPress theme file and re-upload it, I get a white screen with the following error:
Parse error: syntax error, unexpected '}' in
/home/challey3/public_html/wp-content/themes/challengers/page-invoice-payment.php
on line 1
The code is downloaded directly to my hard drive and edited using PhpStorm. I've noticed that when I open the file in PhpStorm there is an additional blank line between each line of code, whereas the extra lines are not present when editing through Notepad.
The change to the code was adding a jQuery snippet to the HTML; no modifications were made to the PHP itself. Undoing the snippet and re-uploading has the same effect; however, if I do a Git revert and re-upload, the problem is resolved.
The only thing I can think is that the file is being encoded differently through PhpStorm/Windows and uploading it back to the server is somehow breaking things. The server is running Ubuntu.
PhpStorm does not modify the file during transfer (upload or download), so it must be a server-side (FTP) setting.
To my knowledge, it's about the line endings used in that file plus the specific FTP server config.
My assumption (based on personal experience with two such already "broken" sites, plus info from other users who faced the same thing) is that during upload the FTP server reacts to the CR in the line-ending sequence (CRLF is what Windows uses) and tries to "fix" it by replacing it with LF.
The FTP server may simply be doing it wrong: instead of replacing the whole CRLF pair it replaces only the CR, so you end up with LFLF (two Unix-style line endings, the second of which produces that extra empty line).
If I'm correct, try converting the file in the IDE to use Unix-style LF line separators first (either via the Status Bar, next to the encoding indicator, or File | Line Separators).
In any case, here is the ticket in PhpStorm's issue tracker; maybe one day they can offer a better solution: https://youtrack.jetbrains.com/issue/WI-9103. Watch it (star/vote/comment) to get notified of any progress.
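
If converting in the IDE isn't convenient, the same normalization can be done from a shell before uploading; a one-liner with GNU sed that strips the CR from each CRLF (the file name is the one from the error message above):

sed -i 's/\r$//' page-invoice-payment.php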

Why am I sometimes getting files filled with zeros at their end after being downloaded?

I'm developing a download manager using Indy and Delphi XE (the application uses multithreading to attempt several connections to the server). Everything works fine, but sometimes the final downloaded file is broken, and when I check the downloaded temp files I see that two or three of them are filled with zeros at their end (each temp file is the result of one connection's download).
The larger the file, the more broken temp files I get.
For example, in one of the temp files, which was 65,536,000 bytes, only the range 0-34,359,426 was valid, and from 34,359,427 to 64,535,999 it was full of zeros. If I delete those zeros, the application automatically downloads the missing segments, and the result (provided the problem doesn't happen again) is a healthy downloaded file.
I want to get rid of those zeros at the end of the temp files without losing download speed.
P.S. I'm using a TFileStream, passing it directly to TIdHTTP, and downloading the files using the GET method.
Additional info: I handle the OnWork event, which assigns AWorkCount to a public Int64 variable. Each time a file finishes downloading, the downloaded size (that Int64 variable) is logged to a text file, and according to the log the file has been downloaded completely (even those zero bytes).
Make sure the server actually supports downloading byte ranges before you request a range to download. If the server does not support ranges, a requested range will be ignored and the entire file will be sent instead. If you are not already doing so, you should be using TIdHTTP.Head() to test for range support before then calling TIdHTTP.Get(). You also need to do this anyway to detect whether the remote file has been altered since the last time you downloaded it; any decent download manager needs to be able to handle things like that.
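
A minimal sketch of that check, assuming Indy 10's TIdHTTP.Head() and its Response.AcceptRanges property (URL validation and error handling are left to the caller):

uses
  SysUtils, IdHTTP;

function ServerSupportsByteRanges(const AURL: string): Boolean;
var
  HTTP: TIdHTTP;
begin
  HTTP := TIdHTTP.Create(nil);
  try
    HTTP.Head(AURL); // fetches the headers only, no body
    // 'Accept-Ranges: bytes' advertises byte-range support;
    // an empty or 'none' value means a Range request would be ignored
    Result := SameText(HTTP.Response.AcceptRanges, 'bytes');
  finally
    HTTP.Free;
  end;
end;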
Also keep in mind that if TIdHTTP knows up front how many bytes are being transferred, it will pre-allocate the size of the destination TStream before downloading data into it. This is done to speed up the transfer and optimize disk I/O when using a TFileStream. So you should NOT use a TFileStream pointed at the same file as the destination for multiple simultaneous downloads, even if they are writing to different areas of the file. The pre-allocations of multiple TFileStream objects will likely trample over each other as each tries to set the file size to a different position. If you need to download a file in multiple pieces simultaneously, then either:
1) download each piece to a separate file and copy them into the final file as needed once you have all of the pieces that you need (see the sketch below), or
2) use a custom TStream class, or Indy's TIdEventStream class, to manage the file I/O yourself so you can ignore TIdHTTP's pre-allocation attempts and ensure that multiple file I/O operations do not overlap each other incorrectly.
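
A sketch of option 1, assuming Indy 10's Request.Ranges API (the URL, byte offsets, temp file name and error policy are illustrative placeholders, not the asker's code):

uses
  SysUtils, Classes, IdHTTP;

// Downloads one segment of the remote file into its own temp file
procedure DownloadSegment(const AURL, ATempFile: string;
  AStartPos, AEndPos: Int64);
var
  HTTP: TIdHTTP;
  Dest: TFileStream;
begin
  HTTP := TIdHTTP.Create(nil);
  try
    Dest := TFileStream.Create(ATempFile, fmCreate);
    try
      // Request only this byte range; assumes a prior Head() call
      // confirmed the server advertises 'Accept-Ranges: bytes'
      with HTTP.Request.Ranges.Add do
      begin
        StartPos := AStartPos;
        EndPos := AEndPos;
      end;
      HTTP.Get(AURL, Dest);
      // 206 Partial Content means the range was honored; a 200 means
      // the server ignored it and sent the whole file instead
      if HTTP.ResponseCode <> 206 then
        raise Exception.CreateFmt('Range %d-%d was not honored',
          [AStartPos, AEndPos]);
    finally
      Dest.Free;
    end;
  finally
    HTTP.Free;
  end;
end;

The final file is then assembled by concatenating the segment files in order, once every segment has arrived intact.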

Does Windows.CopyFile create a temporary local file while source and destination are network shares?

I have a D2007 application that uses Windows.CopyFile to copy MS Word and PowerPoint files from one network folder to another. Our organization is migrating from Vista to Windows 7. One of my migrated users got an error message that displayed a partial local folder path (C:\Users\(username)\...\A100203.doc) during the copy. Does the CopyFile function cache a local copy of the document when it is copying from one network folder to another, or is it a direct write? I have never seen this error before, and the application has been running for years on Win95, Win98, Win2000, WinXP and Vista.
Windows.CopyFile does NOT cache the file on your hard drive... instead, it instructs Windows to handle the copying of the file itself (rather than you managing the streams in your own program). The output file (destination) is opened, and the input is simply read and written to it. Essentially this means the source file is spooled through system memory and then offloaded onto the destination; at no point is an additional cache file created (that would slow file copying down).
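
For reference, a minimal Delphi sketch of the call with proper error surfacing, which would turn a vague failure into a concrete Win32 error message (the paths are placeholders):

uses
  Windows, SysUtils;

procedure CopyNetworkFile(const ASource, ADest: string);
begin
  // Third parameter False = allow overwriting an existing destination
  if not Windows.CopyFile(PChar(ASource), PChar(ADest), False) then
    RaiseLastOSError; // raises EOSError with the real Win32 error code/text
end;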
You need to provide more specific information about your error... such as either the text or an actual screenshot of the offending error message. This will allow people to provide more useful answers.
The user that launches the copy will require read access to the original and write access to the target, regardless of caching (if the user has read access to the file, then the file can be written to a local cache, so caching/no-caching is irrelevant).
It's basic security to prevent someone from being able to copy files/directories among machines just because the security attributes between the machines happen to be compatible.
There's little else to say without the complete text of the error message.
