azcopy v10 - copy to destination only if destination file does not exist

My command is .\azcopy cp "source" "dest" --recursive=true
Where both source and dest are storage containers.
When I run the cp command, it seems like azcopy iterates over every file and transfers it to the destination.
Is there a way to only copy a file if it does not exist or is different in the destination?
azcopy sync does something similar, but as I understand it, it only supports local/container as source/destination and not container/container.

We've just added container-to-container support in version 10.3.
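On 10.3 or later, a container-to-container sync would look roughly like this (a sketch only; the account, container names, and SAS tokens are placeholders):
azcopy sync "https://myaccount.blob.core.windows.net/mycontainer?<SAS>" "https://myaccount.blob.core.windows.net/mycontainer1?<SAS>" --recursive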

If you want to stick with AzCopy v10, it looks like there is an --overwrite parameter which you can set to true (default), false, or prompt. By setting it to false, it won't overwrite any files that already exist. However, it also won't overwrite any files which are newer in the source -- not sure if that is a deal-breaker for you.
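For example, something like this (a sketch only; the account, container names, and SAS tokens are placeholders):
azcopy cp "https://myaccount.blob.core.windows.net/mycontainer?<SAS>" "https://myaccount.blob.core.windows.net/mycontainer1?<SAS>" --recursive=true --overwrite=false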

Your understanding is right: currently, azcopy sync only supports syncing between the local file system and a blob storage container, not container/container, and synchronization is one-way. As a workaround, you could perform the sync in two steps: sync from the blob storage source to a local file path, and then sync from that local path to the blob storage destination.
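A rough sketch of that two-step workaround (the account, container names, SAS tokens, and local staging path are placeholders):
azcopy sync "https://myaccount.blob.core.windows.net/mycontainer?<SAS>" "C:\staging" --recursive
azcopy sync "C:\staging" "https://myaccount.blob.core.windows.net/mycontainer1?<SAS>" --recursive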
Another option is to use AzCopy v8.1. The /XO and /XN parameters allow you to exclude older or newer source resources from being copied, respectively. If you only want to copy source resources that don't exist in the destination, you can specify both parameters in the AzCopy command:
AzCopy /Source:http://myaccount.blob.core.windows.net/mycontainer /Dest:http://myaccount.blob.core.windows.net/mycontainer1 /SourceKey:<sourcekey> /DestKey:<destkey> /S /XO /XN

Related

gsutil path name not passed from the command line to the destination bucket

I have spotted an odd change to the Google SDK that I cannot see any release notes about.
It appears that somewhere between versions 4.28 and 4.34, the way path names are passed through the gsutil command has changed.
Before:
gsutil cp myfolder/myfile.csv gs://mybucket/
Would copy the file into a subfolder called gs://mybucket/myfolder
Now, with the latest version, it is only copied to the top-level folder specified, gs://mybucket/
The issue I have is that I have dozens of batch files which all do the following...
for %%f in (./Myfolder/*.csv) do (
call gsutil cp Myfolder/%%f gs://mis_sc/
)
Now I realise it's a simple (but rather tedious) change to add the folder to the end of all of my gsutil commands, but we have a mix of versions across PCs, and if the older version runs with the changed script I get two folders with the same name, one under the other. Also, the logic was that the folder name on the network = bucket name, so the jobs can be very generic.
We have tested on two PCs, pre and post upgrade, to ensure it's not a PC config causing the difference in behaviour.
Any ideas, was this a deliberate change?
We are concerned that if we do update everything, will it ever revert back?
Thanks
Steve
The "Before" behavior you describe is not how gsutil was ever supposed to have worked: As noted here, "copying individually named files will result in objects named by the final path component of the source files".
If you are able to get gsutil to reproduce the "Before" behavior you describe using an older version of gsutil please provide the details to reproduce it: the gsutil version, bucket contents before copying, and source folder/object name. You can get to all the previous versions under gs://pub/gsutil_*
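For what it's worth, the "tedious change" mentioned in the question (appending the folder name to the destination) would look roughly like this in the batch files (a sketch only; the bucket gs://mis_sc and the Myfolder folder are taken from the question):
for %%f in (Myfolder\*.csv) do (
    call gsutil cp "%%f" gs://mis_sc/Myfolder/
)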

ADD command with network path in Windows Containers Dockerfiles

I'm creating some Windows Container images, but the source files I want to ADD are in a network share, \\myserver\myshare\here.
I've tried every way I can think of, but I always get the error message The system cannot find the path specified.
Is it because I have not yet found the right way to set it, or is it just not possible?
From the Docker site:
Multiple resources may be specified, but if they are files or directories then they must be relative to the source directory that is being built (the context of the build).
Is that why I can't accomplish what I need?
Full error message: GetFileAttributesEx \\myserver\myshare\here\: The system cannot find the path specified.
Whatever you ADD or COPY must be in the docker build context.
When you do this:
docker build .
That directory param (the . in the example) is the context that is copied and sent to the Docker daemon. The daemon then uses those files for COPY or ADD. It won't use any file that is not in that context.
That is the issue you are experiencing. I'm not sure how you can solve it other than by copying the files from \\myserver into your build directory.
ADD is capable of downloading files when given a URL (you would have to investigate whether it supports Windows shares).
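A rough sketch of the copy-into-the-build-context workaround (the share path comes from the question; the local folder name and the destination inside the image are assumptions):
xcopy \\myserver\myshare\here .\here /E /I
docker build .
and then reference the local copy from the Dockerfile:
COPY here/ C:/app/here/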

Is git unpack-objects of any use?

I don't understand the need for, or use of, the git unpack-objects command.
If I have a pack file outside of my repository and run git unpack-objects on it, the pack file will be "decompressed" and all the object files will be placed in .git/objects. But what is the need for this? If I just place the pack and index files in .git/objects, I can still see all the commits, have a functional repo, and have less space occupied by my .git since the pack file is compact.
So why would anyone need to run this command?
A pack file uses the same format that is used for normal transfer over the network. So, I can think of two main reasons to use the manual command instead of the network:
1) having a similar update workflow in an environment without a network configured between the machines, or where the network cannot be used for other reasons
2) debugging/inspecting the contents of the transfer
For 1), you could just use a disk or any kind of mobile media for your files. It could be encrypted, for instance.
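A minimal sketch of that disk-based workflow (the pack file name is a placeholder):
git rev-list --objects --all | git pack-objects --stdout > transfer.pack
git unpack-objects < transfer.pack
The first command runs in the source repository and packs every reachable object into transfer.pack; the second runs in the destination repository after carrying the file across on removable media.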

Why is my dockerfile not copying directories

In my Dockerfile I have these two lines:
ADD /ansible/inventory /etc/ansible/hosts
ADD /ansible/. /ansiblerepo
The first line works, as I can run the container and see my hosts file has been populated with all the ips from my inventory file.
The second line doesn't appear to be working though. I'm just trying to copy all of the files and subdirectories of ansible over to the ansiblerepo directory inside the new container.
There are no errors while building the image, but again ansiblerepo is just an empty directory and nothing has copied over to it. I assume I'm just missing a back slash or something.
Docker ADD and COPY commands work relative to the build directory, and only for files in that directory that weren't excluded with a .dockerignore file. The reason for this is that builds actually run on the docker host, which may be a remote machine. The first step of a docker build . is to package up all the files in the directory (in this case .) and send them to the host to run your build. Any absolute paths you provide are interpreted as relative to the build directory, and anything you reference that wasn't sent to the server will be interpreted as a missing file.
The solution is to copy /ansible to your build directory (typically the same folder as your Dockerfile).
Make sure that your .dockerignore file does not exclude everything. Usually, a .dockerignore file has these lines:
*
!obj\Docker\publish\*
!obj\Docker\empty\
This means that everything is ignored except the publish and empty folders.
Removing the trailing /. from the source directory should fix the ADD command.
On a related note, Docker Best Practices suggest using COPY over ADD if you don't need the URL download feature.
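Putting those suggestions together, the two lines might end up looking something like this (a sketch that assumes the ansible folder sits next to the Dockerfile in the build context):
COPY ansible/inventory /etc/ansible/hosts
COPY ansible/ /ansiblerepo/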

D5 library folder (not the Library Paths)

I just had a crash with an SSD (a day's work went missing!) and have had to go back to an HDD.
I have just installed D5 on the HDD and would like to find the Library Paths file so I can just copy it all across. There are about 40 path entries in it.
Using a USB adapter I searched the SSD's file contents for a fragment of a path that the file contains, but it came up zip:
$(DELPHI)\Lib;$(DELPHI)\Bin;
Can anyone please point me at where the Library Paths are actually stored?
Thank you.
There is no "Library Paths file".
The $(DELPHI) part of what you quoted refers to your Delphi installation root (base) folder, which in the case of Delphi 5 defaults to C:\Program Files\Borland\Delphi5, so the $(DELPHI)\Lib folder would be C:\Program Files\Borland\Delphi5\Lib.
This path information is configured when you install Delphi, and is stored in the Windows Registry in HKEY_CURRENT_USER\Software\Borland\Delphi\5.0\RootDir for Delphi 5.
The "about 40 paths" probably refers to what you've configured in Tools->Environment Options->Library->Library Path; that information is also saved in the Windows Registry. If you can't boot Windows from the SSD drive, you're out of luck; you need to start Windows and then use RegEdit to export that key from the registry in order to recover that information. You'll need to reinstall your third-party components, I'm afraid.
Further to Ken's answer, if the SSD is readable and mounted as an additional drive, you can get the registry settings. DISCLAIMER: This is mainly from memory, but I have done this in a similar situation.
Copy NTUSER.DAT from (SSD):\Users\<username> to a "safe place". You will have to uncheck the "Hide protected operating system files" option in Explorer's Folder Options, or use the command line for this.
Run regedit. Select the HKEY_USERS key, then use File -> Load Hive and select the NTUSER.DAT file you copied from the old drive.
Hopefully this will load your registry settings from the old computer into a new key under HKEY_USERS.
Find Software\Borland\Delphi\5.0\ in the new hive, then export whatever subkeys you need to a .reg file.
Tweak the exported file(s) - you will need to change the key names to HKEY_CURRENT_USER\Software\Borland\Delphi...\*
Unload the registry hive.
Back up your existing HKEY_CURRENT_USER\Software\Borland\Delphi reg key.
Check, then import the tweaked registry file.
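As a rough command-line equivalent of those steps (the drive letter, user name, and temporary hive name are assumptions; run from an elevated command prompt):
reg load HKU\OldDrive E:\Users\<username>\NTUSER.DAT
reg export "HKU\OldDrive\Software\Borland\Delphi\5.0" delphi5.reg
reg unload HKU\OldDrive
rem edit delphi5.reg, changing HKEY_USERS\OldDrive to HKEY_CURRENT_USER, then:
reg import delphi5.reg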
This is almost a question for SuperUser!
