Copy directories overwriting the target - docker

Is there any way, in artifactory's rest API, to copy a directory, replacing the target with the new copy?
POST /api/copy/repo/dirA?to=repo/dirB
I would like dirB to be exactly like dirA after that.
In my current use case they are both docker directories.

You can make a pair of requests:
DELETE /repo/dirB
POST /api/copy/repo/dirA?to=repo/dirB
As far as I know, deleting the existing directory and copying the new one in its place is the only way to do what you want. Specifically, there's no option you can pass to the copy command that changes its behavior. There isn't really a reason to have one, since nearly everyone expects copied directories to merge into the destination: that's how it works in every filesystem I've heard of, and that's what people are used to.
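A sketch of the two calls with curl, assuming a standard Artifactory setup; the host, API key, and repo/dirA/dirB paths are placeholders:

# remove the existing target so no stale files survive the merge
curl -X DELETE -H "X-JFrog-Art-Api: $API_KEY" "https://artifactory.example.com/artifactory/repo/dirB"

# copy the source directory into the now-empty location
curl -X POST -H "X-JFrog-Art-Api: $API_KEY" "https://artifactory.example.com/artifactory/api/copy/repo/dirA?to=repo/dirB"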

Related

Custom houdini module path

I need to store Houdini *.hda files on a network share.
This folder needs to be sourced by all users.
Usually, for this kind of request, I use an environment variable in ~/houdini17.0/houdini.env, for example:
HOUDINI_TEMP_DIR="/my/custom/temp/path"
But the issue is that I can't find a solution for hda/otl files.
Adding the path to HOUDINI_PATH="${HOUDINI_PATH};/my/custom/hda/path" or HOUDINI_OTLSCAN_PATH doesn't work, and worse, it seems to break other links, since a few other Houdini nodes aren't available anymore.
Can someone point me to the right environment variables?
Try using the $HSITE and/or $JOB environment variables. Houdini will scan the subfolders of the paths defined by $HSITE and $JOB for all relevant files and folders, so you don't need to set a bunch of different env vars. You can mirror the folder structure found in C:\Users\username\Documents\houdini16.5
Obviously replace the Houdini version with yours. Also note that $HSITE needs to point to the folder that contains the houdini16.5 folder, not the folder itself. This way you can support multiple Houdini versions with a single env var.
http://www.sidefx.com/docs/houdini/basics/config.html
For example, if $HSITE=//myNetworkShare/Houdini, you would need this folder structure:
//myNetworkShare/Houdini
    /houdini16.5
        /otls
        /scripts
        /python2.7libs
        /...
Note you can only give $HSITE a single path.
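For instance, a minimal houdini.env sketch pointing $HSITE at the share used above (the path is a placeholder):

# in ~/houdini16.5/houdini.env
HSITE = "//myNetworkShare/Houdini"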

How to remove a volume in a Dockerfile

I'm experiencing the same problem as found here:
mkdir .ssh in a Dockerfile, folder is not there?
I'm wondering if there is a way for my Dockerfile to remove a volume declared by its parent image.
My reasoning for this is that the volume was declared to mount an external database, while the image I'm creating is for testing purposes and contains the data for that volume internally. Ideally I don't want to have to populate the data in my entrypoint, as it's an expensive operation.
See also How to remove configure volumes in docker images
Actually, I did have a very similar use case: an image from production that would be modified for testing. The only option is to modify the metadata of the parent image. As I need to do that regularly, I have created a little script for it; have a look at docker-copyedit to see if it can help you.
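A sketch of the kind of invocation the project documents (the image names are placeholders):

# rewrite the production image into a test image with its VOLUME entries stripped
./docker-copyedit.py FROM myapp/prod:latest INTO myapp/test:latest REMOVE ALL VOLUMES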

Extract ZIP and move all files to parent directory

I'm only just getting started with Yeoman, trying to create a Generator that downloads WordPress, unzips it, and then proceeds to download my own WordPress starter theme.
The problem I'm having is that when I extract the latest.zip from wordpress.org (using this.extract()) it contains a wordpress/ directory resulting in my directory structure being my-project/wordpress/ rather than my-project/.
I've tried moving, copying and deleting the wordpress/ directory with varying degrees of success; using this.fs.copy() I actually managed to get the files into the correct folder, but when trying to delete the original wordpress/ directory, the user has to confirm deletion of every single file (not ideal). When I tried this.fs.move() I had to confirm each and every move instead.
I've found similar gulp/node.js questions on here, but I would prefer to use Yeoman's built in this.fs API.
Please note that I am aware of YEOPress, but this is mostly for learning purposes.
I ended up using the Node package fs-extra instead, as it deletes and moves without confirmation.
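A minimal sketch of that approach inside a generator method, assuming fs-extra is installed and the archive has already been extracted into wordpress/ (the paths are illustrative):

const fse = require('fs-extra');
const path = require('path');

// Move every entry out of wordpress/ into the project root,
// then remove the now-empty directory; no per-file prompts.
const src = this.destinationPath('wordpress');
const dest = this.destinationRoot();
for (const entry of fse.readdirSync(src)) {
  fse.moveSync(path.join(src, entry), path.join(dest, entry), { overwrite: true });
}
fse.removeSync(src);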

In org-mode, how do I keep the original path to images when using #+INCLUDE:?

I can use:
#+INCLUDE:
to include an org file in another org file, which allows me to assemble, say, a website from various org files. I'm exporting from the C-c C-e exporter in org-mode 7.5.
I could maintain a quite complex publication this way. This modular approach is quite common in, e.g. LaTeX and Texinfo publications.
However, links to images no longer work from the #+INCLUDEd org files. What seems to be happening is that the path to the images is taken as being from the org file that I am exporting from, rather than the actual org file that references the image.
The only ways I can see to resolve this are to:
- use a flat file structure; or
- write the image paths relative to the top-level file doing the including (which I might not know in advance), rather than relative to the included file itself.
Neither of these is really sustainable.
How do I tell org to use the correct image path from its own relevant org file rather than the parent org file?
From what I know of the exporter, INCLUDE files are inserted into the document before export. Therefore the content is part of the document before it starts following paths to reach any links to files (images).
After a bit of testing, you will likely need to use absolute file paths. Since you move between Windows and Linux, your best bet would be to use a consistent scheme on both, starting from your home directory.
That way you can make the Org link
[[~/path/to/image.jpg]], which will work on both systems (assuming you have set %HOME% on Windows).
Option 1 is potentially an alternative (although I agree it wouldn't be ideal at all), whereas the second option would have obvious pitfalls if you INCLUDE the file in more than one future document.
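A small sketch of the layout, with hypothetical file names:

# in main.org
#+INCLUDE: "chapters/intro.org"

# in chapters/intro.org, link the image with a home-relative path
# instead of a path relative to intro.org:
[[~/project/images/diagram.png]]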

File repository in ruby on rails

I would like to create a simple file repository in Ruby on Rails. Users have their accounts, and after one logs in they can upload a file or download files previously uploaded.
The issue here is the security. Files should be safe and not available to anyone but the owners.
Where, in which folder, should I store the files, to make them as safe as possible?
Does it make sense, to rename the uploaded files, store the names in a database and restore them when needed? This might help avoid name conflicts, though I'm not sure if it's a good idea.
Should the files be stored all in one folder, or should they be somewhat divided?
- rename the files, for one reason, because you have no way to know if today's file "test" is supposed to replace last week's "test" or not (perhaps the user had them in different directories)
- give each user their own directory; this prevents performance problems and makes it easy to migrate, archive, or delete a single user
- put metadata in the database and files in the file system
- look out for code injection via file names
This is an interesting question. Depending on the level of security you want to apply I would recommend the following:
Choose a folder that is only accessible by your app server (if you choose to store the files in the filesystem).
I would always recommend renaming the files to a randomly generated hash (or an incrementally generated name as used in URL shorteners; see the open-source implementation of rubyurl). However, I wouldn't store the files themselves in the database, because filesystems are built for handling files, so let them do the job. You should store the metadata in the database to be able to set the right file name when the user downloads the file.
You should partition the files among multiple folders. This gives you several advantages. First, filesystems are not built to handle millions of files in a single folder, and operations that try to get all files from a folder take significantly more time. If you obfuscate the original file name, you could create one directory for each leading character of the name and get a fairly even distribution of files per directory.
One last thing to consider is the possible collision of file names. A user should not be able to guess a filename from another user. So you might need some additional checks here.
Depending on the level of security you want to achieve you can apply more and more patterns.
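A tiny Ruby sketch of that naming-plus-partitioning idea (the storage root is a placeholder):

require 'securerandom'

stored_name = SecureRandom.hex(16)   # random name: avoids collisions, hides the original
subdir = stored_name[0, 2]           # first two hex chars give 256 buckets
path = File.join('storage', subdir, stored_name)
# => e.g. "storage/a3/a3f1..."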
Just don't save the files in the public folder, and create a controller that will send the files.
How you want to organise things from that point on is your choice. You could make a subfolder per user. There is no need to rename from a security point of view, but do try to clean up the filenames: spaces and non-ASCII characters make things harder.
For simple cases (where you don't want to distribute the file store):
Store the files in the tmp directory. DON'T store them in public. Then only expose these files via a route and controller where you do the authentication/authorisation checks.
I don't see any reason to rename the files; you can separate them out into sub directories based on the user ID. But if you want to allow the uploading of files with the same name then you may need to generate a unique hash or something for each file's name.
See above. You can partition them any way you see fit. But I would definitely recommend partitioning them and not lumping them in one directory.
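A minimal sketch of such a controller, assuming a hypothetical Document model that records the owner, the name on disk, and the original filename (the authentication hook is whatever your app uses, e.g. Devise):

class FilesController < ApplicationController
  before_action :authenticate_user!   # placeholder for your auth check

  def show
    # Scoping the lookup to current_user enforces ownership
    document = current_user.documents.find(params[:id])
    send_file Rails.root.join('storage', current_user.id.to_s, document.stored_name),
              filename: document.original_name,
              disposition: 'attachment'
  end
end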
