Using grep to remove specific objects in GCS with gsutil rm

I have a bucket with many objects, and I can successfully use grep to pick out the specific objects and write them to a text file. I want gsutil rm to read that text file line by line and remove the corresponding object in GCS, but how can I go about doing this?
Or is there a way to remove objects from GCS directly using gsutil rm and grep together? Thanks!

Assuming you output the list of objects to remove to the file remove.txt, you could use this command to remove the named objects:
gsutil -m rm -I < remove.txt
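The same list-filter-delete pattern can be exercised locally, with plain files standing in for GCS objects and xargs rm standing in for gsutil -m rm -I, which reads one object name per line from stdin the same way (all file names here are made up for the demo):

```shell
# Local stand-in for the GCS workflow: plain files play the role of
# objects, and `xargs rm` plays the role of `gsutil -m rm -I`.
mkdir -p demo
touch demo/keep.txt demo/old-1.log demo/old-2.log

# The grep step from the question: filter a listing down to the
# names you want to delete.
ls demo | grep '\.log$' | sed 's|^|demo/|' > remove.txt

# Bulk delete, reading names line by line from the file.
xargs rm < remove.txt

ls demo   # only keep.txt remains
```

In the real workflow, the listing would come from gsutil ls and the names in remove.txt would be full gs:// URLs.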


Exclude a directory from `podman/docker export` stream and save to a file

I have a container that I want to export as a .tar file. I have been using a podman run with a tar --exclude=/dir1 --exclude=/dir2 … that outputs to a file on a bind-mounted host dir. But recently this has been giving me some tar: .: file changed as we read it errors, which podman/docker export would avoid. Besides, I suppose export is more efficient. So I'm trying to migrate to using export, but the major obstacle is that I can't seem to find a way to exclude paths from the tar stream.
If possible, I'd like to avoid modifying a tar archive already saved on disk, and instead modify the stream before it gets saved to a file.
I've been banging my head for multiple hours, trying useless advice from ChatGPT, looking at cpio, and attempting to pipe the podman export into a tar --exclude … command. With the last one I did have small success at some point, but I couldn't make tar save the result to a file with the name I wanted.
Any suggestions?
(note: I do not make a distinction between docker and podman here, as their export commands are identical, and mentioning both is useful for searchability)
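One approach consistent with what the question already tried is to extract the export stream with --exclude and repack, never writing the unfiltered archive to disk. A minimal local sketch, where a tar stream created from a scratch directory stands in for podman export and dir1 is the path to drop:

```shell
# Build a throwaway tree; in the real case, `podman export <ctr>`
# produces the tar stream instead of the first tar command below.
mkdir -p src/dir1 src/keep
echo a > src/dir1/a.txt
echo b > src/keep/b.txt

# Extract the stream while skipping the unwanted path, then repack.
# The unfiltered archive itself never touches the disk.
tmp=$(mktemp -d)
tar -C src -cf - . | tar -C "$tmp" -xf - --exclude='./dir1'
tar -C "$tmp" -cf filtered.tar .

tar -tf filtered.tar   # ./keep/b.txt survives; ./dir1 is gone
```

This does materialize the filtered tree in a temp directory, so it is not a pure stream filter, but it avoids modifying an archive already saved to disk, which is what the question wanted to dodge.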

Evaluating multiple files from a folder using opa eval command

I have seen on the OPA website that I could use the following to evaluate a policy on the command line:
./opa eval -i input.json -d example.rego
In my case, I have multiple input files and I have modified it to
./opa eval -i /tmp/tfplan.json -d /tmp/example.rego -d /tmp/input1.json -d /tmp/input2.json
Rather than specifying the files as input1, input2 and so on individually, can I directly modify it to read all the json files present in the tmp folder?
Yes! The CLI will accept directories for the -d/--data and -b/--bundle parameters and recursively load files from them.
For example, given:
/tmp/foo/input1.json
/tmp/foo/input2.json
opa eval -d /tmp/foo/input1.json -d /tmp/foo/input2.json ..
is the same as
opa eval -d /tmp/foo ..
Keep in mind that -d/--data will attempt to load ANY files found in the directory, which can sometimes lead to conflicts (e.g., duplicate keys in the data document) or to loading files you didn't intend. So be careful about pointing at /tmp, as it's likely to include additional files you didn't necessarily want (note that the example above used a sub-directory). Typically we recommend using -b/--bundle and providing files as data.json with a directory structure following the bundle spec (https://www.openpolicyagent.org/docs/latest/management/#bundle-file-format), which helps avoid most of these common problems.
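A minimal bundle layout along the lines the answer recommends might look like this (names are illustrative; each data.json becomes available under the data path matching its directory):

```
bundle/
├── example.rego       # policy, e.g. package example
└── foo/
    └── data.json      # loaded as data.foo
```

which you would then load with the -b flag, e.g. opa eval -b ./bundle, instead of individual -d arguments.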

Upload Files with Space in Name on Google Cloud SDK Shell

I'm new to Google Cloud Storage and have managed to find answers to all my questions online so far, but now I'm trying to upload files using the Google Cloud SDK. It works great for files with no spaces ("this_file_001.txt"), but if I try to upload a file with spaces ("this file 001.txt"), the system won't recognize the command. The command I'm using that works is
gsutil -m cp -r this_file_001.txt gs://this_file_001.txt
Now the same command with spaces doesn't work
gsutil -m cp -r this file 001.txt gs://this file 001.txt
Is there any way to accomplish this task?
Thanks in advance.
Putting the argument into quotes should help. I just tried the commands below using Google Cloud Shell terminal and it worked fine:
$ gsutil mb gs://my-test-bucket-55
Creating gs://my-test-bucket-55/...
$ echo "hello world" > "test file.txt"
$ gsutil cp "test file.txt" "gs://my-test-bucket-55/test file.txt"
Copying file://test file.txt [Content-Type=text/plain]...
Uploading gs://my-test-bucket-55/test file.txt: 12 B/12 B
$ gsutil cat "gs://my-test-bucket-55/test file.txt"
hello world
That said, I'd avoid file names with spaces if I could.
Alexey's suggestion about quoting is good. If you're on Linux or a Mac, you can likely also escape each space with a backslash (\). On Windows, you should be able to use a caret (^).
Linux example:
$> gsutil cp test\ file.txt gs://bucket
Windows example:
c:\> gsutil cp test^ file.txt gs://bucket
Quotes work for both platforms, I think.

docker add extract to custom directory

A Docker ADD will nicely extract the supplied compressed file into the directory structure specified inside the zip/tar file.
How can I extract it into a different directory?
E.g. if the file extracts to /myfile, but I would prefer /otherFile
Don't believe there's any way to do this just using the ADD instruction. ADD supports a target directory, obviously, like ADD ["<src>", "<dest>"], but it's still going to extract into the directory named inside the tar within that target.
You have 2 options: either rename the dir in the tar, or do a RUN mv myfile otherfile after adding.
Is there a specific reason you need it to be named something in particular?
Think about this scenario where you build a tomcat image,
ADD apache-tomcat-8.0.48.tar.gz /opt
This command will extract the tar to /opt/apache-tomcat-8.0.48; if you don't like the long folder name (apache-tomcat-8.0.48), that's when this requirement comes up.
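The rename workaround from the first answer, applied to the Tomcat scenario, might look like this in a Dockerfile (the base image here is an illustrative assumption):

```dockerfile
FROM eclipse-temurin:8-jre
# ADD auto-extracts the archive, producing /opt/apache-tomcat-8.0.48
ADD apache-tomcat-8.0.48.tar.gz /opt
# Rename the extracted directory to the name you actually want
RUN mv /opt/apache-tomcat-8.0.48 /opt/tomcat
```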

How can I extract all localizable strings from all XIB files into one file?

I am looking for a way of extracting all localizable strings from .xib files and have all of them saved in a single file.
Probably this involves ibtool, but I was not able to determine a way of merging all of these into a single translation dictionary (it could be .strings, .plist or something else).
Open terminal and cd to the root directory of the project (or directory where you store all XIB files) and type in this command:
find . -name \*.xib | xargs -t -I '{}' ibtool --generate-strings-file '{}'.txt '{}'
The magic is the find and xargs commands working together. The -I option generates a placeholder. -t is just for verbose output (you see which commands have been generated and executed).
It generates .txt files with the same names as the .xib files, in the same directories.
This command can be improved to concatenate output into one file but still is a good starting point.
Joining them together:
You can concatenate those freshly created files into one using similar terminal command:
find . -name \*.xib.txt | xargs -t -I '{}' cat '{}' > ./xib-strings-concatenated.txt
This command will put all strings into one file xib-strings-concatenated.txt in root directory.
You can delete generated partial files (if you want) using find and xargs again:
find . -name \*.xib.txt | xargs -t -I '{}' rm -f '{}'
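The find-plus-xargs pattern above works with any per-file command; a quick local dry run (touch standing in for ibtool, with made-up file names) shows how -I '{}' substitutes each path:

```shell
# Create a fake project tree with two .xib files.
mkdir -p proj/a proj/b
touch proj/a/One.xib proj/b/Two.xib

# Same shape as the ibtool command: one derived output file per input,
# with '{}' replaced by each found path.
find proj -name '*.xib' | xargs -t -I '{}' touch '{}'.txt

find proj -name '*.xib.txt'   # proj/a/One.xib.txt and proj/b/Two.xib.txt
```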
This is a lot easier now.
In Xcode, select your project (not a target),
then use Editor > Export For Localization.
Xcode will output an .xliff file with all the localizable strings from your entire project.
