I have a folder with about 450k sequentially numbered images. However, from about 0-200k there are quite a few gaps. I want to zip up only the images in the 0-200k range.
I've been searching and grep -E keeps coming up, but it looks like I'd have to spell out all the number ranges by hand, which isn't great.
Is there a quicker way to do it (on Amazon Linux)?
The images are named 1.jpg, 2.jpg, 3.jpg and so on up to 199999.jpg.
Not sure about Amazon Linux, but this worked on Ubuntu 17.10:
tar -czvf up_to_200K.tar.gz `for FILE in $(ls | grep -oP '^\d+(?=\.jpg$)'); do if [ "$FILE" -le 200000 ]; then echo "$FILE.jpg"; fi; done | xargs`
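For what it's worth, a rough alternative sketch that avoids parsing ls: walk the numeric range directly and feed the names that actually exist to tar on stdin (the archive name and the 200000 cut-off are taken from the answer above, and GNU tar's -T - is assumed):
# only files that exist in the 0-200000 range are handed to tar, one name per line
for i in $(seq 0 200000); do
  [ -e "$i.jpg" ] && printf '%s\n' "$i.jpg"
done | tar -czvf up_to_200K.tar.gz -T -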
I have 100 images named img0.jpg to img99.jpg that I want to convert to a PDF file. The problem is that
convert img*.jpg out.pdf
adds pages in the order 1, 11, 2, 22, etc. How is the order defined in ImageMagick?
Either number your pages with zero-padded numbers like this so ImageMagick takes them in order:
img000.jpg
img001.jpg
img002.jpg
...
img099.jpg
Then your original command should work.
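If the files need renaming to get that zero-padded form, a plain bash loop can do it too; a rough sketch (mv -n is assumed so nothing gets overwritten):
for f in img*.jpg; do
  n=${f#img}; n=${n%.jpg}                     # bare number, e.g. 7
  mv -n "$f" "$(printf 'img%03d.jpg' "$n")"   # -> img007.jpg
done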
Or, have bash enumerate the files in order and feed the names into ImageMagick like this:
magick img{0..99}.jpg result.pdf
Or:
for file in img{0..99}.jpg; do echo "$file"; done | magick @- result.pdf
Or rename your files as per the first example above, but using Perl rename:
rename --dry-run 's/\D//g; $_=sprintf("f-%05d.jpg",$_)' f*jpg
Sample Output
'f0.jpg' would be renamed to 'f-00000.jpg'
'f1.jpg' would be renamed to 'f-00001.jpg'
'f10.jpg' would be renamed to 'f-00010.jpg'
'f11.jpg' would be renamed to 'f-00011.jpg'
'f12.jpg' would be renamed to 'f-00012.jpg'
You may have ls -v available to you, in which case you can try:
magick $(ls -v img*jpg) result.pdf
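For reference, a quick way to see the difference the -v (natural "version" sort) flag makes, assuming GNU coreutils ls:
ls img*.jpg | head -5     # first five entries: img0.jpg img1.jpg img10.jpg img11.jpg img12.jpg (lexical order)
ls -v img*.jpg | head -5  # first five entries: img0.jpg img1.jpg img2.jpg img3.jpg img4.jpg (numeric order)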
I am using an asset catalog for emoji. The question is: how can I get the list of all emoji files from the asset catalog, to avoid a hard-coded array of file names?
I tried to create a run script, but it's not working. It creates the txt file but only puts the directory path in it:
for file in "./Images.xcassets/Smiles/"; do
echo $file >> ./Sparkle/smiles.txt
done
Maybe someone could help me with this or suggest another solution. Thanks.
OK, here is a solution that creates a txt file with the specific file list:
Run Script:
rm -f './YOURAPP/smiles.txt'
# the glob must be unquoted so it expands; one entry per smile_* image set
for file in ./YOURAPP/Resources/Images.xcassets/Smiles/smile_*; do
    filename=$(basename "$file" | cut -f 1 -d '.')
    echo "$filename" >> './YOURAPP/smiles.txt'
done
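For comparison, a rough one-liner sketch using find instead of the glob; the paths and the smile_* prefix are the same placeholder assumptions as in the script above:
# list the smile_* entries, strip the directory and extension, sort for a stable order
find ./YOURAPP/Resources/Images.xcassets/Smiles -maxdepth 1 -name 'smile_*' \
  -exec basename {} \; | cut -f 1 -d '.' | sort > ./YOURAPP/smiles.txt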
I could do this for .zip files in the folder using the command below:
for f in *.zip; do unzip -d "${f%.zip}" "$f"; done
The above command extracts all .zip files in a given folder into subfolders, each named after and containing the contents of the respective .zip file.
But I couldn't find a command that would do the same for .tar files. Please help.
Btw, I am trying to do this on a remote server using WinSCP/PuTTY, so I cannot use GUI software. I need a command, hence the question.
After a bit of fiddling I came up with
for f in $(find . -maxdepth 1 -name '*.tar'); do mkdir "${f%.tar}"; tar -xaf "$f" -C "${f%.tar}"; done
which appears to work, as long as the file names do not contain any spaces. I assume you wanted the directory from foo.tar to be named foo (no file extension). If you want the directory to be named foo.tar (with the file extension), the archive itself has to be moved out of the way first so the name is free, for example:
for f in $(find . -maxdepth 1 -name '*.tar'); do mv "$f" "$f.orig"; mkdir "$f"; tar -xaf "$f.orig" -C "$f"; done
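If spaces in file names are a concern, a sketch of a space-safe variant of the same idea (assumes bash, with NUL-delimited names from find):
find . -maxdepth 1 -name '*.tar' -print0 |
while IFS= read -r -d '' f; do
  mkdir -p "${f%.tar}"          # directory named after the archive, minus .tar
  tar -xaf "$f" -C "${f%.tar}"  # extract into it
done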
IIRC, the remote access client Cyberduck can handle compressed files in a GUI - so you can try that if you're fine with a GUI solution.
I have a huge file file.tar.xz containing many smaller text files with a similar structure. I want to quickly examine one file out of the archive and get a glimpse of the files' content structure. I don't have any information about the names of the files within the archive. Is there any way to extract a single file given the above scenario?
Thank you.
EDIT: I don't want to extract the whole archive with tar -xvf file.tar.xz.
Based on the discussion in the comments, I tried the following, which worked for me. It might not be the optimal solution, and the regex might need some improvement, but you'll get the idea.
I first created a demo archive:
cd /tmp
mkdir demo
for i in {1..100}; do echo $i > "demo/$i.txt"; done
cd demo && tar cfJ ../demo.tar.xz * && cd ..
demo.tar.xz now contains 100 txt files.
The following lists the contents of the archive, selects the first file and stores the path within the archive into the variable firstfile:
firstfile=$(tar -tvf demo.tar.xz | grep -Po -m1 "(?<=:[0-9]{2} ).*$")
echo $firstfile will output 1.txt.
You can now extract this single file from the archive:
tar xf demo.tar.xz $firstfile
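For what it's worth, a slightly simpler variant (not from the answer above, but assuming GNU tar): the non-verbose listing prints just the member names, so the first one can be taken directly, and -O streams it to stdout for a quick look:
firstfile=$(tar -tf demo.tar.xz | head -n 1)   # first member name in the archive
tar -xOf demo.tar.xz "$firstfile" | head       # print its first lines without writing a file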
The problem:
I have a back-end process that at some point collects files and builds a big tar archive.
This tar command receives a few directories and an exclude file.
The process can take up to a few minutes, and I want my front-end process (GUI) to report on the progress of the tarring (this is a big issue for a user who presses the download button and then it looks like nothing is happening...).
I know I can use -v -R in the tar command and count files and bytes as they go, but I am looking for some kind of tar pre-run / dry-run mode to help me estimate either the expected number of files or the expected tar size.
The command I am using: tar -jcf 'FILE.tgz' 'exclude_files' 'include_dirs_and_files'
Thanks to everyone who is willing to assist.
You can pipe the output to the wc tool instead of actually making a file.
With file listing (verbose):
[git@server]$ tar czvf - ./test-dir | wc -c
./test-dir/
./test-dir/test.pdf
./test-dir/test2.pdf
2734080
Without:
[git@server]$ tar czf - ./test-dir | wc -c
2734080
Why don't you run a
DIRS=("./test-dir" "./other-dir-to-test")
find "${DIRS[@]}" -type f | wc -l
beforehand? This gets all the files (-type f), one per line, and counts them. DIRS is an array in bash, so you can store the folders in a variable.
If you want to know the size of all the stored files, you can use du
DIRS=("./test-dir" "./other-dir-to-test")
du -c -d 0 "${DIRS[@]}" | tail -1 | awk -F ' ' '{print $1}'
This prints the disk usage with du, calculates a grand total (-c flag), takes the last line (e.g. 4378921 total), and keeps just the first column with awk.
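As a follow-up sketch (not part of the answers above): once you have a byte estimate, it can drive an actual progress bar with pv, assuming GNU du (-b for byte counts) and that pv is installed; the progress is approximate because tar adds headers and padding:
DIRS=("./test-dir" "./other-dir-to-test")
TOTAL=$(du -cb "${DIRS[@]}" | tail -1 | awk '{print $1}')   # estimated bytes tar will read
tar cf - "${DIRS[@]}" | pv -s "$TOTAL" | gzip > FILE.tgz    # pv shows percentage and ETA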