iOS: create a file name list from an asset catalog

I am using an asset catalog for emoji images. The question is: how can I get a list of all the emoji file names from the asset catalog, so I can avoid a hard-coded array of file names?
I tried to create a Run Script build phase, but it's not working: it creates the txt file but writes only the directory path:
for file in "./Images.xcassets/Smiles/"; do
echo $file >> ./Sparkle/smiles.txt
done
Maybe someone could help me with this or suggest another solution. Thanks.

OK, here is a solution that creates a txt file with the specific file list (the original loop iterated once over the quoted directory string instead of globbing the files in it):
Run Script:
rm -f './YOURAPP/smiles.txt'
for file in ./YOURAPP/Resources/Images.xcassets/Smiles/smile_*; do
  filename=$(basename "$file" | cut -f 1 -d '.')
  echo "$filename" >> './YOURAPP/smiles.txt'
done
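If you prefer a single pass without the loop, find can do the same job; a minimal sketch, assuming the same Smiles folder layout as above (the matched names are the .imageset directories, so the extension is stripped the same way):

# Sketch: write every smile_* name, without path or extension, to smiles.txt
find ./YOURAPP/Resources/Images.xcassets/Smiles -name 'smile_*' |
while read -r f; do
  basename "$f" | cut -f 1 -d '.'
done > ./YOURAPP/smiles.txt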

Related

Unix: parse a file of full paths to SHA256 checksum files and run a command on each

I have a file file.txt whose lines are full paths to files ending in .sha256. This is a toy example:
file.txt:
/path/a/9b/x3.sha256
/path/7c/7j/y2.vcf.gz.sha256
/path/e/g/7z.sha256
Each line has a different path/file. The *.sha256 files contain checksums.
I want to run "sha256sum -c" on each of these *.sha256 files and write the output to an output_file.txt. However, each .sha256 file lists its target file by name only, so the check has to run from the directory that contains it, not from wherever I happen to be. I have tried the following:
while read in; do
sha256sum -c "$in" >> output_file.txt
done < file.txt
but I get:
"sha256sum: WARNING: 1 listed file could not be read"
which happens because the file named inside each .sha256 file is resolved relative to my current directory, not relative to the checksum file.
Any suggestion is welcome.
#!/bin/bash
# Resolve the output file once, before any directory changes
outfile="$PWD/output_file.txt"
while IFS= read -r in; do
  thedir=$(dirname "$in")
  thefile=$(basename "$in")
  # Subshell: cd affects only this iteration, not the rest of the script
  (cd "$thedir" && sha256sum -c "$thefile") >> "$outfile"
done < file.txt
The trick is to split your in variable into its directory and file parts, then run sha256sum from inside that directory so the file name listed in the checksum file resolves correctly.
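If you can locate the checksum files with find instead of reading them from file.txt, GNU find's -execdir runs the command from each match's own directory, which removes the need for the loop entirely; a hedged sketch (the /path starting point is an assumption):

# -execdir changes into the directory containing each match before running the command
find /path -name '*.sha256' -execdir sha256sum -c {} \; >> output_file.txt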

How to copy multiple files in a directory and move each into its correct directory

Unix shell ksh
I created a file list and am currently trying to copy each file to its correct path.
(mylist)
-1111
-2222
-3333
-4444
-5555
current directory
/sample/dir/unknown/
-1111fileneeded.txt
-2222fileneeded.txt
-3333fileneeded.txt
-4444fileneeded.txt
-5555fileneeded.txt
-6666dontneed.txt
-7777dontneed.txt
-8888dontneed.txt
...etc
The first four characters of each file name match the directory the file needs to go to.
/sample/dir/1111/
/sample/dir/2222/
/sample/dir/3333/
/sample/dir/4444/
So here is what I currently have:
for i in `cat mylist`
do echo "$i"
find /sample/dir/unknown/mylist*
This is where I am stuck: I'm trying to figure out what needs to be done to move each file into its correct directory.
This should work:
#!/bin/ksh
# filelist.txt holds one file name per line, e.g. -1111fileneeded.txt
while IFS= read -r line; do
  dir=$(echo "$line" | cut -c 2-5)
  mv "/sample/dir/unknown/$line" "/sample/dir/$dir/$line"
done < filelist.txt
Clearing IFS and using read -r keeps whitespace and backslashes in the file names intact, just in case.
cut -c 2-5 takes characters 2 through 5 (skipping the dash at the start of your file name), which is exactly the directory name.
Let me know if there is something else you don't understand.
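Alternatively, you can drive the loop from mylist itself and let the shell glob match the files; a minimal sketch, assuming the entries in mylist carry the same leading dash shown above:

#!/bin/ksh
# For each prefix in mylist, move every matching file into its directory
while IFS= read -r p; do
  prefix=${p#-}    # strip the leading dash, if any
  mv /sample/dir/unknown/"$prefix"* "/sample/dir/$prefix/"
done < mylist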

Command line to convert all .docx files in a directory (and subdirectories) to text files

I would like to convert all .docx files in a directory (and its subdirectories) to text files from the command line, so I can then run grep on them. I found this
unzip -p tutu.docx word/document.xml | sed -e 's/<\/w:p>/\n/g; s/<[^>]\{1,\}>//g; s/[^[:print:]\n]\{1,\}//g'
here, which works well, but it sends the output to the terminal. I would like to write the new text file (.txt, for instance) into the same directory as the .docx file, and I would like a script that does this recursively.
I have this, using antiword, which does what I want for .doc files, but it doesn't work for .docx files.
find . -name '*.doc' | while read i; do antiword -i 1 "${i}" >"${i/doc/txt}"; done
I tried to mix the two, but without success... A command line that would handle both at the same time would be appreciated!
Thank you
You can use pandoc to convert .docx files. It doesn't support .doc files, so you will need both pandoc and antiword.
Reusing your while loop:
find . -name '*.docx' | while read -r i; do pandoc --from docx --to plain "${i}" > "${i%.docx}.txt"; done
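To handle both formats in a single pass, as the question asks, you could branch on the extension; a rough sketch combining the two tools (assumes both are installed and on PATH):

# Convert .doc via antiword and .docx via pandoc, writing a .txt next to each file
find . \( -name '*.doc' -o -name '*.docx' \) | while read -r f; do
  case "$f" in
    *.docx) pandoc --from docx --to plain "$f" > "${f%.docx}.txt" ;;
    *.doc)  antiword -i 1 "$f" > "${f%.doc}.txt" ;;
  esac
done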
The following script:
- converts all .docx files recursively, starting from the directory where you run it (adapt the . in find . to your desired starting point)
- writes each .txt file to the directory where it found the .docx file
Bash script:
find . -name "*.docx" | while read -r file; do
  unzip -p "$file" word/document.xml |
  sed -e 's/<\/w:p>/\n/g; s/<[^>]\{1,\}>//g; s/[^[:print:]\n]\{1,\}//g' > "${file%.docx}.txt"
done
Afterwards you can run the grep like this:
grep -r "some text" --include "*.txt" .
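If the intermediate .txt files are only there for grep, you can also pipe each conversion straight into grep and label the matches with the source file; a rough sketch using GNU grep's --label option:

find . -name '*.docx' | while read -r f; do
  pandoc --from docx --to plain "$f" | grep -H --label="$f" "some text"
done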

Extract a single file from a tar.xz archive

I have a huge file file.tar.xz containing many smaller text files with a similar structure. I want to quickly examine one file from the archive to get a glimpse of the files' content structure. I don't have any information about the names of the files inside the archive. Is there any way to extract a single file in this scenario?
Thank you.
EDIT: I don't want to extract the whole archive with tar -xvf file.tar.xz.
Based on the discussion in the comments, I tried the following, which worked for me. It might not be the most optimal solution and the regex might need some improvement, but you'll get the idea.
I first created a demo archive:
cd /tmp
mkdir demo
for i in {1..100}; do echo $i > "demo/$i.txt"; done
cd demo && tar cJf ../demo.tar.xz * && cd ..
demo.tar.xz now contains 100 txt files.
The following lists the contents of the archive, selects the first file, and stores its path within the archive in the variable firstfile (the -P regex requires GNU grep):
firstfile=$(tar -tvf demo.tar.xz | grep -Po -m1 "(?<=:[0-9]{2} ).*$")
echo $firstfile will output 1.txt.
You can now extract this single file from the archive:
tar xf demo.tar.xz $firstfile
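A simpler route that avoids parsing the verbose listing: tar -t without -v prints only the member names, and GNU tar's -O (--to-stdout) streams a member to stdout so you can peek at it without writing anything to disk. A minimal sketch:

# Take the first member name from the plain listing
firstfile=$(tar -tf demo.tar.xz | head -n 1)
# Extract just that member...
tar -xf demo.tar.xz "$firstfile"
# ...or stream its first lines without extracting at all
tar -xOf demo.tar.xz "$firstfile" | head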

Extract text files in each subfolder and join them with the subfolder name

I have compressed text files in the following folder structure:
~/A/1/1.faa.tgz #each tgz file has dozens of faa text files
~/A/2/2.faa.tgz
~/A/3/3.faa.tgz
I would like to extract the faa (text) files from each tgz file and then join them, using the subfolder name (1, 2 and 3) to create a single text file for each subfolder.
My attempt was the following, but the files were extracted into the folder where I ran the script:
#!/bin/bash
for FILE in ~/A/*/*.faa.tgz; do
tar -vzxf "$FILE"
done
After extracting the faa files I would use cat to join them (for example, cat *.faa > subfoldername.txt).
Thanks in advance.
To extract:
#!/bin/bash
for FILE in ~/A/*/*.faa.tgz; do
  echo "$FILE"
  # -C extracts into the archive's own subfolder instead of the current directory
  tar -xvzf "$FILE" -C "$(dirname "$FILE")"
done
To join:
#!/bin/bash
for dir in ~/A/*/; do
  (
  cd "$dir" || exit
  file=( *.faa )
  # ${PWD##*/} is the subfolder name (1, 2, 3), used to name the joined file
  cat "${file[@]}" > "${PWD##*/}.txt"
  )
done
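The two steps can also be fused into one loop; a hedged sketch under the same layout assumptions (each subfolder holds one .faa.tgz and the joined file is named after the subfolder):

#!/bin/bash
# Extract each archive next to itself, then join its .faa files into <subfolder>.txt
for FILE in ~/A/*/*.faa.tgz; do
  dir=$(dirname "$FILE")
  tar -xzf "$FILE" -C "$dir"
  cat "$dir"/*.faa > "$dir/$(basename "$dir").txt"
done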
