zsh script to encode full file path - url

I want to be able to encode a path for use as a URL, i.e. change spaces to %20. I found this function, which does the encoding:
urlencode() {
  setopt localoptions extendedglob
  input=( ${(s::)1} )
  print ${(j::)input/(#b)([^A-Za-z0-9_.\!~*\'\(\)- ])/%${(l:2::0:)$(([##16]#match))}}
}
and want to be able to pass the results of this:
print -l $PWD/* | tail -1
to the function, i.e. get the last full path in the file listing and encode it.
I thought that something like this:
print -l $PWD/* | tail -1 | urlencode
or
print -l $PWD/* | tail -1 > urlencode
would work but they don't.
Does anyone know how to accomplish it?
Many Thanks

You need to get your input from stdin rather than from the first argument.
Here is one way to adapt the function to do this:
urlencode() {
  setopt localoptions extendedglob
  # slurp all of stdin into a single string
  local stdin="$(while IFS= read -r line; do echo "$line"; done)"
  input=( ${(s::)stdin} )
  print ${(j::)input/(#b)([^A-Za-z0-9_.\!~*\'\(\)- ])/%${(l:2::0:)$(([##16]#match))}}
}
I tested it in my terminal and it works.
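With this stdin-reading version loaded in your shell, the pipeline from the question should work as-is:
print -l $PWD/* | tail -1 | urlencode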

Related

Is there a script that can extract particular links from a txt file and write them to another txt file?

I'm looking for a script (or if there isn't one, I guess I'll have to write my own).
I wanted to ask if anyone here knows of a script that can take a txt file with n links (let's say 200). I need to extract only links that have particular characters in them; let's say I only need links that contain "/r/learnprogramming". I need the script to get those links and write them to another txt file.
Edit: Here is what helped me: grep -i "/r/learnprogramming" 1.txt >2.txt
You can use Ajax to read the .txt file using jQuery:
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script>
jQuery(function($) {
  console.log("start")
  $.get("https://ayulayol.imfast.io/ajaxads/ajaxads.txt", function(wholeTextFile) {
    var lines = wholeTextFile.split(/\n/),
        randomIndex = Math.floor(Math.random() * lines.length),
        randomLine = lines[randomIndex];
    console.log(randomIndex, randomLine)
    $("#ajax").html(randomLine.replace(/#/g, "<br>"))
  })
})
</script>
<div id="ajax"></div>
If you are using Linux or macOS you could use cat and grep to output the links.
cat in.txt | grep "/r/learnprogramming" > out.txt
Solution provided by OP:
grep -i "/r/learnprogramming" 1.txt >2.txt
Since you did not provide the exact format of the document, I assume the links are separated by newline characters. In that case the code is pretty straightforward in Python (or awk), since you can iterate over file.readlines() and print only those lines that match your pattern (either with a simple substring check like pattern in line, or with a regex if the pattern is more complex). To store the links in a new file, simply redirect stdout to a new file like this:
python script.py > links.txt
The same approach works even if the links are separated by some arbitrary symbol s: first read the file into a single string and then split it on s. I hope this helps.
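As a rough illustration of the Python approach described above (the file name 1.txt and the pattern are taken from the question; adjust as needed), a minimal script.py could look like this:
# script.py -- print only the lines that contain the wanted pattern
pattern = "/r/learnprogramming"

with open("1.txt") as f:            # input file, assumed to have one link per line
    for line in f.readlines():
        if pattern in line:         # simple substring check; use the re module for complex patterns
            print(line, end="")     # keep each link's own newline
Run it with python script.py > links.txt as shown above.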

Is there a way to create a Swift class with a script and add properties in pre-build?

Maybe this is dumb, but here goes nothing:
I am curious if there is a way to create a Swift class or struct based on a list of properties.
Maybe have this script inserted somewhere in the build phases.
For example, have a local JSON file or something similar to read from:
{"className":"Person", "name":"string", "age":"int" }
would create the struct:
struct Person {
    let name: String
    let age: Int
}
#!/bin/bash
gawk -F, '{
    # walk the comma-separated "key":"value" pairs on the line
    for (i = 1; i <= NF; ++i) {
        split($i, arr, ":")
        match(arr[1], /"(.*)"/, mat)
        key = mat[1]
        match(arr[2], /"(.*)"/, mat)
        value = mat[1]
        if (key ~ /className/) {
            struct_name = value
        }
        else if (value != "") {
            contents[key] = value
        }
    }
}
END {
    # emit the struct once the whole file has been read
    print "struct " struct_name " {"
    for (key in contents) {
        print "\tlet " key ": " contents[key]
    }
    print "}"
}' file
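For the sample JSON from the question saved in a file named file (the name the script reads), the output looks like this; note that awk's for (key in contents) loop does not guarantee the order of the properties, and the type names are copied verbatim from the JSON, so mapping string/int to Swift's String/Int would still be up to you:
struct Person {
    let name: string
    let age: int
}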
I have heard people say a lot that jq is better for dealing with JSON, but I have never tried it. So you should consider looking into jq if it helps (a rough, untested sketch follows below).
The above gawk script should produce the desired output.
The same can be done with general awk or even a normal bash script, but doing it with gawk was a little easier.
All you need to do is redirect the output of the above script to the desired file. For example, suppose you saved the script with the name parser:
bash parser > pathtoyourapp/filename.swift
You can also do the redirection on the last line of the script itself:
}' file > pathtoyourxcodeproject/controller/filename.swift
As you are writing Swift code, I assume you have a Mac. You can install gawk with any of the package managers available for macOS; the one I use is MacPorts. After installing MacPorts, you can install gawk with sudo port install gawk.
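For completeness, here is an untested sketch of the jq route mentioned earlier; it assumes jq is installed (for example via sudo port install jq) and, like the gawk script, it copies the type names verbatim from the JSON:
jq -r '"struct \(.className) {",
       (to_entries[] | select(.key != "className") | "\tlet \(.key): \(.value)"),
       "}"' file > pathtoyourapp/filename.swift
The -r flag prints raw strings, and the comma operator emits the header line, one let line per non-className key, and the closing brace.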
UPDATE:
As mentioned in the comments by trojanfoe, the project navigator won't update just by adding the file to the project directory. I am not sure about how to do that.
What I found after searching the net, this seems to do the job.

Reading a file line by line using bash, extracting some data. How?

I want to read a file and extract information from it based on a certain tag. For example:
SCRIPT_NAME:mySimpleShell.sh
This is a simple shell. I would like to have this as
Description. I also want to create a txt file out of this.
SCRIPT_NAME:myComplexShell.sh
This is a complex shell. I would like to have this as
Description. I also want to create a txt file out of this.
So when I pass this file to my shell script, the script will read it line by line, and when it gets to SCRIPT_NAME it extracts the value and saves it in $FILE_NAME, then starts writing the description to a file on disk named $FILE_NAME.txt. It keeps doing this until it reaches the end of the file. If there are 3 SCRIPT_NAME tags, then it creates 3 description files.
Thanks for helping me in advance :)
Read the lines using a while loop. Use a regex to check if a line has SCRIPT_NAME and if so, extract the filename. This is shown below:
#!/bin/bash
while IFS= read -r line
do
    if [[ $line =~ SCRIPT_NAME:(.*$) ]]
    then
        # the text after SCRIPT_NAME: becomes the output file name
        FILENAME="${BASH_REMATCH[1]}"
        echo "Writing to $FILENAME.txt"
    else
        # every other line is description text for the current script
        echo "$line" >> "$FILENAME.txt"
    fi
done < inputFile
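Assuming the sample input from the question is saved as inputFile (the name hard-coded in the redirect) and the script as, say, extract.sh (the name is arbitrary), a run looks like this:
$ bash extract.sh
Writing to mySimpleShell.sh.txt
Writing to myComplexShell.sh.txt
Each generated .txt file then contains the two description lines that followed its SCRIPT_NAME tag.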
#!/bin/sh
awk '/^SCRIPT_NAME:/ { split($0, a, ":"); name = a[2]; next }
     name            { print > (name ".txt") }' "${1?No input file specified}"
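This version takes the input file as its first argument rather than a hard-coded name, so, assuming it is saved as split.sh (again an arbitrary name) and the tagged file is scripts.txt:
sh split.sh scripts.txt
It produces the same mySimpleShell.sh.txt and myComplexShell.sh.txt files, without the progress messages.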

Help with grep in BBEdit

I'd like to grep the following in BBEdit.
Find:
<dc:subject>Knowledge, Mashups, Politics, Reviews, Ratings, Ranking, Statistics</dc:subject>
Replace with:
<dc:subject>Knowledge</dc:subject>
<dc:subject>Mashups</dc:subject>
<dc:subject>Politics</dc:subject>
<dc:subject>Reviews</dc:subject>
<dc:subject>Ratings</dc:subject>
<dc:subject>Ranking</dc:subject>
<dc:subject>Statistics</dc:subject>
OR
Find:
<dc:subject>Social web, Email, Twitter</dc:subject>
Replace with:
<dc:subject>Social web</dc:subject>
<dc:subject>Email</dc:subject>
<dc:subject>Twitter</dc:subject>
Basically, when there's more than one category, I need to find the comma and space, add a linebreak and wrap the open/close around the category.
Any thoughts?
Wow. Lots of complex answers here. How about find:
,
(there's a space after the comma)
and replace with:
</dc:subject>\r<dc:subject>
Find:
(.+?),\s?
Replace:
\1\r
I'm not sure what you meant by “wrap the open/close around the category” but if you mean that you want to wrap it in some sort of tag or link just add it to the replace.
Replace:
\1\r
Would give you
Social web
Email
Twitter
Or get fancier with Replace:
<a href="\1">\1</a>\r
Would give you
<a href="Social web">Social web</a>
<a href="Email">Email</a>
<a href="Twitter">Twitter</a>
In that last example you may have a problem with the “Social web” URL having a space in it. I wouldn't recommend that, but I wanted to show you that you could use the \1 backreference more than once.
The Grep reference in the BBEdit Manual is fantastic. Go to Help->User Manual and then Chapter 8. Learning how to use RegEx well will change your life.
UPDATE
Weird, when I first looked at this it didn't show me your full example. Based upon what I see now you should
Find:
(.+?),\s?
Replace:
<dc:subject>\1</dc:subject>\r
I don't use BBEdit, but in Vim you can do this:
%s/(_[^<]+)</dc:subject>/\=substitute(submatch(0), ",[ \t]*", "</dc:subject>\r", "g")/g
It will handle multiple lines and tags whose content spans line breaks. It handles lines with multiple subjects too, but it won't always get the newline between the close and start tags.
If you post this to the google group vim_use and ask for a Vim solution and the corresponding perl version of it, you would probably get a bunch of suggestions and something that works in BBEdit and then also outside any editor in perl.
Don
You can use sed to do this too; in theory you just need to replace ", " with the closing and opening <dc:subject> tags, with a newline character in between, and output to a new file. But sed doesn't seem to like the HTML angle brackets... I tried escaping them but still get error messages any time they're included. This is all I had time for so far, so if I get a chance to come back to it I will. Maybe someone else can solve the angle bracket issue:
sed s/, /</dc:subject>\n<dc:subject>/g file.txt > G:\newfile.txt
OK, I think I figured it out. Basically I had to put the replacement text containing angle brackets in double quotes and change the separator character sed uses to something other than a forward slash, since a slash appears in the replacement text and sed didn't like it. I don't know much about grep, but I have read that grep just matches things whereas sed will replace, so sed is better for this type of thing:
sed s%", "%"</dc:subject>\n<dc:subject>"%g file.txt > newfile.txt
You can't do this via normal grep, but you can add a "Unix Filter" to BBEdit that does this work for you:
#!/usr/bin/perl -w
while (<>) {
    my $line = $_;
    $line =~ /<dc:subject>(.+)<\/dc:subject>/;
    my $content = $1;
    my @arr;
    if ($content =~ /,/) {
        @arr = split(/,/, $content);
    }
    my $newline = '';
    foreach my $part (@arr) {
        $newline .= "\n" if ($newline ne '');
        $part =~ s/^\s*(\S*(?:\s+\S+)*)\s*$/$1/;
        $newline .= "<dc:subject>$part</dc:subject>";
    }
    print $newline;
}
How to add this UNIX-Filter to BBEdit you can read at the "Installation"-Part of this URL: http://blog.elitecoderz.net/windows-zeichen-fur-mac-konvertieren-und-umgekehrt-filter-fur-bbeditconverting-windows-characters-to-mac-and-vice-versa-filter-for-bbedit/2009/01/

Importing CSV from a variable instead of a file?

I have a command that formats its output as CSV. I have a list of machines this command will run against using a foreach loop. In the example below, $serverlist is automatically generated by an AD query.
foreach ($server in $serverlist) {
$outputlist = mycommand
}
What I would like to do is somehow end up with objects from the resulting CSV so I can then select only certain objects for a report. However, the only way I can see to do this is using Import-Csv, which only seems to want to work with files and not variables, i.e.:
Import-Csv output.csv | ft "HostName","TaskName" |
Where-object {$_.TaskName -eq 'Blah'}
I'd like to be able to have Import-Csv $outputlist instead. Doing this causes Import-Csv to have a hissy fit :)
Can anyone point me in the right direction on how to achieve this?
Cheers
The command you want is called ConvertFrom-CSV. The syntax is shown below.
NAME
ConvertFrom-CSV
SYNOPSIS
Converts object properties in comma-separated value (CSV) format into CSV
versions of the original objects.
SYNTAX
ConvertFrom-CSV [[-Delimiter] <char>] [-InputObject] <PSObject[]> [-Header <string[]>] [<CommonParameters>]
ConvertFrom-CSV -UseCulture [-InputObject] <PSObject[]> [-Header <string[]>] [<CommonParameters>]
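A minimal sketch of how this fits the original loop; mycommand and the HostName/TaskName columns come from the question, and it is assumed that mycommand knows which $server to target:
$report = foreach ($server in $serverlist) {
    # mycommand emits CSV text; ConvertFrom-Csv turns it into objects
    mycommand | ConvertFrom-Csv
}

$report | Where-Object { $_.TaskName -eq 'Blah' } |
    Format-Table HostName, TaskName
Note that the Where-Object filter runs before Format-Table, since objects piped out of Format-Table are formatting records rather than the original data.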
