I have a KeePass database with around 100 entries containing URLs. In a bunch of these entries the URL looks like this:
https://banking.consorsfinanz.de/onlinebanking-cfg/loginFormAction.do
Now I want to "shorten/clean up" these URLs to this:
https://banking.consorsfinanz.de/
I could export the database to CSV and re-import it, but that forces me to create a new database, which I'd like to avoid. Is there maybe another way? If not, can somebody write a line of code, preferably one that runs on Windows (if not, Linux is also possible), to fix this in the CSV?
Something like:
Search for the third occurrence of / and delete everything afterwards OR
Search for * //*/ and delete everything afterwards
could work, or am I wrong?
Thank you!
Awk
awk 'BEGIN{FS=OFS="/"}{print $1,$2,$3,""}'
example:
$ awk 'BEGIN{FS=OFS="/"}{print $1,$2,$3,""}' <<< "https://domain.name/foo/bar/blah/whatever"
https://domain.name/
Sed
sed 's#\(https://[^/]*/\).*#\1#'
example:
$ sed 's#\(https://[^/]*/\).*#\1#' <<<"https://domain.name/foo/bar/blah/whatever"
https://domain.name/
Related
I am trying to create an ER diagram for my project using RailRoady. I am also using the PaperTrail gem, and because of this my ERD is all messed up. Is there any way to exclude the PaperTrail::Version table from the ERD?
I went through the following issues but couldn't understand much:
https://github.com/preston/railroady/issues/54
and
https://github.com/preston/railroady/pull/115
Can anyone give a snippet/example?
Stumbled across your question today and followed the first linked issue to this comment. I'm not entirely sure how you're using RailRoady, but what worked for me looked like this:
$ railroady -M -s $picked | sed '/PaperTrail/d' > picked.dot
Here, -M signifies models specifically and -s $picked passes my own list of model files where $picked looks like this:
./app/models/my_model.rb,./app/models/another_model.rb,./app/models/a_third_model.rb
I don't think it's necessary to specify files; it's just what I was doing because I only wanted to map certain files. I then pipe it to sed, which removes lines that mention PaperTrail, before outputting it to picked.dot.
Like I said, I don't know your specific use case, but this worked for me.
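To illustrate just the sed step, here is a toy stand-in for the DOT output (the node names are invented); every line mentioning PaperTrail is dropped before it reaches the .dot file:

```shell
# fake DOT lines standing in for RailRoady's output; names are made up
printf 'MyModel [shape=box];\nPaperTrail__Version [shape=box];\nAnotherModel [shape=box];\n' \
  | sed '/PaperTrail/d'
```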
I need to go through each and every folder/file in a PHP project to find some specific keywords, in this case the translation calls $this-> and $translator->.
I need to get those results and put them into a new file.
Here is what I tried using Ruby:
this = File.readlines("folder_path.php")
# If I need to get any translation that contains $this->, should I use grep? I tried grep before, but it didn't give the result I need.
that = File.open("new_file.txt", "w")
that << this
that.close
I hope that didn't cause any confusion. Thanks.
Just use grep:
grep '$this->' -R *.php -n > result.txt
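Since the question mentions both $this-> and $translator->, a hedged variant of the same grep that searches for both keywords could look like this (the demo directory and file are made up for illustration):

```shell
# demo setup: a throwaway .php file (names are illustrative)
mkdir -p demo
printf '<?php\n$this->translate("x");\nplain text\n$translator->t("y");\n' > demo/sample.php

# search both keywords, only in .php files, keeping line numbers
grep -Rn --include='*.php' -e '\$this->' -e '\$translator->' demo > result.txt
cat result.txt
```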
I'm writing a simple app that keeps track of how many lines of code there are in my rails application. It will keep track of how many lines of code I write per hour. At the moment I'm using a crontab command that runs every 10 minutes and appends the number of lines to a file.
0,10,20,30,40,50 * * * * cd bimble ; find . \( -name '*.rb' -o -name '*.erb' \) | xargs wc -l | tail -1 | sed 's/total//' >>linesOfCode.txt
Rather than writing to a file, I would like to send the numberOfLines value to a Rails app. What would be the easiest, simplest way to do this?
Do I have to write an API, something like this: http://squarism.com/2011/04/01/how-to-write-a-ruby-rails-3-rest-api/, or is there a better way to do this?
Thanks for your help!
Full API sounds like overkill. You can have a controller action that responds to GET, and pulls the right value you want out of the params hash, and does whatever you want internally in rails with it. Something like:
def getline
  lines = params[:lines]
  # do whatever with the lines value: store it in a model, etc.
end
Then you can simply have your cron script send it via wget, similar to:
wget -q -O /dev/null "http://rails_url/controller/getline?lines=$(tail -1 linesOfCode.txt)"
(note the tail -1: the cron job appends to linesOfCode.txt, so cat-ing the whole file would send every historical count instead of just the latest one).
I think (correct me if I am wrong) that it is better to put a / at the end of most URLs, like this: http://www.myweb/file/
and not to put a / at the end of filenames: http://www.myweb/name.html
I have to correct that in a website with a lot of links. Is there a way I can do that quickly? For instance, in some programs like Dreamweaver I can use find and replace.
The second case is quite easy with Dreamweaver:
- Find: .html/"
- Replace: .html"
But how can I say something like:
- Find: all the links that end with a directory. Like http://www.myweb/file
- Replace: the same link but with a / at the end. Like http://www.myweb/file/
Your approach may work, but it is based on the assumption that all files have a file extension.
There is a distinct difference between the urls http://www.myweb/file and http://www.myweb/file/ because the latter could resolve to http://www.myweb/file/index.php, or any other in the default set configured in your web server. That URL could also reference a perfectly valid file which doesn't contain a file extension, such as if it were a REST endpoint.
So you are correct insofar as you should explicitly add a "/" if you are referring to a directory, for example if you are expecting the web server to look up the correct index page to respond, or doing a directory listing.
To replace the incorrect URLS, regular expressions are your friend.
To find all files which have an erroneous "/" you could use /\.(html|php|jpg|png)\//, adding as many different file extensions into that pipe-separated list as you like. You can then replace that with .$1 or .\1 depending on your tool.
An example of doing this with Perl would be:
perl -pi -e 's/\.(html|php|jpg|png)\//.$1/g' theFileYouWantToCheck.html
Or (if you're using a Linux-based system) you can automate that nicely with find:
find path/to/html/root -type f -name "*.html" | xargs perl -pi -e 's/\.(html|php|jpg|png)\//.$1/g'
which will find all html files in the directory and do an inline find and replace. Assuming you're using version control, it's then easy to see the changes it's applied :)
Update
Solving the problem for adding a slash to directories isn't trivial. The approach I'd take:
Write a script to recurse through your website structure locally, making a list of all files
Parse the HTML files to extract all href=".*" and replace them with href=".*/" only if the end of the URL isn't present in the list extracted by the first script.
Any text-based find and replace is not going to be aware of whether the link is actually to a file or not.
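With that caveat in mind, a rough sed sketch for the directory case: it assumes any href whose last path segment contains no dot is a directory, so links to extensionless files would be mangled by it:

```shell
# adds a trailing slash to hrefs whose last segment has no dot;
# .html links are left untouched
echo '<a href="http://www.myweb/file">x</a> <a href="http://www.myweb/name.html">y</a>' \
  | sed -E 's#(href="[^"]*/[^"/.]+)"#\1/"#g'
```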
I have a set of tags from which I need to extract some data. I know this might be simple, but I can't quite get to the part I need. The tag is shown below:
<Response><Result>Success</Result></Response>
I want to extract whatever comes between the tags. In this case, 'Success'.
I tried using the grep command, but couldn't get it done. Any help would be appreciated.
echo "<Response><Result>Success</Result></Response>" | perl -npe 's/.*>([^<]+)<.*/$1/'
If the data is saved in a file:
perl -npe 's/.*>([^<]+)<.*/$1/' infile
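If GNU grep is available, lookarounds can do the same extraction without Perl (-P is a GNU extension, so this won't work with BSD grep):

```shell
# -o prints only the match; the lookbehind/lookahead keep the tags out
echo "<Response><Result>Success</Result></Response>" \
  | grep -oP '(?<=<Result>)[^<]+(?=</Result>)'
# → Success
```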