Slash at the end of a URL

I think (correct me if I am wrong) that it is better to put a / at the end of most URLs, like this: http://www.myweb/file/
And not to put a / at the end of filenames: http://www.myweb/name.html
I have to correct that in a website with a lot of links. Is there a way I can do that quickly? For instance, in some programs like Dreamweaver I can use find and replace.
The second case is quite easy with Dreamweaver:
- Find: .html/"
- Replace: .html"
But how can I say something like:
- Find: all the links that end with a directory. Like http://www.myweb/file
- Replace: the same link but with a / at the end. Like http://www.myweb/file/

Your approach may work but it is based on the assumption that all files have a file extension.
There is a distinct difference between the URLs http://www.myweb/file and http://www.myweb/file/ because the latter could resolve to http://www.myweb/file/index.php, or any other file in the default set configured in your web server. The URL without the slash could also reference a perfectly valid file which doesn't contain a file extension, such as a REST endpoint.
So you are correct insofar as you should explicitly add a "/" if you are referring to a directory, for example when you expect the web server to look up the correct index page, or to produce a directory listing.
To replace the incorrect URLs, regular expressions are your friend.
To find all files which have an erroneous "/" you could use /\.(html|php|jpg|png)\//, adding as many different file extensions into that pipe-separated list as you like. You can then replace that with .$1 or .\1 depending on your tool.
An example of doing this with Perl would be:
perl -pi -e 's/\.(html|php|jpg|png)\//.$1/g' theFileYouWantToCheck.html
Or (if you're using a Linux-based system) you can automate that nicely with find:
find path/to/html/root -type f -name "*.html" | xargs perl -pi -e 's/\.(html|php|jpg|png)\//.$1/g'
which will find all html files in the directory and do an inline find and replace. Assuming you're using version control, it's then easy to see the changes it's applied :)
Update
Solving the problem of adding a slash to directories isn't trivial. The approach I'd take:
- Write a script to recurse through your website structure locally, making a list of all files.
- Parse the HTML files to extract all href=".*" values and replace them with href=".*/" only if the end of the URL isn't present in the list extracted by the first script.
Any text-based find and replace is not going to be aware of whether the link is actually to a file or not.
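A minimal Perl sketch of that approach, assuming the site is mirrored locally under path/to/html/root (a placeholder) and that every href is site-relative; anchors and query strings are skipped, and a real script would also have to resolve hrefs relative to each page's own directory:
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my $root = 'path/to/html/root';   # placeholder: your local mirror
my %is_file;                      # file paths relative to $root

# pass 1: collect every real file path
find(sub {
    return unless -f;
    (my $rel = $File::Find::name) =~ s{^\Q$root\E/}{};
    $is_file{$rel} = 1;
}, $root);

# pass 2: append "/" to hrefs that do not name a known file
find(sub {
    return unless -f && /\.html?$/;
    local $/;                     # slurp whole files
    open my $in, '<', $_ or die "$_: $!";
    my $html = <$in>;
    close $in;
    $html =~ s{href="([^"#?]+)"}{
        my $t = $1;
        ($t =~ m{/$} || $is_file{$t}) ? qq{href="$t"} : qq{href="$t/"};
    }ge;
    open my $out, '>', $_ or die "$_: $!";
    print {$out} $html;
    close $out;
}, $root);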


Rename a file name in an eloquent way

File.rename(blog_path + '/' + project_path, File.expand_path(topic_name, blog_path))
I use this code to rename a file in Ruby, but I think there is a better way to write this functionality with less code, since it includes blog_path twice.
The code is OK, but I think there is no need for expand_path here - this method creates an absolute path from a relative one.
Also, it is better to use File.join to create a path instead of just concatenating with a slash - it is completely OS independent. So I would write your code like this:
File.rename(File.join(blog_path, project_path), File.join(blog_path, topic_name))
Or, if you want to get rid of the doubled blog_path, change the working directory before doing the rename:
Dir.chdir(blog_path)
File.rename(project_path, topic_name)
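Note that Dir.chdir also accepts a block; the previous working directory is restored when the block exits, which avoids surprising the rest of your script:
Dir.chdir(blog_path) do
  File.rename(project_path, topic_name)
end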
You can find more info on working with files and directories in Ruby in the article Ruby for Admins: Files and Directories.

How to get Swagger UI to use my titles?

I am using Swagger-php and Swagger-UI and it all works just fine but for one annoyance. The UI that Swagger-UI creates has the expected click-to-expand sections for my API routes, but the title of each one appears to be the name of the generated JSON file, not any name I can give it. After the title, the description is the one I give in my annotation, but the title I seem to have no control over.
So if I have routes that begin with a resourcePath of /foo, and a description that says "Foo API Functions," the UI looks like:
foo.json : Foo API Functions
I don't want "foo.json" I'd much rather specify what this says. Like just "Foo" or even "Foo Functions" and then change my description to something more meaningful like, "This is where you find the foo functions."
Am I missing which annotation to use for this?
If you manually edit the api-docs.json file, you can replace the .json with .{format} and everything will display and function correctly. Not sure why the .{format} is not inserted by default. Slightly annoying.
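For illustration, in a Swagger 1.x resource listing the section title swagger-ui displays comes from the path value of each entry, so the edit amounts to changing entries roughly like this (abbreviated):
{ "path": "/foo.json", "description": "Foo API Functions" }
into:
{ "path": "/foo.{format}", "description": "Foo API Functions" }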
I too ran into this problem, but I couldn't find anything on either the swagger-php or swagger-ui GitHub pages mentioning this. So I wrote a short build script (assuming a Unix-like OS) as a workaround, which first builds all the docs using swagger.phar and then runs sed to do an in-place string replace on the problematic string. Here is what I did:
#!/bin/bash
# write API documentation from "src" directory to the "docs" directory
/usr/bin/php swagger.phar src -o docs
# replace instances of "json" with "{format}" to fix swagger-php formatting issue
sed -i -e 's/json/{format}/g' docs/api-docs.json
Fixed in swagger-php 0.9.1
I don't know why swagger-ui strips out ".{format}" but not ".json"
The .{format} was not inserted by default because it might be confusing. It suggests the presence of different formats and swagger-php only supports the json format.

Lua - My documents path and file creation date

I'm planning to write a program in Lua that will first of all read specific files and get information from them. So my first question is: what's the path to "My Documents"? I have searched a lot of places, but I'm unable to find anything. My second question is: how can I use the first four letters of a file name to see which one is the newest?
In short: find the files in "My Documents", then find the newest created file and read it.
The reading part shouldn't be a problem; the hard parts are navigating to "My Documents" and finding the newest created file in a folder.
For your first question, it depends on how robust you want your script to be. You could use Lua's built-in os.getenv() to read a variety of user-related environment variables, such as USERNAME, USERPROFILE, HOMEDRIVE, and HOMEPATH. Example:
username = os.getenv('USERNAME')
dir = 'C:\\users\\' .. username .. '\\Documents'
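A slightly more robust variant, assuming a normal Windows login where USERPROFILE is set, avoids hard-coding the drive and the users folder:
-- USERPROFILE already holds drive + user folder, e.g. C:\Users\name
local dir = os.getenv('USERPROFILE') .. '\\Documents'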
For the second question, there is no built-in mechanism in Windows to have the file creation or modification timestamp as part of the filename. You could read the creation or modification timestamp via a C extension you create, or using an existing Lua library like lfs. Or you could read the contents of a folder and parse the filenames if they were named according to the pattern you mention. Again, there is nothing built into Lua to do this; you would use os.execute() or lfs or your own C extension module, or combinations of these.
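As a minimal sketch of the second part, assuming lfs is installed and dir holds the folder from the snippet above, you could pick the file with the newest modification time like this:
local lfs = require('lfs')

local newest_path, newest_time
for name in lfs.dir(dir) do
  if name ~= '.' and name ~= '..' then
    local path = dir .. '\\' .. name
    local attr = lfs.attributes(path)
    -- attr.modification is the last-modified time; on Windows
    -- attr.change corresponds to the creation time
    if attr and attr.mode == 'file' and
       (not newest_time or attr.modification > newest_time) then
      newest_path, newest_time = path, attr.modification
    end
  end
end
print('newest file:', newest_path)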

How to compress multiple folders into one archive?

I have some compression components (like KAZip, JVCL, zLib) and know exactly how to use them to compress files, but I want to compress multiple folders into one single archive and keep the folder structure after extraction. How can I do it?
In all those components I can just give a list of files to compress; I cannot give the folder structure for extraction. There is no way (or I couldn't find one) to tell where every file must be extracted:
I have a file named myText.txt in folder FOLDER_A and a file with the same name myText.txt in folder FOLDER_B:
|
|__________ FOLDER_A
| |________ myText.txt
|
|__________ FOLDER_B
| |________ myText.txt
|
I can give a list of files to compress, myList(myText.txt, myText.txt), but I can't give the structure for uncompressing the files. What is the best way to record which file belongs to which folder?
The zip format just does not have folders. Well, it kind of does, but they are empty placeholders, only inserted if you need metadata storage like user access rights. Other than those rather rare advanced things, there is no need for folders at all. What is really done - and what you can observe by opening a zip file in Notepad and scrolling to the end - is that each file has its path in it, starting from the "archive root". In your example the zip file should have two entries (two files):
FOLDER_A/myText.txt
FOLDER_B/myText.txt
Note that the separators used are true slashes, common in the UNIX world, not the back-slashes used in the DOS/Windows world. Some libraries would fix back-slashes for you, some would not - just do your tests.
Now, let's assume that tree is contained in D:\TEMP\Project - just for example.
D:\TEMP\Project\FOLDER_A\myText.txt
D:\TEMP\Project\FOLDER_B\myText.txt
There are two more questions (other than path separators): are there folders within D:\TEMP\Project\ that should be ignored rather than zipped (like maybe D:\TEMP\Project\FOLDER_C\*.*)? And does your zip library have a direct API to pack a folder with all its internal subfolders and files, or should you do it file by file?
Those three questions you should ask yourself and check while choosing the library; the code drafts would differ somewhat.
Now let's start drafting for the libraries themselves:
The default variant is just using Delphi itself.
Enumerate the files in the folder: http://docwiki.embarcadero.com/CodeExamples/XE3/en/DirectoriesAndFilesEnumeraion_(Delphi)
If that enumeration results in absolute paths, then strip the common D:\TEMP\Project\ from the beginning: something like If AnsiStartsText('D:\TEMP\Project\', filename) then Delete(filename, 1, Length('D:\TEMP\Project\'));. You should get paths relative to the chosen containing folder - especially if you do not compress the whole path and leave some FOLDER_C out of the archive.
Maybe you should also call StringReplace to change '\' into '/' in the filenames.
Then you can zip them using http://docwiki.embarcadero.com/Libraries/XE2/en/System.Zip.TZipFile.Add - take care to specify the correct relative ArchiveFileName, like the aforementioned FOLDER_A/myText.txt.
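Put together, a minimal sketch of that variant (the root and zip paths are placeholders; needs Delphi XE2 or later for System.Zip):
uses
  System.SysUtils, System.IOUtils, System.Zip;

procedure ZipFolderTree(const Root, ZipName: string);
var
  Zip: TZipFile;
  FileName, Relative: string;
begin
  Zip := TZipFile.Create;
  try
    Zip.Open(ZipName, zmWrite);
    // enumerate every file under Root, including subfolders
    for FileName in TDirectory.GetFiles(Root, '*', TSearchOption.soAllDirectories) do
    begin
      // strip the common root and use forward slashes inside the archive
      Relative := StringReplace(
        ExtractRelativePath(IncludeTrailingPathDelimiter(Root), FileName),
        '\', '/', [rfReplaceAll]);
      Zip.Add(FileName, Relative);
    end;
  finally
    Zip.Free;
  end;
end;

// usage: ZipFolderTree('D:\TEMP\Project', 'D:\TEMP\project.zip');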
You can use the ZipMaster library. It is very VCL-bound and may cause trouble when used with threads or DLLs, but for simple applications it just works. http://www.delphizip.org/
The latest version page has links to a "setup" package which includes sources, help and demos. Among the demos there is a full-featured archive browser capable of storing folders, so you can read the code directly from it. http://www.delphizip.org/191/v191.html
You talked about JVCL, which means you already have the Jedi Code Library installed. And JCL comes with a proper class and function that, judging by the name, can directly do what you want it to: function TJclSevenzipCompressArchive.AddDirectory(const PackedName: WideString; const DirName: string = ''; RecurseIntoDir: Boolean = False; AddFilesInDir: Boolean = False): Integer;
Actually, all those libraries are rather similar at the basic level. When I made an XLSX export I just made a uniform zipping API that is used the same way regardless of which actual zipping engine is installed. But it works with in-memory TStreams rather than on-disk files, so it would not help you directly. Still, I learned that, apart from a few quirks (like instant vs postponed zipping), at ground level all those libs work the same.

ack misses results (vs. grep)

I'm sure I'm misunderstanding something about ack's file/directory ignore defaults, but perhaps somebody could shed some light on this for me:
mbuck$ grep logout -R app/views/
Binary file app/views/shared/._header.html.erb.bak.swp matches
Binary file app/views/shared/._header.html.erb.swp matches
app/views/shared/_header.html.erb.bak: <%= link_to logout_text, logout_path, { :title => logout_text, :class => 'login-menuitem' } %>
mbuck$ ack logout app/views/
mbuck$
Whereas...
mbuck$ ack -u logout app/views/
Binary file app/views/shared/._header.html.erb.bak.swp matches
Binary file app/views/shared/._header.html.erb.swp matches
app/views/shared/_header.html.erb.bak
98:<%= link_to logout_text, logout_path, { :title => logout_text, :class => 'login-menuitem' } %>
Simply calling ack without options can't find the result within a .bak file, but calling it with the --unrestricted option can. As far as I can tell, though, ack does not ignore .bak files by default.
UPDATE
Thanks to the helpful comments below, here are the new contents of my ~/.ackrc:
--type-add=ruby=.haml,.rake
--type-add=css=.less
ack is peculiar in that it doesn't have a blacklist of file types to ignore, but rather a whitelist of file types that it will search in.
To quote from the man page:
With no file selections, ack-grep only searches files of types that it recognizes. If you have a file called foo.wango, and ack-grep doesn't know what a .wango file is, ack-grep won't search it.
(Note that I'm using Ubuntu where the binary is called ack-grep due to a naming conflict)
ack --help-types will show a list of types your ack installation supports.
If you are ever confused about what files ack will be searching, simply add the -f option. It will list all the files that it finds to be searchable.
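For example, to check what ack would search under the app/views directory from the question:
ack -f app/views/
# compare with everything actually on disk
find app/views/ -type f
Anything in the second list but not the first is being skipped by ack's type filter.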
ack --man states:
If you want ack to search every file, even ones that it always ignores like coredumps and backup files, use the "-u" switch.
and
Why does ack ignore unknown files by default? ack is designed by a programmer, for programmers, for searching large trees of code. Most codebases have a lot of files in them which aren't source files (like compiled object files, source control metadata, etc.), and grep wastes a lot of time searching through all of those as well and returning matches from those files.
That's why ack's behavior of not searching things it doesn't recognize is one of its greatest strengths: the speed you get from only searching the things that you want to be looking at.
EDIT: Also, if you look at the source code, .bak files are ignored by default.
Instead of wrestling with ack, you could just use plain old grep, from 1973. Because it uses an explicit blacklist of files, instead of a whitelist of filetypes, it never omits correct results. Given a couple of lines of config (which I created in my home directory 'dotfiles' repo back in the 1990s), grep matches or surpasses many of ack's claimed advantages - in particular, speed: when searching the same set of files, grep is faster than ack.
The grep config that makes me happy looks like this, in my .bashrc:
# Custom 'grep' behaviour
# Search recursively
# Ignore binary files
# Output in pretty colors
# Exclude a bunch of files and directories by name
# (this both prevents false positives, and speeds it up)
function grp {
    grep -rI --color --exclude-dir=node_modules --exclude-dir=\.bzr --exclude-dir=\.git --exclude-dir=\.hg --exclude-dir=\.svn --exclude-dir=build --exclude-dir=dist --exclude-dir=.tox --exclude=tags "$@"
}
function grpy {
    grp --include='*.py' "$@"
}
The exact list of files and directories to ignore will probably differ for you: I'm mostly a Python dev and these settings work for me.
It's also easy to add sub-customisations, as I show with 'grpy', which I use to grep Python source.
Defining bash functions like this is preferable to setting GREP_OPTIONS, which will cause ALL executions of grep from your login shell to behave differently, including those invoked by programs you have run. Those programs will probably barf on the unexpectedly different behaviour of grep.
My new functions, 'grp' and 'grpy', deliberately don't shadow 'grep', so that I can still use the original behaviour any time I need that.
