In Laravel 5.6, how do I create relative links? - url

I'm new to Laravel (using 5.6) and can't get my links to work.
My directory structure is: resources/views/pages/samples
In the samples directory, I have 10 blade files I want to link to (named "sample1.blade.php", etc.). I have a "master" links page in the pages directory (one level up from samples).
I've tried the following but can't get any of them to work correctly...
[four attempted links, each rendered here as "Sample 1"; their href values were stripped along with the markup]
...and a few other variations.
I've also tried adding a <base> tag to the HTML <head>, but that doesn't help.
Every time I click a link, it says "Sorry, the page you are looking for could not be found."
What am I missing?

Thanks @happymacarts, I didn't realize I had to add a route for every single page in my site.
After adding the routes, the links are working.
I will get into the practice of updating the routes every time I add a page.
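For anyone else hitting this: Blade files under resources/views never get URLs by themselves; every page needs a route. A minimal sketch of what that looks like in Laravel 5.6 (the URI /samples/{n}, the route name samples.show, and the master page location are assumptions; the view names follow from the directory structure above):

<?php
// routes/web.php — one parameterised route covers all ten sample pages,
// so there is nothing to update when sample11.blade.php appears later
Route::get('/samples/{n}', function ($n) {
    // resolves resources/views/pages/samples/sample1.blade.php, sample2.blade.php, ...
    return view("pages.samples.sample{$n}");
})->where('n', '[0-9]+')->name('samples.show');

The links in the master page can then be generated instead of written as raw relative hrefs:

{{-- resources/views/pages/master.blade.php --}}
<a href="{{ route('samples.show', ['n' => 1]) }}">Sample 1</a>
<a href="{{ url('/samples/1') }}">Sample 1</a> {{-- same target, without the named route --}}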

Related

How to make relative links work in a .rmd file?

I am finishing an online tutorial created by Jennifer Bryan. There is a section about relative links in GitHub markdown files. I tried it and studied the examples, but it never works in my .rmd file.
I tried using branch names and the whole path, but the result is always "/rmd_output/1/master/ds.md not found" and "No image at path plot.jpg."
[a link](master/ds.md)
[a link](hwsf/ds.md)
![image1](plot.jpg)
Clicking "a link" should take you to another page, namely ds.md. I am asking because I have made sure that ds.md and plot.jpg are in the expected directory.
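A note that may explain the errors above (an assumption about the setup, not a confirmed fix): relative links in a rendered markdown file resolve against that file's own location, not against the repository or branch root, so GitHub-web-UI style paths such as master/ds.md will not resolve locally. If ds.md and plot.jpg sit in the same directory as the .rmd being rendered, the bare file names should be enough:

[a link](ds.md)
![image1](plot.jpg)

If they sit one level up, the usual ../ds.md form applies.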

What is the proper way to set up resource URLs in a ClojureScript single page application

I am developing a Clojure/ClojureScript SPA based on http-kit, compojure and tiny bits of hiccup on the backend, and mainly reagent on the frontend. The project is built with Leiningen, based on a hand-tweaked chestnut template.
When I tried to serve URLs more complex than just "/", the following setup created a mess for me:
When producing the initial hiccup to serve the HTML, I followed the examples and added the CSS and JS includes as relative URLs like
(include-css "css/style.css")
;and
(include-js "js/compiled/out/goog/base.js")
(include-js "js/compiled/myproject.js")
(note the absence of a leading slash)
The chestnut template ships with the cljsbuild :asset-path option set to "js/compiled/out".
Of course, when I added a route serving the same page at http://my-domain/something in addition to the root http://my-domain/, loading it failed to fetch any of my assets (it tried to fetch them from e.g. /something/js/compiled/myproject.js).
I was able to fix this for the explicitly included assets by making their URLs root-relative (prepending a slash to each of them). That left the same problem with the script tag src="js/compiled/out/cljs_deps.js" injected by cljsbuild, which I fixed by making :asset-path root-relative as well.
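Concretely, the fix amounted to three leading slashes plus one config change; a sketch of the root-relative form, using the same paths as above:

(include-css "/css/style.css")
;and
(include-js "/js/compiled/out/goog/base.js")
(include-js "/js/compiled/myproject.js")

;; and in the cljsbuild build options:
;; :asset-path "/js/compiled/out"

Root-relative URLs keep the browser from resolving assets against /something/ when the SPA is served from a non-root route.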
It all seems to work properly now, but the amount of head-scratching and googling it took to resolve this makes me suspect it is not the standard approach. Hence the questions:
Did I do the right thing by converting all asset URLs to root-relative ones (keeping in mind that I'm working on an SPA)?
If yes, why isn't this the default, and why do I keep seeing location-relative URLs everywhere, including the examples on the web and the lein templates?
Update:
The relevant part of my app's compojure routes looks like this:
(defroutes home-routes
  (resources "/")
  (GET "/" _
    (friend/authenticated
      (html-response
        (app-page))))
  (GET "/something*" _
    (friend/authenticated
      (html-response
        (app-page)))))

Gitbook not linking to root directory

In my markdown in GitBook, I have a link from a file in one folder to a page in a different folder.
In /folder2/file.md:
![](/example-folder/example-file.html#example-link)
In the gitbook editor's live preview, this link works as expected and goes to the correct page and location.
When building from the command line with gitbook build, however, the generated HTML drops the initial slash:
<a href="example-folder/example-file.html#example-link">
Without the initial slash, the link no longer points at the root directory where the target file lives; instead it tries to find /folder2/example-folder/.
I have tried using ./ at the beginning of the path and get the same result.
Any suggestions for what I'm doing wrong or how I can fix it? Thank you!
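Not an authoritative fix, but a form that sidesteps the leading-slash handling entirely is a path relative to the linking file itself; from /folder2/file.md the target described above sits one level up:

![](../example-folder/example-file.html#example-link)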

Overriding front template translations doesn't work

I'm working on PrestaShop and I'm trying to override the "order detail" page in the front office (the customer's order details).
This is what I did:
I copied the file \controllers\front\OrderDetailController.php into the folder \override\controllers\front\OrderDetailController.php.
I also copied the default template file order-detail.tpl into the folder override/customtemplate/order-detail.tpl.
In OrderDetailController.php I specified the template path like this:
$this->setTemplate(_PS_OVERRIDE_DIR_ . '/themes/parfum_evo/order-detail.tpl');
I tried it; everything works fine except the translations. Even after going through the documentation, none of the solutions I've tested seems to work.
Could anyone help me? Thank you in advance :'(
The PHP override sits in the correct place. As for the template: you placed it in override/customtemplate/order-detail.tpl, but your setTemplate() call points at override/themes/parfum_evo/order-detail.tpl. I take it customtemplate is really meant to be parfum_evo, but you need to add another directory named themes after override and use that structure. I think. There is also a hook named
DisplayOverrideTemplate
which should take care of this, while I believe setTemplate for controllers will always grab from the main theme folder.
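If that reading is right, the layout the setTemplate() call expects would be (paths taken from the question):

override/
    controllers/front/OrderDetailController.php    <- already in place
    themes/parfum_evo/order-detail.tpl             <- move the template here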

Nutch - crawl domain first

I am new to Nutch and I am trying to make it do some specific crawling: first I want it to go, say, 3 levels deep within one specific domain (e.g. Wikipedia). That part can be achieved by modifying the regex-urlfilter file.
But then I want it to start crawling all the external links it fetched before, to a depth of only 1 level.
So, my question is: is there any way to get the list of crawled links from the first run so that they can be used as seeds for the second crawl?
You can get the list of crawled URLs using this command:
bin/nutch readdb crawl/crawldb -dump file
You can then build the urls/seed.txt file for the second crawl by hand from that command's output.
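A sketch of the whole hand-off, with two assumptions worth checking against your Nutch version: that readdb writes Hadoop-style part files, and that each record line in the dump starts with the URL:

# dump the crawldb from the first (3-level, single-domain) crawl to plain text
bin/nutch readdb crawl/crawldb -dump crawldb_dump

# keep only the URLs as seeds for the second, depth-1 crawl of external links
grep -oE '^https?://[^[:space:]]+' crawldb_dump/part-00000 > urls/seed.txt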
