How to set a resource in a Firefox add-on? - firefox-addon

I have basically the same problem as this guy. I have a page, accessed over the web (well, a local intranet, if that matters), and it needs to reference images on the client's machine. I know those images are going to be in C:\pics. Internet Explorer lets you reference them directly, but I'm having trouble printing properly with Internet Explorer, so I want to try Firefox. The answer on that question says you can create a "resource" with a Firefox add-on that pages will be able to reference. However, it doesn't seem to be working. I followed the guide for how to make your first add-on and got the red border to work on Mozilla sites. I then tried editing that add-on to include a chrome.manifest file that contains just this line:
resource exposedpics file:///C:/pics
and then the page (an ASP page) references exposedpics:
<img align=left border="0" src="resource:///exposedpics/<%=Request("Number")%>.jpg" style="border: 3 solid #<%=bordercolor%>" align="right" WIDTH="110" HEIGHT="110">
The page doesn't show the picture. If I go to View Image Info on the image, I see the address is "resource:///exposedpics/8593.jpg" (in my example, where I input 8593), but it doesn't show the image there. (Yes, the image does exist under C:\pics; if I go to file:///C:/pics/8593.jpg, it loads.)
So maybe I don't know how to use a chrome.manifest. (I'm not sure whether I need to reference it somehow in my manifest.json; currently I don't.) That Stack Overflow question also says it's possible to dynamically create resources, so I tried making my manifest.json say:
{
  "manifest_version": 2,
  "name": "FirefoxPixExposer",
  "version": "1.0",
  "description": "allows websites to access C:\\pics",
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["expose.js"]
    }
  ]
}
and expose.js says
// Import Services.jsm unless in a scope where it's already been imported
Components.utils.import("resource://gre/modules/Services.jsm");
var resProt = Services.io.getProtocolHandler("resource")
                         .QueryInterface(Components.interfaces.nsIResProtocolHandler);
var aliasFile = Components.classes["@mozilla.org/file/local;1"]
                          .createInstance(Components.interfaces.nsILocalFile);
aliasFile.initWithPath("file:///C:/pics");
var aliasURI = Services.io.newFileURI(aliasFile);
resProt.setSubstitution("ExposedPics", aliasURI);
but the same thing happens: the image doesn't display. I did notice that if I put document.body.style.border = "5px solid red"; at the top of expose.js, I see a border around the body, but if I move it below the line Components.utils.import("resource://gre/modules/Services.jsm"); it doesn't show up. I therefore suspect the code to dynamically create a resource is broken.
What am I doing wrong? Ultimately, how can I get an image on the client's machine to show up on a page from the internet?

You are writing a WebExtension, so none of the APIs you are trying to use exist.
This includes Components.utils.import, Components.classes, etc. You should read Working with files on MDN to get an idea of what is still possible.
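That said, WebExtensions cannot expose an arbitrary local folder like C:\pics to web pages at all. If the images can instead be bundled inside the extension, a content script can point the page at those bundled copies. A minimal sketch under that assumption (the pics/ folder, the data-pic attribute, and the use of a content script this way are inventions for illustration, not from the question). In manifest.json, declare the bundled files as web-accessible:
"web_accessible_resources": ["pics/*.jpg"]
Then the content script can rewrite the page's images:
// expose.js (content script): point marked <img> tags at copies bundled with the extension.
// data-pic is a hypothetical attribute the page would set instead of a local file path.
for (const img of document.querySelectorAll("img[data-pic]")) {
  img.src = browser.runtime.getURL("pics/" + img.dataset.pic + ".jpg");
}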

Related

Twitter Card Images not working on Gatsby app

I'm working on a Gatsby app with Netlify CMS (and hosted on Netlify). Trying to get the metadata working so that Twitter cards display correctly with images.
The metadata is generally all right, but the images aren't showing on the Twitter validator or if I try to post to Twitter. The problem is clearly the images themselves, which are hosted on the site using Gatsby and Gatsby Image Sharp to render.
In fact, the validator seems to show no fundamental issues. Simply, the image doesn't show up:
Example relevant metadata:
<meta name="twitter:url" content="https://example.com/" data-react-helmet="true">
<meta name="twitter:image" content="https://example.com/static/12345/c5b20/blah.jpg" data-react-helmet="true">
<meta data-react-helmet="true" name="twitter:title" content="Site title">
<meta data-react-helmet="true" name="twitter:card" content="summary_large_image">
I know the images are the issue, because if I replace my image URL (which is the full image URL) with an external URL, it works fine, showing the full card with image.
Any idea what could be causing this? I'm sizing the image down so it loads quickly, and it seems to load just fine directly (eg). (I mean, is there something weird/off about that image?)
NOTE: In a previous version of this question, I referenced Cloudinary and Uploadcare, but have since removed those two in a branch to simplify the problem. (They seem to have been unnecessary holdovers from the starter app I used.) You can now see an example page for that branch here and the associated image in the twitter:image tag here. I feed this pre-processed/shrunk image into the header using React Helmet (and Gatsby React Helmet), using the following code in my GraphQL call to get the image associated with the blog post in that particular, smaller format:
featuredimage {
  childImageSharp {
    fixed(width: 480, quality: 75) {
      src
    }
  }
}
Second note/thought: Should I be worried that the pages in production seem to be re-rendering on every reload? Isn't SSR supposed to ensure that doesn't happen? I tested this by including a hidden call to Math.random() in the page. You can see the result by running document.getElementsByClassName('document')[0].children[0].innerText, and note that it produces a different number on each page reload. This implies to me that the whole page is being re-rendered by the client. Isn't that wrong? Why would that be happening? Might it relate to some sort of client-side processing of the images on each request, which might be screwing up the Twitter cards?
Third update: I put together a simpler reproduction here. It's based on this starter template, with Uploadcare/Cloudinary removed and Twitter card metadata added to the header. Other than that, and removing unnecessary pages, I didn't make any other changes. I used this starter for a repro rather than a vanilla starter app because I'm unsure whether the issue is caused by the interaction of Netlify CMS and the Gatsby Sharp image plugin. I might try to put together a second reproduction. For now, the code for this repo is here, and the pages that should show Twitter cards are the blog posts, such as this one.
ACTUALLY, it seems that a super basic reproduction, with Gatsby 3 and no Netlify CMS or anything, has the same issue. Here's the minimal reproduction, with the image taken from src/images using an allImageSharp query and inserted into the metadata for each page. Code here.
FINAL UPDATE
Based on Derek's answer below, I removed the @reach/router stuff and got the site URL from Netlify build env variables. It appeared that @reach/router only gave this information when JS was running, which excluded the Twitterbot, resulting in an undefined base URL, which broke the Twitter image. Including the URL from Netlify (using process.env.URL in the Gatsby config and pulling that in through a siteMetadata query) fixed the problem!
Update:
I think I might have found the issue. When opening the minimal reproduction with scripts disabled, the URL for twitter:image is invalid:
<meta data-react-helmet="true" name="twitter:image" content="undefined/static/03475800ca60d2a62669c6ad87f5fda0/58026/energy.jpg">
So for some reason, during build, the hostname is missing, but when JS kicks in, it appears (it might have something to do with the way you get the hostname). Twitter's crawler probably does not have JS enabled and couldn't fetch the image.
Make sure your Open Graph images are absolute URLs with https:// or http:// protocols. I checked your example link and saw that it was a relative link (/static/etc.).
Also, Twitter seems to demand a 2:1 aspect ratio for large-image cards:
Images for this Card support an aspect ratio of 2:1 with minimum dimensions of 300x157 or maximum of 4096x4096 pixels.
https://developer.twitter.com/en/docs/twitter-for-websites/cards/overview/summary-card-with-large-image
If you're using the latest Gatsby image plugin, you can use aspectRatio to crop the image.
Also note that you can skip the twitter:image tag if your og:image already satisfies Twitter's card requirement.
SSR does not mean JS never runs on the client; React will render your page on the client side regardless of SSR.
This was solved here: https://github.com/gatsbyjs/gatsby/discussions/32100.
"location and thus origin is not available during gatsby build and thus the generated HTML has undefined there."
I got it working by changing the way I create the image URL inside seo.js from this:
let origin = "";
if (typeof window !== "undefined") {
origin = window.location.origin;
}
const image = origin + imageSrc;
to this:
const imageSrc = thumbnail && thumbnail.childImageSharp.fixed.src;
const image = site.siteMetadata?.siteUrl + imageSrc;
You need to use siteUrl from siteMetadata.
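For reference, here is a minimal sketch of the gatsby-config.js side, assuming (as in the asker's final update) that the canonical URL comes from Netlify's URL build environment variable; the title and the local-dev fallback are placeholders, not from the original post:
// gatsby-config.js
module.exports = {
  siteMetadata: {
    title: "My Blog", // placeholder
    // Netlify sets process.env.URL at build time; the literal is a local-dev fallback.
    siteUrl: process.env.URL || "http://localhost:8000",
  },
}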
Below is my pageQuery from inside blog-post.js:
export const pageQuery = graphql`
  query BlogPostBySlug(
    $id: String!
    $previousPostId: String
    $nextPostId: String
  ) {
    site {
      siteMetadata {
        title
        siteUrl
      }
    }
    markdownRemark(id: { eq: $id }) {
      id
      excerpt(pruneLength: 160)
      html
      frontmatter {
        title
        date(formatString: "MMMM DD, YYYY")
        description
        thumbnail {
          childImageSharp {
            fixed(width: 1200) {
              ...GatsbyImageSharpFixed
            }
          }
        }
      }
    }
  }
`

How to add extensions to Microsoft Edge

I am trying to figure out how to add a webRequest extension to Microsoft Edge. Can someone provide some assistance? I have gone through a number of documents, but when I go to the Microsoft online store I don't see it there.
Test code:
<html>
<script>
browser.webRequest.onBeforeRequest.addListener(
  logURL,
  { urls: ["<all_urls>"] }
);
function logURL(requestDetails) {
  console.log("Loading: " + requestDetails.url);
}
</script>
</html>
In the MDN documentation for webRequest, we can see that:
To use the webRequest API for a given host, an extension must have the "webRequest" API permissions and the host permission for that host.
Where can we add the permissions? The answer is the manifest.json file, which is a necessary part of every extension. See Anatomy of an extension to learn what an extension is composed of.
Besides, browser.webRequest isn't in the list of content script APIs, so we can only use it in background scripts.
In conclusion, we can't just use browser.webRequest in a script in an HTML file. If we want to test the browser.webRequest.onBeforeRequest event, we need a manifest.json file with the permissions in it:
"permissions": [
"*://learn.microsoft.com/*",
"webRequest"
]
Then put the script you wrote in a background script. You can then debug the extension in Edge and there will be no errors. Here is an article about creating a Microsoft Edge extension; refer to it if you need.
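Putting the pieces together, a minimal sketch of such an extension (the file names are assumptions; depending on the Edge version, the chrome.* namespace may be needed in place of browser.*):
manifest.json:
{
  "manifest_version": 2,
  "name": "webRequest logger",
  "version": "1.0",
  "permissions": ["webRequest", "*://learn.microsoft.com/*"],
  "background": {
    "scripts": ["background.js"]
  }
}
background.js:
// Log every matching request before it is sent.
function logURL(requestDetails) {
  console.log("Loading: " + requestDetails.url);
}
browser.webRequest.onBeforeRequest.addListener(
  logURL,
  { urls: ["*://learn.microsoft.com/*"] }
);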

How Automatically On "Content Blocker" Extension in Safari section?

I am creating an ad blocker, and I am trying to automatically turn on the Safari "Content Blocker" extension. I went through examples but didn't find any solution. Is there any way to turn the extension on programmatically, or does the user have to enable it manually?
On iOS, Safari Content Blockers are disabled by default.
There is no way to automatically enable them from your app. You must instruct the user to:
1. Open the Settings app.
2. Go to Safari > Content Blockers.
3. Toggle on your Content Blocker extension.
On macOS (as of 10.12), a similar rule applies: Content Blocker extensions (bundled with your app) are disabled by default, and must be toggled on by the user in Safari Preferences > Extensions.
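For context, a Content Blocker extension works by handing Safari a JSON rule list; it cannot run arbitrary code on the page. A minimal sketch of a rule that hides elements with an "ads" class on every page (the selector is an assumption for illustration):
[
  {
    "trigger": { "url-filter": ".*" },
    "action": { "type": "css-display-none", "selector": ".ads" }
  }
]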
Assuming you want to test your "personal AdBlock program", first prepare a dummy HTML page containing this line: <div class="ads">hello</div>.
Next, apply your "personal AdBlock program". Assuming it is JavaScript/CSS based and not proxy-like, you either hide the element or remove it (the Node) from the DOM.
for example:
document.querySelector('div[class*="ads"]') -- this is nice and (very) generic way to find the element.
this is how to hide "the ads"
document.querySelector('div[class*="ads"]').style.display="none";
Or, to make it stronger against other rules on the page, make it a local style with an !important modifier: document.querySelector('div[class*="ads"]').style.cssText = "display:none !important;";
You can also remove the element (Node) from the DOM:
var e = document.querySelector('div[class*="ads"]'), followed by:
e.parentNode.removeChild(e);
Now you probably want to see that "YOUR ADBLOCK" worked.
Later (after the page has loaded and your JavaScript code has run), type:
console.log(null === document.querySelector('div[class*="ads"]') ? "removed(success)" : "still here(failed)")
Note that for this example (to keep things simple) I assume there is only one div with that class on the page (avoiding loops :) ).
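If a page has several matching divs, here is a sketch that removes them all (a straightforward extension of the single-element example above):
// Remove every div whose class attribute contains "ads".
document.querySelectorAll('div[class*="ads"]').forEach(function (e) {
  e.parentNode.removeChild(e);
});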
If you're only going to hide the element, you should query its current (most up-to-date) style using a native method that exists under window:
console.log("none" === window.getComputedStyle(document.querySelector('div[class*="ads"]')) ? "hidden(success)" : "still here(failed)")
Enjoy!

nicEdit Uploading Locally - Issues with nicUpload

If anyone has managed to get local image uploading working, I'd be mightily appreciative of some help.
I've downloaded the latest version of nicEdit along with the nicUpload plug-in (from nicedit.com - Version 0.9 r24, released June 7th, 2012).
I've also downloaded nicUpload.php from http://svn.nicedit.com//trunk/nicUpload/php/nicUpload.php
nicUpload.php - I've set NICUPLOAD_PATH and NICUPLOAD_URI both to 'images', which is the subfolder of where nicUpload.php and nicEdit.js are located.
nicEdit.js - I've added the following at line 271:
uploadURI : 'nicUpload.php?id=123',
I've given it an ID because otherwise it was failing with an invalid-ID error, but the ?id=123 isn't meant to be there. I've also set the iconsPath accordingly.
At line 1370 I've switched this:
nicURI : 'http://api.imgur.com/2/upload.json',
for this:
nicURI : 'http://www.mydomain.com/nicedit/nicUpload.php',
But I'm still getting "Failed to upload image". I've searched and searched for answers, and I'm getting close to having spent two days tinkering with it.
With a few debugging displays, I can see that it's failing at line 46 of nicUpload.php, which says:
$file = $_FILES['nicImage'];
$image = $file['tmp_name'];
$max_upload_size = ini_max_upload_size();
if(!$file) {...
That last if is true, and that's where it exits with the error.
Appreciate anyone being able to help.
The nicUpload.php script file lying around sucks, and I don't even understand how it could work.
NicEditor uses imgur as the default image upload service. The source code follows the API format described here: http://api.imgur.com/resources_anon#upload
My suggestion would be to implement the API request and response defined there.
I didn't use the nicEdit upload function to do what you want. I managed to add a button to the link and img dropdown menus. The button opens a file manager window from which you can also upload. I then put the URL of the image or document into the nicEdit img or URL dropdown window. That is how I solved the problem.

Developing a Firefox plugin/addon that invokes "Save As" from FF's own set of functions

I have a basic FF add-on that polls for something in the DOM of the page in window.document. When it sees it, it is supposed to save the page. That's the hard part. I don't want to replicate the functionality of "save complete"; I just want to call the pre-existing functionality from the plugin/add-on at the right moment.
Is this an XPCOM thing? Or is it pure JavaScript via the relevant APIs?
iMacros for Firefox can invoke Save-as (without popping the associated dialog), but I can't see how.
Can anyone advise as to how to call deeper Firefox functions like this?
Thanks, - Paul
PS - I really love Mozilla Archive Format, with MHT and Faithful Save, but I think it is replicating functionality again. My alternative is to invoke its function, but that's as opaque to me as the Firefox-native one.
You can use nsIWebBrowserPersist.saveDocument() for this:
var persist = Cc["#mozilla.org/embedding/browser/nsWebBrowserPersist;1"].
createInstance(Ci.nsIWebBrowserPersist);
var localPath = Cc["#mozilla.org/file/local;1"].
createInstance(Ci.nsILocalFile);
localPath.initWithPath(pathToLocalDirectory);
var localFile = localPath.clone();
localFile.append("mylocalfile.html");
persist.saveDocument(document, localFile, localPath, null, 0, 0);
The key is the third parameter which specifies where the linked URIs should be stored. See http://mxr.mozilla.org/mozilla2.0/source/embedding/components/webbrowserpersist/public/nsIWebBrowserPersist.idl#256 for complete documentation.
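To connect this to the polling scenario in the question, here is a minimal sketch (assuming a legacy add-on where Cc/Ci are available; saveTo is a hypothetical wrapper around the snippet above, and the marker selector is invented for illustration):
// pageDocument is the DOM document being watched; in an overlay script it
// would be content.document.
var observer = new MutationObserver(function () {
  if (pageDocument.querySelector("#done-marker")) { // hypothetical marker element
    observer.disconnect();
    saveTo(pageDocument, "C:\\SavedPages"); // saveTo wraps the nsIWebBrowserPersist code above
  }
});
observer.observe(pageDocument, { childList: true, subtree: true });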
