How to extract values from html script in server.R - shiny-server

I am trying to build an app that lets users select rectangles on a Google Map. The coordinates should then be extracted and used for further calculations. I found an excellent tutorial here on how to interact with the Google API and send the data to R. However, I needed to modify the files a bit to have a custom ui.R that lets the user select the fields but also set other options important for processing.
So now I have three files in the same directory: the HTML file with the JavaScript commands for the Google API, the ui.R, and the server.R.
index.html
The content can be found in the link above, but the important part is:
// This function listens to the drawing manager; after you draw the rectangle it extracts the coordinates of the NE and SW corners
google.maps.event.addListener(drawingManager, 'rectanglecomplete', function(rectangle) {
  var ne = rectangle.getBounds().getNorthEast();
  var sw = rectangle.getBounds().getSouthWest();
  // The following code imports the coordinates of the NE and SW corners of the rectangle into R
  Shiny.onInputChange("NE1", ne.lat());
  Shiny.onInputChange("NE2", ne.lng());
  Shiny.onInputChange("SW1", sw.lat());
  Shiny.onInputChange("SW2", sw.lng());
});
ui.R
I use tags$embed to embed the index.html file in ui.R
require(shiny)

shinyUI(
  fluidPage(
    titlePanel("Included Content"),
    # mainPanel(
    fluidRow(
      column(9,
             div(style = 'height:800px; width:1200px; overflow: hidden',
                 tags$embed(src = "./index.html", seamless = FALSE, width = "100%", height = "100%"))),
      column(3,
             h2("Some Calculations"),
             selectInput("Test", label = NA, choices = c("yes", "no")),
             selectInput("Test2", label = NA, choices = c("2", "3")),
             h3("Coordinates"),
             tableOutput("test"),
             h3("allInput"),
             tableOutput("test2"))
    )
  ))
server.R
# server.R
library(sp)
library(rjson)
library(shiny)
library(dplyr)
library(tidyr)

shinyServer(function(input, output, session) {
  CorLongLat <<- reactive({
    if (length(input$NE1) > 0) {
      # Build the four rectangle corners (long/lat) from the NE and SW corners sent by the map
      as.data.frame(matrix(c(input$NE2, input$NE1,
                             input$NE2, input$SW1,
                             input$SW2, input$SW1,
                             input$SW2, input$NE1),
                           ncol = 2, byrow = TRUE))
    }
  })
  output$test <- renderTable({
    CorLongLat()
  })
  output$test2 <- renderTable({
    as.data.frame(reactiveValuesToList(input)) %>% gather(ID, Value)
  })
})
However, if I try to display all inputs via
output$test2 <- renderTable({
  as.data.frame(reactiveValuesToList(input)) %>% gather(ID, Value)
})
it shows only the controls that are directly embedded in the ui.R. How do I access the variables NE1, NE2, SW1, SW2 from index.html that are created via Shiny.onInputChange?
EDIT
In the meantime I got a little closer to the core of the problem. Running the app with options(shiny.trace=TRUE) shows the following after launch:
Listening on http://127.0.0.1:4840
SEND {"config":{"workerId":"","sessionId":"16000363ddc17409cd5fdf25c038b61d"}}
RECV {"method":"init","data":{"Test":"yes","Test2":"2",".clientdata_output_test4_hidden":false,".clientdata_pixelratio":1,".clientdata_url_protocol":"http:",".clientdata_url_hostname":"127.0.0.1",".clientdata_url_port":"4840",".clientdata_url_pathname":"/",".clientdata_url_search":"",".clientdata_url_hash_initial":"",".clientdata_singletons":"",".clientdata_allowDataUriScheme":true}}
SEND {"errors":[],"values":[],"inputMessages":[]}
SEND {"config":{"workerId":"","sessionId":"0881062c26c882de2d0648a96ed98296"}}
RECV {"method":"init","data":{".clientdata_output_json_hidden":false,".clientdata_pixelratio":1,".clientdata_url_protocol":"http:",".clientdata_url_hostname":"127.0.0.1",".clientdata_url_port":"4840",".clientdata_url_pathname":"/index.html",".clientdata_url_search":"",".clientdata_url_hash_initial":"",".clientdata_singletons":"",".clientdata_allowDataUriScheme":true}}
SEND {"errors":[],"values":[],"inputMessages":[]}
After drawing a rectangle on the map I get
RECV {"method":"update","data":{"NE1":67.40748724648756,"NE2":-11.77734375,"SW1":62.552856958572896,"SW2":-26.9384765625}}
And after changing a selectInput:
RECV {"method":"update","data":{"Test2":"3"}}
I think the problem here is the second sessionId that is established. How do I refer to the inputs of that second session in server.R, or how do I combine both sessions into one?
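One untested workaround sketch, assuming the embedded page is same-origin: because tags$embed loads index.html as a separate document, that document gets its own Shiny session; sending the values through the parent window's Shiny object instead should deliver them to the session that ui.R and server.R belong to:
google.maps.event.addListener(drawingManager, 'rectanglecomplete', function(rectangle) {
  var ne = rectangle.getBounds().getNorthEast();
  var sw = rectangle.getBounds().getSouthWest();
  // Use the embedding (parent) window's Shiny object if available,
  // falling back to the embedded page's own session
  var shiny = (window.parent && window.parent.Shiny) || Shiny;
  shiny.onInputChange("NE1", ne.lat());
  shiny.onInputChange("NE2", ne.lng());
  shiny.onInputChange("SW1", sw.lat());
  shiny.onInputChange("SW2", sw.lng());
});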

Related

Twitter Card Images not working on Gatsby app

I'm working on a Gatsby app with Netlify CMS (and hosted on Netlify). Trying to get the metadata working so that Twitter cards display correctly with images.
The metadata is generally all right, but the images aren't showing on the Twitter validator or if I try to post to Twitter. The problem is clearly the images themselves, which are hosted on the site using Gatsby and Gatsby Image Sharp to render.
In fact, the validator seems to show no fundamental issues. Simply, the image doesn't show up:
Example relevant metadata:
<meta name="twitter:url" content="https://example.com/" data-react-helmet="true">
<meta name="twitter:image" content="https://example.com/static/12345/c5b20/blah.jpg" data-react-helmet="true">
<meta data-react-helmet="true" name="twitter:title" content="Site title">
<meta data-react-helmet="true" name="twitter:card" content="summary_large_image">
I know the images are the issue, because if I replace my image URL (which is the full image URL) with an external URL, it works fine, showing the full card with image.
Any idea what could be causing this? I'm sizing the image down so it loads quickly, and it seems to load just fine directly (eg). (I mean, is there something weird/off about that image?)
NOTE: In a previous version of this question, I referenced Cloudinary and Uploadcare, but have since removed those two in a branch to simplify the problem. (They seem to have been unnecessary holdovers from the starter app I used.) You can now see an example page for that branch here and the associated image in the twitter:image tag here. I feed this pre-processed/shrunk image into the header using React Helmet (and Gatsby React Helmet), and use the following code in my GraphQL call to get the image associated with the blog post in that particular, smaller format:
featuredimage {
  childImageSharp {
    fixed(width: 480, quality: 75) {
      src
    }
  }
}
Second Note/thought: Should I be worried about the fact that the pages in production seem to be re-rendering on every reload? Isn't SSR supposed to ensure that doesn't happen? I tested this by including a call to Math.random(), hidden, in the page. You can see the result by running document.getElementsByClassName('document')[0].children[0].innerText, and note that it produces a different number on each page reload. This implies to me that the whole page is being re-rendered by the client. Isn't that wrong? Why would that be happening? Might that relate to some sort of client processing of the images on each request, which might be screwing up the Twitter cards?
Third update: I put together a simpler reproduction here. It's based off of this starter template, with Uploadcare/Cloudinary removed and Twitter card metadata added to the header. Other than that, and removing unnecessary pages, I didn't make any other changes. I used this starter for a repro rather than a vanilla starter app, because I'm unsure whether the issue is caused by the interaction of Netlify CMS and the Gatsby Sharp Image plugin. I might try to put together a second reproduction. For now, the code for this repo is here, and the pages that should show Twitter cards are the blog posts, such as this one.
ACTUALLY, it seems that a super basic reproduction, with Gatsby 3 and no Netlify CMS or anything, has the same issue. Here's the minimal reproduction, with the image taken from src/images using an allImageSharp query and inserted into the metadata for each page. Code here.
FINAL UPDATE
Based on Derek's answer below, I removed the @reach/router stuff and got the site URL from Netlify build env variables. It appeared that @reach/router only gave this information when JS was running, which excluded the Twitterbot, resulting in an undefined base URL, which broke the Twitter image. Including the URL from Netlify (using process.env.URL in the Gatsby config and pulling that in through a siteMetadata query) fixed the problem!
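A minimal sketch of that fix in gatsby-config.js, assuming a Netlify build where process.env.URL holds the site's primary URL at build time (the title and fallback URL below are placeholders):
// gatsby-config.js
module.exports = {
  siteMetadata: {
    title: "My Site",
    // Netlify sets URL during builds; fall back to a hard-coded value for local dev
    siteUrl: process.env.URL || "https://example.com",
  },
};
The seo.js snippet further down this thread then prepends this siteUrl to the image path.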
Update:
I think I might have found the issue. When opening the minimal reproduction with scripts disabled, the URL for twitter:image is invalid:
<meta data-react-helmet="true" name="twitter:image" content="undefined/static/03475800ca60d2a62669c6ad87f5fda0/58026/energy.jpg">
So for some reason, during build, the hostname is missing, but when JS kicks in, it appears (might have something to do with the way you get the hostname). The Twitter crawler probably does not have JS enabled and couldn't fetch the image.
Make sure your Open Graph images are absolute URLs with https:// or http:// protocols. I checked your example link and saw that it was a relative link (/static/etc.).
Twitter also seems to demand that social card images be 2:1:
Images for this Card support an aspect ratio of 2:1 with minimum dimensions of 300x157 or maximum of 4096x4096 pixels.
https://developer.twitter.com/en/docs/twitter-for-websites/cards/overview/summary-card-with-large-image
If you're using the latest Gatsby image plugin, you can use aspectRatio to crop the image.
Also note that you can skip the twitter:image tag, if your og:image has already satisfied Twitter's card requirement.
SSR does not mean JS never runs in the client; React will render your page on the client side regardless of SSR.
This was solved here: https://github.com/gatsbyjs/gatsby/discussions/32100.
"location and thus origin is not available during gatsby build and thus the generated HTML has undefined there."
I got it working by changing the way I create the image URL inside seo.js from this:
let origin = "";
if (typeof window !== "undefined") {
  origin = window.location.origin;
}
const image = origin + imageSrc;
to this:
const imageSrc = thumbnail && thumbnail.childImageSharp.fixed.src;
const image = site.siteMetadata?.siteUrl + imageSrc;
You need to use siteUrl from siteMetadata.
Below is my pageQuery from inside blog-post.js:
export const pageQuery = graphql`
  query BlogPostBySlug(
    $id: String!
    $previousPostId: String
    $nextPostId: String
  ) {
    site {
      siteMetadata {
        title
        siteUrl
      }
    }
    markdownRemark(id: { eq: $id }) {
      id
      excerpt(pruneLength: 160)
      html
      frontmatter {
        title
        date(formatString: "MMMM DD, YYYY")
        description
        thumbnail {
          childImageSharp {
            fixed(width: 1200) {
              ...GatsbyImageSharpFixed
            }
          }
        }
      }
    }
  }
`

Rails/Slim auto-encoding Postgres geometric types

I have Google Maps polygons stored as polygons in Postgres, and I read them straight from the DB to output to a React component for editing using the Google Maps API.
In my local dev environment this works fine and by inspecting the data being fed to the React component everything looks normal:
this.state = {
map: "POLYGON ((10.69332405332034 59.88086121809927, 10.77572151425784 59.84569766552776, 10.81554695371096 59.84121336506844, 10.8450727105469 59.84518027707294, 10.86910530332034 59.85397478713949, 10.91442390683596 59.88499566305687, 11.020510637793 59.9383527020427, 10.99115654233401 59.96809210273585, 10.91811462644046 59.99462872670429, 10.80250068906253 60.0067306049673, 10.58723732236331 59.97273110496651, 10.43772026303714 59.86724837030302, 10.44239803555911 59.85643166134471, 10.44501587155..."
}
But in production it seems some kind of compression/encoding is taking effect, rendering the data unusable to Google Maps:
this.state = {
map: "01030000000100000011000000004DF44CCD71164029B0493D19844D40004DF44C45AE1640E4A03B36D6814D40004DF44C75D81640139E06594D7F4D40004DF44C7532174001AC3D1CC47C4D40004DF44CAD0917404808CC83B0774D40004DF44CB5101740926DE6EDB2714D40004DF44C45081740393029BE276E4D40004DF44C8513174013D62165106C4D40004DF44C2DF31640637589D49B6A4D40004DF44CBD901640E794678CCA6B4D40004DF44C5535164080F6C4D4E16B4D40004DF44CCD17164099584C2D84724D40004DF44C553516400505C70B53784D40004DF44C1D31164037A2643EC17F4D40004DF44CC53D1640F4139BEDEA"
}
Background/environment
We recently had to take a server out of service, and in its place we added two new ones to the load balancer. They were set up through Cloud 66 using the same config so they should be exactly the same, but I guess you never know.
We use slim syntax for templates.
I should clarify: Nothing is being done explicitly by our application code to the map field on its way from Postgres to the React component. We get the database record like so: @coverage_map = CoverageMap.find(params[:id]) and then output it in the template like so: coverageMap: @coverage_map. The outputted data on display here is copied from the HTML template being rendered by Slim.
What could be happening here? Any tips on what to look for?
In your dev environment you're retrieving the geometry from the database as WKT (Well Known Text), which is not PostgreSQL's standard output. In production you're getting a WKB (Well Known Binary) representation of the geometry, which is what you normally see when firing a simple select. What you need is to use ST_AsText to get your WKT, e.g.
WITH mytable (geom) AS (
VALUES ('POINT(1 1)'::geometry)
)
SELECT geom,ST_AsText(geom) FROM mytable;
geom | st_astext
--------------------------------------------+------------
0101000000000000000000F03F000000000000F03F | POINT(1 1)
(1 row)

Zebra Printer - Cut on last page

I have a Zebra ZT610 and I want to print a label, in PDF format, containing multiple pages, and then have it cut on the last page. I've tried using the delayed cut mode and sending the ~JK command, but I'm using a self-written Java application to do the invocation of printing. I've also tried to add the string "${^XB}$" into the PDF document before each page break, except the last, and used the pass-through setting in the driver to inhibit the cut command, but that seems not to work either, as the Java print job renders such text as an image.
I've tried the official Zebra driver as well as using the NiceLabel zebra driver too in the hope that they may have more "Custom Commands" options in the settings but nothing has yet come to light.
After we had the same issues for several weeks and neither the vendor nor Google nor Zebra's own support came up with a FULL working solution, we worked out the following EASY 5-step solution for this (apparently pretty common) Zebra cutter issue:
Step 1:
Set Cutter-Mode to Tear-Off in the settings.
This will disable the auto-cutting after every single page.
Step 2: Go to Custom Commands in the settings dialog (allows ZPL coding).
Step 3: Set the first drop-down to "DOCUMENT".
Step 4: Set the Start-Section to "TEXT" and paste in
^XA^MMD^XZ^XA^JUS^XZ
^MMD enables pause mode. The ~JK command is only available in pause mode, and many Zebra printers do not support the much easier command CN (Cut-Now).
^JUS saves the settings to the printer.
Step 5: Set the End-Section to "ANALYZED TEXT" and paste in
~JK~PS
~JK defers the cut to the end of the document; ~PS disables pause mode (and thus starts printing immediately). When everything looks as described above, hit "APPLY", and your Zebra printer will automatically cut after the end of each document you send to it. You just send your PDF using Sumatra or whatever you prefer; the cutter handling is now done automatically by the printer settings.
Alternatively, if you want to do this programmatically, use the START and END codes at the corresponding positions in your ZPL code instead. Note that ~ commands cannot be sent in combination with ^ commands, which is why there is no ^XA...^XZ block to reset any settings (which is not necessary in this scenario, as it only affects the print session, and ~PS turns the pause mode back off). A sketch of this layout follows below.
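As a sketch, a programmatic job combining the codes above would then look like this (label contents elided; exact placement may vary by printer/firmware):
^XA^MMD^XZ^XA^JUS^XZ
^XA
...label fields...
^XZ
~JK~PS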
I had similar concern but as the print server was CUPS, I wasn't able to use Windows drivers and utilities (settings dialog). So basically, I did the following:
On the printer, set Cutter mode. This will cut after each printed label.
In my Java code, thanks to Apache PDFBox lib, open the PDF and for each page, render it as a monochrome BufferedImage, get bytes array from it, and get its hex representation.
Write a few ZPL commands to download hex as graphic data, and add the ^XB command before the ^XZ one, in order to prevent a cut here, except for the last page, so that there is a cut only at the end of the document.
Send the generated ZPL code to the printer. In my case, I send it as a raw document through IPP, using application/vnd.cups-raw as mime-type, thanks to the great lib ipp-client-kotlin, but it is also possible to use Java native printing API with bytes.
Below is a snippet of Java code, for demo purposes:
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.rendering.ImageType;
import org.apache.pdfbox.rendering.PDFRenderer;
// IppPrinter and documentFormat come from ipp-client-kotlin; Hex is a small local utility

public void printPdfStream(InputStream pdfStream) throws IOException {
    try (PDDocument pdDocument = PDDocument.load(pdfStream)) {
        PDFRenderer pdfRenderer = new PDFRenderer(pdDocument);
        StringBuilder builder = new StringBuilder();
        for (int pageIndex = 0; pageIndex < pdDocument.getNumberOfPages(); pageIndex++) {
            boolean isLastPage = pageIndex == pdDocument.getNumberOfPages() - 1;
            BufferedImage bufferedImage = pdfRenderer.renderImageWithDPI(pageIndex, 300, ImageType.BINARY);
            byte[] data = ((DataBufferByte) bufferedImage.getData().getDataBuffer()).getData();
            int length = data.length;
            // Invert bytes so that 1 bits mean black, as ZPL graphic data expects
            for (int i = 0; i < length; i++) {
                data[i] ^= 0xFF;
            }
            // Download the page image as graphic data, then recall it onto a label
            builder.append("~DGR:label,").append(length).append(",").append(length / bufferedImage.getHeight())
                    .append(",").append(Hex.getString(data));
            builder.append("^XA");
            builder.append("^FO0,0");
            builder.append("^XGR:label,1,1");
            builder.append("^FS");
            if (!isLastPage) {
                builder.append("^XB"); // ^XB suppresses the cut on every page but the last
            }
            builder.append("^XZ");
        }
        IppPrinter ippPrinter = new IppPrinter("ipp://printserver/printers/myprinter");
        ippPrinter.printJob(new ByteArrayInputStream(builder.toString().getBytes()),
                documentFormat("application/vnd.cups-raw"));
    }
}
Important: the hex data can (and should) be compressed, as described in the ZPL Programming Guide, section "Alternative Data Compression Scheme for ~DG and ~DB Commands". Depending on the PDF content, it may drastically reduce the data size (by a factor of 10 in my case!).
Note that Zebra's support provides a few more alternatives for controlling the cutter, but this one worked immediately.
Zebra Automatic Cut - Found another solution.
Create a file with the name: Delayed Cut Settings.txt
Insert the following code: ^XA^MMC,N^XZ
Send it to the printer
After you do the 3 steps above, all the documents you send to the printer will be cut automatically.
(To disable that function, send the 'Delayed Cut Settings.txt' again with the following code: ^XA^MMD^XZ )
In the first document you send to the printer, you need to ADD (just once) the command ^MMC,N before the ^XZ.
My EXAMPLE TXT:
^XA
^FX Top section with logo, name and address.
^CF0,60
^FO50,50^GB100,100,100^FS
^FO75,75^FR^GB100,100,100^FS
^FO93,93^GB40,40,40^FS
^FO220,50^FDIntershipping, Inc.^FS
^CF0,30
^FO220,115^FD1000 Shipping Lane^FS
^FO220,155^FDShelbyville TN 38102^FS
^FO220,195^FDUnited States (USA)^FS
^FO50,250^GB700,3,3^FS
^FX Second section with recipient address and permit information.
^CFA,30
^FO50,300^FDJohn Doe^FS
^FO50,340^FD100 Main Street^FS
^FO50,380^FDSpringfield TN 39021^FS
^FO50,420^FDUnited States (USA)^FS
^CFA,15
^FO600,300^GB150,150,3^FS
^FO638,340^FDPermit^FS
^FO638,390^FD123456^FS
^FO50,500^GB700,3,3^FS
^FX Third section with bar code.
^BY5,2,270
^FO100,550^BC^FD12345678^FS
^FX Fourth section (the two boxes on the bottom).
^FO50,900^GB700,250,3^FS
^FO400,900^GB3,250,3^FS
^CF0,40
^FO100,960^FDCtr. X34B-1^FS
^FO100,1010^FDREF1 F00B47^FS
^FO100,1060^FDREF2 BL4H8^FS
^CF0,190
^FO470,955^FDCA^FS
^MMC,N
^XZ

"document" in mozilla extension js modules?

I am building a Firefox extension that creates a single XMPP chat connection that can be accessed from all tabs and windows, so I figured the only way to do this is to create the connection in a JavaScript module and include it in every browser window. Correct me if I am wrong...
EDIT: I am building a traditional extension with XUL overlays, not using the SDK, and talking about these modules: https://developer.mozilla.org/en-US/docs/Mozilla/JavaScript_code_modules
So I copied Strophe.js into a JS module. Strophe.js uses code like this:
/* _Private_ function that creates a dummy XML DOM document to serve as
 * an element and text node generator.
 */
[---]
if (document.implementation.createDocument === undefined) {
  doc = this._getIEXmlDom();
  doc.appendChild(doc.createElement('strophe'));
} else {
  doc = document.implementation
    .createDocument('jabber:client', 'strophe', null);
}
and later uses doc.createElement() to create XML (or HTML?) nodes.
It all worked fine, but in the module I got the error "Error: ReferenceError: document is not defined".
How do I get around this?
(Larger piece of exact code: http://pastebin.com/R64gYiKC )
Use the hiddenDOMWindow:
Cu.import("resource://gre/modules/Services.jsm");
var doc = Services.appShell.hiddenDOMWindow.document;
It sounds like you might not be correctly attaching your content script to the worker page. Make sure that you're using something like tabs.attach() to attach one or more content scripts to the worker page (see documentation here).
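If you are on the Add-on SDK, attaching a content script looks roughly like this (an illustrative sketch only; it does not apply to XUL-overlay extensions like the asker's):
var tabs = require("sdk/tabs");

// Attach a content script to each tab once its DOM is ready;
// the script runs with access to that page's document
tabs.on("ready", function (tab) {
  tab.attach({
    contentScript: 'console.log("attached to " + document.title);'
  });
});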
Otherwise you may need to wait for the DOM to load by waiting for the entire page to load:
window.onload = function () {
  // JavaScript code goes here
};
That should at least diagnose the issue (even if the above isn't the best method to use in production). But if I had to wager, I'd say that you're not attaching the content script.

Load or Stress Testing Tool with URL Import Functionality

Can someone recommend a load testing tool which allows you to either:
a. replay IIS (7) logs to simulate a real live site's daily run;
b. import a CSV or equivalent list of URLs so we can achieve a similar thing as above but at a URL level;
c. a .NET API so I can create simple tests easily from my list of URLs would also be a good way to go.
I do not really want to record my tests.
I think I can do b) with WAPT but need to create an XML file manually; not too much grief, but I'm wondering if any tools cover these scenarios out of the box.
Visual Studio Test Edition would require some code to parse the file into a suitable test run.
It is a great load testing solution.
Our load testing service lets you write a very simple script using JavaScript to pull data out of a CSV file and then fetch those URLs. For example, the following code would pluck 10 random URLs from the CSV file and fetch them as part of a single session:
var c = browserMob.openHttpClient();
var csv = browserMob.getCSV("urls.csv");
browserMob.beginTransaction();
for (var i = 0; i < 10; i++) {
  browserMob.beginStep("Step 1");
  var url = csv.random().get("url");
  c.get(url);
  browserMob.endStep();
}
browserMob.endTransaction();
The CSV file itself needs to be a normal CSV file with the first row containing a header named "url". This script would be run repeatedly for each virtual user participating in a load test.
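For illustration, a matching urls.csv might look like this (hypothetical URLs):
url
http://example.com/
http://example.com/products
http://example.com/checkout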
We support a so-called 'uri-format' in our open-source tool, Yandex.Tank. You simply put all your URIs into a file, one URI per line, then specify headers in your load.ini like this:
[phantom]
address=example.org
rps_schedule=line(1, 1600, 2m)
headers = [Host: mts-maps.yandex.ru]
[Connection: close] [Bloody: yes]
ammo_file = ammo.uri
ammo.uri:
/
/index.html
/1/example.html
/2/example.html
