Search with Relay doesn't include new results due to local cache

I've implemented a search-as-you-type component in React and Relay. It's roughly the same setup as in "search functionality using relay".
It works as intended, with one exception: new results from the server never appear when I retype a search I've already performed on the client. It looks like Relay always goes to the local cache in this case.
So, for example, say I've searched for 'foo' and didn't find any results. Seconds later, another user on the website creates 'foo', but Relay will never query the server, since the cached response to the 'foo' search was an empty result.
Is there a pattern or best practice for this scenario?
The query is as follows. I call this.props.relay.setVariables to perform the search:
initialVariables: {
  search: '',
  hasSearch: false
},
fragments: {
  me: () => Relay.QL`
    fragment on Viewer {
      relationSearch(search: $search) @include(if: $hasSearch) {
        ... on User {
          username
        }
      }
    }
  `
}

The answer seems to be to use this.props.relay.forceFetch with the search variables instead.
See https://facebook.github.io/relay/docs/api-reference-relay-container.html#forcefetch
Someone correct me if this isn't best practice.
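A minimal sketch of how that might look, where handleSearch is a hypothetical change handler for the search input:
// forceFetch takes the same partial variables as setVariables,
// but always hits the server instead of the local cache.
handleSearch(value) {
  this.props.relay.forceFetch({
    search: value,
    hasSearch: value.length > 0,
  });
}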

Related

Downloading whole websites with k6

I'm currently evaluating whether k6 fits our load testing needs. We have a fairly traditional website architecture that uses Apache webservers with PHP and a MySQL database. Sending simple HTTP requests with k6 looks easy enough, and I think we will be able to test all major functionality with it, as we don't rely on JavaScript that much and most pages are static.
However, I'm unsure how to deal with resources (stylesheets, images, etc.) that are referenced in the HTML that is returned in the requests. We need to load them as well, as this sometimes leads to database requests, which must be part of the load test.
Is there some out-of-the-box functionality in k6 that allows you to load all the resources like a browser would? I'm aware that k6 does NOT render the page and I don't need it to. I only need to request all the resources inside the HTML.
You basically have two options, both with their caveats:
Record your session - you can either export a HAR directly from the browser or use a browser extension made for this (there are ones for Firefox and Chrome). Both should be usable without a k6 cloud account; you just need to set them to download the HAR, and they will automatically (and somewhat silently) download it when you hit stop. Then either use the built-in k6 HAR converter (which is deprecated, but still works) or the new har-to-k6 converter.
This method is particularly good if you have a lot of pages and/or resources, and it even works for single-page-style applications, as it just captures what the browser requested as a HAR and then transforms it into a script. If there is nothing dynamic that needs to be entered (username/password), the final script can usually be used as is.
The biggest problem with this approach is that if you add a CSS file you need to redo the whole exercise. This is even more problematic if your CSS/JS file names change on every build or something like that, which is what the next method is good for:
Use parseHTML, then find the elements you care about and make a request for each of them:
import http from "k6/http";
import { parseHTML } from "k6/html";

export default function() {
  const res = http.get("https://stackoverflow.com");
  const doc = parseHTML(res.body);
  doc.find("link").toArray().forEach(function (item) {
    console.log(item.attr("href"));
    // make an http.get for it,
    // or add them to an array and make one batch request
  });
}
will produce output like:
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/favicon.ico?v=4f32ecc8f43d
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a
INFO[0001] /opensearch.xml
INFO[0001] https://cdn.sstatic.net/Shared/stacks.css?v=53507c7c6e93
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/primary.css?v=d3fa9a72fd53
INFO[0001] https://cdn.sstatic.net/Shared/Product/product.css?v=c9b2e1772562
INFO[0001] /feeds
INFO[0001] https://cdn.sstatic.net/Shared/Channels/channels.css?v=f9809e9ffa90
As you can see, some of the URLs are relative rather than absolute, so you will need to handle that. And in this example only some of them are CSS files, so more filtering is probably needed.
The problem here is that you need to write the code, and if you add a relative link or something else new you need to handle it. Luckily k6 is scriptable, so you can reuse the code :D.
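For the relative URLs, a sketch of a hypothetical resolveUrl helper (the follow-up answer below assumes something like it exists) could be:
// Hypothetical helper: turn a possibly relative URL into an absolute one,
// using the URL of the page it was found on as the base.
function resolveUrl(url, baseUrl) {
  if (url.indexOf("http://") === 0 || url.indexOf("https://") === 0) {
    return url; // already absolute
  }
  if (url.indexOf("//") === 0) {
    return baseUrl.split("//")[0] + url; // protocol-relative: reuse the scheme
  }
  // origin is scheme + host, e.g. https://stackoverflow.com
  var origin = baseUrl.split("/").slice(0, 3).join("/");
  if (url.indexOf("/") === 0) {
    return origin + url; // root-relative
  }
  return origin + "/" + url; // naive fallback for path-relative URLs
}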
I've followed Михаил Стойков's suggestion and written my own function to load resources. You can set the way resources are loaded (batched or sequential GETs) with options.concurrentResourceLoading.
import http from 'k6/http';
import { check } from 'k6';

// `options`, `resolveUrl` and `createHeader` are defined elsewhere in the script.

/**
 * @param {http.RefinedResponse<http.ResponseType>} response
 */
export function getResources(response) {
  // collect the href/src attributes of everything except anchor tags
  const resources = [];
  response
    .html()
    .find('*[href]:not(a)')
    .each((index, element) => {
      resources.push(element.attributes().href.value);
    });
  response
    .html()
    .find('*[src]:not(a)')
    .each((index, element) => {
      resources.push(element.attributes().src.value);
    });
  if (options.concurrentResourceLoading) {
    // fetch all resources in parallel with a single batch call
    const responses = http.batch(
      resources.map((r) => {
        return ['GET', resolveUrl(r, response.url), null, {
          headers: createHeader(),
        }];
      })
    );
    responses.forEach((res) => {
      check(res, {
        'resource returns status 200': (r) => r.status === 200,
      });
    });
  } else {
    // fetch resources one after another
    resources.forEach((r) => {
      const res = http.get(resolveUrl(r, response.url), {
        headers: createHeader(),
      });
      check(res, {
        'resource returns status 200': (r) => r.status === 200,
      });
    });
  }
}
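A minimal usage sketch, assuming getResources and its helpers live in the same script:
export default function () {
  const response = http.get('https://example.com/');
  getResources(response);
}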

Mutation not requesting for actively fetched container data in fatQuery on RANGE_ADD

I am trying to do a RANGE_ADD mutation using what is now known as Relay Classic (this is probably resolved in Relay Modern). I get the error:
Warning: writeRelayUpdatePayload(): Expected response payload to include the newly created edge `newThingEdge` and its `node` field. Did you forget to update the `RANGE_ADD` mutation config?
So, yes, the payload isn't sending anything more than the clientMutationId in the expected response shape, because the request mutation isn't asking for it.
According to @Joe Savona here, https://github.com/facebook/relay/issues/521, this might happen if there is no intersecting container requesting this data. But that's not entirely true for me. My Route is requesting:
things: (Component) => Relay.QL`
  query {
    allThings(variable: $variable) {
      ${Component.getFragment('things')},
    }
  }
`,
while my fatQuery is requesting:
fragment on AddMockThing {
  allThings(variable: "${variable}", first: 100) {
    edges {
      node {
        id,
      },
    },
  },
  newThingEdge
}
Now you may say these aren't the same queries, because of the extra first: 100 in the getFatQuery version, but if I don't use that, I get the error:
Error: Error: You supplied the 'edges' field on a connection named 'allThings', but you did not supply an argument necessary to do so. Use either the 'find', 'first', or 'last' argument.
On the other hand, if I add first: 100 to the Route query, I get the error: Error: Invalid root field 'allThings'; Relay only supports root fields with zero or one argument.
Stuck between a fatQuery and a hard place. Would appreciate the help!
You're getting a validation error because Relay is looking for a connection argument (first: X). You can disable this particular validation by adding the @relay(pattern: true) directive. This marks the fat query as a 'pattern' to match against, rather than as something concrete.
fragment on AddMockThing @relay(pattern: true) {
  allThings(variable: "${variable}") {
    edges {
      node {
        id,
      },
    },
  },
  newThingEdge
}
More info here: https://stackoverflow.com/a/34112045/802047
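For reference, a sketch of the RANGE_ADD config the warning refers to; the concrete values (parentName: 'viewer', this.props.viewerId) are assumptions, not taken from the question:
getConfigs() {
  return [{
    type: 'RANGE_ADD',
    parentName: 'viewer',          // assumed field holding the connection
    parentID: this.props.viewerId, // hypothetical parent record id
    connectionName: 'allThings',
    edgeName: 'newThingEdge',
    rangeBehaviors: {
      // append new edges when the connection has no filter arguments
      '': 'append',
    },
  }];
}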

How to purposely delay an AJAX response while testing with Capybara?

I have a React component that mimics the "link preview" feature that most modern social media sites have. You type in a link and it fetches the image, title, etc...
I do this by having the React component make an AJAX call back to my server to fetch the URL preview data.
While it's fetching I show an intermediate "loading" state (i.e. some loading icon or spinning wheel)
The relevant React snippet looks like
this.setState({ isLoadingAttachment: true })
return $.ajax({
  type: "GET",
  url: some_url,
  dataType: "json",
  contentType: "application/json",
}).success(function(response) {
  // Successful! Do success stuff
  component.setState({ isLoadingAttachment: false })
}).error(function(response) {
  // Uh oh! Handle failure stuff
  component.setState({ isLoadingAttachment: false })
});
Note how the isLoadingAttachment state variable is only valid for a brief second while the server is doing the fetching. Both the success and error scenarios immediately disable it.
I'd like to test some functionality during the "loading" state with my Capybara feature specs. I've mocked all the web calls and the data to be returned by the server, but it all happens so quickly that it passes through the "loading" state before I can even run an expect()... statement on it. I also purposely don't call wait_for_ajax, so the page will go ahead without waiting for the AJAX, but it's still too fast.
Lastly, I also tried purposely delaying the server call by 1.0 second, but that didn't work either. I assume it's because the whole thing is somehow single-threaded?
# `foo` is an arbitrary method called during the server-side execution
allow_any_instance_of(MyController).
  to receive(:foo) { sleep(1.0) }.and_call_original
Any thoughts on how I could do this?
Thanks!
Capybara starts up the app server in a different thread than the tests. However, if you're using the default Capybara.server setting, you may have issues with your app calling back to itself, since it uses WEBrick by default. Instead you should specify Capybara.server = :puma. Beyond that, mocking responses is generally a bad idea in feature specs (which are generally meant to be end-to-end tests), since it means you're no longer testing your app's code the way it would run in production. A better solution is to use something like puffing-billy - https://github.com/oesmith/puffing-billy - to mock web responses outside of your app's code, which would allow you to do something like
proxy.stub('https://example.com/proc/').and_return(Proc.new { |params, headers, body|
  sleep 2
  { :text => "Your results" }
})

Mongo and Node.js: unable to look up document by _id

I'm using the Express framework and MongoDB for my application.
When I insert objects into the database, I use a custom ObjectID. It's generated using Mongo's ObjectID function, but toString()ed (for reasons I think are irrelevant, so I won't go into them). Inserts look like this:
collection.insert({ _id: oid, ... })
The data is in Mongo - I can db.collection.find().pretty() from Mongo's CLI and see it exactly as it's supposed to be. But when I run db.collection.find({_id: oid}) from my application, I get nothing ([], to be exact). Even when I manually hardcode the ObjectID that I'm looking for (one that I know is in the database) into my query, I still can't get an actual result.
As an experiment, I tried collection.find({title: "test title"}) from the app, and got exactly the result I wanted. The database connection is fine, the data structure is fine, the data itself is fine - the problem is clearly with the _id field.
Query code:
var sid = req.param("sid");
var db = req.db;
var collection = db.get("stories");
collection.find({'_id': sid }, function(err, document) {
  console.log(document.title);
});
Any ideas on how I can get my document by searching _id?
Update: per JohnnyHK's answer, I am now using findOne() as in the example below. However, it now returns null (instead of the [] I was getting). I'll update if I find a solution on my own.
collection.findOne({'_id': sid }, function(err, document) {
  console.log(document);
});
find provides a cursor as the second parameter to its callback (document in your example). To find a single doc, use findOne instead:
collection.findOne({'_id': sid }, function(err, document) {
  console.log(document.title);
});

How to create a method query that works for an Infinite Scroll loading behavior

I'm creating a page that outputs a list of 1000-3000 records. The current flow is:
User loads a page
jQuery hits the server for all the records and injects them into the page.
The problem here is that those records can take 3+ seconds to return for some users, which is a horrible UX.
What I would like to do is the following:
1. User loads a page
2. jQuery hits the server and gets at most 100 records, then keeps hitting the server in a loop until the number of records loaded equals the total.
The idea here is that the user gets to see records quickly and doesn't think something broke.
So it's not really an infinite scroll as I don't care about the scroll position but it seems like a similar flow.
How can I hit the server in a loop with jQuery? And how can I query in Rails taking into account an offset and limit?
Thank you
You can simply query the server for a batch of data over and over again.
There are numerous APIs you can implement, like:
client: GET request /url/
server: {
  data: [ ... ],
  rest: resturl
}
client: GET request resturl
repeat.
Or you can have the client pass in parameters saying it wants resources 1-100, then 101-200, and do this in a loop.
All the while you will render the data as it comes in.
Your server either needs to let you pass in parameters saying you want records i to i + n, or it needs to fetch all the data, store it somewhere, and then return a chunk of the data along with some kind of unique id or URL for requesting the next chunk, repeating until done.
// pseudo jQuery code
function next(data) {
  render(data.records);
  $.when(getData(data.uniqueId)).then(next);
}

function getData(id) {
  return $.ajax({
    type: "GET",
    url: ...,
    data: {
      // when id is undefined, get the server to load all data
      // when id is defined, get the server to send the subset of data stored at id
      id: id
    },
    ...
  });
}

$.when(getData()).then(next);
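For the offset/limit variant, a minimal sketch, assuming a hypothetical /records endpoint that accepts offset and limit parameters (on the Rails side those map onto Model.limit(limit).offset(offset)):
var BATCH_SIZE = 100;

function loadBatch(offset) {
  $.ajax({
    type: "GET",
    url: "/records", // hypothetical endpoint
    dataType: "json",
    data: { offset: offset, limit: BATCH_SIZE }
  }).done(function (records) {
    render(records); // same render as in the pseudo code above
    // keep going until the server returns a short (or empty) batch
    if (records.length === BATCH_SIZE) {
      loadBatch(offset + BATCH_SIZE);
    }
  });
}

loadBatch(0);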
