How to deal with nested routing in Relay Modern v6 / experimental - relayjs

I know the relay documentation states that react-router v4/5 is no good with Relay since they changed to dynamic routing, but if you look really closely it always says:
"Last updated on 2017-6-3 by Jimmy Jia"
And I really do not want to have to use "Found Relay" either.
I'd still like to be able to access my QueryRenderer or useQuery directly, not an abstraction on top of them that hides them, which is what Found does, so that is a no-go.
So... with Relay Modern v6 just released, and taking a sneak peek at relay-experimental with the useQuery, useFragment etc. hooks that integrate with React Suspense and @defer (hopefully) just around the corner - what are the recommendations and best practices for handling nested routing in 2019 with Relay Modern v6?
With the integration with Suspense isn't dynamic routing starting to make more sense?
There are lots of examples of very simple Relay applications, such as https://github.com/relayjs/relay-examples/, but so far I have yet to find a good example showing how to deal with nested routes in Relay in a proper way. And by "in a proper way" I'm not talking about using "Found Relay", but about using a router that only does the "routing" part and does it well.

I come from the future (2021) and unfortunately this is still a problem.
However one improvement is that React Router has released an experimental v6 branch that supports preloading. Note that this is not the same as the v6 beta branch which does not include experimental features. Of course because this is experimental you can't assume it will be stable, so you may want to pin the package until it becomes stable.
A great example of how to use this integration can be found here: https://github.com/Hellzed/hello-relay-react-router-experimental. Please credit (and star on github) the original author of this example if you can. I will use this example in this answer.
You can install this version of the router using:
npm install history react-router-dom@experimental
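If you want to pin the package until it stabilizes (as mentioned above), you can have npm record the exact resolved version rather than a semver range by using npm's standard --save-exact flag:
npm install --save-exact history react-router-dom@experimental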
Then, you write your app as follows:
import React from "react";
import { useQueryLoader, usePreloadedQuery } from "react-relay/hooks";
import { BrowserRouter as Router, Routes, Route } from "react-router-dom";
import graphql from "babel-plugin-relay/macro";

const query = graphql`
  query AppHelloQuery {
    hello
  }
`;

function Hello({ queryReference }) {
  const data = usePreloadedQuery(query, queryReference);
  return <p>{data.hello}</p>;
}

function App() {
  const [queryReference, loadQuery] = useQueryLoader(query);
  return (
    <Router>
      <Routes>
        <Route
          path="/"
          preload={() => loadQuery()}
          element={
            <React.Suspense fallback="Loading...">
              <Hello queryReference={queryReference} />
            </React.Suspense>
          }
        />
      </Routes>
    </Router>
  );
}
Basically everything is written the same as in the relay docs and the react-router docs for the routing, except that you add <Route preload={() => loadQuery()}>, which links up Relay and the router.

Related

Dart Functions Framework usage

I'm new to the Dart functions framework. My goal is to use this package to create several functions and deploy them to Cloud Run (in combination with Firebase, but I guess that's irrelevant to this question).
I've run the quick starts and I've read all of the contents in the docs.
The quick start mentions just one function at a time (e.g. Hello World, Cloud Events, etc.), like this:
import 'package:functions_framework/functions_framework.dart';
import 'package:shelf/shelf.dart';
@CloudFunction()
Response function(Request request) {
  return Response.ok('Hello, World!');
}
But as you can see in the quick starts, only one function is handled in a project at a time. What if I want to deploy several functions? Should I:
Write several functions in the same project / file, so that the functions framework generates the server.dart by itself
OR
Create a different functions_framework for each function?
Let me be more specific. Should I do the following (option 1 - which makes more sense to me):
import 'dart:math';

import 'package:functions_framework/functions_framework.dart';
import 'package:shelf/shelf.dart';

@CloudFunction()
Response function(Request request) {
  return Response.ok('Hello, World!');
}

@CloudFunction()
Response function2(Request request) {
  if (Random().nextBool()) {
    return Response.ok('Hello, World!');
  } else {
    return Response.internalServerError();
  }
}
Or should I build a different folder by running a build_runner for each function I need in my project?
Is there a difference and/or a best practice?
Thanks in advance.
EDIT. This question is related to the deployment on Cloud Run itself, and not just testing on my own PC. To test my own functions I did the following:
Run dart run build_runner build, so that it updates the server.dart file correctly (I can see that the framework does a lot behind the scenes and that the _nameToFunctionTarget is basically a router);
Run the server in two different terminals, like this: dart run bin/server.dart --port MYPORT --target MYFUNCTION (where MYPORT and MYFUNCTION are either 8080/8081 or function/function2 respectively).
I guess I'm just confused on how to correctly manage this framework once deployed on Cloud Run.
EDIT 2. I just gave up on using Dart as a serverless language or even a backend language. There's just too much jargon even for the basic things. Any backend framework is either dead or maintained by a single enthusiast (props to him!). This language has not yet received enough love from the Google team / the community, and at this moment in time it is basically not possible to go full-stack on just Dart. It's a dream, but it can't be realized now. Furthermore, Dart still lacks proper SDKs for Firestore, etc., so Firebase isn't an option. I find it easier to just learn Node.js and use the Firebase support for Firebase Functions written in Node.js, and I'll wait for more support there in the future, if there ever is any.
The documentation is a bit sparse right now (and I'm new to it also! I couldn't find any good examples, so here goes...)
You can only have a single function that is served. It should be named 'function' (the type and name can be overridden; see the cloudevent example: dartfn generate cloudevent).
You 'could' have many of these deployed so that each does a specific thing, such as processing cloudevents as above, but most people want something more REST-like (see next).
You need to attach a Router() so that you can have the single entry point (function) handled by specific logic in your code.
Example for REST
add to pubspec.yaml (in dependencies:): shelf_router: ^1.1.2
delegate the @CloudFunction to use the Router()
functions.dart
import 'package:functions_framework/functions_framework.dart';
import 'package:shelf/shelf.dart';
import 'package:shelf_router/shelf_router.dart';

Router app = Router()
  ..get('/health', (Request request) {
    return Response.ok('healthy');
  })
  ..get('/user/<user>', (Request request, String user) {
    // fetch the user... (probably return as json)
    return Response.ok('hello $user');
  })
  ..post('/user', (Request request) {
    // convert request body to json and persist... (probably return as json)
    return Response.ok('saved the user');
  });

@CloudFunction()
Future<Response> function(Request request) => app.call(request);
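To try this locally, you can regenerate the server and run it, then hit the routes defined above (a quick sketch; the build and run commands come from the question's own testing steps, and the port is just an example):
dart run build_runner build
dart run bin/server.dart --port 8080 --target function
curl http://localhost:8080/health
curl http://localhost:8080/user/alice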

The getEntityAttributes() function in Thingsboard

I am trying to use the attributeService.getEntityAttributes function to obtain some server attributes of my device. I was using the .getEntityAttributesValues function when working with the 2.x version of Thingsboard and it was working fine. With the current version I am using the following code:
var conf = {
  ignoreLoading: false,
  ignoreErrors: true,
  resendRequest: true
};
var myattr = attributeService.getEntityAttributes(entityID, 'SERVER_SCOPE', ["myattribute"], conf);
But I get no data or error back. As I did with .getEntityAttributesValues, I tried chaining .then() on the result, but that doesn't seem to work anymore. It says ".then is not a function".
What am I doing wrong? Please help and tell me how to use the new function properly. I am using TB v.3.1.1 CE.
Thingsboard 2.x UI was made with AngularJS.
Thingsboard 3.x UI now uses Angular.
One of the key differences between these frameworks with regard to your problem is the move from Promise-based services to Observable-based services.
Let's look at the source of the getEntityAttributes function:
https://github.com/thingsboard/thingsboard/blob/2488154d275bd8e6883baabba5519be78d6b088d/ui-ngx/src/app/core/http/attribute.service.ts
It's mostly a thin wrapper around a network call made with the http.get method from Angular.
Therefore, if we have a look at the official documentation: https://angular.io/guide/http#requesting-data-from-a-server, it is mentioned that we must subscribe to the observable in order to handle the response. So something along the lines of:
attributeService.getEntityAttributes(entityID,'SERVER_SCOPE',["myattribute"],conf).subscribe((attributes) => {…})
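A slightly fuller sketch (the attribute key and the handling code are illustrative; the point is simply that you subscribe to the Observable rather than calling .then()):
attributeService.getEntityAttributes(entityID, 'SERVER_SCOPE', ['myattribute'], conf).subscribe({
  next: (attributes) => {
    // attributes is the array of attribute entries returned by the server
    console.log('Server attributes:', attributes);
  },
  error: (err) => {
    console.error('Failed to load attributes', err);
  }
});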

Dart method$Class syntax

I'm new to dart, and following the tutorial provided on the Dart for the web page.
It all makes sense - apart from one piece of syntax:
final InjectorFactory injector = self.injector$Injector;
Here's the full code from the tutorial:
import 'main.template.dart' as self;

const useHashLS = false;

@GenerateInjector([
  routerProvidersHash,
  ClassProvider(Client, useClass: InMemoryDataService),
  // Using a real back end?
  // Import 'package:http/browser_client.dart' and change the above to:
  // ClassProvider(Client, useClass: BrowserClient),
])
final InjectorFactory injector = self.injector$Injector;

void main() {
  runApp(ng.AppComponentNgFactory, createInjector: injector);
}
I'm confused by the apparent .method$Class syntax. Can anyone explain to me what this means/what it's doing?
It's also underlined in WebStorm with the message "The getter 'injector$Injector' isn't defined for the class 'self'". Regardless, it runs fine and works as expected.
Thanks in advance!
$ in an identifier has no special meaning. It's by convention often used for names in generated code.
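For example, this is perfectly ordinary hand-written Dart (the name is made up purely to illustrate that $ is a legal identifier character):
// $ is just another character that may appear in a Dart identifier.
final int my$value = 42;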
Angular also uses code generation, and the code only becomes available after code generation has been executed, for example by webdev serve or webdev build.
I don't know the current state, but the code might still be generated in a directory that is not analyzed by the Dart analyzer, so you might always see the error even when the app can be run without problems.

How can you get information about other apps running or in focus?

My motivation: I'm writing an app to help with some quantified self / time tracking type things. I'd like to use electron to record information about which app I am currently using.
Is there a way to get information about other apps in Electron? Can you at least pull information about another app that currently has focus? For instance, if the user is browsing a webpage in Chrome, it would be great to know that A) they're using chrome and B) the title of the webpage they're viewing.
During my research I found this question:
Which app has the focus when a global shortcut is triggered
It looks like the author there is using the nodObjc library to get this information on OSX. In addition to any approaches others are using to solve this problem, I'm particularly curious if electron itself has any way of exposing this information without resorting to outside libraries.
In a limited way, yes, you can get some of this information using Electron's desktopCapturer.getSources() method.
This will not get every program running on the machine. It will only get whatever Chromium deems to be a video-capturable source. This generally equates to any active program that has a GUI window (e.g., on the taskbar on Windows).
desktopCapturer.getSources({
  types: ['window', 'screen']
}, (error, sources) => {
  if (error) throw error
  for (let i = 0; i < sources.length; ++i) {
    console.log(sources[i]);
  }
});
No, Electron doesn't provide an API to obtain information about other apps. You'll need to access the native platform APIs directly to obtain that information. For example Tockler seems to do so via shell scripts, though personally I prefer accessing native APIs directly via native Node addons/modules or node-ffi-napi.
2022 answer
Andy Baird's answer is definitely the better native Electron approach, though that syntax is outdated or incomplete. Here's a complete working code snippet that assumes you are running from the renderer using the remote module in a recent Electron version (13+):
require('@electron/remote').desktopCapturer.getSources({
  types: ['window', 'screen']
}).then(sources => {
  for (const thisSource of sources) {
    console.log(thisSource.name);
  }
});
The other answers here are for the rendering side - it might be helpful to do this in the main process:
const { desktopCapturer } = require('electron')

desktopCapturer.getSources({ types: ['window', 'screen'] }).then(async sources => {
  for (const source of sources) {
    console.log("Window: ", source.id, source.name);
  }
})

MvcIntegrationTestFramework or an alternative updated for ASP.NET MVC 3

I'm interested in using Steve Sanderson’s MvcIntegrationTestFramework or a very similar alternative with ASP.NET MVC 3 Beta.
Currently when compiling MvcIntegrationTestFramework against MVC 3 Beta I get the following error due to changes in MVC:
Error 6
'System.Web.Mvc.ActionDescriptor.GetFilters()' is obsolete: '"Please call System.Web.Mvc.FilterProviders.Providers.GetFilters() now."' \MvcIntegrationTestFramework\Interception\InterceptionFilterActionDescriptor.cs Line 18
Questions
Can anybody provide the MvcIntegrationTestFramework working for ASP.NET MVC 3 Beta?
--- and / or ---
Are there similar alternatives you would recommend?
EDIT #1: Note I have e-mailed Steve the creator of MvcIntegrationTestFramework, also hoping for some feedback there.
EDIT #2 & #3: I have received a message from Steve. Quoted for your reference:
I haven't needed to use that project with MVC 3, so sorry, I don't have an updated version of it. As far as I'm aware it should be possible to update it to work on MVC 3, but you'd need to figure that out perhaps by inspecting the MVC 3 source code to notice any changes in how actions, filters, etc are invoked now. If you do update it, and if you decide to adopt it as an ongoing project (e.g., putting it on Github or similar), let me know and I'll post a link to it! (Thanks Steve!)
EDIT #4: Honestly, I had a quick stab at using System.Web.Mvc.FilterProviders.Providers.GetFilters(), didn't get anywhere fast, and simply adding the [Obsolete] attribute showed that there was an error in the internals of the MVC requests. Has anybody else had a dabble?
EDIT #5: Please comment if you are using an alternative Integration Test Framework with MVC 3.
Have a look at my fork:
https://github.com/JonCanning/MvcIntegrationTestFramework/
I realize this is not the answer you're looking for but Selenium or Watin may be of use to you as an alternative to the Integration Test Framework.
Selenium will let you record tests as nUnit code so you can integrate with your existing test projects etc. Then your test can validate the DOM similarly to the Integration Test Framework. The advantage is Selenium tests can be executed with various different browsers.
Key caveat is that Selenium needs your app to be deployed on a web server, not sure if that's a show stopper for you.
I thought I would share my experiences with using MvcIntegrationTestFramework in an ASP.NET MVC 4 project. In particular, the ASP.NET MVC 4 project was a Web Role for a Windows Azure Cloud Service.
The example project from Jon Canning's fork worked for me (although I did change the System.Web.Mvc assembly from 3.0.0.0 to 4.0.0.0, which required a bunch of editing in the web.config file to get the tests to run and pass), but I got an error whenever I tried to run the same tests against an Azure ASP.NET MVC 4 Web Role project. The error was:
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
The inner exception was:
System.InvalidOperationException: This method cannot be called during the application's pre-start initialization phase.
I started wondering how an Azure Web Role project based on ASP.NET MVC 4 was different to a normal ASP.NET MVC 4 project, and how such a difference would cause this error. I did a bit of searching on the web but didn't come across anybody trying to do the same thing that I was doing. Soon enough I managed to realise that it was to do with the Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener. Part of the role of this class seems to be to ensure that the web role is running in a hosted service or the Development Fabric (you'll see a message to this effect if you switch the startup project from the cloud service project to the web role project inside a cloud service solution, and then try to debug).
The solution? I removed the corresponding listener from the Web.config file of my Web Role project:
<configuration>
  ...
  <system.diagnostics>
    <trace>
      <listeners>
        <!--Remove this next 'add' element-->
        <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.8.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             name="AzureDiagnostics">
          <filter type="" />
        </add>
      </listeners>
    </trace>
  </system.diagnostics>
  ...
</configuration>
I was then able to run integration tests as normal against my Web Role project. I did, however, add the listener to the Web.Debug.config and Web.Release.config transformation files so that everything was still the same for normal deploying and debugging.
Maybe that will help somebody looking to use the MvcIntegrationTestFramework for Azure development.
EDIT
I just realised that this solution might be a bit of a 'hack' because it might not let you do integration testing on application code that relates to Azure components (e.g. the special Azure caching mechanisms perhaps). That said, I haven't come across any issues to do with this yet, although I also haven't really written that many integration tests yet either...
I used Jon Canning's updated version (https://github.com/JonCanning/MvcIntegrationTestFramework/) and it solved my problem very well for controller methods that only accept value types and strings, but did not work for those that accepted classes.
It turns out there was an issue with the code for the updated MvcIntegrationTestFramework.
I figured out how to fix it, but don't know where else to post the solution, so here it is:
A simple sample to show how it works is:
[TestMethod]
public void Account_LogOn_Post_Succeeds()
{
    string loginUrl = "/Account/LogOn";
    appHost.Start(browsingSession =>
    {
        var formData = new RouteValueDictionary
        {
            { "UserName", "myusername" },
            { "Password", "mypassword" },
            { "RememberMe", "true" },
            { "returnUrl", "/myreturnurl" },
        };
        RequestResult loginResult = browsingSession.Post(loginUrl, formData);
        // Add your test assertions here.
    });
}
The call to browsingSession.Post would ultimately cause the NameValueCollectionConversions.ConvertFromRouteValueDictionary(object anonymous) method to be called, and the code for that was:
public static class NameValueCollectionConversions
{
    public static NameValueCollection ConvertFromObject(object anonymous)
    {
        var nvc = new NameValueCollection();
        var dict = new RouteValueDictionary(anonymous); // ** Problem 1
        foreach (var kvp in dict)
        {
            if (kvp.Value == null)
            {
                throw new NullReferenceException(kvp.Key);
            }
            if (kvp.Value.GetType().Name.Contains("Anonymous"))
            {
                var prefix = kvp.Key + ".";
                foreach (var innerkvp in new RouteValueDictionary(kvp.Value))
                {
                    nvc.Add(prefix + innerkvp.Key, innerkvp.Value.ToString());
                }
            }
            else
            {
                nvc.Add(kvp.Key, kvp.Value.ToString()); // ** Problem 2
            }
        }
        return nvc;
    }
}
There were two problems:
The call to new RouteValueDictionary(anonymous) would cause a "new" RouteValueDictionary to be created, but instead of 4 keys there were only three, one of which was an array of 4 items.
When it hit this line: nvc.Add(kvp.Key, kvp.Value.ToString()), the kvp.Value was an array, and the ToString() gave:
"System.Collections.Generic.Dictionary'2+ValueCollection[System.String,System.Object]"
The fix (to my specific issue) was to change the code as follows:
var dict = anonymous as RouteValueDictionary; // creates it properly
if (null == dict)
{
    dict = new RouteValueDictionary(anonymous);
}
After I made this change, then my model class would properly bind, and all would be well.
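For context, the start of the patched ConvertFromObject method would look roughly like this (a sketch of the change described above, not the exact upstream code):
public static NameValueCollection ConvertFromObject(object anonymous)
{
    var nvc = new NameValueCollection();

    // If the caller already passed a RouteValueDictionary (as the browsingSession.Post call above does),
    // use it directly instead of re-wrapping it, which collapsed the keys into a single array value.
    var dict = anonymous as RouteValueDictionary;
    if (null == dict)
    {
        dict = new RouteValueDictionary(anonymous);
    }

    // ... the rest of the foreach loop is unchanged ...
    return nvc;
}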
