I'm new to the Dart functions framework. My goal is to use this package to create several functions and deploy them to Cloud Run (in combination with Firebase, but I guess that's irrelevant to this question).
I've run the quick starts and I've read all of the contents in the docs.
The quick starts mention just one function at a time (e.g. Hello World, Cloud Events, etc.), like this:
import 'package:functions_framework/functions_framework.dart';
import 'package:shelf/shelf.dart';
@CloudFunction()
Response function(Request request) {
return Response.ok('Hello, World!');
}
But as you can see in the quick starts, only one function is handled per project at a time. What if I want to deploy several functions? Should I:
Write several functions in the same project/file, so that the functions framework generates the `server.dart` by itself,
OR
Create a separate functions_framework project for each function?
Let me be more specific. Should I do the following (option 1, which makes more sense to me):
import 'dart:math';
import 'package:functions_framework/functions_framework.dart';
import 'package:shelf/shelf.dart';
@CloudFunction()
Response function(Request request) {
return Response.ok('Hello, World!');
}
@CloudFunction()
Response function2(Request request) {
if (Random().nextBool()) {
return Response.ok('Hello, World!');
} else {
return Response.internalServerError();
}
}
Or should I build a separate folder, running build_runner for each function I need in my project (option 2)?
Is there a difference and/or a best practice?
Thanks in advance.
EDIT: This question is about deployment to Cloud Run itself, not just testing on my own PC. To test my own functions I did the following:
Run dart run build_runner build, so that it updates the server.dart file correctly (I can see that the framework does a lot behind the scenes and that the _nameToFunctionTarget is basically a router);
Run the server in two different terminals, like this: dart run bin/server.dart --port MYPORT --target MYFUNCTION (where MYPORT and MYFUNCTION are either 8080/8081 or function/function2 respectively).
I guess I'm just confused about how to correctly manage this framework once it's deployed on Cloud Run.
EDIT 2: I just gave up on using Dart as a serverless or backend language. There's just too much jargon, even for the basic things. Every backend framework is either dead or maintained by a single enthusiast (props to him!). The language has not yet received enough love from the Google team / the community, and at this moment in time it is basically not possible to go full-stack on just Dart. It's a dream, but it can't be realized now. Furthermore, Dart lacks proper SDKs for Firestore etc., so Firebase isn't an option. I find it easier to just learn Node.js and exploit the Firebase support for Firebase Functions written in Node.js, and I'll wait for more Dart support in the future, if it ever comes.
The documentation is a bit sparse right now (and I'm new to it too! I couldn't find any good examples, so here goes...)
You can only have a single function that is served. It should be named 'function' (the type and name can be overridden; see the cloudevent example, dartfn generate cloudevent).
You 'could' have many of these deployed so that each does a specific thing, such as processing cloudevents as above, but most people want something more REST-like (see next; a rough deployment sketch follows this list).
You need to attach a Router() so that the single entry point (function) can dispatch to specific logic in your code.
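For the multi-service option above, a very rough sketch of the deployment side (service names and image path are placeholders, and this assumes the Dockerfile from the dartfn templates, which starts bin/server.dart): one Cloud Run service per function target.

# hypothetical service names and image; one Cloud Run service per target
gcloud run deploy hello-service --image gcr.io/MY_PROJECT/my_app --args="--target=function"
gcloud run deploy random-service --image gcr.io/MY_PROJECT/my_app --args="--target=function2"

The framework should also honor the FUNCTION_TARGET environment variable (part of the functions framework contract), so --set-env-vars=FUNCTION_TARGET=function2 ought to be an alternative to --args.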
Example for REST
add shelf_router: ^1.1.2 to pubspec.yaml (in dependencies:, shown below)
delegate the @CloudFunction to use the Router()
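The pubspec.yaml addition from the first step looks like this:

dependencies:
  shelf_router: ^1.1.2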
functions.dart
import 'package:functions_framework/functions_framework.dart';
import 'package:shelf/shelf.dart';
import 'package:shelf_router/shelf_router.dart';
Router app = Router()
..get('/health', (Request request) {
return Response.ok('healthy');
})
..get('/user/<user>', (Request request, String user) {
// fetch the user... (probably return as json)
return Response.ok('hello $user');
})
..post('/user', (Request request) {
// convert request body to json and persist... (probably return as json)
return Response.ok('saved the user');
});
@CloudFunction()
Future<Response> function(Request request) => app.call(request);
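If you run the generated server locally (dart run bin/server.dart), you can exercise the routes, e.g.:

# assuming the default port 8080
curl http://localhost:8080/health
curl http://localhost:8080/user/alice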
Related
I am trying to write some custom lint rules. To achieve this, I used the analyzer_plugin package and set up my project as required. Here is a simplified excerpt of the main class:
class LintAnalyzerPlugin extends ServerPlugin {
@override
Future<void> analyzeFile({required AnalysisContext analysisContext, required String path}) async {
channel.sendNotification(
AnalysisErrorsParams(path, [getAnalysisError(path)]).toNotification(),
);
}
}
channel.sendNotification is called, but no message is displayed in the VS Code Problems panel.
After some investigation, I found out that the JSON generated for the sent notification uses the Dart analysis server's legacy protocol, but the analysis server run by the Dart Code extension expects LSP (the Microsoft Language Server Protocol).
Fortunately, the extension offers a setting to start the server with the legacy protocol:
"dart.useLegacyAnalyzerProtocol": true
And now the VS Code Problems panel shows the sent notifications.
Unfortunately, the Dart Code extension advises using LSP, because the legacy protocol will eventually be removed some day.
Is it possible to generate LSP messages instead? Or did I miss something?
If anyone has any suggestions, I'm all ears.
Is there a way to do the equivalent of @Stepwise's failure behavior for a single feature? We have some integration tests that are set up such that setupSpec() kicks off a Kafka process, and then the actual test checks that each step happened. If step 3 failed, there's no reason to bother checking subsequent steps.
There is no built-in way to do this, but assuming you are using a recent Spock 2.x version (not 1.3 or so), a relatively simple annotation-driven Spock extension can do the trick for you.
package de.scrum_master.stackoverflow.q71414311
import org.spockframework.runtime.extension.ExtensionAnnotation
import java.lang.annotation.ElementType
import java.lang.annotation.Retention
import java.lang.annotation.RetentionPolicy
import java.lang.annotation.Target
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@ExtensionAnnotation(StepwiseIterationsExtension)
@interface StepwiseIterations {}
package de.scrum_master.stackoverflow.q71414311
import org.spockframework.runtime.extension.IAnnotationDrivenExtension
import org.spockframework.runtime.model.FeatureInfo
import org.spockframework.runtime.model.parallel.ExecutionMode
class StepwiseIterationsExtension implements IAnnotationDrivenExtension<StepwiseIterations> {
@Override
void visitFeatureAnnotation(StepwiseIterations annotation, FeatureInfo feature) {
// Disable parallel iteration execution for @StepwiseIterations feature,
// similarly to how @Stepwise disables it for the whole specification
feature.setExecutionMode(ExecutionMode.SAME_THREAD)
// If an error occurs in this feature, skip remaining iterations
feature.getFeatureMethod().addInterceptor({ invocation ->
try {
invocation.proceed()
}
catch (Throwable t) {
invocation.getFeature().skip("skipping subsequent iterations after failure")
throw t
}
})
}
}
Add this to your code base, annotate your iterated test with @StepwiseIterations and run it. I think the result is exactly what you are looking for.
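For illustration, an iterated feature of this shape (checkStep is a hypothetical helper standing in for your own per-step assertions) stops at the first failing iteration:

@StepwiseIterations
def "pipeline step #step has completed"() {
  expect:
  checkStep(step)  // placeholder for your own verification logic

  where:
  step << (1..5)
}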
In Spock 1.3, a similar but more complex extension would also be possible.
I also want to express my special thanks to Leonard Brünings, Spock maintainer and boundless source of knowledge. I had a more complex version of this extension in place, but after discussing it with him, it evolved into the tiny, elegant solution we see here.
FYI, there is a pre-existing Spock issue #1008 requesting this feature. I created pull request #1442, which adds this capability to @Stepwise. So hopefully, in the future, we will not need an extra annotation and extension anymore.
For example, here is some simple Dart code:
import 'dart:io';

Future<void> main() async {
  final server = await HttpServer.bind('127.0.0.1', 8080);
  await for (final HttpRequest request in server) {
    request.response.write('Hello, world');
    await request.response.close();
  }
}
When the web server prints 'Hello, world', I would like to start a process to run a long, heavy task, but I don't want it to block the current process. May I know how to handle it? Thanks.
I tried with Process.run and Process.start with no success.
From your comment, I can tell there is a misunderstanding of how Dart spawns external processes. When you spawn a process in Dart, the external program by default runs separately from the Dart program (in a different OS process), so the Dart program can keep executing other work. You can then await the result from the program (e.g. when it exits).
Therefore, it does not make much sense to run the process with '&' as a parameter (I guess this was an attempt to say it should run separately from the Dart program).
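For example, a minimal sketch (the executable and script path are placeholders):

import 'dart:io';

Future<void> startHeavyTask() async {
  // Process.start returns as soon as the OS process is spawned;
  // this Dart program keeps running while the external task works.
  final process = await Process.start('dart', ['bin/heavy_task.dart']);
  // Await the exit code only when (or if) you need the result.
  process.exitCode.then((code) => print('heavy task exited with $code'));
}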
But since you are spawning another Dart program, you should also consider using an Isolate, which can either execute one of your own methods on another thread or run external code by using:
https://api.dart.dev/stable/2.6.0/dart-isolate/Isolate/spawnUri.html
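A minimal sketch of the first option, running one of your own functions on another thread (heavyComputation is a placeholder):

import 'dart:isolate';

void heavyComputation(SendPort sendPort) {
  // ...long-running work happens here, off the main isolate...
  sendPort.send('done');
}

Future<void> main() async {
  final receivePort = ReceivePort();
  await Isolate.spawn(heavyComputation, receivePort.sendPort);
  // The main isolate stays responsive; results arrive as messages.
  receivePort.listen(print);
}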
I'm new to Dart, and I'm following the tutorial provided on the Dart for the web page.
It all makes sense - apart from one piece of syntax:
final InjectorFactory injector = self.injector$Injector;
Here's the full code from the tutorial:
import 'main.template.dart' as self;
const useHashLS = false;
@GenerateInjector([
routerProvidersHash,
ClassProvider(Client, useClass: InMemoryDataService),
// Using a real back end?
// Import 'package:http/browser_client.dart' and change the
// above to:
// ClassProvider(Client, useClass: BrowserClient),
])
final InjectorFactory injector = self.injector$Injector;
void main() {
runApp(ng.AppComponentNgFactory, createInjector: injector);
}
I'm confused by the apparent .method$Class syntax. Can anyone explain to me what this means / what it's doing?
It's also underlined in WebStorm with the message "The getter 'injector$Injector' isn't defined for the class 'self'." Regardless, it runs fine and works as expected.
Thanks in advance!
$ in an identifier has no special meaning. It's by convention often used for names in generated code.
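For example, this is valid (if unusual) Dart:

// '$' is an ordinary identifier character; code generators use it to avoid name clashes.
final int injector$Example = 42;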
Angular also uses code generation, and the generated code only becomes available after code generation has been executed, for example by webdev serve or webdev build.
I don't know the current state, but the code might still be generated into a directory that is not analyzed by the Dart analyzer, so you might always see the error even when the app can be run without problems.
Can we use the graph database Neo4j with React.js? If not, is there an alternative option for using a graph database with React.js?
Easily, all you need is neo4j-driver: https://www.npmjs.com/package/neo4j-driver
Here is the simplest usage:
neo4j.js
//import { v1 as neo4j } from 'neo4j-driver'
const neo4j = require('neo4j-driver').v1
const driver = neo4j.driver('bolt://localhost', neo4j.auth.basic('username', 'password'))
const session = driver.session()
session
.run(`
MATCH (n:Node)
RETURN n AS someName
`)
.then((results) => {
results.records.forEach((record) => console.log(record.get('someName')))
session.close()
driver.close()
})
It is best practice to always close the session after you get the data. It is inexpensive and lightweight.
It is best practice to only close the driver once your program is done (like with MongoDB). You will see extreme errors if you close the driver at a bad time, which is incredibly important to note if you are a beginner. You will see errors like 'connection to server closed', etc. In async code, for example, if you run a query and close the driver before the results are parsed, you will have a bad time.
You can see in my example that I close the driver afterwards, but only to illustrate proper cleanup. If you run this code in a standalone JS file to test, you will see that Node.js hangs after the query and you need to press CTRL+C to exit. Adding driver.close() fixes that. Normally, the driver is not closed until the program exits/crashes (which is never in a backend API), or until the user logs out in the frontend.
Knowing this now, you are off to a great start.
Remember: session.close() immediately every time, and be careful with driver.close().
You could put this code in a React component or action creator easily and render the data.
You will find it no different than hooking up and working with Axios.
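A rough sketch of that idea (hook-style; this assumes neo4j.js above exports the driver, and the query/field names are hypothetical):

import React, { useEffect, useState } from 'react'
import { driver } from './neo4j'

function NodeList() {
  const [names, setNames] = useState([])
  useEffect(() => {
    const session = driver.session()
    session
      .run('MATCH (n:Node) RETURN n.name AS name')
      .then((results) => {
        setNames(results.records.map((r) => r.get('name')))
        session.close() // close the session as soon as the data is in
      })
  }, [])
  return <ul>{names.map((n) => <li key={n}>{n}</li>)}</ul>
}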
You can also run statements in a transaction, which is beneficial for write-locking the affected nodes. You should research that thoroughly first, but the transaction flow goes like this:
const session = driver.session()
const tx = session.beginTransaction()

// each statement runs as normal:
// tx.run(query).then(/* same as before */).catch(/* errors */)
// the difference is that you can chain multiple statements in one transaction:
const result1 = await tx.run(query1)
// use result1...
const result2 = await tx.run(query2)

// then, once you are ready to commit the changes:
if (results.good !== true) {
  await tx.rollback()
  session.close()
  throw error
}

await tx.commit()
session.close()

const finalResults = { result1, result2 }
return finalResults

// in my experience, you have to await tx.commit()
// in async/await code, otherwise it may not commit properly;
// that operation is not instant
tl;dr;
Yes, you can!
You are mixing two different technologies together. Neo4j is a graph database and React.js is a front-end framework.
You can connect to Neo4j from JavaScript - http://neo4j.com/developer/javascript/
Interesting topic. I am using the driver in a React app and recently experienced some issues. I am closing the session every time a lifecycle hook completes, like in your example. When there were more intensive queries, I would see a timeout error. Going back to my setup, I decided to experiment by closing the driver after some of the more expensive queries, and it looks like (still needs more testing) the crashes are gone.
If you are deploying a real-world application, I would urge you to think about authentication and authorization when using a DB-to-React setup only, as you would have to store the username/password of the Neo4j server in the client. I am looking into options for having the Neo4j server issue a token and accept it for authorization, but the best practice is surely to have a Node.js server in the middle, with something like Passport to handle authentication.
So, all in all, maybe the best scenario is to only use the driver in Node, and have the browser always communicate with the Node server using axios...