What and where is the class 'UniversalScalabilityLawForecast' in the Micrometer library?

I'm reading 'SRE with Java Microservices' (O'Reilly), which says:
"USL forecasting is a form of “derived” Meter in Micrometer and can be enabled as
shown in Example 4-39. "
Example 4-39. Universal scalability law forecast configuration in Micrometer
UniversalScalabilityLawForecast
    .builder(
        registry
            .find("http.server.requests")
            .tag("uri", "/myendpoint")
            .tag("status", s -> s.startsWith("2"))
    )
    .independentVariable(UniversalScalabilityLawForecast.Variable.THROUGHPUT)
    // In this case, forecast to up to 1,000 requests/second (throughput)
    .maximumForecast(1000)
    .register(registry);
What and where is this 'UniversalScalabilityLawForecast' class in the Micrometer library? I can't find it in the GitHub repository, and Google searching turned up nothing.
Please help me.

Related

Using Mosek with Drake on Deepnote

ValueError: "MosekSolver cannot Solve because MosekSolver::available() is false, i.e., MosekSolver has not been compiled as part of this binary. Refer to the MosekSolver class overview documentation for how to compile it."
Hi, I got the above error when trying to use the Mosek solver in Drake. It is not clear to me how to enable Mosek in Deepnote with Drake. Do I need to include something in the Dockerfile or the init file? Any tips would be appreciated.
Links I looked at:
https://drake.mit.edu/pydrake/pydrake.solvers.mosek.html
https://drake.mit.edu/bazel.html#mosek
Mosek+Drake does work on Deepnote. The workflow is like this:
Obtain a Mosek license file (from the Mosek website), and upload it to Deepnote.
Set an environment variable to tell Drake where to find the license file. For instance, you can add the following at the top of your notebook:
import os
os.environ["MOSEKLM_LICENSE_FILE"] = "mosek.lic"
Now MosekSolver.available() should be True, and Mosek will even be chosen as the default preferred solver if you simply call Solve(prog).
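To sanity-check the setup, you can ask the solver directly. A minimal sketch, assuming MosekSolver is importable from pydrake.solvers.mosek as in the first link above:

import os
os.environ["MOSEKLM_LICENSE_FILE"] = "mosek.lic"  # path to the uploaded license file

# Import path assumed from the pydrake.solvers.mosek docs linked above.
from pydrake.solvers.mosek import MosekSolver

print(MosekSolver().available())  # should print True once the license is visible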
Note: Please be very careful not to share the Deepnote notebook with your mosek.lic uploaded.

Are the Core Web Vitals metrics configurable at build time?

Is it possible to configure the Core Web Vitals metric thresholds at build time, or as part of your CI process?
The Core Web Vitals prescribe a specific set of metric thresholds and percentiles that we believe correspond well to user expectations across a range of devices.
We encourage using our official thresholds as much as possible. If, however, you would like to set custom targets for thresholds (e.g. a Largest Contentful Paint performance budget of < 3s), this is possible using Lighthouse CI and LightWallet. Metric targets can be set using assertions and a performance budgets file.
An example of such assertions can be found below:
{
  "ci": {
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["warn", {"maxNumericValue": 3000}],
        "viewport": "error",
        "resource-summary:document:size": ["error", {"maxNumericValue": 14000}],
        "resource-summary:font:count": ["warn", {"maxNumericValue": 1}],
        "resource-summary:third-party:count": ["warn", {"maxNumericValue": 5}]
      }
    }
  }
}
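For instance, assuming the JSON above is saved as lighthouserc.json in the project root, the assertions can then be enforced in CI with the Lighthouse CI CLI:

npm install -g @lhci/cli
lhci autorun   # runs Lighthouse, evaluates the assertions above, and fails the build on "error"-level failures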

Exported Dataflow Template Parameters Unknown

I've exported a Cloud Dataflow template from Dataprep as outlined here:
https://cloud.google.com/dataprep/docs/html/Export-Basics_57344556
In Dataprep, the flow pulls in text files via wildcard from Google Cloud Storage, transforms the data, and appends it to an existing BigQuery table. All works as intended.
However, when trying to start a Dataflow job from the exported template, I can't seem to get the startup parameters right. The error messages aren't overly specific but it's clear that for one thing, I'm not getting the locations (input and output) right.
The only Google-provided template for this use case (found at https://cloud.google.com/dataflow/docs/guides/templates/provided-templates#cloud-storage-text-to-bigquery) doesn't apply, as it uses a UDF and also runs in Batch mode, overwriting any existing BigQuery table rather than appending to it.
Inspecting the original Dataflow job details from Dataprep shows a number of parameters (found in the metadata file) but I haven't been able to get those to work within my code. Here's an example of one such failed configuration:
import time
from google.cloud import storage
from googleapiclient.discovery import build
from oauth2client.client import GoogleCredentials

PROJECT = "[project]"  # GCP project id (placeholder)

def dummy(event, context):
    pass

def process_data(event, context):
    credentials = GoogleCredentials.get_application_default()
    service = build('dataflow', 'v1b3', credentials=credentials)
    data = event
    gsclient = storage.Client()
    file_name = data['name']
    time_stamp = time.time()
    GCSPATH = "gs://[path to template]"
    BODY = {
        "jobName": "GCS2BigQuery_{tstamp}".format(tstamp=time_stamp),
        "parameters": {
            "inputLocations": '{{\"location1\":\"[my bucket]/{filename}\"}}'.format(filename=file_name),
            "outputLocations": '{{\"location1\":\"[project]:[dataset].[table]\", [... other locations]"}}',
            "customGcsTempLocation": "gs://[my bucket]/dataflow"
        },
        "environment": {
            "zone": "us-east1-b"
        }
    }
    print(BODY["parameters"])
    request = service.projects().templates().launch(projectId=PROJECT, gcsPath=GCSPATH, body=BODY)
    response = request.execute()
    print(response)
The above example results in an "invalid field" error for "location1", which I pulled from a completed Dataflow job. I know I need to specify the GCS location, the template location, and the BigQuery table, but I haven't found the correct syntax anywhere. As mentioned above, I found the field names and sample values in the job's generated metadata file.
I realize that this specific use case may not ring any bells but in general if anyone has had success determining and using the correct startup parameters for a Dataflow job exported from Dataprep, I'd be most grateful to learn more about that. Thx.
I think you need to review this document; it explains exactly the syntax required for passing the various pipeline options, including the location parameters you need... 1
Specifically, in your code snippet the following does not follow the correct syntax:
""inputLocations" : '{{\"location1\":\"[my bucket]/{filename}\"}}'.format(filename=file_name)"
In addition to document 1, you should also review the available pipeline options and their correct syntax. 2
Please use the links; they are the official documentation links from Google. They will not go stale or be removed, as they are actively monitored and maintained by a dedicated team.
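As a sketch only (the exact parameter names your exported template expects are defined in its metadata file and the docs above), building the nested JSON values with json.dumps instead of hand-escaping the quotes at least rules out malformed parameter values:

import json

file_name = "input.txt"  # as received from the GCS event

parameters = {
    # Each *Locations value is itself a JSON-encoded string, per the metadata file mentioned in the question.
    "inputLocations": json.dumps({"location1": "[my bucket]/" + file_name}),
    "outputLocations": json.dumps({"location1": "[project]:[dataset].[table]"}),
    "customGcsTempLocation": "gs://[my bucket]/dataflow",
}
print(parameters)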

Launching composed task built by DSL from stream application

Every example I've seen (task-launcher sink and triggertask source) shows how to launch a task defined by the uri attribute.
My task definitions look like this:
sampleTask <t2: timestamp || t1: timestamp>
sampleTask-t1 timestamp
sampleTask-t2 timestamp
sampleTaskRunner composed-task-runner --graph=sampleTask
My question is: how do I launch the composed task runner (sampleTaskRunner, defined via the DSL) from a stream application?
Thanks
UPDATE
I ended up with the solution below, which triggers the task using the SCDF REST API:
composedTask definition:
<timestamp || mySampleTask>
Stream definition:
http | httpclient | log
Deployment properties:
app.http.port=81
app.httpclient.body=name=composedTask&arguments=--increment-instance-enabled=true
app.httpclient.http-method=POST
app.httpclient.url=http://localhost:9393/tasks/executions
app.httpclient.headers-expression={'Content-Type':'application/x-www-form-urlencoded'}
Though it's easy to implement an http sink component, it would be great if the stream application starters provided one out of the box.
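For reference, the call the httpclient app makes is just a form-encoded POST; a minimal sketch of the same request in Python (assuming SCDF is reachable at http://localhost:9393, as in the properties above):

import requests

# POST /tasks/executions with the same form-encoded fields as the httpclient properties above
response = requests.post(
    "http://localhost:9393/tasks/executions",
    data={"name": "composedTask",
          "arguments": "--increment-instance-enabled=true"})
print(response.status_code)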
Another concern I have is how to discover the SCDF REST URL when it is deployed in a distributed environment.
Here's a quick take from one of SCDF's R&D team members (Glenn Renfro).
stream create foozer --definition "trigger --fixed-delay=5 | tasklaunchrequest-transform --uri=maven://org.springframework.cloud.task.app:composedtaskrunner-task:1.1.0.BUILD-SNAPSHOT --command-line-arguments='--graph=sampleTask-t1||sampleTask-t2 --increment-instance-enabled=true --spring.datasource.url=jdbc:mariadb://localhost:3306/test --spring.datasource.username=root --spring.datasource.password=password --spring.datasource.driverClassName=org.mariadb.jdbc.Driver' | task-launcher-local" --deploy
In the foozer stream definition,
1) "trigger" source happens to trigger an upstream event every 5s
2) "tasklaunchrequest-transform" processor takes a few arguments; more specifically, it uses "composedtaskrunner-task:1.1.0.BUILD-SNAPSHOT" to launch a composed-task graph (i.e., sampleTask-t1||sampleTask-t2)
3) Pay attention to --increment-instance-enabled. This was recently added to the CTR (composed-task-runner) application and provides the ability to re-launch a composed task on a recurring cadence
4) Since the CTR and SCDF must share the same database, we are also passing the datasource properties as command-line args. (The SCDF server is already started with the same datasource credentials.)
Hope this helps.
Lastly, we will add a sample to the reference guide via: spring-cloud/spring-cloud-dataflow#1780

Stream reasoning / Reactive programming in prolog?

I was wondering if you know of any way to use Prolog for stream processing, that is, some kind of reactive programming, or at least a way to let a query run on a knowledge base that is continuously updated (effectively a stream) and to continuously output the results of the reasoning.
Is anything like this implemented in the popular Prologs, such as SWI-Prolog?
You can use Logtalk's support for event-driven programming to define monitors that watch for knowledge base update events and react accordingly. You can run Logtalk using most Prolog systems as the backend compiler, including SWI-Prolog.
The event-driven features are described e.g. in the user manual:
http://logtalk.org/manuals/userman/events.html
The current distribution contains some examples of using events and monitors. An interesting one considering your question is the bricks example:
https://github.com/LogtalkDotOrg/logtalk3/tree/master/examples/bricks
Running this example first and then looking at its code should give you a good idea of what you can do with system-wide events and monitors.
XSB has stream processing capabilities. See page 14 of the XSB Manual.
I'm working on something related. The pqConsole project already has the basic capability: it reports structured data to the user, containing actionable areas (links) that call back into the current Prolog state, hence the possibility to expose actions and react appropriately (hopefully).
It's strictly related to pqConsole::win_write_html, showcasing recent Qt capabilities of SWI-Prolog.
Here is an example of a snippet producing just a simple formatted report. I'll now try to add the reactive part, so you can evaluate whether you find this basic system expressive. Hints are welcome...
/*  File:    win_html_write_test.pl
    Author:  Carlo,,,
    Created: Aug 27 2013
    Purpose: example usage win_html_write/1
*/

:- module(win_html_write_test,
          [dir2list/1
          ]).

:- [library(http/html_write)].
:- [library(dirtree)].

dir2list(Path) :-
    dirtree(Path, DirTree),
    % sortree(compare_by_attr(name), DirTree, Sorted), !,
    DirTree = Sorted,
    phrase(html([\css,
                 \logo,
                 hr([]),
                 ul(\dirtree2html(Sorted, [])),
                 br([])]), Tokens),
    with_output_to(atom(X), print_html(Tokens)),
    win_html_write(X),
    dump_page_to_debug(X).

css --> html(style(type='text/css',
                   ['.size{color:blue;}'
                   ])).

logo --> html(img([src=':/swipl.png'], [])).

dirtree2html(element(dir, A, S), Parents) -->
    html(li([\elem2html(A),
             ul(\elements2html(S, [A|Parents]))])).
dirtree2html(element(file, A, []), _Parents) -->
    html(li(\elem2html(A))).

elem2html(A) -->
    {memberchk(name=N, A),
     memberchk(size=S, A)
    },
    html([span([class=name], N), ' : ', span([class=size], S)]).

elements2html([E|Es], Parents) -->
    dirtree2html(E, Parents),
    elements2html(Es, Parents).
elements2html([], _Parents) --> [].

dump_page_to_debug(X) :-
    open('page_to_debug.html', write, S),
    format(S, '<html>~n~s~n</html>~n', [X]),
    close(S).
This snippet depends on dirtree, which should be installed with:
?- pack_install(dirtree).
Edit: with 3 modifications, the report now has the ability to invoke editing of files:
A call to get the paths into the structure:
dir2list(Path) :-
    dirtree(Path, DirTreeT),
    assign_path(DirTreeT, DirTree),
    ...
Request specialized output for files only:
dirtree2html(element(file, A, []), _Parents) -->
    html(li(\file2html(A))).
Finally, the 'handler': here it just places a request to invoke the editor:
file2html(A) -->
    {memberchk(name=N, A),
     memberchk(path=P, A),
     memberchk(size=S, A)
    },
    html([span([class=name],
               [a([href='writeln(editing(\'~s\')), edit(\'~s\')'-[N,P]], N)]
         ), ' : ', span([class=size], S)]).
And now the file names are clickable: clicking one writes a message and the file is opened for editing if requested.
You should check out RTEC: Run-Time Event Calculus:
https://github.com/aartikis/RTEC
RTEC is an open-source Event Calculus dialect optimised for stream reasoning. It is written in Prolog and has been tested under YAP 6.2.
Feature highlights:
Interval-based.
Sliding window reasoning.
Interval manipulation constructs for non-inertial fluents.
Caching for hierarchical knowledge bases.
Support for out-of-order data streams.
Indexing for efficiently handling irrelevant data.
There is also a mention of it on the SWI-Prolog website:
https://www.swi-prolog.org/pack/file_details/prologmud_I7/prolog/ec_planner/RTEC/README.md
which presumably relies on:
https://www.swi-prolog.org/pldoc/doc/_SWI_/library/dialect/yap.pl
I don't know why this hasn't been brought up so far, but in SWI-Prolog there is prolog_listen, which can, amongst other things, monitor dynamic updates to the database:
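% A minimal sketch along the lines of the prolog_listen/2 documentation:
% react whenever clauses of a dynamic predicate are asserted or retracted.
:- dynamic p/1.
:- prolog_listen(p/1, updated(p/1)).

updated(Pred, Action, Context) :-
    format("Updated ~p: ~p ~p~n", [Pred, Action, Context]).

See prolog_listen/2,3 in the SWI-Prolog manual for the full list of channels and options.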
