The following takes several seconds. I know this is not conceptual, but I'm cross-posting from a GCP issue just in case someone has run into the same scenario.
const { PredictionServiceClient } = require('@google-cloud/automl');

const predictionServiceClient = new PredictionServiceClient({ apiEndpoint });
const prediction = await predictionServiceClient.predict({
  name: modelFullId,
  params,
  payload,
});
In real time, this API call takes close to 10s when cold and 5s when hot. Is there a way to speed this up, other than exporting the model and running it ourselves?
Yes, you can export the model and use it with TensorFlow.js.
https://cloud.google.com/vision/automl/docs/edge-quickstart
https://github.com/tensorflow/tfjs/tree/master/tfjs-automl
Export the model and download the model.json, dict.txt, and *.bin files to your local machine.
Load the model into TensorFlow.js and use it.
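For the image classification case, loading the exported model in the browser with the @tensorflow/tfjs-automl package looks roughly like the sketch below (the model.json path and the img element id are illustrative; dict.txt and the *.bin files need to be served from the same directory as model.json):

import * as automl from '@tensorflow/tfjs-automl';

async function run() {
  // Load the exported Edge model; this also reads dict.txt and the *.bin
  // weight files referenced by model.json.
  const model = await automl.loadImageClassification('model.json');

  // Classify an <img> element that is already on the page (id is illustrative).
  const image = document.getElementById('daisy');
  const predictions = await model.classify(image);
  console.log(predictions); // array of { label, prob } predictions
}

run();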
I am using the Realtime Database, and I am using transactions to ensure the integrity of my data set. In my example below I am updating currentTime on every update.
export const updateTime = functions.database.ref("/users/{userId}/projects/{projectId}")
  .onUpdate((snapshot) => {
    const beforeData = snapshot.before.val();
    const afterData = snapshot.after.val();
    if (beforeData.currentTime !== afterData.currentTime) {
      return Promise.resolve();
    } else {
      return snapshot.after.ref.update({currentTime: new Date().getTime()})
        .catch((err) => {
          console.error(err);
        });
    }
  });
It seems the Cloud Function is not part of the transaction; instead it triggers multiple updates in my clients, which I am trying to avoid.
For example, I watched this starter tutorial which replaces :pizza: with a pizza emoji. In my client I would see :pizza: for one frame before it gets replaced with the emoji. I know the pizza tutorial is just an example, but I am running into a similar issue. Any advice is highly appreciated!
Indeed, Cloud Functions don't run as part of the database transaction. They run after the database has been updated, and they receive "before" and "after" snapshots of the affected data.
If you want a Cloud Function to serve as an approval process, the idiomatic approach is to have the clients write to a different location (typically called a pending queue) that the function listens to. The function then performs whatever operation it wants, and writes the result to the final location.
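A minimal sketch of that pattern, assuming illustrative paths /pending/{id} for the queue and /approved/{id} for the final location (neither exists in your project; adjust to your data model):

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// Clients write raw requests to /pending/{id} and only listen on /approved/{id}.
export const approvePending = functions.database.ref('/pending/{id}')
  .onCreate((snapshot, context) => {
    const data = snapshot.val();
    // ...run whatever validation/transformation the approval process needs...
    return admin.database().ref(`/approved/${context.params.id}`).set(data)
      .then(() => snapshot.ref.remove()); // clear the queue entry once approved
  });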
I don't really understand what temporary storage means.
Recently I ran into an issue:
We can pass a variable to the Function via TwiML, but that has a size/length limit. So I have to save the variable as a global, so I can access it from any Function I want. But I'm worried the variable might be changed by something else, because our Twilio Functions are serverless; in fact, a global variable is not safe. Can anyone solve this?
I want to ask: may I use temporary storage to resolve this?
Thanks.
Twilio developer evangelist here.
A coworker, a solutions engineer based in the UK, wrote this great blog post on using temporary storage in a Twilio Function.
The code in your Twilio Function would look something like this:
/**
 * This Function shows how to reach and use the temporary storage underneath the Function layer,
 * mainly for single-invocation jobs. For example, on each invocation we can create a file based
 * on user data and use it accordingly.
 *
 * IMPORTANT: Do NOT treat this storage as long-term storage or use it for personal data that
 * needs to persist. The contents get deleted whenever the associated container is brought down,
 * so this Function is only useful for one-time actions.
 */
const fs = require('fs');
const path = require('path');
const tmp_dir = require('os').tmpdir();

exports.handler = function(context, event, callback) {
  // Create a text file and put some data in it
  fs.writeFile(path.join(tmp_dir, 'test_file.txt'), 'Contents of created file in OS temp directory', function(err) {
    if (err) return callback(err);
    // Read the contents of the temporary directory to check that the file was created.
    // For multiple files you can loop over the result.
    fs.readdir(tmp_dir, function(err, files) {
      if (err) return callback(err);
      callback(null, "File created in temporary directory: " + files.join(", "));
    });
  });
};
If you want to use temporary storage from the Twilio CLI, you can do so by running this command inside a project that you created with the Serverless Toolkit:
twilio serverless:new example --template=temp-storage
This Function template is also here on the Twilio Labs GitHub page.
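If you go the CLI route, the usual next steps with the Serverless Toolkit (assuming the plugin is installed in your Twilio CLI) would be roughly:

twilio serverless:start    # run the Functions locally for testing
twilio serverless:deploy   # deploy them to your Twilio account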
Let me know if this helps at all!
The flow looks like this:
In the widget api_get_account_status, I save correlationId to the Function storage-CallSid;
In the Function storage-CallSid, I save correlationId to the "temporary storage";
In the widget split_check_call_type, I invoke another Function, get-storage-CallSid;
In the Function get-storage-CallSid, can I get correlationId back from the "temporary storage"?
I was taking a quick look at k6 from LoadImpact.
The graphs I have so far show TPS, response time, and error rate at the global level, and that is not very useful.
When I load test, I'd like those stats not only at the global level, but also at the flow level or at the API level. That way, if I see high latency, I can tell right away whether it is caused by a single API or whether all APIs are slow.
Or I can tell whether a given API is returning, say, HTTP 500, or whether several different APIs are.
Can k6 show stats like TPS, response time, and HTTP status at the API level, the flow level, and the global level?
Thanks
Yes, it can, and you have 3 options here in terms of result presentation (all involve using custom metrics to some degree):
1. End-of-test summary printed to stdout.
2. You output the result data to InfluxDB + Grafana.
3. You output the result data to Load Impact Insights.
You get global stats with all three of the above, and you get per-API-endpoint stats out of the box with 2) and 3), but to get stats at the flow level you'd need to create custom metrics, which works with all three options. So, something like this:
import http from "k6/http";
import { Trend, Rate } from "k6/metrics";
import { group, sleep } from "k6";

export let options = {
  stages: [
    { target: 10, duration: "30s" }
  ]
};

var flow1RespTime = new Trend("flow_1_resp_time");
var flow1TPS = new Rate("flow_1_tps");
var flow1FailureRate = new Rate("flow_1_failure_rate");

export default function() {
  group("Flow 1", function() {
    let res = http.get("https://test.loadimpact.com/");
    flow1RespTime.add(res.timings.duration);
    flow1TPS.add(1);
    flow1FailureRate.add(res.status == 0 || res.status > 399);
  });
  sleep(3);
};
This would expand the end-of-test summary stats printed to stdout to include the custom metrics.
I'm playing around with BigQueryIO writes using file loads. My load trigger is set to 18 hours. I'm ingesting data from Kafka with a fixed daily window.
Based on https://github.com/apache/beam/blob/v2.2.0/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BatchLoads.java#L213-L231, it seems that the intended behavior is to offload rows to the filesystem when at least 500k records are in a pane.
I managed to produce ~600K records and waited for around 2 hours to see if the rows were uploaded to GCS; however, nothing was there. I noticed that the "GroupByDestination" step in "BatchLoads" shows 0 under "Output collections" size.
When I use a smaller load trigger, all seems fine. Shouldn't AfterPane.elementCountAtLeast(FILE_TRIGGERING_RECORD_COUNT) be triggered?
Here is the code for writing to BigQuery:
BigQueryIO
  .writeTableRows()
  .to(new SerializableFunction[ValueInSingleWindow[TableRow], TableDestination]() {
    override def apply(input: ValueInSingleWindow[TableRow]): TableDestination = {
      val startWindow = input.getWindow.asInstanceOf[IntervalWindow].start()
      val dayPartition = DateTimeFormat.forPattern("yyyyMMdd").withZone(DateTimeZone.UTC).print(startWindow)
      new TableDestination("myproject_id:mydataset_id.table$" + dayPartition, null)
    }
  })
  .withMethod(Method.FILE_LOADS)
  .withCreateDisposition(CreateDisposition.CREATE_NEVER)
  .withWriteDisposition(WriteDisposition.WRITE_APPEND)
  .withSchema(BigQueryUtils.schemaOf[MySchema])
  .withTriggeringFrequency(Duration.standardHours(18))
  .withNumFileShards(10)
The job id is 2018-02-16_14_34_54-7547662103968451637. Thanks in advance.
Panes are per key per window, and BigQueryIO.write() with dynamic destinations uses the destination as the key under the hood, so the "500k elements in a pane" trigger applies per destination per window.
We are running an application in a Spring context, using DataNucleus as our ORM and MySQL as our database.
Our application has a daily job that imports a data feed into our database. The size of the feed translates into around 1 million rows of inserts/updates. The performance of the import starts out very good, but then it degrades over time (as the number of executed queries increases), and at some point the application freezes or stops responding. We have to wait for the whole job to complete before the application responds again.
This behavior looks very much like a memory leak to us, and we have been looking hard at our code to catch any potential problem; however, the problem didn't go away. One interesting thing we found in the heap dump is that org.datanucleus.ExecutionContextThreadedImpl (or its HashSet/HashMap) holds 90% of our memory (5 GB) during the import (I have attached screenshots of the dump below). My research on the internet says this reference is the Level 1 cache (not sure if I am correct). My question is: during a large import, how can I limit/control the size of the Level 1 cache? Maybe ask DataNucleus not to cache during my import?
If that's not the L1 cache, what's the possible cause of my memory issue?
Our code uses a transaction for every insert to prevent locking large chunks of data in the database. It calls the flush method every 2000 inserts.
As a temporary fix, we moved our import process to run overnight when no one is using our app. Obviously, this cannot go on forever. Could someone at least point us in the right direction so that we can do more research and, hopefully, find a fix?
It would also be good if someone has experience decoding the heap dump.
Your help would be very much appreciated by all of us here. Many thanks!
https://s3-ap-southeast-1.amazonaws.com/public-external/datanucleus_heap_dump.png
https://s3-ap-southeast-1.amazonaws.com/public-external/datanucleus_dump2.png
Code below. The caller of this method does not have a transaction. This method will process one import object per call, and we need to process around 100K of these objects daily.
@Override
@PreAuthorize("hasUserRole('ROLE_ADMIN')")
@Transactional(propagation = Propagation.REQUIRED)
public void processImport(ImportInvestorAccountUpdate account, String advisorCompanyKey) {
    ImportInvestorAccountDescriptor invAccDesc = account.getInvestorAccount();
    InvestorAccount invAcc = getInvestorAccountByImportDescriptor(invAccDesc, advisorCompanyKey);
    try {
        ParseReportingData parseReportingData = ctx.getBean(ParseReportingData.class);
        String baseCCY = invAcc.getBaseCurrency();
        Date valueDate = account.getValueDate();
        ArrayList<InvestorAccountInformationILAS> infoList = parseReportingData
                .getInvestorAccountInformationILAS(null, invAcc, valueDate, baseCCY);
        InvestorAccountInformationILAS info = infoList.get(0);

        PositionSnapshot snapshot = new PositionSnapshot();
        ArrayList<Position> posList = new ArrayList<Position>();
        Double totalValueInBase = 0.0;
        double totalQty = 0.0;
        for (ImportPosition importPos : account.getPositions()) {
            Asset asset = getAssetByImportDescriptor(importPos.getTicker());
            PositionInsurance pos = new PositionInsurance();
            pos.setAsset(asset);
            pos.setQuantity(importPos.getUnits());
            pos.setQuantityType(Position.QUANTITY_TYPE_UNITS);
            posList.add(pos);
        }
        snapshot.setPositions(posList);
        info.setHoldings(snapshot);

        log.info("persisting a new investorAccountInformation(source:"
                + invAcc.getReportSource() + ") on " + valueDate
                + " of InvestorAccount(key:" + invAcc.getKey() + ")");
        persistenceService.updateManagementEntity(invAcc);
    } catch (Exception e) {
        throw new DataImportException(invAcc == null ? null : invAcc.getKey(), advisorCompanyKey,
                e.getMessage());
    }
}
Do you use the same PersistenceManager (pm) for the entire job?
If so, you may want to try closing it and creating a new one once in a while.
If not, this could be the L2 cache. What setting do you have for datanucleus.cache.level2.type? I think it's a weak map by default. You may want to try none for testing.
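For reference, disabling the L2 cache for a test run is a one-line persistence property (assuming you pass DataNucleus properties via datanucleus.properties or your Spring/JDO configuration):

datanucleus.cache.level2.type=none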