I am using Serilog in my .NET Core servers with several sinks such as console, file and Graylog (GELF) for my cloud deployments.
As soon as I set the log level to DEBUG, I get messages from the Quartz scheduler thread every few seconds. How can I turn this OFF? It shows thousands of useless entries like these:
2018-11-27 22:09:35.210 +00:00 [DBG] [Quartz.Core.QuartzSchedulerThread] [ThreadId 5] Batch acquisition of 0 triggers
2018-11-27 22:10:04.038 +00:00 [DBG] [Quartz.Core.QuartzSchedulerThread] [ThreadId 5] Batch acquisition of 0 triggers
2018-11-27 22:10:30.869 +00:00 [DBG] [Quartz.Core.QuartzSchedulerThread] [ThreadId 5] Batch acquisition of 0 triggers
2018-11-27 22:11:00.591 +00:00 [DBG] [Quartz.Core.QuartzSchedulerThread] [ThreadId 5] Batch acquisition of 0 triggers
I wrote a REST API to change the log level dynamically without restarting the services (see _logLevelSwitch in the config below). However, I do NOT want external libraries to inherit the log level I set for my own code. How can this be done?
I managed to do this for the Microsoft source using .MinimumLevel.Override("Microsoft", LogEventLevel.Warning) in my Serilog configuration. But Quartz is a library and its source is the name of my own binary, so I cannot lower the log level the same way. Here is my Serilog configuration (the general part; other properties are set depending on the runtime environment):
var loggerConfig = new LoggerConfiguration()
.MinimumLevel.ControlledBy(_logLevelSwitch)
.Enrich.FromLogContext()
.MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
.Enrich.WithProcessName()
.Enrich.WithThreadId()
.Enrich.WithMachineName()
.Enrich.WithAssemblyName();
And then I use additional runtime-specific configuration code like:
switch (po.Runtime)
{
case Runtime.DevelLocal:
loggerConfig.Enrich.WithProperty(new KeyValuePair<string, object>("runtime", CommonConstants.RtDevelLocal));
_logLevelSwitch.MinimumLevel = LogEventLevel.Debug;
loggerConfig.WriteTo.Console(outputTemplate: "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level:u3}] [{SourceContext}] " +
"[ThreadId {ThreadId}] {Message:lj}{NewLine}{Exception}");
loggerConfig.WriteTo.File(Path.Combine(po.LogPath, $"{BbDeConstants.ProductName}-.log"), rollingInterval: RollingInterval.Day,
outputTemplate: "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level:u3}] [{SourceContext}] " +
"[ThreadId {ThreadId}] {Message:lj}{NewLine}{Exception}");
break;
case Runtime.DemoDatacenter:
loggerConfig.Enrich.WithProperty(new KeyValuePair<string, object>("runtime", CommonConstants.RtDemoDatacenter));
loggerConfig.WriteTo.Graylog(new GraylogSinkOptions
{
HostnameOrAddress = po.LogHost,
TransportType = TransportType.Udp,
Port = 12201,
Facility = "BizBus",
});
break;
.....
}
I read that Quartz.NET uses Common.Logging, but I do not use it and have no configuration for it.
Any idea how to get rid of those Quartz.Core.QuartzSchedulerThread messages?
I solved this problem by adding this line to my Serilog config:
.MinimumLevel.Override("Quartz", LogEventLevel.Information)
So your config would be:
var loggerConfig = new LoggerConfiguration()
.MinimumLevel.ControlledBy(_logLevelSwitch)
.Enrich.FromLogContext()
.MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
.MinimumLevel.Override("Quartz", LogEventLevel.Information)
.Enrich.WithProcessName()
.Enrich.WithThreadId()
.Enrich.WithMachineName()
.Enrich.WithAssemblyName();
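Since you already expose a REST API that changes the main level at runtime via _logLevelSwitch, you could also bind the Quartz override to its own LoggingLevelSwitch so it stays adjustable without a restart. A minimal sketch (quartzLevelSwitch is a name I made up, not from your code):
var quartzLevelSwitch = new LoggingLevelSwitch(LogEventLevel.Information); // hypothetical second switch
var loggerConfig = new LoggerConfiguration()
    .MinimumLevel.ControlledBy(_logLevelSwitch)
    .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
    // Override bound to a switch instead of a fixed level, so the
    // REST API can raise or lower Quartz logging independently
    .MinimumLevel.Override("Quartz", quartzLevelSwitch)
    .Enrich.FromLogContext();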
This worked for me in .NET Core:
appsettings.json:
"Logging": {
"LogLevel": {
"Default": "Debug",
"Microsoft": "Information",
"Microsoft.Hosting.Lifetime": "Information",
"Quartz": "Information",
"Quartz.Core.QuartzSchedulerThread": "Information",
"Quartz.Core.JobRunShell": "Information"
}
},
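If the Serilog pipeline itself is built from configuration (via ReadFrom.Configuration from the Serilog.Settings.Configuration package) rather than through the Microsoft logging bridge, the equivalent override would live under a Serilog section instead of Logging. A rough sketch:
"Serilog": {
  "MinimumLevel": {
    "Default": "Debug",
    "Override": {
      "Microsoft": "Warning",
      "Quartz": "Information"
    }
  }
}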
I am trying to learn to automate end-to-end testing of a React Native mobile app using WebdriverIO (wdio) and Appium.
The target component I am trying to click is this:
Component screenshot
I get the error TypeError: $(...).waitForDisplayed is not a function in my current test project, and "elements not found" when I run in async mode.
I can verify that the IDs are visible in the Appium Element Inspector:
Screenshot here
Below is my code (#1 & #2). Either way I get an error, and I really need to understand why I get these errors.
#1
describe('Test Unit - Assync Mode', () => {
it('Client must be able to login in the app. ', async () => {
// pay attention to `async` keyword
const el = await $('~pressSkip') // note `await` keyword
await el.click()
await browser.pause(500)
})
})
Error Message
#2
beforeEach(() => {
$('~pressSkip').waitForDisplayed({ timeout: 20000 })
})
describe('My Simple test', () => {
it('Client must be able to login the app', () => {
// Click Skip button id: "pressSkip"
$('~pressSkip').click();
// Enter Login
// Email id: "loginEmail"
// Password id: "loginPwd"
// Click Login Button id:
});
});
=============
Wdio.conf.js
=============
const { join } = require('path');
exports.config = {
//
// ====================
// Runner Configuration
// ====================
//
// WebdriverIO allows it to run your tests in arbitrary locations (e.g. locally or
// on a remote machine).
runner: 'local',
//
// ==================
// Specify Test Files
// ==================
specs: [
'./test/specs/**/*.js'],
// ============
// Capabilities
// ============
//
capabilities: [{
// http://appium.io/docs/en/writing-running-appium/caps/
// This is `appium:` for all Appium Capabilities which can be found here
'appium:platformName': 'Android',
'appium:deviceName': 'emulator-5554',
'appium:platformVersion': '8.1.0',
'appium:newCommandTimeout': '60',
'appium:app': join(process.cwd(), '/android/app/build/outputs/apk/debug/app-debug.apk'),
}],
//
// If you only want to run your tests until a specific amount of tests have failed use
// bail (default is 0 - don't bail, run all tests).
bail: 0,
//
// Set a base URL in order to shorten url command calls. If your `url` parameter starts
// with `/`, the base url gets prepended, not including the path portion of your baseUrl.
// If your `url` parameter starts without a scheme or `/` (like `some/path`), the base url
// gets prepended directly.
baseUrl: 'http://localhost:/wd/hub',
//
// Default timeout for all waitFor* commands.
waitforTimeout: 10000,
//
// Default timeout in milliseconds for request
// if browser driver or grid doesn't send response
connectionRetryTimeout: 120000,
//
// Default request retries count
connectionRetryCount: 3,
//
// Test runner services
// Services take over a specific job you don't want to take care of. They enhance
// your test setup with almost no effort. Unlike plugins, they don't add new
// commands. Instead, they hook themselves up into the test process.
services: [['appium',{
// This will use the globally installed version of Appium
command: 'appium',
args: {
basePath: "/wd/hub",
// This is needed to tell Appium that we can execute local ADB commands
// and to automatically download the latest version of ChromeDriver
relaxedSecurity: true,}
}]],
port: 4723,
hostname: "localhost",
// Make sure you have the wdio adapter package for the specific framework installed
// before running any tests.
framework: 'jasmine',
//
// Test reporter for stdout.
// The only one supported by default is 'dot'
// see also: https://webdriver.io/docs/dot-reporter
reporters: ['spec'],
//
// Options to be passed to Jasmine.
jasmineOpts: {
// Jasmine default timeout
defaultTimeoutInterval: 60000,
//
// The Jasmine framework allows interception of each assertion in order to log the state of the application
// or website depending on the result. For example, it is pretty handy to take a screenshot every time
},
}
describe('Test Unit - Assync Mode', () => {
it('Client must be able to login in the app. ', async () => {
// pay attention to `async` keyword
await (await $('~pressSkip')).waitForDisplayed({ timeout: 20000 })
const el = await $('~pressSkip') // note `await` keyword
await el.click()
await browser.pause(500)
})
})
Add await to the waitForDisplayed call as well.
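If you want to keep the beforeEach / waitForDisplayed structure from snippet #2, it needs to be async as well. A minimal sketch using the same ~pressSkip selector:
describe('My Simple test', () => {
    beforeEach(async () => {
        // wait for the Skip button before each spec
        await (await $('~pressSkip')).waitForDisplayed({ timeout: 20000 })
    })

    it('Client must be able to login the app', async () => {
        // Click Skip button, accessibility id: "pressSkip"
        await (await $('~pressSkip')).click()
    })
})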
There are official examples of MassTransit with SQS. The bus is configured to use SQS (x.UsingAmazonSqs). The receive endpoint is an SQS queue which is in turn subscribed to an SNS topic. However, there is no example of how to publish to SNS.
How do I publish to an SNS topic?
How do I configure SQS/SNS to use HTTP, since I develop against localstack?
With the AWS SDK it would be:
var cfg = new AmazonSimpleNotificationServiceConfig { ServiceURL = "http://localhost:4566", UseHttp = true };
Update:
After Chris's reference and some experiments with the configuration, I came up with the following for the localstack SQS/SNS. This configuration executes without errors, and the Worker gets called and publishes a message to the bus. However, the consumer class is not triggered, and messages don't seem to end up in the queue (or rather the topic).
public static readonly AmazonSQSConfig AmazonSQSConfig = new AmazonSQSConfig { ServiceURL = "http://localhost:4566" };
public static AmazonSimpleNotificationServiceConfig AmazonSnsConfig = new AmazonSimpleNotificationServiceConfig {ServiceURL = "http://localhost:4566"};
...
services.AddMassTransit(x =>
{
x.AddConsumer<MessageConsumer>();
x.UsingAmazonSqs((context, cfg) =>
{
cfg.Host(new Uri("amazonsqs://localhost:4566"), h =>
{
h.Config(AmazonSQSConfig);
h.Config(AmazonSnsConfig);
h.EnableScopedTopics();
});
cfg.ReceiveEndpoint(queueName: "deal_queue", e =>
{
e.Subscribe("deal-topic", s =>
{
});
});
});
});
services.AddMassTransitHostedService(waitUntilStarted: true);
services.AddHostedService<Worker>();
Update 2:
When I look at the SNS subscriptions, I see that the first one, which was created and subscribed manually through the AWS CLI, has a correct Endpoint, while the second one, created by the MassTransit library, has an incorrect one. How do I configure the Endpoint for the SQS queue?
$ aws --endpoint-url=http://localhost:4566 sns list-subscriptions-by-topic --topic-arn "arn:aws:sns:us-east-1:000000000000:deal-topic"
{
"Subscriptions": [
{
"SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:deal-topic:c804da4a-b12c-4203-83ec-78492a77b262",
"Owner": "",
"Protocol": "sqs",
"Endpoint": "http://localhost:4566/000000000000/deal_queue",
"TopicArn": "arn:aws:sns:us-east-1:000000000000:deal-topic"
},
{
"SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:deal-topic:b47d8361-0717-413a-92ee-738d14043a87",
"Owner": "",
"Protocol": "sqs",
"Endpoint": "arn:aws:sqs:us-east-1:000000000000:deal_queue",
"TopicArn": "arn:aws:sns:us-east-1:000000000000:deal-topic"
}
Update 3:
I've cloned the project and run some of its unit tests for the AmazonSQS bus configuration; the consumers don't seem to work there either.
When I list the subscriptions after the test run, I can tell that the Endpoints are incorrect.
...
{
"SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:MassTransit_TestFramework_Messages-PongMessage:e16799c2-9dd3-458d-bc28-52a16d646de3",
"Owner": "",
"Protocol": "sqs",
"Endpoint": "arn:aws:sqs:us-east-1:000000000000:input_queue",
"TopicArn": "arn:aws:sns:us-east-1:000000000000:MassTransit_TestFramework_Messages-PongMessage"
},
...
Could it be that the AmazonSQS transport has a major bug when used with localstack?
It's not clear how to use the library with the localstack SQS, i.e. how to point it at the actual endpoint (QueueUrl) of an SQS queue.
Whenever Publish is called in MassTransit, messages are published to SNS. Those messages are then routed to receive endpoints as configured. There is no need to understand SQS or SNS when using MassTransit with Amazon SQS/SNS.
In MassTransit, you create consumers, those consumers consume message types, and MassTransit configures topics/queues as needed. Any of the samples using RabbitMQ, Azure Service Bus, etc. are easily converted to SQS by changing UsingRabbitMq to UsingAmazonSqs (and adding the appropriate NuGet package).
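For reference, a consumer is just a class implementing IConsumer<T> for each message type it handles; a minimal sketch (DealSubmitted is a hypothetical message contract, not from the question):
// Hypothetical message contract published by the Worker
public class DealSubmitted
{
    public string DealId { get; set; }
}

public class MessageConsumer : IConsumer<DealSubmitted>
{
    public Task Consume(ConsumeContext<DealSubmitted> context)
    {
        // MassTransit delivers the SNS/SQS message here once the consumer
        // is attached to a receive endpoint (or registered via AddConsumer)
        Console.WriteLine($"Received deal {context.Message.DealId}");
        return Task.CompletedTask;
    }
}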
It looks like your configuration is set up properly to publish, but there are at least a few reasons I can think of why you are not receiving messages:
Issue with the current version of localstack. I had to use 0.11.2 - see Localstack with MassTransit not getting messages
You are publishing to a different topic. MassTransit will create the topic using the name of the message type, which may not match the topic you configured on the receive endpoint. You can change the topic name by configuring the topology (a sketch of this appears at the end of this answer) - see How can I configure the topic name when using MassTransit SQS?
Your consumer is not configured on the receive endpoint - see the example below
public static readonly AmazonSQSConfig AmazonSQSConfig = new AmazonSQSConfig { ServiceURL = "http://localhost:4566" };
public static AmazonSimpleNotificationServiceConfig AmazonSnsConfig = new AmazonSimpleNotificationServiceConfig {ServiceURL = "http://localhost:4566"};
...
services.AddMassTransit(x =>
{
x.UsingAmazonSqs((context, cfg) =>
{
cfg.Host(new Uri("amazonsqs://localhost:4566"), h =>
{
h.Config(AmazonSQSConfig);
h.Config(AmazonSnsConfig);
});
cfg.ReceiveEndpoint(queueName: "deal_queue", e =>
{
e.Subscribe("deal-topic", s => {});
e.Consumer<MessageConsumer>();
});
});
});
services.AddMassTransitHostedService(waitUntilStarted: true);
services.AddHostedService<Worker>();
From what I see in the docs about consumers, you should be able to add your consumer in the AddMassTransit configuration like your original sample, but it didn't work for me.
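On the topic-name point (reason 2 above), the publish-side topic can be aligned with the one the receive endpoint subscribes to by configuring the message topology. A rough sketch, again using the hypothetical DealSubmitted message type:
x.UsingAmazonSqs((context, cfg) =>
{
    cfg.Host(new Uri("amazonsqs://localhost:4566"), h =>
    {
        h.Config(AmazonSQSConfig);
        h.Config(AmazonSnsConfig);
    });

    // Publish DealSubmitted to the "deal-topic" SNS topic instead of the
    // default name derived from the message type
    cfg.Message<DealSubmitted>(m => m.SetEntityName("deal-topic"));

    cfg.ReceiveEndpoint(queueName: "deal_queue", e =>
    {
        e.Subscribe("deal-topic", s => { });
        e.Consumer<MessageConsumer>();
    });
});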
After reading about Apache Flume and the benefits it provides in terms of handling client events, I decided it was time to start looking into it in more detail. Another great benefit appears to be that it can handle Apache Avro objects :-) However, I am struggling to understand how the Avro schema is used to validate the Flume events received.
To help explain my problem in more detail, I have provided code snippets below.
Avro schema
For the purpose of this post I am using a sample schema defining a nested Object1 record with 2 fields.
{
"namespace": "com.example.avro",
"name": "Example",
"type": "record",
"fields": [
{
"name": "object1",
"type": {
"name": "Object1",
"type": "record",
"fields": [
{
"name": "value1",
"type": "string"
},
{
"name": "value2",
"type": "string"
}
]
}
}
]
}
Embedded Flume agent
Within my Java project I am currently using the Apache Flume embedded agent as detailed below:
public static void main(String[] args) {
final Event event = EventBuilder.withBody("Test", Charset.forName("UTF-8"));
final Map<String, String> properties = new HashMap<>();
properties.put("channel.type", "memory");
properties.put("channel.capacity", "100");
properties.put("sinks", "sink1");
properties.put("sink1.type", "avro");
properties.put("sink1.hostname", "192.168.99.101");
properties.put("sink1.port", "11111");
properties.put("sink1.batch-size", "1");
properties.put("processor.type", "failover");
final EmbeddedAgent embeddedAgent = new EmbeddedAgent("TestAgent");
embeddedAgent.configure(properties);
embeddedAgent.start();
try {
embeddedAgent.put(event);
} catch (EventDeliveryException e) {
e.printStackTrace();
}
}
In the above example I am creating a new Flume event with "Test" as the event body and sending it to a separate Apache Flume agent running inside a VM (192.168.99.101).
Remote Flume agent
As described above, I have configured this agent to receive events from the embedded Flume agent. The Flume configuration for this agent looks like this:
# Name the components on this agent
hello.sources = avroSource
hello.channels = memoryChannel
hello.sinks = loggerSink
# Describe/configure the source
hello.sources.avroSource.type = avro
hello.sources.avroSource.bind = 0.0.0.0
hello.sources.avroSource.port = 11111
hello.sources.avroSource.channels = memoryChannel
# Describe the sink
hello.sinks.loggerSink.type = logger
# Use a channel which buffers events in memory
hello.channels.memoryChannel.type = memory
hello.channels.memoryChannel.capacity = 1000
hello.channels.memoryChannel.transactionCapacity = 1000
# Bind the source and sink to the channel
hello.sources.avroSource.channels = memoryChannel
hello.sinks.loggerSink.channel = memoryChannel
And I am executing the following command to launch the agent:
./bin/flume-ng agent --conf conf --conf-file ../sample-flume.conf --name hello -Dflume.root.logger=TRACE,console -Dorg.apache.flume.log.printconfig=true -Dorg.apache.flume.log.rawdata=true
When I execute the Java project's main method I see the "Test" event passed through to my logger sink with the following output:
2019-02-18 14:15:09,998 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 54 65 73 74 Test }
However, it is unclear to me exactly where I should configure the Avro schema to ensure that only valid events are received and processed by Flume. Can someone please help me understand where I am going wrong? Or have I misunderstood how Flume is designed to convert Flume events into Avro events?
In addition to the above, I have also tried using the Avro RPC client (after changing the Avro schema to specify a protocol) to talk directly to my remote Flume agent, but when I attempt to send events I see the following error:
Exception in thread "main" org.apache.avro.AvroRuntimeException: Not a remote message: test
at org.apache.avro.ipc.Requestor$Response.getResponse(Requestor.java:532)
at org.apache.avro.ipc.Requestor$TransceiverCallback.handleResult(Requestor.java:359)
at org.apache.avro.ipc.Requestor$TransceiverCallback.handleResult(Requestor.java:322)
at org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.messageReceived(NettyTransceiver.java:613)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:595)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:786)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:458)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:439)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:553)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:471)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:332)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
My goal is to ensure that events populated by my application conform to the generated Avro schema, to avoid invalid events being published. I would prefer to achieve this using the embedded Flume agent, but if this is not possible I would consider the Avro RPC approach, talking directly to my remote Flume agent.
Any help / guidance would be a great help. Thanks in advance.
UPDATE
After further reading I wonder if I have misunderstood the purpose of Apache Flume. I originally thought it could be used to automatically create Avro events based on the data / schema, but now I wonder whether the application should take responsibility for producing Avro events, which would then be stored in Flume according to the channel configuration and sent in batches via the sink (in my case to a Spark Streaming cluster).
If the above is correct, does Flume need to know about the schema, or only my Spark Streaming cluster which will eventually process this data? If Flume does need to know about the schema, can you please provide details of how this can be achieved?
Thanks in advance.
Since your goal is to process the data using a Spark Streaming cluster, you can solve this problem with two solutions:
1) Using the Flume client (tested with flume-ng-sdk 1.9.0) and Spark Streaming (tested with spark-streaming_2.11 2.4.0 and spark-streaming-flume_2.11 2.3.0) without a Flume server in between in the network topology.
The client class sends a Flume JSON event to port 41416:
public class JSONFlumeClient {
public static void main(String[] args) {
RpcClient client = RpcClientFactory.getDefaultInstance("localhost", 41416);
String jsonData = "{\r\n" + " \"namespace\": \"com.example.avro\",\r\n" + " \"name\": \"Example\",\r\n"
+ " \"type\": \"record\",\r\n" + " \"fields\": [\r\n" + " {\r\n"
+ " \"name\": \"object1\",\r\n" + " \"type\": {\r\n" + " \"name\": \"Object1\",\r\n"
+ " \"type\": \"record\",\r\n" + " \"fields\": [\r\n" + " {\r\n"
+ " \"name\": \"value1\",\r\n" + " \"type\": \"string\"\r\n" + " },\r\n"
+ " {\r\n" + " \"name\": \"value2\",\r\n" + " \"type\": \"string\"\r\n"
+ " }\r\n" + " ]\r\n" + " }\r\n" + " }\r\n" + " ]\r\n" + "}";
Event event = EventBuilder.withBody(jsonData, Charset.forName("UTF-8"));
try {
client.append(event);
} catch (Throwable t) {
System.err.println(t.getMessage());
t.printStackTrace();
} finally {
client.close();
}
}
}
The Spark Streaming server class listens on port 41416:
public class SparkStreamingToySample {
public static void main(String[] args) throws Exception {
SparkConf sparkConf = new SparkConf().setMaster("local[2]")
.setAppName("SparkStreamingToySample");
JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, Durations.seconds(30));
JavaReceiverInputDStream<SparkFlumeEvent> lines = FlumeUtils
.createStream(ssc, "localhost", 41416);
lines.map(sfe -> new String(sfe.event().getBody().array(), "UTF-8"))
.foreachRDD((data,time)->
System.out.println("***" + new Date(time.milliseconds()) + "=" + data.collect().toString()));
ssc.start();
ssc.awaitTermination();
}
}
2) Using the Flume client + a Flume server in between + Spark Streaming (as a Flume sink) in the network topology.
For this option the code is the same, but now Spark Streaming must specify the fully qualified DNS hostname instead of localhost to start the Spark Streaming server on the same port 41416 (if you're running this locally for testing). The Flume client will connect to the Flume server on port 41415. The tricky part now is how to define your Flume topology: you need to specify both a source and a sink for this to work.
See the Flume conf below:
agent1.channels.ch1.type = memory
agent1.sources.avroSource1.channels = ch1
agent1.sources.avroSource1.type = avro
agent1.sources.avroSource1.bind = 0.0.0.0
agent1.sources.avroSource1.port = 41415
agent1.sinks.avroSink.channel = ch1
agent1.sinks.avroSink.type = avro
agent1.sinks.avroSink.hostname = <full dns qualified hostname>
agent1.sinks.avroSink.port = 41416
agent1.channels = ch1
agent1.sources = avroSource1
agent1.sinks = avroSink
You should get the same results with both solutions. But returning to your question of whether Flume is really needed to feed Spark Streaming from a JSON stream: it depends. Flume supports interceptors, so in this case it could be used to cleanse or filter invalid data for your Spark project, but since you're adding an extra component to the topology it may impact performance and require more resources (CPU/memory) than without Flume.
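If you do go the interceptor route, it is attached on the source in the agent configuration. A rough sketch using Flume's built-in regex_filter interceptor (the pattern is only a placeholder; you'd match whatever marks a valid event for you):
agent1.sources.avroSource1.interceptors = i1
agent1.sources.avroSource1.interceptors.i1.type = regex_filter
# keep only events whose body matches the placeholder pattern below
agent1.sources.avroSource1.interceptors.i1.regex = .*value1.*
agent1.sources.avroSource1.interceptors.i1.excludeEvents = false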
I am using a Workbox service worker to cache images and API responses by creating a custom cache for each. I can see that the routes match, but the responses are not stored in the cache, and the service worker then requests each of the resources from the network due to a cache miss.
I am using workbox-webpack-plugin for the service worker and writing the custom routing and caching strategies in a separate file, which is then passed in the plugin configuration.
On the same note, my CSS and JS files are stored and served fine.
I have tried different caching strategies, and a workaround without the webpack plugin, but none of them seem to work.
//Cache JS files
workbox.routing.registerRoute(
new RegExp('.*\.js'),
workbox.strategies.cacheFirst()
);
//Cache API response
workbox.routing.registerRoute(
new RegExp('\/api\/(xyz|abc|def)'),
workbox.strategies.staleWhileRevalidate({
cacheName: 'apiCache',
plugins : [
new workbox.expiration.Plugin({
maxEntries: 100,
maxAgeSeconds: 30 * 60 // 30 Minutes
})
]
})
);
//cache images
workbox.routing.registerRoute(
new RegExp('(png|gif|jpg|jpeg|svg)'),
workbox.strategies.cacheFirst({
cacheName: 'images',
plugins: [
new workbox.expiration.Plugin({
maxEntries: 60,
maxAgeSeconds: 30 * 24 * 60 * 60, // 30 Days
})
]
})
);
This is the webpack config:
new workboxPlugin.InjectManifest({
swSrc: 'customWorkbox.js',
swDest: 'sw.js'
})
Per the Workbox docs, the regex for an external API needs to match from the start of the URL:
Instead of matching against any part of the URL, the regular expression must match from the beginning of the URL in order to trigger a route when there's a cross-origin request.
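So for a cross-origin API the pattern has to include the origin. A rough sketch, assuming a hypothetical api.example.com host (adjust to your real origin):
// Matches from the start of the URL, including the (hypothetical) origin
workbox.routing.registerRoute(
  new RegExp('^https://api\\.example\\.com/api/(xyz|abc|def)'),
  workbox.strategies.staleWhileRevalidate({
    cacheName: 'apiCache'
  })
);
The same applies to the image route if the images are served from a different origin such as a CDN.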
var admin = require("firebase-admin");
var serviceAccount = require(__dirname+"/myserviceaccount.json");
admin.initializeApp({
credential: admin.credential.cert(serviceAccount),
databaseURL: "https://myproject.firebaseio.com"
});
var db = admin.database();
db.ref('myref').on("child_changed", function(snapshot) {
...
});
package.json
{
"name": "listener",
"version": "0.0.1",
"dependencies": {
"firebase-admin": "^5.2.1"
}
}
It works fine until 1-2 hours later, and there is no error log. Can anyone solve this problem?
The code you shared doesn't start any Cloud Functions as far as I can see. I'm surprised that it deploys at all, but it definitely won't start a reliable listener in Cloud Functions for Firebase.
To write code that correctly functions in the Cloud Functions environment, be sure to follow the instructions here: https://firebase.google.com/docs/functions/get-started.
Specifically: the correct syntax to set up code in Cloud Functions that is triggered by updates to a database path is:
exports.listenToMyRef = functions.database.ref('/myref/{pushId}')
.onUpdate(event => {
// Log the current value that was written.
console.log(event.data.val());
return true;
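Note that on firebase-functions 1.0 and later the database onUpdate handler receives a Change object and a context instead of a single event, so the equivalent would be roughly:
exports.listenToMyRef = functions.database.ref('/myref/{pushId}')
    .onUpdate((change, context) => {
      // Log the value after the update
      console.log(change.after.val());
      return null;
    });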
});