What is the difference between a Service Worker and Workbox? - service-worker

I'm a beginner with Service Workers. Today I saw this website: https://developers.google.com/web/tools/workbox
But I still have no idea what it is. Is this a library that helps me build a service worker more easily? Or is it another way to make a PWA without a service worker?
What would be a precise definition of "Workbox"?

It is a library for working with Service Workers. And not only Service Workers: it also covers other pieces of a Progressive Web App, such as manifest.json files.
The page you linked says:
Why Workbox?
Workbox is a library that bakes in a set of best practices and removes the boilerplate every developer writes when working with service workers.
Precaching
Runtime caching
Strategies
Request routing
Background sync
Helpful debugging
Greater flexibility and feature set than sw-precache and sw-toolbox
which is a good definition.
Workbox will either:
help you with library-provided utilities when you're writing a Service Worker by hand (manually)
take in a config and generate a Service Worker for you
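To make the first mode concrete, here is a minimal sketch of a hand-written service worker built on Workbox's modular packages (Workbox v5+ style imports; the routes and cache names are made up for illustration, and self.__WB_MANIFEST is the placeholder that Workbox's build tooling replaces with your precache manifest):

// sw.ts - a hand-written service worker leaning on Workbox utilities
import { precacheAndRoute } from 'workbox-precaching';
import { registerRoute } from 'workbox-routing';
import { CacheFirst, StaleWhileRevalidate } from 'workbox-strategies';

// Typing for the service-worker global, including Workbox's injected manifest
declare const self: ServiceWorkerGlobalScope &
  { __WB_MANIFEST: Array<{ url: string; revision: string | null }> };

// Precache the build-time asset manifest injected by workbox-build / workbox-cli
precacheAndRoute(self.__WB_MANIFEST);

// Runtime caching: serve images cache-first...
registerRoute(
  ({ request }) => request.destination === 'image',
  new CacheFirst({ cacheName: 'images' })
);

// ...and API responses stale-while-revalidate
registerRoute(
  ({ url }) => url.pathname.startsWith('/api/'),
  new StaleWhileRevalidate({ cacheName: 'api' })
);

The second mode is the inverse: you give workbox-build or workbox-cli a config describing what to precache, and it generates a complete service worker like this for you.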

Related

What Load Test tools are available that can consume AWS ALB logs from S3

Are there any recommended load test tools/services that are able to cycle through AWS Application Load Balancer logs stored in S3, preferably using the timestamps to provide piano-roll-type replay functionality?
aws-log-replay seems to be what you're looking for; it can replay requests with defined concurrency.
With regard to the more popular load testing tools, I can only think of Apache JMeter with the Access Log Sampler, which supports access log files from Tomcat, WebLogic, Resin, and SunOne out of the box; however, you can come up with your own implementation of the Generator class, or dynamically populate HTTP Request sampler fields using a JSR223 PreProcessor, as described in the Stop Making Assumptions! Learn How to Replay Your Production Traffic With JMeter guide.
Actually, I don't think you will be able to produce realistic load by replaying your access logs. It might work for something simple like static content; however, if your application involves authentication, sessions, complex workflows, etc., I'm afraid your "replay" attempt will get stuck at the login page.
So instead of trying to replay complex scenarios from the logs, I would suggest sticking with the load testing tool of your choice and building the scenarios from scratch. The access logs can still be used to identify workload distribution (like: X% of users normally do this, Y% do that) and anticipated concurrency (like: at time X we had Y online users).
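As a sketch of that log-mining step, here is a small TypeScript script that tallies request paths to estimate the workload distribution (it assumes the ALB log files have already been pulled down from S3 into a local ./alb-logs directory, a path made up for this example):

// tally-alb-paths.ts - estimate workload distribution from ALB access logs
import { readdirSync, readFileSync } from 'fs';
import { join } from 'path';

const counts = new Map<string, number>();
let total = 0;

for (const file of readdirSync('./alb-logs')) {
  for (const line of readFileSync(join('./alb-logs', file), 'utf8').split('\n')) {
    // The request appears as a quoted field, e.g. "GET https://shop.example.com:443/orders?id=1 HTTP/1.1"
    const m = line.match(/"(\w+) [^ ]*?\/\/[^/]+(\/[^ ?"]*)\S* HTTP/);
    if (!m) continue;
    const key = `${m[1]} ${m[2]}`; // e.g. "GET /orders"
    counts.set(key, (counts.get(key) ?? 0) + 1);
    total++;
  }
}

// Print the distribution, busiest endpoints first
const sorted = [...counts.entries()].sort((a, b) => b[1] - a[1]);
sorted.forEach(([key, n]) => console.log(`${((100 * n) / total).toFixed(1)}%  ${key}`));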

How to add data in Datadog to create custom dashboard?

I am new to Datadog APM. I have read a few tutorials, but I am unable to find out how to add data to Datadog to create a custom dashboard.
The first step will be to make sure you have the Datadog agent running, and that the APM component of it is running and ready to receive trace data from your applications (there is an option for this in your datadog.conf which must be set to true).
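If I remember the Agent v5 configuration correctly (do verify against the docs for your agent version), that means a line like this in datadog.conf:
apm_enabled: true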
Second, you'll want to install the appropriate library(ies) for the languages your applications are written in. You can find them all listed in your datadog account on this page: https://app.datadoghq.com/apm/docs
Third, once the trace libraries are installed, you'll want to add trace integrations for the tools you're interested in collecting APM data on; the instructions for those are found in each library's docs (e.g., Python, Ruby, and Go).
The integrations are a fairly quick way to get pretty granular spans showing where your applications have higher latency, errors, etc. If you'd like to go further from there, each library's docs also have instructions on writing your own custom tracing functions to expose more info on your custom applications. That's a little more work, but it's fairly straightforward; you'll probably want to add those bit by bit as you go.
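To give a flavour of that custom tracing, here is a sketch using the Node library, dd-trace (the service name and the processOrder function are made up for the example; the Python, Ruby, and Go libraries have close equivalents):

// app.ts - the tracer must be initialized before instrumented modules are imported
import tracer from 'dd-trace';
tracer.init({ service: 'order-service' }); // hypothetical service name

// Wrap a hot code path in a custom span so it shows up in the flame graphs
async function processOrder(orderId: string): Promise<void> {
  return tracer.trace('orders.process', { resource: orderId }, async () => {
    // ...actual business logic would run here, timed under the span...
  });
}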
Then you'd be all set, I think. You'll be tracing services and resources to get the latency, request count, and error count of your application requests, and you can drill down into the flame graphs to see exactly where requests spend the most time in your applications.
Looking back now, it seems like they've made some recent changes to the setup process that make it even easier to get the web framework and database integrations added if you're using Python. They've even got a command-line tool in their get-started section now.
Hope this helps! And reach out to their support team (support@datadoghq.com) if you run into issues along the way; they're always happy to lend a hand.

Does spring-cloud-dataflow provide support for scheduling applications defined as tasks?

I have been looking at using projects built with spring-cloud-task within spring-cloud-dataflow. Having looked at the example projects and the documentation, the indication seems to be that tasks are launched manually through the dashboard or the shell. Does spring-cloud-dataflow provide any way of scheduling task definitions so that they can run, for example, on a cron schedule? That is, can you create a spring-cloud-task app which itself has no knowledge of a schedule, but deploy it to the dataflow server and configure the scheduling there?
Among the posts and blogs I have looked at I noticed the following:
https://spring.io/blog/2016/01/27/introducing-spring-cloud-task
Some of the Q&A afterwards hints at this being a possibility, with the reference to triggers, but I think this was discussed before it was released.
Any advice would be greatly appreciated, many thanks.
There are a few ways you could launch Tasks in Spring Cloud Data Flow. The following options are available today.
Launch it using TriggerTask; with this you can choose to launch it either with a fixedDelay or via a cron expression - example here.
Launch it via an event in a streaming pipeline. Imagine a use-case where you want to create a "thumbnail" as and when there's a new image (event) in an S3 bucket or in a file-system directory; the "thumbnail" operation could be a task in this case - example here.
Lastly, in the upcoming releases, we will port over "scheduler" functionality from Spring XD to Spring Cloud Data Flow.
Yes, Spring Cloud Data Flow does provide a scheduling option. To enable it, you need to add the argument below when starting the server:
--spring.cloud.dataflow.features.schedules-enabled=true
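Once the server is up with that flag, a schedule can be attached to an existing task definition from the Data Flow shell. As best I recall the documented command (verify the syntax against your release), it looks like:
dataflow:>task schedule create --definitionName mytask --name my-schedule --expression '0 0 * * *'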

Rails turn feature on/off on the fly

I am a newbie to Rails. I have used feature flags when I was in the Java world. I found that there are a few gems in Rails (rollout and others) for doing this. But how do you turn a feature on/off on the fly in Rails?
In Java we can use an MBean to toggle features on the fly. Any ideas or pointers on how to do this? I don't want to do a server restart on my machines once the code is deployed.
Unless you have a way of communicating with all your processes at once, which is non-standard, you'll need some kind of centralized configuration system. Redis is a really fast key-value store which works well for this, but a database can also do the job if a few milliseconds per page load to figure out which features to enable isn't a big deal.
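The pattern itself is tiny. Here is a sketch of the Redis approach in TypeScript with the node redis client (the feature:<name> key convention is made up for the example; the Ruby redis gem reads almost identically):

// feature-flags.ts - centralized feature flags backed by Redis
import { createClient } from 'redis';

const redis = createClient({ url: 'redis://localhost:6379' });
await redis.connect();

// Checked on every request; sub-millisecond against a local Redis
export async function featureEnabled(name: string): Promise<boolean> {
  return (await redis.get(`feature:${name}`)) === 'on';
}

// Flip a feature from a console or admin task, with no server restart:
//   await redis.set('feature:new_checkout', 'on');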
If you're only deploying on a single server, you could also use a static YAML or JSON configuration file that's read before each request is processed. The overhead of this is almost immeasurable.

Erlang Design Advice regarding HTTP services

I'm new to Erlang, but I would like to get started with an application that feels like a good fit for the technology, given the concurrency requirements I have.
This picture highlights what I want to do.
http://imagebin.org/163917
In it, messages are pulled from a queue and routed to worker processes which have previously been set up as a result of a user submitting a form in a Django app. The setup requires a lookup against an additional, preexisting database (so I don't want to use ETS/DETS for this bit), which then talks to the message router and creates the relevant process.
My issue: given that in the future I may want to ask my Django app for all the workers that need to be set up and task them in the first place, what is the best way to communicate here? I favour HTTP/JSON and have read up what little I can find on Mochiweb and MochiJson, and I think they would do what I want. I was planning on having an OTP supervisor and application, so would it be sensible to have a separate mochiweb process which then passes Erlang messages to the router?
I have struggled a little with mochiweb, because all the tutorials talk about using a script to create a directory structure, which seems to make mochiweb central to the design. That isn't what I want here; I want a lightweight mochiweb process that does occasional work.
Please tear this apart, all comments welcome.
Cheers
Dave
mochiweb is awesome but I think what you actually want is webmachine. The complete documentation is available here and here. In a nutshell, webmachine is a toolkit for making REST applications, which I think is what you want. It uses mochiweb behind the scenes but hides all of the complex (and undocumented) details. When you create a webmachine project you'll get a complete OTP application and a default resource. From there you'll do something like the following:
Add your own resources (or modify + rename the default one).
Modify the dispatcher so your resources and paths make sense for your app.
Add code to create and monitor your worker processes - probably a gen_server and a supervisor. See this and related articles for ideas. Note you'll want to start both under the main supervisor provided to you when you created your project.
Modify your resources to communicate with your gen_server.
I didn't quite follow everything else you are asking - it may be easier to answer any follow-up questions in comments.
