I am trying to monitor the bandwidth used by a specific app with Snapdragon Profiler, but I am not getting any numbers; all I get are zero values.
I have tried several versions of Snapdragon Profiler and several different Snapdragon devices, and I still get no numbers when monitoring the app-specific bandwidth or even the system bandwidth. All I get is a flat graph and zero values when I export the graph to a CSV file. I searched the SDP forum and couldn't find anything. Please help.
We have been using the PageSpeed Insights API to track Lighthouse data for some pages. Running a Lighthouse audit in Chrome with the same throttling settings consistently reports different numbers, and it is not clear why. Can someone help explain this?
For example:
Using the PageSpeed Insights API, we are tracking the Lighthouse Total Blocking Time. We get this data by sending a request to https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=[MY_URL]&key=[MY_API_KEY]&strategy=desktop&category=performance
The response object contains:
{
  "lighthouseResult": {
    "audits": {
      "total-blocking-time": { ... }
    }
  }
}
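For reference, building the request URL and pulling the metric out of the parsed response can be sketched like this in Python. The endpoint is the real PSI v5 URL; the `psi_url`/`extract_tbt_ms` helper names and the sample response values are made up for illustration:

```python
from urllib.parse import urlencode

# Real PSI v5 endpoint; the page URL and API key below are placeholders.
PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_url(page_url, api_key, strategy="desktop"):
    """Build the request URL (note the correctly spelled 'category' parameter)."""
    params = {"url": page_url, "key": api_key,
              "strategy": strategy, "category": "performance"}
    return PSI_ENDPOINT + "?" + urlencode(params)

def extract_tbt_ms(response):
    """Pull Total Blocking Time (in ms) out of a parsed PSI JSON response."""
    return response["lighthouseResult"]["audits"]["total-blocking-time"]["numericValue"]

# Hypothetical response fragment, for illustration only:
sample = {"lighthouseResult": {"audits": {"total-blocking-time": {"numericValue": 873.5}}}}
print(extract_tbt_ms(sample))  # 873.5
```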
From this response, we regularly see a Total Blocking Time between 800 and 1000 ms. This lines up with the Total Blocking Time we see when auditing the site on pagespeed.web.dev - see this screenshot for an example.
When I run Lighthouse via Chrome DevTools or via the Lighthouse CLI, our Total Blocking Time is consistently reported between 200 and 300 ms. Our other metrics are consistently better as well - see this screenshot.
Lighthouse in Chrome DevTools shows that it is using the same throttling settings as PageSpeed Insights - see these throttle settings from Chrome.
Why are these values consistently so far apart between two sources that claim to be reporting with the same settings?
I have tried running Lighthouse audits repeatedly, both via Chrome DevTools and the PageSpeed Insights API. The lab data in PageSpeed Insights consistently does not match what we see in Chrome DevTools.
Lighthouse applies a 4x CPU slowdown in both PSI and DevTools.
This means that if you're on a very fast developer machine, the result of the 4x slowdown won't be the same as for someone running on a slower machine.
If you hover over the Device icon you get some more details:
Here in DevTools you can see my machine's benchmark index is 2635. With a 4x slowdown, that gives a ~659 device:
On PageSpeed Insights, however, the benchmark index was 1298, so ~325 after the slowdown is applied:
This is half the speed of the DevTools run, so it's no surprise that PSI reports worse numbers, particularly for CPU-dependent metrics like TBT.
Additionally, running it locally depends on your local setup (which Chrome extensions you're running, what else the machine is doing at the time of the test, etc.).
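The arithmetic behind those numbers can be shown in a few lines (the 2635 and 1298 benchmark indexes are the values from the screenshots above; the `effective_score` helper name is mine):

```python
def effective_score(benchmark_index, slowdown=4.0):
    """Approximate post-throttle CPU score: the baseline benchmark
    index divided by Lighthouse's CPU slowdown multiplier."""
    return benchmark_index / slowdown

devtools = effective_score(2635)  # 658.75, i.e. the ~659 device above
psi = effective_score(1298)       # 324.5, i.e. ~325
print(devtools / psi)             # ~2.03: the DevTools run is about twice as fast
```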
Throttling is complicated (see the Lighthouse docs on it here), and at present there is no way to change these settings in either PSI or DevTools Lighthouse (the CLI offers more customisation options).
However, performance is not a single number, and your real users will be on varying devices, so neither PSI nor local DevTools is the "right" answer. What's more important is to compare like for like (i.e. baseline in DevTools, then re-run with your changes in DevTools) rather than striving for an exact replication of another tool.
In our app we have a CPU-intensive algorithm whose performance has been difficult to assess from our production logs. Recently we added thermal throttling monitoring (normal, serious, and critical) and have been able to correlate some performance issues with users' devices hitting serious and critical thermal states.
While there are various ways to add artificial CPU load with sleeps and waits, it's not obvious how to fit this into our algorithm, and the scope of that investigation seems too large.
Is there a way to take a physical iOS device, connect it to Xcode/the debugger, and trigger a throttling event so that the CPU on the device actually responds and throttles our app's performance? Then we would have a much more exact local reproduction.
I'm developing a UWP app on a Raspberry Pi 3 with Windows IoT Core. After I deploy my app and use it for a couple of days, the OS crashes with "Your PC ran into a problem and needs to restart". It restarts a couple of times, but the same error appears on every boot.
I tried removing the SD card (Class 10, 64 GB), formatting it, and reinstalling everything. At first it was okay, but after some time the same error appeared.
I tried different OS builds, and that didn't work.
I tried an industrial power supply (5V/3A), and that didn't work either.
My SD card is not one of the recommended ones, but do I really have to get one of the recommended SD cards to run Windows IoT Core properly?
"Your PC ran into a problem and needs to restart" is a typical blue screen message seen on Windows systems from the last few years - laptops and desktops with far larger hard drives and no SD card. The error is not associated with a RAM or disk space shortage (operating systems running in graphical mode usually monitor and actively warn about either). In your case, it is showing at startup, when not much is running (taking up RAM), and you can check the amount of space used on the card with the PC.
The key stats for SD cards are size (you have plenty) and speed (clearly enough or you would have trouble installing/running anything after starting the Pi). The cause is something else, and finding out what will require getting a more detailed error message from Windows - "a problem" could mean anything. In my experience, blue screen errors have mostly involved having a wrong driver installed, sometimes a bad Windows update - but IoT Core has its own alternatives, like "bad system configuration". Look for the underscored string (e.g., BAD_SYSTEM_CONFIG_INFO) at the end of your blue screen message, as that is the first hint.
Unfortunately, most Windows BSoD documentation is for traditional PCs, so I cannot recommend specific troubleshooting tools and be sure that they will run on the Pi.
You can use the Windows Debugger (WinDbg) to debug the kernel and drivers on Windows IoT Core; it is a very powerful debugger that most Windows developers are familiar with. You can also refer to this MSDN topic, which shows how to create a dump file when the app crashes. If possible, please share your code so that we can reproduce the issue.
I have successfully installed ThingsBoard on Ubuntu 18.04. In my application I want to send large data packets from a few (<20) devices via MQTT. I also want to display the incoming data packets in real time.
For testing purposes I am now experimenting with the chart dashboard of ThingsBoard. Unfortunately, I am not able to set the plotting interval on the dashboard to less than one second.
This is the current situation; I am trying to increase the speed of the plot:
Dashboard GIF
Are there any other settings that would meet my needs?
Thank you very much.
Change the "Data aggregation function" setting to "None" and the chart will start displaying real-time, raw data.
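If the devices push via ThingsBoard's MQTT device API (topic v1/devices/me/telemetry), sub-second samples can also carry an explicit millisecond ts so each raw point keeps its exact position on the chart. A minimal payload-building sketch (not a full MQTT client; the key names and values are illustrative):

```python
import json
import time

TELEMETRY_TOPIC = "v1/devices/me/telemetry"  # ThingsBoard MQTT device API topic

def telemetry_payload(values, ts_ms=None):
    """Serialize one telemetry sample with an explicit millisecond timestamp,
    so sub-second samples keep their exact ordering when aggregation is off."""
    if ts_ms is None:
        ts_ms = int(time.time() * 1000)
    return json.dumps({"ts": ts_ms, "values": values})

# Two samples 100 ms apart; with aggregation set to "None",
# both appear as separate points on the dashboard chart.
p1 = telemetry_payload({"temperature": 21.3}, ts_ms=1700000000000)
p2 = telemetry_payload({"temperature": 21.4}, ts_ms=1700000000100)
print(p1)
```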
We're planning to evaluate and eventually purchase perfino. I went through the docs quickly and cannot find the system requirements for installation. I also cannot find its compatibility with JBoss 7.1. Can you provide details, please?
There are no hard system requirements for disk space; it depends on the number of business transactions you're recording. All data is consolidated, so the database reaches a maximum size after a while, but it's not possible to say in advance what that size will be. Consolidation times can be configured in the general settings.
There are also no hard system requirements for CPU and physical memory. A low-end machine will have no problems monitoring 100 JVMs, but the exact details again depend on the number of monitored business transactions.
JBoss 7.1 is supported. "Supported" means that web service and EJB calls can be tracked between JVMs, otherwise all application servers work with perfino.
I haven't found any official system requirements, but this is what we figured out experimentally.
We collect about 10,000 transactions per minute from 8 JVMs. We have many distinct, long SQL queries. We use an AWS machine with 2 vCPUs and 8 GB RAM.
When the Perfino GUI is not being used, the CPU load is low. However, for the GUI to work properly, we had to raise the heap limit in perfino_service.vmoptions to -Xmx6000m. Before that, we experienced multiple OutOfMemoryErrors in Perfino when filtering in the transactions view. After changing the memory settings, the GUI runs fine.
This means you need a machine with about 8 GB RAM. I guess this depends on the number of distinct transactions you collect; our limit is set high, at 30,000.
After 6 weeks of usage, there's 7GB of files in the perfino directory. Perfino can clear old recordings after a configurable time.