In Docker Hub, last pulled doesn't correspond with incrementing pull count - dockerhub

I am trying to get an accurate count of how many times my Docker image has been pulled. I frequently see the last-pulled field resetting to something like "2 hours ago", but this doesn't correspond to an increment in the pull count. Is this feature not working, or am I misunderstanding something?
image for reference: https://hub.docker.com/repository/docker/balzack/databag/general
I've observed this for a while now, so I don't think it's an issue of the two fields being updated on different schedules. I usually see the last-pulled time reset around 10 times before the count increments once.
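One way to watch the raw counter directly is Docker Hub's public v2 API, which exposes a `pull_count` field per repository. The endpoint shape below is an assumption based on the public `hub.docker.com` API and should be verified before relying on it; polling it periodically makes it easy to see whether the counter really lags behind the last-pulled timestamp.

```python
import json
import urllib.request

def fetch_repo_stats(namespace, repo):
    """Fetch the repository JSON from Docker Hub's v2 API (assumed endpoint)."""
    url = "https://hub.docker.com/v2/repositories/%s/%s/" % (namespace, repo)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def pull_count(stats):
    """Extract the pull counter from the repository JSON, defaulting to 0."""
    return stats.get("pull_count", 0)

# e.g. pull_count(fetch_repo_stats("balzack", "databag"))
```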

Related

How to investigate a sudden increase in Redis items

Until recently, the item count and freeable memory of our Redis instance stayed flat.
But a few weeks ago, both metrics started changing linearly.
We assume that a roughly constant number of items with no expiry set is stacking up every day.
We want to identify which keys are accumulating.
So, are there any ways to get useful information such as the creation date or age of an item?
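Redis does not store a creation timestamp per key, but `OBJECT IDLETIME` (seconds since last access) and `TTL` (-1 meaning no expiry) are the closest proxies. A minimal sketch of the triage: walk the keyspace and group the never-expiring keys by prefix to see which family is piling up. In a real run you would feed this from redis-py (`r.scan_iter(count=1000)`, `r.ttl(k)`, `r.object("idletime", k)`); the grouping logic itself is shown as a pure function.

```python
from collections import Counter

def summarize_persistent_keys(keys_with_ttl, prefix_sep=":"):
    """keys_with_ttl: iterable of (key, ttl) pairs, where ttl == -1 means
    the key has no expiry set. Returns a Counter of key prefixes for the
    keys that never expire -- the likely culprits for linear growth."""
    counts = Counter()
    for key, ttl in keys_with_ttl:
        if ttl == -1:  # no expiry set on this key
            prefix = key.split(prefix_sep, 1)[0]
            counts[prefix] += 1
    return counts

# Example with made-up keys:
sample = [("session:abc", 3600), ("job:1", -1), ("job:2", -1), ("cache:x", 120)]
print(summarize_persistent_keys(sample))  # Counter({'job': 2})
```

Sorting the result by count points you at the key family to investigate; checking `OBJECT IDLETIME` on a few of those keys then tells you whether they are old, untouched leftovers.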

Visibility of existing autofresh connected sheets

My organisation is running out of its allocated slots on Monday mornings when running GCP queries, and we suspect this is partly due to the number of auto-refresh Connected Sheets that have been set up and forgotten. We may need to ask Google for a list of all the auto-refresh queries and their creators, but before we get to that, is there a way for users to check for themselves first? (Personally, I set up a few queries with auto-refresh last year, but now I can't remember what they are, let alone how to cancel them.)
Thanks in advance.

Remove neo4j inactive labels from database

What is the method for removing inactive, unwanted node labels in a Neo4j database (community edition version 2.2.2)?
I've seen this question in the past but for whatever reason it gets many interpretations, such as clearing browser cache, etc.
I am referring here to labels actually contained in the database, such that the REST command
GET /db/data/labels
will produce them in its output. The labels have been removed from all nodes, and there are no active constraints attached to them.
I am aware this question has been asked before, and that there is a cumbersome way of solving it: dump and reload the database. The dump output doesn't even contain scattered commit statements, so it needs to be edited before it can be executed. Of course this takes forever with big databases. There has to be a better way, or at least a feature in the queue of requirements waiting to be implemented. Can someone clarify?
If you delete the last node with a certain label - as you've observed - the label itself does not get deleted. As of today there is no way to delete a label.
However, you can copy over the datastore in offline mode, using e.g. Michael's store copy tool, to achieve this.
The new store is then aware of only those labels which actually are used.
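Before resorting to a store copy, it can help to confirm which of the reported labels are actually orphaned. A sketch against Neo4j 2.2's REST API: list labels via `GET /db/data/labels`, then count nodes per label through the transactional Cypher endpoint. The endpoint paths follow the 2.x REST API; the server address and the inline label formatting (fine for a trusted sketch, not for untrusted input) are assumptions.

```python
import json
import urllib.request

NEO4J = "http://localhost:7474"  # assumed default server address

def count_nodes(label):
    """Count nodes carrying `label` via the transactional Cypher endpoint."""
    payload = json.dumps({"statements": [
        {"statement": "MATCH (n:`%s`) RETURN count(n)" % label}
    ]}).encode()
    req = urllib.request.Request(
        NEO4J + "/db/data/transaction/commit", data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["results"][0]["data"][0]["row"][0]

def orphan_labels(labels, count_fn=count_nodes):
    """Return the labels (e.g. from GET /db/data/labels) with zero nodes."""
    return [lbl for lbl in labels if count_fn(lbl) == 0]
```

The orphan list is exactly the set of labels the store-copy approach would drop, so it doubles as a before/after check.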

Quickest way to load total number of points within a set of iterations?

I am creating an app which graphs the total number of accepted points on an iteration-by-iteration basis, compared to all points accepted within that iteration (regardless of project). Currently, I am using a WsapiDataStore call with filters to pull only from the chosen iterations. However, this requires pulling all user stories within the iteration and then summing their Plan Estimate fields. It works, but it takes quite a long time (about 20-30 seconds) to pull data that I would assume could be retrieved in a single call. Am I correct in my thinking, or is this really the easiest way?
Rally's API does not support server-side aggregation. Unfortunately, pulling that data into local memory is the only way to do calculations like this.
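A sketch of the client-side aggregation described above: fetch the stories for an iteration with a narrow fetch list, then sum PlanEstimate locally. The endpoint and parameter names follow WSAPI v2.0 as an assumption to verify against your Rally subscription; requesting only `fetch=PlanEstimate` keeps the payload small, which is usually where the 20-30 seconds goes.

```python
def sum_plan_estimates(results):
    """Sum PlanEstimate over WSAPI result dicts; unestimated stories count 0."""
    return sum(r.get("PlanEstimate") or 0 for r in results)

# The fetch itself would look roughly like (not executed here):
#   GET https://rally1.rallydev.com/slm/webservice/v2.0/hierarchicalrequirement
#       ?query=(Iteration.Name = "Sprint 12")&fetch=PlanEstimate&pagesize=200
# then, per page of the paged response:
#   total += sum_plan_estimates(page["QueryResult"]["Results"])
```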

Tracking impressions/visits per web page

I have a site with several pages for each company and I want to show how their page is performing in terms of number of people coming to this profile.
We have already made sure that bots are excluded.
Currently, we are recording each hit in a DB, with either an insert (for the first request of the day to a profile) or an update (for subsequent requests that day). But, given that requests have grown from a few thousand per day to tens of thousands per day, these inserts/updates are causing major performance issues.
Assuming no JS solution, what will be the best way to handle this?
I am using Ruby on Rails, MySQL, Memcache, Apache, HaProxy for running overall show.
Any help will be much appreciated. Thanks!
http://www.scribd.com/doc/49575/Scaling-Rails-Presentation-From-Scribd-Launch
You should start reading from slide 17.
I don't think performance is the problem, given that it was possible to build a solution like this for a website as big as Scribd.
Here are four ways to address this, from easy estimates to complex and accurate:
1. Track only a percentage (10% or 1%) of users, then multiply to estimate the total count.
2. After the first 50 counts for a given page, update the count only 1/13th of the time, incrementing by 13. This helps when a few pages generate most of the hits, while keeping small counts exact. (13 is used because it's hard to notice that the increment isn't 1.)
3. Keep exact counts in a cache layer such as memcache or local server memory, and flush them to disk once they reach 10 counts or have been in the cache for a certain amount of time.
4. Build a separate counting layer that 1) always has the current count available in memory, 2) persists the count to its own tables/database, and 3) exposes calls that update both places.
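The sampled-increment approach above (exact counts up to a threshold, then increments of 13 applied 1/13th of the time) can be sketched as follows. The expected value stays unbiased while the write rate drops roughly 13x on hot pages; the random source is injected so the behaviour is testable. The threshold and step values are the ones suggested above, not anything standardized.

```python
import random

THRESHOLD = 50  # keep exact counts below this
STEP = 13       # above it, write 1-in-STEP hits, each worth STEP

def record_hit(counts, page, rng=random.random):
    """Record one hit for `page` in the counts dict; returns the new count.
    Below THRESHOLD every hit writes; above it, only a 1/STEP sample does."""
    current = counts.get(page, 0)
    if current < THRESHOLD:
        counts[page] = current + 1          # exact while the count is small
    elif rng() < 1.0 / STEP:
        counts[page] = current + STEP       # sampled; unbiased in expectation
    return counts[page]
```

With a write-behind store (approach 3 or 4 above), `counts` would live in memcache or a dedicated counting service rather than a plain dict, but the sampling logic is the same.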