How can I determine the amount of memory the value for a given key takes up in Redis? (Lua)

From what I understand after looking at the Redis docs, you can (essentially) determine the memory used by a string using STRLEN, but what if I want to determine the amount of memory used by a list or a hash?
Ideally I'd like to be able to do this without using a plugin or third-party software. Perhaps I need to EVAL a Lua script?

At the moment (v3.2.1) Redis doesn't provide this kind of introspective functionality, and I'm afraid that a Lua script would be of little use in this case.
However, there exists a pull request by my colleague that adds this - https://github.com/antirez/redis/pull/3223 - and I expect it'll be merged eventually.
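In the meantime, you can get a rough client-side approximation: DEBUG OBJECT reports a serializedlength field, which is the key's serialized size in bytes rather than its true RAM footprint, and functionality along these lines later shipped as the MEMORY USAGE command in Redis 4.0. A minimal sketch with redis-py (the key name is illustrative):

```python
import redis

r = redis.Redis()

# Redis 3.2: DEBUG OBJECT exposes serializedlength, the key's serialized
# size in bytes -- an approximation, not the actual in-memory size.
print(r.debug_object("mylist")["serializedlength"])

# Redis 4.0+: MEMORY USAGE reports actual bytes of RAM, overheads included.
print(r.memory_usage("mylist"))
```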

I don't know about a Lua script, but here is a small .NET application that can help you determine the size used by each key in your Redis database: https://github.com/abhiyx/RedisSizeCalculator

Should I worry about my API Keys being extracted from the iOS app

I need to make requests to the Google Books API from my app, and those requests include the API key in the URL.
I thought about just storing it as a file-private variable in my app, but that's a big problem because it would then be uploaded to GitHub.
Then I thought about environment variables, but I heard they aren't included if the app isn't run by Xcode.
I'm aware that this way the key could be extracted, but should I worry?
Can't users anyway just use Wireshark or something similar and see the key in the URL?
And I can restrict the key so it is only valid when called from my Bundle ID.
What do you think would be the best option for making the calls? I mean other than that, the app barely gets 10 downloads a week so this can't be too big of an issue, right?
Whether it is an issue entirely depends on your use case and threat model. Consider your API key public if you include it in, or send it from, your app in any way, and think about what people could do with it. What level of harm could they cause you? This gives you the impact. Would they be motivated, for example is there a financial benefit for them somehow? This estimates the likelihood of it happening. Together, impact × likelihood = risk, which you can either accept (do nothing about it), mitigate (decrease the impact or likelihood), eliminate (fix it) or transfer (e.g. buy some kind of insurance).
As for mitigations, can you limit the API key's scope, so that only necessary things can be done with it? Can you set up rate limiting? Monitoring, alerting? I'm not familiar with the Books API, but these could be mitigating controls.
As for eliminating the risk, you should not put the API key in the app. You could set up your own server, which would hold the API key and pretty much forward requests to the Books API, augmented with the API key. Note though that you would still need some kind of authentication and access control on your server, otherwise an attacker can just use it as an oracle to do anything in the actual Books API, the same as if they had the key, only in this case they don't need it. This role could also be fulfilled by some kind of API gateway, which can also add data to API queries.
Eliminating the risk is obviously more expensive. Defenses should be proportionate to risk, so you have to decide whether it is worth it.
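To make the "own server" option concrete, here is a minimal sketch of such a forwarding server in Python with Flask. The endpoint path, header name, and environment variables are illustrative, and the shared-token check is only a stand-in for real authentication:

```python
import os

import requests
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

GOOGLE_BOOKS_URL = "https://www.googleapis.com/books/v1/volumes"
API_KEY = os.environ["BOOKS_API_KEY"]       # the key lives only on the server
APP_TOKEN = os.environ["APP_SHARED_TOKEN"]  # naive client auth, illustrative

@app.route("/books/search")
def search():
    # Reject callers without the shared token so the proxy can't be used
    # as an open oracle; replace with real authentication in production.
    if request.headers.get("X-App-Token") != APP_TOKEN:
        abort(401)
    upstream = requests.get(
        GOOGLE_BOOKS_URL,
        params={"q": request.args.get("q", ""), "key": API_KEY},
        timeout=10,
    )
    return jsonify(upstream.json()), upstream.status_code
```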

How can I prove that I have added a folder to IPFS no later than a certain date?

I would like to be able to prove that a certain set of files was available through IPFS at a certain date.
How can I achieve that without resorting to centralized solutions or third party authorities?
Thanks!
You could use a solution like OpenTimestamps to create a timestamp of your document using the Bitcoin network.
You can create an Ethereum smart contract which takes in an IPFS hash and ties it with the current block timestamp.
Anyone will then be able to look up an IPFS hash and see whether it's in the smart contract, the timestamp it was recorded at, and the public address that submitted the transaction.
If you don't want to use Ethereum you can use any reliable blockchain such as Bitcoin.
The contract can read the current block timestamp (block.timestamp) when the hash is submitted and store it; anyone can later compare that stored value against a claimed date for the folder.
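As a hedged sketch of how a client might talk to such a registry contract with web3.py: the RPC endpoint, contract address, and ABI below are hypothetical placeholders, assuming a contract that stores block.timestamp in a mapping keyed by the IPFS hash.

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.example-rpc.invalid"))  # placeholder node

# ABI for a hypothetical registry with register(hash) and timestampOf(hash).
REGISTRY_ABI = [
    {"name": "register", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "ipfsHash", "type": "string"}], "outputs": []},
    {"name": "timestampOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "ipfsHash", "type": "string"}],
     "outputs": [{"name": "", "type": "uint256"}]},
]

registry = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder address
    abi=REGISTRY_ABI,
)

# Anyone can later look up when a given hash was registered:
ts = registry.functions.timestampOf("QmYourFolderHashHere").call()
print("registered at unix time:", ts)
```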
Note: the block time can be manipulated by miners; see the following links. If the stakes are very high and there is an incentive for people to cheat the contract, then I suggest exploring other methods; otherwise it doesn't seem too risky.
Can a contract safely rely on block.timestamp? - https://ethereum.stackexchange.com/questions/413/can-a-contract-safely-rely-on-block-timestamp
Need help understanding block.timestamp and how time works in the blockchain - https://forum.ethereum.org/discussion/14634/need-help-understanding-block-timestamp-and-how-time-works-in-the-blockchain

DynamoDB auto incremented ID & server time (iOS SDK)

Is there an option in DynamoDB to store an auto-incremented ID as the primary key in tables? I also need to store the server time in tables as "created at" fields (e.g., user created at). But I can't find any way to get the server time from DynamoDB or any other AWS service.
Can you guys help me with,
Working with auto-incremented IDs in DynamoDB tables
Storing server time in tables for "created at"-like fields.
Thanks.
Actually, there are very few features in DynamoDB, and this is precisely its main strength: simplicity.
There is no way to automatically generate IDs or UUIDs.
There is no way to auto-generate a date.
For the "date" problem, it should be easy to generate it on the client side. May I suggest you to use the ISO 8601 date format ? It's both programmer and computer friendly.
Most of the time, there is a better way than using automatic IDs for Items. This is often a bad habit taken from the SQL or MongoDB world. For instance, an e-mail or a login will make a perfect ID for a user. But I know there are specific cases where IDs might be useful.
In these cases, you need to build your own system. In this SO answer and this article from the DynamoDB-Mapper documentation, I explain how to do it. I hope it helps.
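As a rough illustration of the "build your own system" approach, here is a hedged sketch of an atomic counter with boto3; the "counters" table and its key layout are assumptions:

```python
import boto3

counters = boto3.resource("dynamodb").Table("counters")  # hypothetical table

def next_id(counter_name):
    # ADD is applied atomically on the server, so concurrent callers each
    # receive a distinct value; the trade-off is one write per new ID.
    response = counters.update_item(
        Key={"name": counter_name},
        UpdateExpression="ADD #v :inc",
        ExpressionAttributeNames={"#v": "value"},
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    return int(response["Attributes"]["value"])
```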
Rather than working with auto-incremented IDs, consider working with GUIDs. You get higher theoretical throughput and better failure handling, and the only thing you lose is the natural time-order, which is better handled by dates.
Higher throughput because you don't need to ask Dynamo to generate the next available IDs (which would require some resource somewhere obtaining a lock, getting some numbers, and making sure nothing else gets those numbers). Better failure handling comes when you lose your connection to Dynamo (Dynamo goes down, or you are bursty and your application is doing more work than currently provisioned throughput). A write-only application can continue "working" and generating data complete with IDs, queueing it up to be written to Dynamo, and never worry about ID collisions.
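A hedged sketch of that approach with boto3 (the "users" table and attribute names are illustrative): the client mints the key itself, so writes can be queued offline without risking ID collisions.

```python
import uuid

import boto3

table = boto3.resource("dynamodb").Table("users")  # hypothetical table

table.put_item(Item={
    "id": str(uuid.uuid4()),  # random GUID primary key, no coordination needed
    "email": "user@example.com",
})
```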
I've created a small web service just for this purpose. See this blog post, which explains how I'm using stateful.co with DynamoDB in order to simulate auto-increment functionality: http://www.yegor256.com/2014/05/18/cloud-autoincrement-counters.html
Basically, you register an atomic counter at stateful.co and increment it every time you need a new value, through RESTful API.

Updating an existing Memcached record

I have an application that needs to perform multiple network queries each one of those returns 100 records.
I'd like to keep all the results (several thousand or so) together in a single Memcached record named according to the user's request.
Is there a way to append data to a Memcached record or do I need to read and write it back and forth and combine the old results with the new ones by the means of my application?
Thanks!
P.S. I'm using Rails 3.2
Memcached does have raw append/prepend commands at the protocol level, but they operate on raw bytes and don't play well with Rails' marshaled cache values, so in practice there's no way to append to a record through Rails.cache. You'd have to read it in and out of storage every time.
Redis does allow this sort of operation, however; as rubish points out, it has a native list type that allows you to push new data onto it. Check out the Redis list documentation for information on how to do that.
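For example, with the redis-py client (key name illustrative), RPUSH appends each new batch to a native list and LRANGE reads back everything accumulated so far:

```python
import redis

r = redis.Redis()

# Append each batch of results as it arrives; RPUSH is atomic.
batch = ["record1", "record2", "record3"]  # stand-in for real records
r.rpush("results:user:123", *batch)

# Read all accumulated records in one go.
all_records = r.lrange("results:user:123", 0, -1)
```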
You can write a class that emulates a list in memcached (which is actually what I did)... appending to a record isn't an atomic operation, so it'll generate errors that accumulate over time (at least in memcached). Besides, it'll be very slow.
As pointed out, Redis has native lists, but a list can be emulated in any NoSQL / key-value storage solution.

How do you track page views on a view

Is there a plugin or a gem for this that I can use? I was thinking about just writing it to a table when a view is called in the controller. Is this the best way? I see Stack Overflow has this functionality; how do they do it?
Google Analytics - Let Google or some other third-party analytics provider handle it for you for free. I don't think you want to do file writes on every page load - potentially costly. Another option is to store the information in memory and write to the database periodically instead of on every page load.
[EDIT] This is an interesting question. I asked for help on this issue of what's more efficient - DB writes vs. file writes - and there's some good feedback there too.
If you just wanted to get something in there easily, you could use a real-time analytics provider like W3 Counter.
It gives you real-time data (as opposed to Google Analytics) and is relatively simple to deploy (a few lines in your global template), but may not give you the granularity that you want. I guess it depends on whether you want this information programmatically to display/use in the app, or for statistical purposes.
Obviously, there are third party statistics services (Google Analytics, Mint, etc...), but if you must do it yourself then doing a write each time someone hits a page will seriously impact your DB.
I'd write individual hits to an intermediate file on the filesystem or memcached, then fire a task every 10 - 15 minutes that will parse that data and insert it into the database.
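A hedged sketch of that buffering idea in Python with pymemcache; the key scheme and the save_to_db writer are hypothetical:

```python
from pymemcache.client.base import Client

mc = Client(("localhost", 11211))

def record_hit(page_id):
    # Cheap per-request path: bump an in-memcached counter.
    key = f"hits:{page_id}"
    if mc.incr(key, 1) is None:    # incr fails if the key doesn't exist yet
        if not mc.add(key, 1):     # another process created it first...
            mc.incr(key, 1)        # ...so just increment the existing one

def flush_hits(page_ids):
    # Run from a periodic job (e.g. cron every 10-15 minutes): drain each
    # counter and persist the count to the database in one write.
    for page_id in page_ids:
        count = mc.get(f"hits:{page_id}")
        if count:
            mc.delete(f"hits:{page_id}")
            save_to_db(page_id, int(count))  # hypothetical DB writer
```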
