Azure Redis Cache Batch Operations / Multiple Operations - azure-caching

Given a list of Keys, I want to pull out multiple values from Azure Redis Cache.
How do we perform multiple operations at the same time with Azure Redis Cache?
Our data is an int/ComplexObject pair, and it lives in SQL Server. We currently get the list by converting our List<int> of keys into an XElement object and passing that into a stored procedure - but our key set is quite small (3000 keys) - so the same data is being accessed again and again by multiple users.
It would be great if we could just cache the 3000 key/value pairs once and then access them with something like: cache.GetValues(List<int> keys)

There is nothing special about Azure Redis Cache here. You would want to use the transaction support built into Redis, as described at http://redis.io/topics/transactions
If you are using the StackExchange.Redis client, you can refer to this page: https://github.com/StackExchange/StackExchange.Redis/blob/master/Docs/Transactions.md
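For example, here is a minimal sketch of a transaction using StackExchange.Redis (the connection string and key names are placeholders, and in real code you would create and reuse a single ConnectionMultiplexer):
// requires: using StackExchange.Redis;
var muxer = ConnectionMultiplexer.Connect("your-cache.redis.cache.windows.net:6380,password=...,ssl=true");
IDatabase db = muxer.GetDatabase();
ITransaction tran = db.CreateTransaction();
tran.StringSetAsync("key1", "value1"); // queued locally, not yet sent
tran.StringSetAsync("key2", "value2"); // queued locally, not yet sent
bool committed = tran.Execute();       // sends MULTI ... EXEC as a single unit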

Look at the MGet (http://redis.io/commands/mget) and MSet (http://redis.io/commands/mset) functionality that Redis has. These are supported by the StackExchange.Redis client.
private static void MGet(CancellationToken cancellationToken)
{
    // Key/value pairs to store; in practice these would come from your SQL Server query.
    var pairs = new KeyValuePair<RedisKey, RedisValue>[] {
        new KeyValuePair<RedisKey, RedisValue>("key1", "value1"),
        new KeyValuePair<RedisKey, RedisValue>("key2", "value2"),
        new KeyValuePair<RedisKey, RedisValue>("key3", "value3"),
        new KeyValuePair<RedisKey, RedisValue>("key4", "value4"),
        new KeyValuePair<RedisKey, RedisValue>("key5", "value5"),
    };
    var keys = pairs.Select(p => p.Key).ToArray();

    // MSET: write all pairs in a single round trip.
    Connection.GetDatabase().StringSet(pairs);

    // MGET: read all values for the given keys in a single round trip.
    var values = Connection.GetDatabase().StringGet(keys);
}
You will want to keep in mind that getting or setting too many keys on a single command can lead to performance problems.
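For the scenario in the original question, here is a hedged sketch of a GetValues-style helper that chunks the 3000 keys into smaller MGET calls (the chunk size, the int-to-string key mapping, and the method name are assumptions for illustration; serialization of the ComplexObject values is left out):
// requires: using System.Collections.Generic; using System.Linq; using StackExchange.Redis;
private static Dictionary<int, RedisValue> GetValues(IDatabase db, List<int> ids)
{
    var result = new Dictionary<int, RedisValue>();
    const int chunkSize = 500; // arbitrary; tune for your value sizes
    for (int offset = 0; offset < ids.Count; offset += chunkSize)
    {
        var chunk = ids.Skip(offset).Take(chunkSize).ToList();
        RedisKey[] keys = chunk.Select(id => (RedisKey)id.ToString()).ToArray();
        RedisValue[] values = db.StringGet(keys); // one MGET per chunk
        for (int i = 0; i < chunk.Count; i++)
        {
            if (!values[i].IsNull)
            {
                result[chunk[i]] = values[i]; // missing keys are simply skipped
            }
        }
    }
    return result;
}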

Related

Does a service worker allow one to simply use long expiration headers on static assets?

Say I have a service worker that populates the cache with the following working code when it's installed:
async function install() {
  console.debug("SW: Installing ...");
  const cache = await caches.open(CACHE_VERSION);
  await cache.addAll(CACHE_ASSETS);
  console.log("SW: Installed");
}

async function handleInstall(event) {
  event.waitUntil(install());
}

self.addEventListener("install", handleInstall);
When cache.addAll() runs, will the browser use its own internal cache, or will it always download the content from the site? This is important because, if one creates a new service worker release and there are new static assets, stale versions may be served from the browser's cache and end up being stored by the service worker.
If not, then I guess one still has to use hashed/versioned names for static assets, something I was hoping service workers would make unnecessary.
cache.addAll()'s behavior is described in the service worker specification, but here's a more concise summary:
1. For each item in the parameter array, if it's a string and not a Request, construct a new Request using that string as input.
2. Perform fetch() on each request and get a response.
3. As long as the response has an ok status, call cache.put() to add the response to the cache, using the request as the key.
To answer your question, the most relevant step is 1, as that determines what kind of Request is passed to fetch(). If you just pass in a string, then a lot of defaults will be used when the Request is implicitly constructed. If you want more control over what is fetched, you should explicitly create a Request yourself and pass that to cache.addAll() instead of passing in strings.
For instance, this is how you'd explicitly set the cache mode on all the requests to 'reload', which always skips the browser's normal HTTP cache and goes to the network for a response:
// Define your list of URLs somewhere...
const URLS = ['/one.css', '/two.js', '/three.js', '...'];
// Later...
const requests = URLS.map((url) => new Request(url, {cache: 'reload'}));
await cache.addAll(requests);

Save users data C#

I need to save temporary user data (500 users). I am currently using:
DataTable myData = new DataTable();
myData.Columns.Add("id", typeof(int));
myData.Columns.Add("name", typeof(string));
...
myData.Rows.Add("1", "name1", ...);
myData.Rows.Add("2", "name2", ...);
...
myData.Rows.Add("500", "name500" ,...);
New rows are added/edited, e.g. 50x per second from one user... and every minute the data is sent to a MySQL database.
Is this method sufficiently stable and fast for adding/editing/deleting a large amount of temporary data?
I was thinking about saving to an XML file, but I think my way is faster ...
Thank you for any advice.
Edit:
I have a server and the users (clients) are connected to it; they send data to the server and the server sends data back to them.
For example, when a client sends a message to other clients.
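As a minimal sketch of the pattern described above (the lock, the one-minute timer, and the FlushToMySql name are illustrative assumptions, not a recommendation):
// requires: using System; using System.Data; using System.Threading;
private static readonly DataTable myData = new DataTable();
private static readonly object sync = new object();
private static Timer flushTimer;

static void Setup()
{
    myData.Columns.Add("id", typeof(int));
    myData.Columns.Add("name", typeof(string));

    // Push the buffered rows to MySQL once per minute.
    flushTimer = new Timer(_ => FlushToMySql(), null,
                           TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1));
}

static void AddRow(int id, string name)
{
    // DataTable is not thread-safe for concurrent writers, so serialize access.
    lock (sync)
    {
        myData.Rows.Add(id, name);
    }
}

static void FlushToMySql()
{
    DataTable snapshot;
    lock (sync)
    {
        snapshot = myData.Copy();  // copy under the lock...
        myData.Clear();            // ...then clear the in-memory buffer
    }
    // Hypothetical: bulk-insert `snapshot` into MySQL here.
}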

How to persist (dump) data to local storage and load it at later sessions?

I want to persist some parts of the data from the Relay store and load them again in later sessions for a better user experience. (For context, I am using Relay with React Native.)
The data can be relatively large (up to a few thousand records) and doesn't need to be 100% in sync with the server.
I want to persist the records across sessions because I don't want to refetch the data every time the user opens the app. That would be both a burden on the server and a bad user experience (loading time).
You have access to Relay's full store in the environment file you create when setting up Relay. If you console.log recordSource, you should see your entire store, and it should update every time Relay processes a new operation (query/mutation/subscription), so maybe all you have to do is store that in your persisted storage (e.g. localStorage).
Example:
// your-app-name/src/RelayEnvironment.js
import {Environment, Network, RecordSource, Store} from 'relay-runtime';
import fetchGraphQL from './fetchGraphQL';
async function fetchRelay(params, variables) {
  console.log(`fetching query ${params.name} with ${JSON.stringify(variables)}`);
  return fetchGraphQL(params.text, variables);
}

const recordSource = new RecordSource();
console.log(recordSource);

// Store `recordSource` in persisted storage (i.e. localStorage) here.
if (typeof window !== "undefined") { // optional if you're not doing SSR
  window.localStorage.setItem("relayStore", JSON.stringify(recordSource));
}

export default new Environment({
  network: Network.create(fetchRelay),
  store: new Store(recordSource),
});

How do you switch from SQL to Table Storage in Azure Mobile Services?

I've signed up for the free month trial of Azure, and I have created a Mobile Service. I'm using iOS, so I downloaded the model Todo app for iOS.
I am now trying to use Table Storage in the back end instead of a MSSQL store; I have found instructions on using Table Storage here: http://azure.microsoft.com/en-us/documentation/articles/storage-nodejs-how-to-use-table-storage/
However, my app is still storing todo items in the MSSQL storage. I've been told that I don't need to do anything in the client to make the switch, so I assume everything I need to do must be done in the node.js scripts. But I'm clearly missing something.
One thing that confuses me is that after I downloaded the generated node.js script for the Todo app, I didn't see anything in it that seemed to be explicitly talking to the MSSQL database.
Any pointers would be greatly appreciated.
EDIT:
here's my todoitem.insert.js:
var azure = require('azure-storage');
var tableSvc = azure.createTableService();

function insert(item, user, request) {
    // request.execute();
    console.log('Request received');
    console.log(request);

    var entGen = azure.TableUtilities.entityGenerator;
    var task = {
        PartitionKey: entGen.String('learningazure'),
        RowKey: entGen.String('1'),
        description: entGen.String('add something to TS'),
        dueDate: entGen.DateTime(new Date(Date.UTC(2014, 11, 5))),
    };

    tableSvc.insertEntity('codedelphi', task, {echoContent: true}, function (error, result, response) {
        if (!error) {
            // Entity inserted
            console.log('No error on table insert: task created.');
            request.respond(statusCodes.SUCCESS, 'OK.');
        } else {
            console.log('Houston, we have a problem. Entity not added to table.');
            console.log(error);
        }
    });

    console.log(JSON.stringify(item, null, 4));
}

tableSvc.createTableIfNotExists('codedelphi', function (error, result, response) {
    if (!error) {
        // Table exists or created
        console.log('No error, table should exist');
    } else {
        console.log('We have a problem.');
        console.log(error);
    }
});
Mobile Services has the built-in capability to handle talking to your SQL database for you. When your script calls request.execute(), that triggers whatever the request is (insert, update, delete, select) to be run against the SQL database. Talking to Table Storage instead of SQL requires you to edit those scripts to explicitly talk to Table Storage (i.e. perform your inserts, updates, deletes, and reads yourself). Today there is no magic switch which will change your request.execute() from talking to SQL to talking to Table Storage. If you've already edited your scripts to talk to Table Storage and it's not working / you still see data stored in your SQL database, I would suspect that you are either still calling request.execute() in your scripts, or you haven't pushed them to your Mobile Service (if you've pulled them down locally and then need to push them back to your service). If you've done all of the above, update your question with the Node.js script in question so we can see it.
As Chris pointed out, you are most likely still calling request.execute() from your table scripts. By design, this will explicitly talk to the MSSQL database you configured your application with. You will have to edit your table scripts to not perform "request.execute()" and instead interact with the TableService object.
If you follow the tutorial and do the following:
1. Import the package.
2. Create the table service object.
3. Create an entity (and modify the variables to store the data you need).
4. Write the entity to your table service.
You should see data being written to Table Storage rather than the SQL database.
Give it a shot and ping back, we'll help you out.

Simultaneously get multiple resources by ID

There exists a DocsClient.get_resource_by_id function to get the document entry for a single ID. Is there a similar way to obtain (in a single call) multiple document entries given multiple document IDs?
My application needs to efficiently download the content from multiple files for which I have the IDs. I need to get the document entries to access the appropriate download URL (I could manually construct the URLs, but this is discouraged in the API docs). It is also advantageous to have the document type and, in the case of spreadsheets, the document entry is required in order to access individual worksheets.
Overall I'm trying to reduce I/O waits, so if there's a way I can bundle the doc ID lookup, it will save me some I/O expense.
[Edit] Backporting AddQuery to gdata v2.0 (from Alain's solution):
client = DocsClient()
# ...
request_feed = gdata.data.BatchFeed()
request_entry = gdata.data.BatchEntry()
request_entry.batch_id = gdata.data.BatchId(text=resource_id)
request_entry.batch_operation = gdata.data.BATCH_QUERY
request_feed.add_batch_entry(entry=request_entry, batch_id_string=resource_id, operation_string=gdata.data.BATCH_QUERY)
batch_url = gdata.docs.client.RESOURCE_FEED_URI + '/batch'
rsp = client.batch(request_feed, batch_url)
rsp.entry is a collection of BatchEntry objects, which appear to refer to the correct resources, but which differ from the entries I'd normally get via client.get_resource_by_id().
My workaround is to convert gdata.data.BatchEntry objects into gdata.docs.data.Resource objects like this:
entry = atom.core.parse(entry.to_string(), gdata.docs.data.Resource)
You can use a batch request to send multiple "GET" requests to the API using a single HTTP request.
Using the Python client library, you can use this code snippet to accomplish that:
def retrieve_resources(gd_client, ids):
    """Retrieve Documents List API Resources using a batch request.

    Args:
      gd_client: authorized gdata.docs.client.DocsClient instance.
      ids: Collection of resource ids to retrieve.
    Returns:
      ResourceFeed containing the retrieved resources.
    """
    # Feed that holds the batch request entries.
    request_feed = gdata.docs.data.ResourceFeed()
    for resource_id in ids:
        # Entry that holds the batch request.
        request_entry = gdata.docs.data.Resource()
        self_link = gdata.docs.client.RESOURCE_SELF_LINK_TEMPLATE % resource_id
        request_entry.id = atom.data.Id(text=self_link)
        # Add the request entry to the batch feed.
        request_feed.AddQuery(entry=request_entry, batch_id_string=resource_id)
    # Submit the batch request to the server.
    batch_url = gdata.docs.client.RESOURCE_FEED_URI + '/batch'
    response_feed = gd_client.Post(request_feed, batch_url)
    # Check the batch request's status.
    for entry in response_feed.entry:
        print '%s: %s (%s)' % (entry.batch_id.text,
                               entry.batch_status.code,
                               entry.batch_status.reason)
    return response_feed
Make sure to sync to the latest version of the project repository.
