Rename Apollo Federation's OpenTelemetry traces based on operation name (named queries) - monitoring

I have a federated Apollo Graph (using ApolloGateway) and a bunch of subgraph microservices. I’ve set them up to do distributed tracing via OpenTelemetry.
Now, they do produce a bunch of traces that I can see in my backend (Elastic APM). However, they are all named graphql.execute, graphql.parse, etc., which makes it hard to find a specific query or mutation to monitor.
What I want to do instead is to take the operation name from the named queries like:
query HeroNameAndFriends {
  hero {
    name
    friends {
      name
    }
  }
}
and rename the entire trace to HeroNameAndFriends.
Is that an acceptable, recommended thing to do?
How would I achieve that?
Do the subgraphs have access to the operation name of named queries? Can I start small with one of the microservices, or do I have to modify the Gateway?
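For what it's worth, this is roughly the kind of hook I was imagining on one of the subgraphs, assuming a JVM-based service using the OpenTelemetry Java SDK (the attribute key and whether it is already available when the span starts are guesses on my part, not something I have verified):

import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.context.Context;
import io.opentelemetry.sdk.trace.ReadWriteSpan;
import io.opentelemetry.sdk.trace.ReadableSpan;
import io.opentelemetry.sdk.trace.SpanProcessor;

// Rough, untested sketch: rename graphql.* spans to the GraphQL operation name.
// Assumes the GraphQL instrumentation has already attached the operation name
// as a span attribute by the time the span starts (the exact key may differ).
public class OperationNameSpanProcessor implements SpanProcessor {

    private static final AttributeKey<String> OPERATION_NAME =
            AttributeKey.stringKey("graphql.operation.name");

    @Override
    public void onStart(Context parentContext, ReadWriteSpan span) {
        String operation = span.getAttribute(OPERATION_NAME);
        if (operation != null && span.getName().startsWith("graphql")) {
            span.updateName(operation); // e.g. "HeroNameAndFriends"
        }
    }

    @Override
    public boolean isStartRequired() {
        return true;
    }

    @Override
    public void onEnd(ReadableSpan span) {
        // nothing to do on end
    }

    @Override
    public boolean isEndRequired() {
        return false;
    }
}

The idea would be to register it via SdkTracerProvider.builder().addSpanProcessor(...) when the service starts, but I don't know if this is the right (or recommended) place to do the renaming.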
Thank you!

Related

Batch enroll multiple educationUsers to educationClass

In the beta (and v1.0) endpoints of Microsoft Graph, for "education", is there a way to add multiple teachers and members (educationUser references) to an "educationClass"?
POST /education/classes/{id}/members/$ref
{
  "@odata.id": "https://graph.microsoft.com/v1.0/education/users/XXXXX"
}
Right now, it seems that members can only be added one by one, rather than batched in the same fashion as when adding members and owners to teams.
Something like this? (fictitious request)
"teachers#odata.bind": [
"https://graph.microsoft.com/v1.0/education/users/AAAAA",
"https://graph.microsoft.com/v1.0/education/users/BBBBB"
],
"members#odata.bind": [
"https://graph.microsoft.com/v1.0/education/users/CCCCC",
"https://graph.microsoft.com/v1.0/education/users/DDDDD"
]
Either in a separate $ref operation or directly on the educationClass creation request object.
Is this something I've just been missing when looking in the doc? If not, is this something the Microsoft Graph Education team might consider in a future version of the beta endpoint?
Currently, the group resource (and by extension educationClass) only supports adding one Owner/Member at a time. You may want to look into the JSON Batching functionality. Batching allows you to queue up to 20 Graph calls in a single request.
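As a sketch (the class ID and user IDs below are placeholders, mirroring the ones in the question), a batch request that adds two members could look roughly like this:
POST https://graph.microsoft.com/v1.0/$batch
{
  "requests": [
    {
      "id": "1",
      "method": "POST",
      "url": "/education/classes/{id}/members/$ref",
      "headers": { "Content-Type": "application/json" },
      "body": { "@odata.id": "https://graph.microsoft.com/v1.0/education/users/AAAAA" }
    },
    {
      "id": "2",
      "method": "POST",
      "url": "/education/classes/{id}/members/$ref",
      "headers": { "Content-Type": "application/json" },
      "body": { "@odata.id": "https://graph.microsoft.com/v1.0/education/users/BBBBB" }
    }
  ]
}
Each entry is still an individual add, but they all travel in one round trip, and the per-call results come back in a matching responses array.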
For managing Teacher, Student, and Class assignment at scale, I'd suggest looking at School Data Sync (SDS). SDS allows you to automatically keep your AAD in sync with a Student Information System.

Mapping query result sets to domain objects in SDN 3.3.1.RELEASE

I have created a POC with SDN 3.3.1 in which I have deployed a plugin inside the Neo4j server. The plugin contains domain objects, repositories and controllers.
From my application, I'm making REST calls to the controllers to execute repository methods and return the response.
The issue is that my queries return multiple nodes and relationships. So, to map the response, I created wrapper classes annotated with @QueryResult, with @ResultColumn fields referencing the domain objects, one wrapper per query. This is because each query has a different result set.
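For example, one of these per-query wrappers looks roughly like this (the domain classes and column names are made up for illustration):

import org.springframework.data.neo4j.annotation.QueryResult;
import org.springframework.data.neo4j.annotation.ResultColumn;

// One wrapper per query: each field maps to a column returned by that query.
// Movie, Role and Actor stand in for my actual domain classes.
@QueryResult
public class MovieWithActorsResult {

    @ResultColumn("movie")
    private Movie movie;

    @ResultColumn("role")
    private Role role;

    @ResultColumn("actor")
    private Actor actor;
}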
Since my application has around 150 such queries, I will have to create a similar number of intermediate wrapper classes.
This is quite tedious, and the number of wrapper classes will only increase as more queries are added.
Is there any smarter way to do this?
I tried to have all of my domain objects as references in a single wrapper class, so that I can use it for any of my queries. But it throws an exception if any of the fields in the wrapper class is not present in the query result.
Another issue is that some of my queries are written to return all the different nodes connected to a particular node, e.g.:
Match (a)-[rel]->(b)-[tempRel]->(tempNodes) Return b,tempRel,tempNodes
I'm not sure how to map this result set to a wrapper class.
Is there any way to achieve it without refactoring the queries to match individual paths?
Regards,
Rahul
Good idea to have it ignore unknown classes and fields in both directions, perhaps as an additional annotation on the class. Can you raise a Spring JIRA issue?
Something like this, e.g.:
@QueryResult(requireFields=false, require=false)
class MyResult {
    String name;
    int age;
}
But you would still keep them focused on one area or use case: not one for all 150 use cases, but perhaps 15-30 different ones.

Can TransactionTemplate be used with multiple data sources in Grails?

In our Grails application, we use one database per tenant. Each database contains the data of all domain objects. We switch databases based on the request context, using a pattern like DomainClass.getDS().find().
Transactions do not work out of the box, since the request does not know which transaction manager to use. @Transactional and withTransaction do nothing.
I have implemented my own version of withTransaction():
public static def withTransaction(Closure callable) {
    new TransactionTemplate(getTM()).execute(callable as TransactionCallback)
}
getTM() returns a transaction manager based on the request context, for example transactionManager_db0.
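Under the hood, getTM() essentially does this (simplified to plain Java/Spring; the bean naming convention is just an example):

import org.springframework.context.ApplicationContext;
import org.springframework.transaction.PlatformTransactionManager;

public class TenantTransactionManagerResolver {

    private final ApplicationContext applicationContext;

    public TenantTransactionManagerResolver(ApplicationContext applicationContext) {
        this.applicationContext = applicationContext;
    }

    // Look up the transaction manager bean for the tenant bound to the current request,
    // e.g. "transactionManager_db0" for tenant "db0".
    public PlatformTransactionManager getTM(String tenantId) {
        return applicationContext.getBean(
                "transactionManager_" + tenantId, PlatformTransactionManager.class);
    }
}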
This seems to work in my sandbox. Will this also work:
For parallel requests?
If the database has several replicas?
For hierarchical transactions?
I believe in TDD but, with the exception of the last bullet, I am having a hard time writing tests to verify this code.

Is it possible to override neo4j default behaviour on creating nodes and relationships?

I'd like to override the default Neo4j behaviour for creating nodes and relationships. What I want to achieve is something similar to this:
addNode(graphDb, node, properties) {
    performDefaultBehaviourToAddNode(graphDb, node, properties);
    performMyCustomOperations(graphDb, node, properties);
}
This code should be inserted (maybe) in a server plugin and executed each time a REST API call to Neo4j's create-node operation is received (http://docs.neo4j.org/chunked/stable/rest-api-nodes.html).
Is it possible to achieve this with a server plugin or something similar?
I know that I could write extensions, but I want to use the original REST API methods to add nodes and relationships, so that I can invoke them with py2neo.
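The closest mechanism I have come across so far is a TransactionEventHandler registered against the database; would an (untested) sketch like the following run my custom code for nodes created through the normal REST API as well?

import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.event.TransactionData;
import org.neo4j.graphdb.event.TransactionEventHandler;

// Untested sketch: react to every node created in any committing transaction,
// regardless of which API the write came in through.
public class CustomCreateHandler extends TransactionEventHandler.Adapter<Void> {

    @Override
    public Void beforeCommit(TransactionData data) {
        for (Node node : data.createdNodes()) {
            performMyCustomOperations(node); // hypothetical custom logic
        }
        return null;
    }

    private void performMyCustomOperations(Node node) {
        // ...
    }
}

// Registration (e.g. from a kernel extension or plugin initialisation code):
// graphDb.registerTransactionEventHandler(new CustomCreateHandler());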

Where / How to fit Solr into ASP.net MVC app (using nHibernate / Repository Pattern)

I'm currently in the middle of building a reasonably large question/answer based application (kind of like stackoverflow / answerbag.com).
We're using SQL (Azure) and nHibernate for data access and MVC for the UI app.
So far, the schema is roughly along the lines of the stackoverflow db, in the sense that we have a single Post table (containing both questions and answers).
We're probably going to use something along the lines of the following repository interface:
public interface IPostRepository
{
    void PutPost(Post post);
    void PutPosts(IEnumerable<Post> posts);
    void ChangePostStatus(string postID, PostStatus status);
    void DeleteArtefact(string postId, string artefactKey);
    void AddArtefact(string postId, string artefactKey);
    void AddTag(string postId, string tagValue);
    void RemoveTag(string postId, string tagValue);
    void MarkPostAsAccepted(string id);
    void UnmarkPostAsAccepted(string id);
    IQueryable<Post> FindAll();
    IQueryable<Post> FindPostsByStatus(PostStatus postStatus);
    IQueryable<Post> FindPostsByPostType(PostType postType);
    IQueryable<Post> FindPostsByStatusAndPostType(PostStatus postStatus, PostType postType);
    IQueryable<Post> FindPostsByNumberOfReplies(int numberOfReplies);
    IQueryable<Post> FindPostsByTag(string tag);
}
My question is:
Where / how would I fit Solr into this for better querying of these "Posts"?
(I'll be using SolrNet for the actual communication with Solr.)
Ideally, I'd be using the SQL db as merely a persistent store - the bulk of the above IQueryable operations would move into some kind of SolrFinder class (or something like that).
The Body property is the one that causes the problems currently - it's fairly large, and slows down queries on SQL.
My main problem is that if someone "updates" a post - adds a new tag, for example - then that whole post will need re-indexing.
Obviously, doing this will require a query like this:
"SELECT * FROM POST WHERE ID = xyz"
This will, of course, be very slow.
SolrNet has an NHibernate facility - but I believe this will give the same result as above?
I thought of a way around this, which I'd like your views on:
Adding the ID to a queue (Amazon SQS or something - I like the ease of use with this)
Having a service (or a bunch of services) somewhere that does the above-mentioned query, constructs the document, and re-adds it to Solr.
Another problem I'm having with my design:
Where should the "re-indexing" method(s) be called from?
The MVC controller? Or should I have a "PostService" type class that wraps the instance of IPostRepository?
Any pointers are gratefully received on this one!
On the e-commerce site that I work for, we use Solr to provide fast faceting and searching of the product catalog. (In non-Solr geek terms, this means the "ATI Cards (34), NVIDIA (23), Intel (5)" style of navigation links that you can use to drill-down through product catalogs on sites like Zappos, Amazon, NewEgg, and Lowe's.)
This is because Solr is designed to do this kind of thing fast and well, and trying to do this kind of thing efficiently in a traditional relational database is, well, not going to happen, unless you want to start adding and removing indexes on the fly and go full EAV, which is just cough Magento cough stupid. So our SQL Server database is the "authoritative" data store, and the Solr indexes are read-only "projections" of that data.
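(Concretely, a facet query of that kind is just a single request along the lines of http://solr.example.com/select?q=video+card&facet=true&facet.field=manufacturer - the field name here is made up - and Solr hands back the per-value counts for each facet field alongside the matching documents.)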
You're with me so far because it sounds like you are in a similar situation. The next step is determining whether or not it is OK that the data in the Solr index may be slightly stale. You've probably accepted the fact that it will be somewhat stale, but the next decisions are
How stale is too stale?
When do I value speed or querying features over staleness?
For example, I have what I call the "Worker", which is a Windows service that uses Quartz.NET to execute C# IJob implementations periodically. Every 3 hours, one of these jobs that gets executed is the RefreshSolrIndexesJob, and all that job does is ping an HttpWebRequest over to http://solr.example.com/dataimport?command=full-import. This is because we use Solr's built-in DataImportHandler to actually suck in the data from the SQL database; the job just has to "touch" that URL periodically to make the sync work. Because the DataImportHandler commits the changes periodically, this is all effectively running in the background, transparent to the users of the Web site.
This does mean that information in the product catalog can be up to 3 hours stale. A user might click a link for "Medium In Stock (3)" on the catalog page (since this kind of faceted data is generated by querying SOLR) but then see on the product detail page that no mediums are in stock (since on this page, the quantity information is one of the few things not cached and queried directly against the database). This is annoying, but generally rare in our particular scenario (we are a reasonably small business and not that high traffic), and it will be fixed up in 3 hours anyway when we rebuild the whole index again from scratch, so we have accepted this as a reasonable trade-off.
If you can accept this degree of "staleness", then this background worker process is a good way to go. You could take the "rebuild the whole thing every few hours" approach, or your repository could insert the ID into a table, say, dbo.IdentitiesOfStuffThatNeedsUpdatingInSolr, and then a background process can periodically scan through that table and update only those documents in Solr if rebuilding the entire index from scratch periodically is not reasonable given the size or complexity of your data set.
A third approach is to have your repository spawn a background thread that updates the Solr index in regards to that current document more or less at the same time, so the data is only stale for a few seconds:
class MyRepository
{
    void Save(Post post)
    {
        // the following method runs on the current thread
        SaveThePostInTheSqlDatabaseSynchronously(post);

        // the following method spawns a new thread, task,
        // QueueUserWorkItem, whatever floats our boat this week,
        // and so returns immediately
        UpdateTheDocumentInTheSolrIndexAsynchronously(post);
    }
}
But if this explodes for some reason, you might miss updates in Solr, so it's still a good idea to have Solr do a periodic "blow it all away and refresh", or have a reaper background Worker-type service that checks for out-of-date data in Solr every once in a blue moon.
As for querying this data from Solr, there are a few approaches you could take. One is to hide the fact that Solr exists entirely via the methods of the Repository. I personally don't recommend this because chances are your Solr schema is going to be shamelessly tailored to the UI that will be accessing that data; we've already made the decision to use Solr to provide easy faceting, sorting, and fast display of information, so we might as well use it to its fullest extent. This means making it explicit in code when we mean to access Solr and when we mean to access the up-to-date, non-cached database object.
In my case, I end up using NHibernate to do the CRUD access (loading an ItemGroup, futzing with its pricing rules, and then saving it back), forgoing the repository pattern because I don't typically see its value when NHibernate and its mappings are already abstracting the database. (This is a personal choice.)
But when querying on the data, I know pretty well if I'm using it for catalog-oriented purposes (I care about speed and querying) or for displaying in a table on a back-end administrative application (I care about currency). For querying on the Web site, I have an interface called ICatalogSearchQuery. It has a Search() method that accepts a SearchRequest where I define some parameters--selected facets, search terms, page number, number of items per page, etc.--and gives back a SearchResult--remaining facets, number of results, the results on this page, etc. Pretty boring stuff.
Where it gets interesting is that the implementation of that ICatalogSearchQuery is using a list of ICatalogSearchStrategy implementations underneath. The default strategy, the SolrCatalogSearchStrategy, hits SOLR directly via a plain old-fashioned HttpWebRequest and parses the XML in the HttpWebResponse (which is much easier to use, IMHO, than some of the SOLR client libraries, though they may have gotten better since I last looked at them over a year ago). If that strategy throws an exception or vomits for some reason, then the DatabaseCatalogSearchStrategy hits the SQL database directly--although it ignores some parameters of the SearchRequest, like faceting or advanced text searching, since that is inefficient to do there and is the whole reason we are using Solr in the first place. The idea is that usually SOLR is answering my search requests quickly in full-featured glory, but if something blows up and SOLR goes down, then the catalog pages of the site can still function in "reduced-functionality mode" by hitting the database with a limited feature set directly. (Since we have made explicit in code that this is a search, that strategy can take some liberties in ignoring some of the search parameters without worrying about affecting clients too severely.)
Key takeaway: What is important is that the decision to perform a query against a possibly-stale data store versus the authoritative data store has been made explicit--if I want fast, possibly stale data with advanced search features, I use ICatalogSearchQuery. If I want slow, up-to-date data with the insert/update/delete capability, I use NHibernate's named queries (or a repository in your case). And if I make a change in the SQL database, I know that the out-of-process Worker service will update Solr eventually, making things eventually consistent. (And if something was really important, I could broadcast an event or ping the SOLR store directly, telling it to update, possibly in a background thread if I had to.)
Hope that gives you some insight.
We use Solr to query a large product database.
Around 1 million products, and 30 stores.
What we did is we used triggers on the product table and stock tables in our SQL Server database.
Each time a row is changed, it flags the product to be re-indexed. And we have a Windows service that grabs these products and posts them to Solr every 10 seconds (with a limit of 100 products per batch).
It's super efficient, almost real-time info for the stock.
If you have a big text field (your 'body' field), then yes, re-index in the background. The solutions you mentioned (queue or periodic background service) will do.
MVC controllers should be oblivious of this process.
I noticed you have IQueryables in your repository interface. SolrNet does not currently have a LINQ provider. Anyway, if those operations are all you're going to do with Solr (i.e. no faceting), you might want to consider using Lucene.Net instead, which does have a LINQ provider.
