Meteor publish-with-relations vs collection-helpers for joins?

What's the best way to do joins in Meteor/mongo? Use one of these packages, or something else:
publish-with-relations
https://github.com/erundook/meteor-publish-with-relations/blob/master/publish_with_relations.coffee
collection-helpers
https://github.com/dburles/meteor-collection-helpers

I'm very familiar with PWR, but collection-helpers is new to me. It looks like the two are complementary.
PWR solves the somewhat complex problem of publishing documents to the client via a reactive join based on arbitrarily complex relationships. See this question for more details.
collection-helpers appears to be a set of convenience functions added to the client to be used when traversing collection relationships inside of a template (given that the required documents have already been published). For example, if you have books and authors in separate (but related) collections, you can immediately get myBook.author().fullName inside of a template without having to type out the extra find for the author.
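For illustration, a minimal sketch of that books/authors case with collection-helpers (the collection and field names are assumed, not from the question):

Books = new Mongo.Collection('books');
Authors = new Mongo.Collection('authors');

// Every book document gains an author() method.
Books.helpers({
  author() {
    return Authors.findOne(this.authorId);
  }
});

// In a template helper, once both documents are published:
//   Books.findOne(bookId).author().fullName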

Currently the most popular solution on Atmosphere seems to be publish-composite: https://atmospherejs.com/reywood/publish-composite
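A minimal publish-composite sketch for the same books/authors shape (names are illustrative):

Meteor.publishComposite('bookWithAuthor', function (bookId) {
  return {
    find() {
      // The top-level cursor: the requested book.
      return Books.find({ _id: bookId });
    },
    children: [{
      find(book) {
        // Reactively publish each book's author alongside it.
        return Authors.find({ _id: book.authorId });
      }
    }]
  };
});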

Related

Please comment on this use-case for accessing the Lucene API directly from the context of a Neo4j User Defined Procedure

I would like to build an in-memory Lucene index from a collection of node properties and then search against that index.
These search transactions will happen in parallel, so I need to be able to construct a separate search index for each transaction. It seems like this would not be possible using native (manual) Neo4j indexes, since they are "global"; hence the use of a memory-based search index. Am I mistaken?
This is working great for our use case. A couple of notes:
Use MemoryIndex rather than RAMDirectory.
Be sure to mark the Lucene dependency as "provided", since Neo4j already ships with Lucene on its classpath.
You can only use Lucene 5.5.0, which is the version Neo4j uses natively.
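To make those notes concrete, a minimal sketch against the Lucene 5.5.0 API (the field name and query string are illustrative):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;

public class MemoryIndexExample {
    public static void main(String[] args) throws Exception {
        // One MemoryIndex per transaction, so parallel searches never share state.
        StandardAnalyzer analyzer = new StandardAnalyzer();
        MemoryIndex index = new MemoryIndex();

        // Index the node properties you want to search against.
        index.addField("description", "text pulled from a node property", analyzer);

        // Score a query against this single in-memory document (0.0f means no match).
        Query query = new QueryParser("description", analyzer).parse("node AND property");
        float score = index.search(query);
        System.out.println("score = " + score);
    }
}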

Nested Jira Search on two independent projects

I need a nested Jira search. I am okay with JQL queries, but I have a use case that I don't know how to solve:
The company uses project=XTBOW for reporting purposes for executives (Epics).
The company also uses project=XTA for the underlying development work (Tasks).
The XTA tasks are linked to XTBOW Epics for a subset of tasks, but not all. (There is a large body of XTA tasks that are not linked to XTBOW.)
I need a filter for all XTA issues that are linked to XTBOW Epics only. I would like to use a filter like this:
project = XTA and "Epic Link" in (<project = XTBOW.key>)
I can manually prove this filter works, but I need a way to automate it: the number of tickets being created/tracked is growing exponentially, and if someone deletes an XTBOW key that is in the "Epic Link" field, the JQL search throws an error because the key is missing.
Example - FYI cf[10231] is the "Epic Link" field:
project in (XTA,XTWOF) and cf[10231] in (XTBOW-42,XTBOW-59)
The overall objective is to download the data into a dataframe, so if there is a better suggestion that avoids JQL entirely and does it through Python, I am all ears; I just need some pointers to start. I am only going this route because I have already built a JIRA downloader/parser using Python.
The easiest way to get subsets of issues is with:
search_issues(jql_str, startAt=0, maxResults=50, validate_query=True, fields=None, expand=None, json_result=None)
You should be able to pull the issue sets using the queries you already created; just pass them in as strings.
(See the jira Python library's documentation for search_issues.)
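A sketch of how that could fit this use case, fetching the live XTBOW epic keys first so a deleted key can never break the second query (the server URL and credentials are placeholders):

from jira import JIRA  # pip install jira

client = JIRA(server="https://jira.example.com", basic_auth=("user", "api_token"))

# Pull the XTBOW epic keys that exist right now, so the second query
# can never reference a deleted key.
epics = client.search_issues("project = XTBOW", maxResults=False)
epic_keys = ", ".join(epic.key for epic in epics)

# cf[10231] is the "Epic Link" field, as in the question.
tasks = client.search_issues(
    "project = XTA AND cf[10231] in (%s)" % epic_keys,
    maxResults=False,
)

# Rows ready to hand to a dataframe.
rows = [(t.key, t.fields.summary) for t in tasks]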

Ruby on Rails: How to have multiple controllers for one table AND multiple models

I'm new to Ruby and to Rails. I have played a bit with Sinatra but I think that Rails is a more complete framework for my project. However, I am running into trouble with this.
I am working with a fairly substantial existing (and heavily used) MySQL database, and I am trying to build an API for it that reports on certain features. The features needed are, for the most part, counts of records by certain groupings, with the ability to drill down into details.
For example, we have a table, tableA, that contains lots of information relating to documentation. One piece of information we want to report on is the number of items in a given language. The language code is stored against each item, and based on a GET request I would like to return JSON.
Request: /languages/:code/count/:table
There are two variables in that URL: the language code we are counting and the table we are counting in.
I understand that in routes.rb I can set up a mapping:
get '/languages/:code/count/:table', :controller=>'languages', :action=>'count'
I have a controller, languages_controller.rb, with a count method in it. This then matches a corresponding view file, count.html.erb.
In all the tutorials I have read and examples I have followed, the main assumption seems to be that 'languages' would be a table in the database and would therefore be available under the 'magic' Rails approach.
My issue is that it is not a table; rather, the results of the call should be a limited subset of the fields in tableA, such as languagecode and count(id).
The description of the language needs to be looked up 'manually', as it is stored as an internal code that is not in a database anywhere (historic decision/madness).
The questions:
How do I have a model that is only a subset of fields, plus some that are manually populated: languagecode, isocode, description, count?
Am I right in thinking that once I have the model defined as such, I could use ActiveRecord to get data from the database and then add the extra information in the controller?
Can I change the table in the model based on the parameter sent in the URL?
Essentially, I am at a loss at the moment on what to do with this. I have the routes defined, the view templates in place, and the controller there and ready to go. The database component (getting some data from a pre-existing table) seems mysterious to me.
Any help is greatly appreciated, it seems that the framework is currently getting in my way and I know that I can't be the only one trying this sort of thing so if you have any advice please share.
There's really no need for a model here at all; this isn't what ORMs are for. What you should do is run raw SQL against the database and iterate over the results. Consider something like this: https://stackoverflow.com/a/14840547/229044
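Along those lines, a rough sketch of the count action with no model at all (the table and column names are assumptions based on the question):

class LanguagesController < ApplicationController
  # Whitelist table names so the URL parameter can't inject SQL.
  ALLOWED_TABLES = %w[tableA].freeze
  # The 'manual' language descriptions that live outside the database.
  LANGUAGE_NAMES = { "en" => "English", "fr" => "French" }.freeze

  def count
    return head :bad_request unless ALLOWED_TABLES.include?(params[:table])

    conn = ActiveRecord::Base.connection
    rows = conn.select_all(
      "SELECT languagecode, COUNT(id) AS total FROM #{params[:table]} " \
      "WHERE languagecode = #{conn.quote(params[:code])} " \
      "GROUP BY languagecode"
    ).to_a

    render json: rows.map { |r| r.merge("description" => LANGUAGE_NAMES[r["languagecode"]]) }
  end
end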

How do I construct the cake when using Scalaxb to connect to a SOAP service?

I've read the documentation, but what I need to know is:
I'm not using the fictitious stock quote service from the documentation (with its imaginary WSDL file); I'm using a different service with a different name.
Where, among the thousands and thousands of lines of generated code, will I find the Scala trait(s) I need to put together, corresponding to this line in the documentation's example:
val service = (new stockquote.StockQuoteSoap12Bindings with scalaxb.SoapClients with scalaxb.DispatchHttpClients {}).service
Now, you might be thinking: why not just search for Soap12Bindings in the generated code? Good idea, but that turns up 0 results.
The example in the documentation is outdated or too specific. (The documentation is also internally inconsistent, and inconsistent with the actual filenames scalaxb outputs.)
First, search for SoapBindings instead of Soap12Bindings to find the service-specific trait (the first trait).
Then, instead of scalaxb.SoapClients, use scalaxb.Soap11Clients.
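Putting those two corrections together, the cake for a generated service ends up looking like this (myservice and MyService stand in for whatever names scalaxb derived from your WSDL):

val service = (new myservice.MyServiceSoapBindings
  with scalaxb.Soap11Clients
  with scalaxb.DispatchHttpClients {}).service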

Admin generator with hydrate array

I would like to speed up some of my admin-generated modules by hydrating doctrine results with Doctrine::HYDRATE_ARRAY. Is this a good idea? How can I do it?
I don't think you can do it that easily. All calls in the default admin generator theme use the Doctrine object (i.e. $model->id, not $model['id']). To use arrays you would probably need to recreate the default theme, as well as all calls that retrieve the objects.
Oh, and the admin generator also uses the generated forms as its base for generating the displayed forms.
You would probably be better off optimizing in other ways: make sure you have the correct client-side caching headers, optimize the sfViewCacheManager on the server side, use APC, use the Doctrine query cache, etc.
That could involve some more custom work (for example, leveraging the view cache manager), but it is significantly easier to implement.
I agree with Grad van Horck. Also, make sure your index pages use the minimum number of queries (easy to see in the development environment's web debug toolbar). Most of my modules are much more efficient once I create custom table_methods with the proper table joins that include ONLY the fields I need loaded into the object.
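A rough sketch of such a table_method (the model and relation names are made up; wire it in with list: { table_method: retrieveForIndex } in generator.yml):

class ArticleTable extends Doctrine_Table
{
  // Called by the admin generator when building the list query;
  // join once and select only the fields the index page shows.
  public function retrieveForIndex(Doctrine_Query $q)
  {
    $root = $q->getRootAlias();
    $q->leftJoin($root.'.Author a')
      ->select($root.'.id, '.$root.'.title, a.name');

    return $q;
  }
}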
