Difference between the vcard and vcard_search tables in ejabberd - erlang

What is the difference between the vcard and vcard_search tables in ejabberd? I mean to say: for what purpose is each used?

The vcard table is used to store the raw vCard. vcard_search is an index table, used for searching user vCards by field. It is used, for example, in user directory queries to find the list of matching users.
You can read more details about each field here: http://docs.ejabberd.im/developer/sql-schema/
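For illustration, here is a minimal sketch of a directory-style lookup against vcard_search, assuming the stock ejabberd MySQL schema described at the link above (the pymysql client and the connection details are assumptions):

import pymysql  # assumption: ejabberd configured with a MySQL backend

conn = pymysql.connect(host="localhost", user="ejabberd",
                       password="<password>", database="ejabberd")
with conn.cursor() as cur:
    # The lowercased l* columns (lnickname, lfn, ...) let searches be
    # case-insensitive without parsing the raw XML stored in the vcard table.
    cur.execute(
        "SELECT username, fn, email FROM vcard_search WHERE lnickname = %s",
        ("alice",),
    )
    for username, fn, email in cur.fetchall():
        print(username, fn, email)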

Related

Ruby on Rails AngularJS - create multiple models from direct upload

I was recently given a task by upper management to implement a mass coupon upload. The goal is to allow users to upload a list of coupon codes (Excel or CSV) and subsequently give these coupon codes attributes such as start date, expiry, quantity, and so on.
I have already implemented a form for inputting these attributes. My current implementation is as follows:
1. I upload a list of coupon codes (e.g. 333245454dfee). I do not write them directly to the database; instead I convert them to strings and display them on a page for the user to view. (done)
2. On that view page there is a form with all the attributes, which the user can then fill in. (done)
3. The user creates all of these coupon codes with the attributes set.
However, I am stuck now because I am unsure how to mass-create multiple codes and attach all the attributes to them. Right now, I can only create one coupon at a time.
So, to summarize, I would like to ask whether it is possible to:
1. have a field that contains all the codes I have uploaded,
2. have other fields for the different attributes, and
3. create all the uploaded codes as separate models.
I do not need actual code; I would just like to hear what approaches there are. I am thinking of storing these coupon codes in a variable first and then looping over them, but I have no idea how to do all of that with a single button press.
Thanks in advance.
Hmm, the way I would approach this problem is to have the user upload a CSV file with one column and one coupon code per row. Use CarrierWave to upload the file and parse the CSV. After parsing it you should have an array of coupon codes, and you can just insert them into the database (see the sketch below).
A good reference for your problem would be http://railscasts.com/episodes/396-importing-csv-and-excel
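To make the loop concrete, here is a hedged sketch of the parse-then-bulk-create step. It is shown in Python for brevity; in a Rails app the analog would be CSV.foreach plus model creation inside a transaction. The attribute names are made up:

import csv

def build_coupons(csv_path, shared_attrs):
    # One code per row; every coupon gets the same attributes from the form.
    coupons = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if row and row[0].strip():
                coupons.append(dict(code=row[0].strip(), **shared_attrs))
    return coupons

# A single button press maps to a single request that does all of this:
coupons = build_coupons("codes.csv", {
    "start_date": "2015-01-01",  # hypothetical attribute names
    "expiry": "2015-12-31",
    "quantity": 100,
})
# ...then insert `coupons` into the database in one bulk operation.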

Apache Solr: Merging documents from two sources before indexing

I need to index data from a custom application in Solr. The custom app stores metadata in an Oracle RDBMS and documents (PDF, MS Word, etc.) in a file store. The two are linked in the sense that the metadata in the database refers to a physical document (PDF) in the file store.
I am able to index the metadata from the RDBMS without issues. Now I would like to update the indexed documents with an additional field in which I can store the parsed content from the PDFs.
I have considered and tried the following:
1. Using the update RequestHandler to try to update the indexed document with the parsed content. This didn't work; the original document indexed from the RDBMS was overwritten.
2. Using SolrJ to do atomic updates, but I am not sure whether this is a good approach for something like this.
Has anyone come across this issue before and what would be the recommended approach?
You can update the document, but it requires that you know the id of the existing document. For example:
{
  "id": "5",
  "parsed_content": {"set": "long text field with parsed content"}
}
Instead of just saying "parsed_content": "something", you have to wrap the value as "parsed_content": {"set": "something"} to trigger an atomic update that adds the field to the existing document.
See https://wiki.apache.org/solr/UpdateXmlMessages#Optional_attributes_for_.22field.22 for documentation on how to work with multivalued fields etc.
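If you would rather send the atomic update over plain HTTP instead of SolrJ, a minimal sketch using Python's requests library looks like this (the core name and URL are assumptions, and atomic updates require the updateLog to be enabled in solrconfig.xml):

import requests

doc = {"id": "5", "parsed_content": {"set": "long text field with parsed content"}}
resp = requests.post(
    "http://localhost:8983/solr/collection1/update?commit=true",  # URL is an assumption
    json=[doc],  # the JSON update handler accepts a list of documents
)
resp.raise_for_status()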

Index if not exists using bulk processor in elasticsearch

I am trying to index a document only if it doesn't already exist in Elasticsearch. I am using BulkProcessor when indexing my documents, adding index requests built via Requests. Sometimes I will have the exact same id; does it then not add a new document, but update the existing one?
P.S. Updating is not a requirement; it can stay as is.
P.S.2 I am trying to integrate a user's past tweets into elasticsearch-twitter-river's user stream.
If you index a doc with the same document id then it will do an update. Otherwise it will add a new document.
In other words, if you PUT a doc to {index}/{type}/{id}, then it will always update (overwrite) the document with that id. If you POST a doc to {index}/{type} then in general Elasticsearch will generate a new document for each of your POSTs. That is, unless you mapped a document field to the _id field in mappings.
It seems that the Twitter River uses the PUT method with an explicitly specified id, so tweets with the same id will probably be overwritten.
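If overwriting is not what you want, the bulk API also supports a create action that fails on duplicate ids instead of updating. Here is a hedged sketch in Python (the asker's code is Java, but the opType semantics are the same; the index and type names and the sample tweet are assumptions):

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()
past_tweets = [{"id_str": "490", "text": "hello"}]  # stand-in for tweets fetched elsewhere

actions = (
    {
        "_op_type": "create",   # create = reject duplicate ids, never overwrite
        "_index": "twitter",
        "_type": "status",      # types still existed in river-era Elasticsearch
        "_id": tweet["id_str"],
        "_source": tweet,
    }
    for tweet in past_tweets
)
# Bulk analog of the Java BulkProcessor; duplicate-id errors are collected
# rather than raised, so already-indexed tweets are simply skipped.
helpers.bulk(es, actions, raise_on_error=False)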

Adding custom attributes to Task?

How can I add custom attributes/data to a Task via the API? For example, we want to add fields like customer contact number or deal amount, etc.
We don't currently support adding arbitrary metadata to tasks, though it's something we're thinking about. In the meantime, what many customers do is simply put data in the note field in an easily parseable form, which works well and also lets humans reading the task see, for example, the ticket number.
It's not a terribly elegant solution, but it works.
https://asana.com/developers/documentation/getting-started/custom-external_data
Custom external data allows a client application to add app-specific metadata to Tasks in the API. The custom data includes a string id that can be used to retrieve objects and a data blob that can store character strings.
See the external field at https://asana.com/developers/api-reference/tasks
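As a hedged sketch, writing such external data with Python's requests might look like the following (the task id and API key are placeholders; check the external field against the task reference linked above):

import requests

TASK_ID = "12345"           # placeholder task id
API_KEY = "<your api key>"  # placeholder; HTTP Basic auth with an empty password

resp = requests.put(
    "https://app.asana.com/api/1.0/tasks/%s" % TASK_ID,
    auth=(API_KEY, ""),
    # Asana wraps request bodies in a top-level "data" object.
    json={"data": {"external": {"id": "ticket-42", "data": "deal_amount=5000"}}},
)
resp.raise_for_status()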

List of all movie titles, actors, directors, and writers on IMDb

I am working on a web app which lets users list their favourite movies, directors, movie writers, and actors. For this I want to provide a dropdown list or autocomplete for each of these so that they can just pick their choices.
For this:
I need a list of all movie titles, actors, directors, and writers present on IMDb.
I checked IMDbPY and it does not seem to provide methods to get this data.
Would using imdbpy2sql.py to create a database and querying the db with SQL provide the required data? Is there any other way to do this?
Thanks!
Using imdbpy2sql.py to create a database and querying the db with SQL will provide you the required data (see the sketch after this answer).
You can also try using Java Movie Database or imdbdumpimport to read the text files into SQL.
The last option is to parse the plain text files provided by IMDb yourself.
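For the imdbpy2sql.py route, here is a minimal sketch of querying the resulting database through IMDbPY's 'sql' access system (the connection URI and credentials are assumptions):

from imdb import IMDb

# Point IMDbPY at the database built by imdbpy2sql.py
ia = IMDb('sql', uri='mysql://user:password@localhost/imdbdb')
# e.g. feed an autocomplete field with person names
for person in ia.search_person('kubrick'):
    print(person['name'])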
I think your best option is to parse the plain text files distributed here: imdb interfaces.
You probably just need the 'movies', 'actors', 'actresses' and 'directors' files; they are quite easy to parse.
