My project is based on an ontology (knowledge base). I have created an ontology from which data should be retrieved: an application takes a question and retrieves the required answers from the ontology, printing them as output.
My doubt is how to construct the inference rules, i.e. how to convert a given keyword into queries.
If anyone has any idea about constructing inference rules and the language to use for them, please reply. Thanks.
I'll outline the major points I've used in the past to explore this vast topic (ontology/semantic web/RDF/etc.):
First, define your ontology and rule set using an ontology editor (I've used Protégé). This tool lets you create instances and test your ontology (it checks the inference rules).
After that, if you want to store your data, you need a Sesame server and some scripts to insert data into it.
Sesame can store the triple information, and OpenRDF Workbench acts as an administration console for Sesame (a good tool).
I then used Python and some libraries (SuRF, rdflib) to gather information from the web (querying data with SPARQL; I used the DBLP SPARQL endpoint) and to insert those triples into my Sesame server.
To make queries you will need to learn SPARQL :) give it a try --> http://dblp.rkbexplorer.com/sparql/
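A minimal sketch of querying that endpoint from Python, assuming the SPARQLWrapper package is available (the answer doesn't spell out the predicates in the DBLP data, so this query just lists arbitrary triples):

```python
# A minimal sketch: query a public SPARQL endpoint from Python.
# Assumes SPARQLWrapper is installed (pip install SPARQLWrapper).
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://dblp.rkbexplorer.com/sparql/")
endpoint.setQuery("""
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```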
Good luck!
I am doing a project on motion estimation between two frames of a video sequence using a block-matching algorithm with the SAD metric. It involves computing the SAD between each block of the reference frame and each candidate block within the search window, to get the motion vector between the two frames.
I want to implement this with MapReduce, splitting the frames into key-value pairs, but I am not able to figure out the logic, because every example I find is the WordCount or query-search problem, which is not analogous to mine.
I would also appreciate it if you could point me to more MapReduce examples.
Hadoop is used in situations where computations can happen in parallel and a single machine would take a lot of time to do the processing. There is nothing stopping you from using Hadoop for video processing. Check this and this for more information on where Hadoop can be used; some of these are related to video processing.
Start by understanding the WordCount example and Hadoop in general. Run the example on Hadoop, and then work from there. I would also suggest buying the book Hadoop: The Definitive Guide. Hadoop and its ecosystem are changing at a very fast pace and it's tough to keep up to date, but the book will definitely give you a start on Hadoop.
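To make the analogy to WordCount concrete, here is a minimal, hypothetical Hadoop Streaming sketch in Python. It assumes a preprocessing step has already paired each reference block with every candidate block in its search window and written one candidate per input line (that input format is my own assumption, not part of the answer): the mapper emits (block id, displacement + SAD), and the reducer keeps the displacement with the minimum SAD, which is the motion vector.

```python
#!/usr/bin/env python
# Hypothetical Hadoop Streaming job for block matching with SAD.
# Assumed input, one candidate per line:
#   block_id \t dx,dy \t ref_pixels(comma-separated) \t cand_pixels
# Run as: sad_mr.py map    (the -mapper command)
#         sad_mr.py reduce (the -reducer command)
import sys


def mapper():
    for line in sys.stdin:
        block_id, disp, ref, cand = line.rstrip("\n").split("\t")
        ref = [int(p) for p in ref.split(",")]
        cand = [int(p) for p in cand.split(",")]
        sad = sum(abs(r - c) for r, c in zip(ref, cand))
        # Key = reference block, value = candidate displacement and its SAD
        print(f"{block_id}\t{disp},{sad}")


def reducer():
    best = {}  # block_id -> (min SAD, displacement)
    for line in sys.stdin:
        block_id, value = line.rstrip("\n").split("\t")
        dx, dy, sad = value.split(",")
        sad = int(sad)
        if block_id not in best or sad < best[block_id][0]:
            best[block_id] = (sad, f"{dx},{dy}")
    for block_id, (sad, disp) in best.items():
        # The displacement with the lowest SAD is the motion vector
        print(f"{block_id}\t{disp}\t{sad}")


if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```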
What are the best practices for processing images in enterprise web applications?
I mean:
storing
associating with an entity
fast loading/caching
delayed / AJAX loading
suitable format (PNG, JPEG)
on-the-fly editing (resizing, compression)
free libs/helpers
image watermarking/copyrighting on the fly
Approaches already proven in production are especially appreciated!
As always, every project has its own requirements, restrictions, and resources (the 3 Rs). There is no 'super pattern' or 'one size fits all' method.
We cannot tell you how to implement your project, as every project is different. It's up to you to use your skills, knowledge, and experience to make informed decisions on implementation.
The 'best practice' is to individually research and learn each of the technologies/methods you have listed and gain the knowledge to know when to use them, based on your project's requirements, restrictions, and resources.
I use ImageMagickObject in my MVC projects. It covers:
suitable format (PNG, JPEG)
on-the-fly editing (resizing, compression); see the sketch after this list
free libs/helpers
image watermarking/copyrighting on the fly
For the other points:
fast loading/caching: maybe memcached?
delayed / AJAX loading: jQuery is a good solution
associating with an entity: Entity Framework can work with almost all databases
storing: a hard question; it all depends on the functionality
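As a rough illustration of the on-the-fly resize-and-watermark idea (the answer above uses ImageMagickObject; Pillow in Python is my own substitution here, and the file names are made up):

```python
# A hedged sketch of on-the-fly resizing + watermarking using Pillow
# (pip install Pillow). ImageMagickObject from the answer would do the
# same job; the file names below are illustrative only.
from PIL import Image, ImageDraw

img = Image.open("upload.png")
img.thumbnail((800, 600))              # resize in place, keeping aspect ratio

draw = ImageDraw.Draw(img)
draw.text((10, img.height - 20), "(c) example.com", fill=(255, 255, 255))

# Re-encode as compressed JPEG for faster delivery
img = img.convert("RGB")               # JPEG has no alpha channel
img.save("upload_thumb.jpg", "JPEG", quality=85)
```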
I'm going to make my own search engine.
While reading about search engines, crawlers, and so on, I got confused about Nutch.
I don't understand what Nutch is. Is it for internal use like Lucene (correct me if I'm wrong), or a framework for creating a search engine (e.g. Google, Bing, Yahoo)?
Nutch is a full-featured search engine: it can crawl external web sites, and it understands and respects robots.txt.
http://nutch.apache.org/about.html
Nutch is open source web-search software. It builds on Lucene and Solr, adding web-specifics such as a crawler, a link-graph database, parsers for HTML and other document formats, etc.
Nutch can run on a single machine, but gains a lot of its strength from running in a Hadoop cluster.
The system can be enhanced (e.g. other document formats can be parsed) using a plugin mechanism.
For more information about Nutch, please see the Nutch wiki.
Nutch is a ready-made, configurable web crawler with a Java servlet for performing searches. If you wanted to build a search engine as a project, Nutch probably does too much, since all that would be left is creating the pages for entering searches and displaying results.
How to create a huge database in Informix (IDS) version 11.50?
Is 2+ terabytes big enough for you? The best way is to create a table, then load an ASCII file into it. I have a 2 TB+ (nrows = 10M, rowsize = 2048) pipe-delimited ASCII test file with unique full names, addresses, phone numbers, and a variety of other data types, like DATE, SMALLINT, DECIMAL(9,2), etc., for testing/benchmarking purposes.
The problem is: how can I get it to you?
You could also create an SPL procedure to insert random data, but for a huge test table it's not going to produce realistic or meaningful data for test purposes.
It is not very different from creating a normal database. You create tables as usual, but then you fill them with a huge amount of data. I think the best you can do is create an application that fills the database with random data. Of course you can use some real data like dates, city names, first names, etc., or create "normal-looking" names using a Markov chain (look at some examples in Python: Python Markov chains and how to use them); a toy version is sketched below.
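A toy order-1 character Markov chain for generating plausible-looking names, as suggested above (the seed list is made up for illustration):

```python
# Minimal Markov-chain name generator: learn character transitions
# from a small seed list, then walk the chain to invent new names.
import random
from collections import defaultdict

SEED = ["kowalski", "nowak", "wisniewski", "wojcik", "kaminski"]

chain = defaultdict(list)
for name in SEED:
    padded = "^" + name + "$"            # ^ marks start, $ marks end
    for a, b in zip(padded, padded[1:]):
        chain[a].append(b)

def make_name():
    out, ch = [], "^"
    while True:
        ch = random.choice(chain[ch])
        if ch == "$":
            return "".join(out).capitalize()
        out.append(ch)

print([make_name() for _ in range(5)])
```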
For massive inserts you should use PreparedStatement (this is quite easy with Java or Jython), or create a huge text file and load it using dbimport; a sketch of generating such a file follows.
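A hedged sketch of generating a pipe-delimited file in Informix UNLOAD format that dbimport or LOAD can ingest. The column layout here is an assumption for illustration only, and the DATE format actually accepted depends on your DBDATE setting:

```python
# Generate a pipe-delimited ASCII file (Informix UNLOAD format ends
# each record with a trailing "|") full of random test rows.
# Columns assumed: id | first name | city | DATE | DECIMAL(9,2)
import datetime
import random

FIRST = ["Anna", "Boris", "Carla", "David"]
CITIES = ["Lodz", "Lyon", "Leeds", "Lima"]

with open("test_data.unl", "w") as out:
    for i in range(1, 1_000_001):  # scale this up for a really huge table
        name = random.choice(FIRST)
        city = random.choice(CITIES)
        born = datetime.date(1950, 1, 1) + datetime.timedelta(
            days=random.randrange(20000))
        amount = random.randrange(0, 10**9) / 100  # fits DECIMAL(9,2)
        out.write(f"{i}|{name}|{city}|{born.isoformat()}|{amount:.2f}|\n")
```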
I've been learning Groovy and Grails recently, and in terms of developer productivity they seem to be light years ahead of other Java solutions (Spring, Struts, EJB, JSF). If I search monster.ca for either Groovy or Grails, zero matches are returned, which suggests Grails isn't doing too well in terms of adoption.
I realise that:
Grails is relatively new and adoption takes time
Success of a technology depends on more than just its technical merits (e.g. marketing dollars)
Search results on monster.ca are at best a very rough proxy for global adoption. It's possible that lots of people are using it, just not in Canada, or that the Canadian companies using it simply aren't hiring at the moment
Are there other reasons why it hasn't been adopted to the extent it seems to "deserve"?
There are probably more people using Grails than you think. Job boards show you which skills people are looking for; Grails is fairly new, and there are not a lot of people experienced with it on the job market.
Grails, and in particular Groovy, is very close to Java. After a few quick lessons in Groovy, a Java developer can quickly feel at home. You can very easily take a vanilla Java developer posting and move that person into a position developing with Grails.
I would say you will see more Groovy/Grails postings in the future as more Java shops adopt these technologies.