Generate unique integer in range using Faker gem [closed] - ruby-on-rails

This seems like it should be simple but I can't find an answer anywhere!
I'm building a sample_data rake task in Rails to populate my database using the Faker gem (though I don't think I need that gem just for generating integers).
Some of the fields need to be an integer within a set range but each must be unique. For instance:
10.times do |a|
  a.special_number = rand(1..10)
end
works well except for the fact that the numbers aren't unique...

Instead of trying to generate a list of unique random numbers, why don't you generate a range of numbers and shuffle that list?
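A minimal sketch of that approach (the model and attribute names here are placeholders for whatever your rake task actually creates):

numbers = (1..10).to_a.shuffle   # each value appears exactly once

10.times do |i|
  record = SampleRecord.new      # hypothetical model
  record.special_number = numbers[i]
  record.save!
end

Because the shuffled array is a permutation of the range, every record gets a distinct number with no retry loop.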

I would suggest using a hash rather than an array, because checking whether a number is already present takes time proportional to array.length for an array but is O(1) for a hash. You can turn the hash keys into an array at the end.
hash = {}
while hash.length < n
  a = rand(max)
  hash[a] = :ok unless hash.key?(a)
end
r = hash.keys
If you test with n = 30000 and max = 500000, the time consumed is very different compared with using an array.
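To see the difference yourself, here is a minimal benchmark sketch using Ruby's standard Benchmark library, with the n and max from above:

require 'benchmark'

n   = 30_000
max = 500_000

Benchmark.bm(6) do |x|
  # Array: include? scans the whole array on every draw.
  x.report('array:') do
    r = []
    while r.length < n
      a = rand(max)
      r << a unless r.include?(a)
    end
  end

  # Hash: key? is a constant-time lookup.
  x.report('hash:') do
    h = {}
    while h.length < n
      a = rand(max)
      h[a] = :ok unless h.key?(a)
    end
    h.keys
  end
end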

Related

How to insert data into my app efficiently? [closed]

I have a lot of data formatted like this:
『 No.1
introduction:
explanation:
Parts1:
Parts2:
..
..
parts8.
No.2
introduction:
explanation:
....
....
....
....
No.100
...
』
I am setting up my app's model to hold the data in an NSMutableDictionary, so that I can look the data up by key.
The problem is that there is a lot of data (over 500 sets). Is there an efficient way to insert the data without tedious typing?
Please help. Thank you! ^_^
Create either a JSON file or a property list file and use the built-in JSON or property list parsing facilities to read the file. Much better than building your own parser.
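For example, a one-off conversion script can produce the JSON file; since that preprocessing step can run in any language, here is a minimal Ruby sketch (the file names, and the assumption that each record starts with a "No.X" line followed by "label: value" lines, come from the sample above):

require 'json'

entries = {}
current = nil

File.foreach('raw_data.txt') do |line|
  line = line.strip
  next if line.empty?
  if line =~ /\ANo\.(\d+)\z/
    current = $1                 # start of a new record
    entries[current] = {}
  elsif current && line.include?(':')
    key, value = line.split(':', 2)
    entries[current][key.strip] = value.strip
  end
end

File.write('data.json', JSON.pretty_generate(entries))

The app can then read data.json at launch with NSJSONSerialization and build the NSMutableDictionary from the result.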
The way I would approach this problem is to open your file in a text editor and change all the colons (:) to pipes (|). Now you have a pipe-delimited file that you can parse.
No.1 introduction| explanation| Parts1| Parts2| parts3| (this is on line 1)
No.2 introduction| explanation| Parts1| Parts2| parts3| (this is on line 2)
Now read this file into a string and go over it line by line to parse it. Get each of the values, put them in an array, and then save the array in your NSDictionary under a key. I will try to search for some sample code. From here you know:
array(0) is - No.1 introduction
array(1) is - explanation
...
Check this post NSString tokenize in Objective-C
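The parsing itself is simple in any language; a minimal sketch of the line-by-line approach (in Ruby again for brevity; in the app itself you would split each line with NSString's componentsSeparatedByString:, as in the post linked above):

dictionary = {}

# One record per line, fields separated by pipes.
File.foreach('data_pipes.txt') do |line|
  fields = line.strip.split('|').map(&:strip)
  next if fields.empty?
  dictionary[fields.first] = fields   # e.g. key "No.1 introduction"
end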

takes this string message as an input and returns a corresponding value in terms of a number [closed]

For instance, 1 is made of 2 dashes, 8 is made of 7 dashes, and so on. Write a function that takes this string message as an input and returns a corresponding value in terms of a number. This number is the count of dashes in the string message.
String has a count method:
"abc--de-f-".count('-') #=> 4
Just get a string with nothing but the dashes from your input string, and then check the length of that string:
dash_string = input_string.gsub(/[^-]/, '')
number = dash_string.length
You might want to subtract 1 from that answer based on your examples, bearing in mind that a string with no dashes would turn into -1 in that case.
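Wrapped up as the function the exercise asks for (a minimal sketch; the method name is arbitrary):

def dash_count(message)
  message.count('-')
end

dash_count('abc--de-f-') #=> 4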

Sort an array/table of words from shortest to longest [closed]

Corona/Lua: how do I sort a table of strings from shortest to longest?
Assuming your table is an indexed table (an array) rather than a keyed one, try:
test = {'123','1234','1245','1','12'}
table.sort(test, function(a,b) return #a<#b end)
for i,v in ipairs(test) do
  print(i, v)
end
The important line here is
table.sort(test, function(a,b) return #a<#b end)
Words will only be sorted by length, and the order within matching lengths will be arbitrary. If you want to sort by additional criteria, extend the comparison function, e.g. to break ties alphabetically:
function(a,b) if #a == #b then return a < b else return #a < #b end end

How to implement query searching in a specific cluster after document clustering? [closed]

I have two clusters, represented by a class which has:
Cluster : class
  DocumentList : List<Document>
  centroidVector : Map<String,Double>
Now the problem is that when a query is searched, it is parsed as a file, made into a Document object, added to the documentIndex, and its index is constructed along with the other documents. I did that because it had to go through the same procedure, i.e. tokenizing, stemming, etc. But now I want to implement query search within the specific cluster the query vector is most similar to, i.e. dot product ~ 0.5 to 1. So I would have to take the dot product between the query vector and each cluster's centroid vector to find that cluster. But I don't know how to implement it, because the index is created in memory and is not stored in the database. I am still in the process of working that out.
Thank you
Clustering is not meant for searching (i.e. indexing etc.). It is an analysis step meant to find possible unknown structure within your data set, not to retrieve information faster.
You can exploit the structure sometimes for faster search, but then you need an index that can make use of this.
Just do an index right away if you want to do similarity search! Then try to improve the index by doing some clustering before.
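For reference, the comparison the question describes is just a sparse dot product between the query vector and each centroid; a minimal language-agnostic sketch (written here in Ruby, with hypothetical names) of picking the most similar cluster:

# query and each centroid are sparse term => weight maps.
def dot(a, b)
  a, b = b, a if b.length < a.length   # iterate over the smaller map
  a.sum { |term, weight| weight * (b[term] || 0.0) }
end

# Search only the cluster whose centroid is most similar to the query.
best = centroids.max_by { |centroid| dot(centroid, query) }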

How to make text file (or other documents') parser? [closed]

I have the following task: to fill a spell-check dictionary (a simple txt file), I need a parser which should parse a text file (or another type of document), extract each word, and then create a text file with a simple list of words like this:
adfadf
adfasdfa
adfasfdasdf
adsfadf
...
etc
What scripting language and library would you suggest? If possible, please give an example of the code (especially for extracting each word). Thanks!
What you want is not a parser, but just a tokenizer. This can be done in any language with a bunch of regular expressions, but I do recommend Python with NLTK:
>>> from nltk.tokenize import word_tokenize
>>> word_tokenize('Hello, world!')
['Hello', ',', 'world', '!']
Generally, just about any NLP toolkit will include a tokenizer, so there's no need to reinvent the wheel; tokenizing isn't hard, but it involves writing a lot of heuristics to handle all the exceptions such as abbreviations, acronyms, etc.
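If you would rather stay with plain regular expressions, here is a minimal Ruby sketch (the file names are placeholders) that produces the sorted, de-duplicated word list the question asks for:

# Extract alphabetic runs, lowercase them, de-duplicate, and sort.
text  = File.read('input.txt')
words = text.scan(/[[:alpha:]]+/).map(&:downcase).uniq.sort

File.write('wordlist.txt', words.join("\n"))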
