Rails Form Field for Selecting a State

I'm creating a payment form where users have to enter their state. In my users model I have a state attribute, and I have a states array as follows:
states = %w(AL AK AZ AR CA CO CT DC DE FL GA HI ID IL IN IA KS KY LA ME MD MA MI MN MO MS
MT NE NV NH NJ NM NY NC ND OH OK OR PA RI SC SD TN TX UT VA VT WA WI WV WY)
I want to create a drop-down menu where users can select a state. I read the documentation, but I'm confused about how to implement it. Would something like this work?
<%= select_tag(:state, options_for_select(states)) %>
The output should look like this:
AL
AK
...
WY
And you can select each of the options.

While I am not always a fan of just grabbing and installing gems, I have found the Better State Select gem handy in these scenarios, mostly because I can't be bothered to keep arrays of states in multiple apps :)!
Better State Select
I ran your code just fine in one of my apps, so I can't see an issue with what you have. I only wanted to offer the Better State Select suggestion because I've found it an easier way to manage things like states (and as we expand into Canada, it already has their provinces, etc.).
Anyway, your code seems fine and workable to me as is!
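If you later want the dropdown bound to the model rather than a bare select_tag, a minimal sketch might look like this (assuming a @user instance and the states array from the question; the surrounding form markup is illustrative, not from the original post):
<%= form_for @user do |f| %>
  <%# drop-down bound to the user's state attribute; preselects any saved value %>
  <%= f.select :state, options_for_select(states, @user.state) %>
  <%= f.submit "Save" %>
<% end %>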

Related

R - how to exclude pennystocks from environment before calculating adjusted stock returns

Within my current research I'm trying to find out how big the impact of ad-hoc sentiment on daily stock returns is.
The calculations worked quite well and the results are plausible.
The calculations so far, using the quantmod package and Yahoo financial data, look like this:
library(quantmod)
library(data.table)

# Symbols is the vector of ticker symbols; myenviron holds the downloaded series
myenviron <- new.env()
getSymbols(c("^CDAXX", Symbols), env = myenviron, src = "yahoo",
           from = as.Date("2007-01-02"), to = as.Date("2016-12-30"))
# discrete returns on the adjusted closes of every series in the environment
Returns <- eapply(myenviron, function(s) ROC(Ad(s), type = "discrete"))
ReturnsDF <- as.data.table(do.call(merge.xts, Returns))
# adjust column names
colnames(ReturnsDF) <- gsub(".Adjusted", "", colnames(ReturnsDF))
However, to make it more robust against the noisy influence of penny-stock data, I wonder how it's possible to exclude stocks that at any point in the period fall below a certain value x, let's say 1€.
I guess the best thing would be to exclude them before calculating the returns and merging the xts objects, or even better, before downloading them with the getSymbols command.
Does anybody have an idea how this could best work? Thanks in advance.
Try this:
build a price frame of the adjusted closing prices of your symbols (I use the PF function of the quantmod add-on package qmao, which has lots of other useful functions for this type of analysis: install.packages("qmao", repos="http://R-Forge.R-project.org"))
check by column whether any price is below your minimum trigger price
select only the columns which have no closes below the trigger price
To stay more flexible I would suggest taking a sub-period, let's say no price below 5 during the last 21 trading days. The toy example below may illustrate my point.
I use AAPL, FB and MSFT as the symbol universe.
> symbols <- c('AAPL','MSFT','FB')
> getSymbols(symbols, from='2018-02-01')
[1] "AAPL" "MSFT" "FB"
> prices <- PF(symbols, silent = TRUE)
> prices
AAPL MSFT FB
2018-02-01 167.0987 93.81929 193.09
2018-02-02 159.8483 91.35088 190.28
2018-02-05 155.8546 87.58855 181.26
2018-02-06 162.3680 90.90299 185.31
2018-02-07 158.8922 89.19102 180.18
2018-02-08 154.5200 84.61253 171.58
2018-02-09 156.4100 87.76771 176.11
2018-02-12 162.7100 88.71327 176.41
2018-02-13 164.3400 89.41000 173.15
2018-02-14 167.3700 90.81000 179.52
2018-02-15 172.9900 92.66000 179.96
2018-02-16 172.4300 92.00000 177.36
2018-02-20 171.8500 92.72000 176.01
2018-02-21 171.0700 91.49000 177.91
2018-02-22 172.5000 91.73000 178.99
2018-02-23 175.5000 94.06000 183.29
2018-02-26 178.9700 95.42000 184.93
2018-02-27 178.3900 94.20000 181.46
2018-02-28 178.1200 93.77000 178.32
2018-03-01 175.0000 92.85000 175.94
2018-03-02 176.2100 93.05000 176.62
Let's assume you would like any instrument that traded below 175.40 during the last 6 trading days to be excluded from your analysis :-) .
As you can see, that should exclude AAPL and MSFT.
apply and the base function any, applied(!) to a 6-day subset of prices, give us exactly what we want. Showing the last 3 days of prices, excluding the instruments which did not meet our condition:
> tail(prices[,apply(tail(prices),2, function(x) any(x < 175.4)) == FALSE],3)
FB
2018-02-28 178.32
2018-03-01 175.94
2018-03-02 176.62
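Applied to the original question, the same pattern might look like the following sketch. It assumes prices holds the adjusted closes for the full 2007-2016 window; the 1€ trigger is just the question's example value:
# keep only instruments that never closed below 1 over the whole period
keep <- apply(prices, 2, function(x) all(x >= 1, na.rm = TRUE))
clean_prices <- prices[, keep]
# then compute discrete returns on the filtered set only
Returns <- ROC(clean_prices, type = "discrete")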

Retrieving the authors with the most published papers in Neo4j

I have the following script in Neo4j:
CREATE (PaperA:Paper {title:'User Experience of Mobile Augmented Reality: A Review of Studies'})
CREATE (Irshad:Autor {name:'S. Irshad'})
CREATE (Rambli:Autor {name:'D. Rohaya Bt Awang Rambli'})
CREATE (PaperB:Paper {title:'Quality of Experience in the Multimedia Internet of Things: definition and practical use cases'})
CREATE (Floris:Autor {name:'A. Floris'})
CREATE (Atzori:Autor {name:'L. Atzori'})
CREATE (PaperC:Paper {title:'What Changes from Ubiquitous Computing to Internet of Things in Interaction Evaluation?'})
CREATE (Andrade:Autor {name:'Andrade, R. M.'})
CREATE (Carvalho:Autor {name:'Carvalho, R. M.'})
CREATE (deAraújo:Autor {name:'de Araújo, I. L.'})
CREATE (Oliveira:Autor {name:'Oliveira, K. M.'})
CREATE (Maia:Autor {name:'Maia, M. E'})
CREATE (PaperD:Paper {title:'A QoE-aware Approach for Smart Home Energy Management'})
CREATE (Meloni:Autor {name:'Meloni, A'})
CREATE (Pilloni:Autor {name:'Pilloni, V.'})
// PaperH is referenced below but was never created in the original script;
// it is created here without properties so the statement runs
CREATE (PaperH:Paper)
CREATE (Irshad)-[:IS_AUTHOR]->(PaperA), (Rambli)-[:IS_AUTHOR]->(PaperA),
       (Floris)-[:IS_AUTHOR]->(PaperB), (Floris)-[:IS_AUTHOR]->(PaperD),
       (Floris)-[:IS_AUTHOR]->(PaperH), (Atzori)-[:IS_AUTHOR]->(PaperB),
       (Atzori)-[:IS_AUTHOR]->(PaperD), (Atzori)-[:IS_AUTHOR]->(PaperH),
       (Meloni)-[:IS_AUTHOR]->(PaperD), (Pilloni)-[:IS_AUTHOR]->(PaperD),
       (Andrade)-[:IS_AUTHOR]->(PaperC), (Carvalho)-[:IS_AUTHOR]->(PaperC),
       (deAraújo)-[:IS_AUTHOR]->(PaperC), (Oliveira)-[:IS_AUTHOR]->(PaperC),
       (Maia)-[:IS_AUTHOR]->(PaperC)
I'd like a query that returns the author(s) with the most published papers, which in this case are Floris and Atzori. I am using Neo4j version 3.0.8.
Thank you very much.
EDIT:
Edited to correctly answer the requirements of the question:
// First step: find the greatest number of publications by any author
MATCH (author:Autor)-[r:IS_AUTHOR]->(:Paper)
WITH author, count(r) AS cnt
ORDER BY cnt DESC LIMIT 1
WITH cnt AS maxCount
// Second step: return every author whose publication count equals maxCount
MATCH (a:Autor)-[r:IS_AUTHOR]->(:Paper)
WITH a, maxCount, count(r) AS cnt
WHERE cnt = maxCount
RETURN a
The output will be:
╒════════════════════╕
│"a" │
╞════════════════════╡
│{"name":"A. Floris"}│
├────────────────────┤
│{"name":"L. Atzori"}│
└────────────────────┘
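And if you also want the publication counts in the output, a small untested variation of the same two-step query:
MATCH (author:Autor)-[r:IS_AUTHOR]->(:Paper)
WITH author, count(r) AS cnt
ORDER BY cnt DESC LIMIT 1
WITH cnt AS maxCount
MATCH (a:Autor)-[r:IS_AUTHOR]->(:Paper)
WITH a, maxCount, count(r) AS cnt
WHERE cnt = maxCount
RETURN a.name AS author, cnt AS papers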

Firebase using queryEqualTo() two times

I have the following data structure in my database:
I do understand how to query for questions in one language like de:
let ref = FIRDatabase.database().reference().child("madeGames")
ref.queryOrderedByChild("de").queryEqualToValue(true).observeSingleEventOfType(FIRDataEventType.Value) { (snap: FIRDataSnapshot) in
print("Data:", snap.value)
}
Problem:
It seems impossible for me to get only the games that are in de AND en from the server.
Of course I know I could first download all the de questions and then loop through them offline, but that doesn't seem very efficient to me.
Does anyone know how to do this more efficiently?
One possible option is to change your structure by combining the children properties into a single property, like this:
made_games
  -jjjjsd903
    de_en: true
    de_fr: true
  -j090iQ0ss
    de: true
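Querying the combined property then needs only a single queryEqualToValue; a sketch in the same pre-Firebase-4 Swift style as the question:
let ref = FIRDatabase.database().reference().child("madeGames")
// matches only games flagged as available in both de and en
ref.queryOrderedByChild("de_en").queryEqualToValue(true).observeSingleEventOfType(FIRDataEventType.Value) { (snap: FIRDataSnapshot) in
    print("de+en games:", snap.value)
}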
If you have a lot of languages you could also do this:
made_games
  -jjjjsd903
    language_available: 0 // just de
  -j090iQ0ss
    language_available: 3 // de_en
languages_available
  0: de
  1: en
  2: fr
  3: de_en
  4: de_en_fr
Edit - and another option I just thought of... go binary!
00000001 = en
00000010 = fr
00000100 = de
00001000 = du
00010000 = ge
Then to get any combination of languages, perform a bitwise OR operation on the two (or more) you want.
So to get en_de, it's 00000101. To get en_du_ge it would be 00011001. If you have 8 languages, they can be represented by 8 bits, i.e. 256 combinations; 11 languages would give 2^11 = 2048 combinations.
Using this technique you could avoid storing en_fr etc. entirely and just search for the bitwised value.
You could store the decimal versions of the binary numbers in Firebase; the binary is shown here just for clarity.
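Going further, newer Swift versions can model these flags with an OptionSet; a minimal sketch mirroring the answer's bit assignments (the "languages" child key is hypothetical, not from the question):
struct Languages: OptionSet {
    let rawValue: Int
    static let en = Languages(rawValue: 1 << 0) // 00000001
    static let fr = Languages(rawValue: 1 << 1) // 00000010
    static let de = Languages(rawValue: 1 << 2) // 00000100
    static let du = Languages(rawValue: 1 << 3) // 00001000
    static let ge = Languages(rawValue: 1 << 4) // 00010000
}

let wanted: Languages = [.en, .de] // rawValue 5, i.e. 00000101
// query games stored with exactly this language combination
ref.queryOrderedByChild("languages").queryEqualToValue(wanted.rawValue)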

Data Element Type(DET) in Function Point Analysis?

I am studying function point analysis from Alvin Alexander's website:
http://alvinalexander.com/FunctionPoints/
In his example he calculates DETs from a GUI screen, but I cannot understand how he is counting. For example, according to him at http://alvinalexander.com/FunctionPoints/node26.shtml (end of page), the DET count of Create Project is five, while there are only three input fields. The same goes for the other screens. Can anyone help me? I'm stuck here.
A DET (Data Element Type) isn't just an input field: it's any piece of information recognizable by the user that crosses the application boundary. Usually, every input field on the screen is indeed a DET, but not always. I'm not going to get into that now, though, since in this particular case all the input fields are indeed DETs. Let's just talk about those 2 DETs that seem unaccounted for.
You should count 3 DETs for the 3 input fields (Project Name, Project Type and Project Description), and also 1 DET for the act of clicking on the Save button. Note that even if there were multiple ways to save the project (clicking on the Save button, pressing Enter etc) you would still count only 1 DET.
As for the fifth DET, I'm assuming the author is counting 1 DET for any messages the application is capable of showing in the process of creating a new project (confirmation message, any error messages, warnings etc). Again, you should only count 1 DET no matter how many possible messages there are. And I said I'm assuming because, while it is correct to count 1 DET for the capability of showing messages (it is, after all, information recognizable by the user that crosses the application boundary), he should have explicitly mentioned at least one message, especially since it's a teaching example.
A DET is basically a count of the controls/fields, error messages, and buttons/hrefs on a UI screen, for transaction functions:
- 1 DET for each control/field.
- 1 DET for all error messages together.
- 1 DET for all buttons/hrefs together.
E.g., 1 text field = 1 DET
1 label = 1 DET
1 radio button group = 1 DET
2 buttons (Submit & Cancel) = 1 DET
Total: 4 DETs.

splitting space delimited entries into new columns in R

I am coding a survey that outputs a .csv file. Within this .csv I have some entries that are space-delimited, which represent multi-select questions (i.e. questions with more than one response). In the end I want to parse these space-delimited entries into their own columns and create headers for them so I know where they came from.
For example I may start with this (note that the multiselect columns have an _M after them):
Q1, Q2_M, Q3, Q4_M
6, 1 2 88, 3, 3 5 99
6, , 3, 1 2
and I want to go to this:
Q1, Q2_M_1, Q2_M_2, Q2_M_88, Q3, Q4_M_1, Q4_M_2, Q4_M_3, Q4_M_5, Q4_M_99
6, 1, 1, 1, 3, 0, 0, 1, 1, 1
6, , , , 3, 1, 1, 0, 0, 0
I imagine this is a relatively common issue to deal with, but I have not been able to find it in the R section. Any ideas how to do this in R after importing the .csv? My general thoughts (which often lead to inefficient programs) are that I can:
(1) pull column numbers that have the special suffix with grep()
(2) loop through (or use an apply) each of the entries in these columns and determine the levels of responses and then create columns accordingly
(3) loop through (or use an apply) and place indicators in appropriate columns to indicate presence of selection
I appreciate any help and please let me know if this is not clear.
I agree with ran2 and aL3Xa that you probably want to change the format of your data to have a different column for each possible response. However, if munging your dataset into a better format proves problematic, it is possible to do what you asked.
process_multichoice <- function(x) lapply(strsplit(x, " "), as.numeric)
q2 <- c("1 2 3 NA 4", "2 5")
processed_q2 <- process_multichoice(q2)
processed_q2
[[1]]
[1]  1  2  3 NA  4

[[2]]
[1] 2 5
The reason different columns for different responses are suggested is that it is still quite unpleasant to retrieve any statistics from the data in this form. You can, however, do things like:
# number of responses given
sapply(processed_q2, length)
# frequency of each response
table(unlist(processed_q2), useNA = "ifany")
EDIT: One more piece of advice. Keep the code that processes your data separate from the code that analyses it. If you create any graphs, keep the code for creating them separate again. I've been down the road of mixing things together, and it isn't pretty. (Especially when you come back to the code six months later.)
I am not entirely sure what you are trying to do or what your reasons are for coding it like this. Thus my advice is more general, so just feel free to clarify and I will try to give a more concrete response.
1) You say that you are coding the survey on your own, which is great because it means you have influence over your .csv file. I would NEVER use different kinds of separation in the same .csv file. Just do the naming from the very beginning, like you suggested in your second block.
Otherwise you might get into trouble with checkboxes, for example. Let's say someone checks 3 out of 5 possible answers, and the next person only checks 1 (i.e. "don't know"). Now it will be much harder to create a spreadsheet (data.frame) type of results view, as opposed to having an empty field (which turns out to be an NA in R) that only needs to be recoded.
2) Another important question is whether you intend to do a panel survey (i.e. a longitudinal study asking the same participants over and over again). That (among many other things) would be a good reason to think about saving your data to a MySQL database instead of a .csv. RMySQL can connect directly to the database and access its tables and, more importantly, its VIEWS.
Views really help with survey data since you can rearrange the data in different views, conditional on many different needs.
3) Besides all the personal opinion and experience, here's some (less biased) literature to get you started:
Complex Surveys: A Guide to Analysis Using R (Wiley Series in Survey Methodology)
The book is comparatively simple and leaves out panel surveys, but it gives a lot of R code and examples, which should be a practical start.
To prevent re-inventing the wheel you might want to check out LimeSurvey, a pretty decent (not speaking of the templates :) ) tool for survey conductors. Besides, the TYPO3 CMS extensions pbsurvey and ke_questionnaire should work well too (I have only tested pbsurvey).
Multiple-choice items should always be coded as separate variables. That is, if you have 5 alternatives and multiple choice, you should code them as i1, i2, i3, i4, i5, i.e. each one a binary variable (0/1). I see that you have the values 3 5 99 for the Q4_M variable in the first example. Does that mean you have 99 alternatives in an item? Ouch...
First you should go ahead and create separate variables for each alternative in a multiple-choice item. That is, do:
# note that I follow your example with Q4_M variable
dtf_ins <- as.data.frame(matrix(0, nrow = nrow(<initial dataframe>), ncol = 99))
# name vars appropriately
names(dtf_ins) <- paste("Q4_M_", 1:99, sep = "")
Now you have a data.frame of 0s, so what you need to do is get 1s into the appropriate positions (this is a bit cumbersome); a function will do the job...
# first change the spaces to commas and convert the character variable to a numeric one
y <- paste("c(", gsub(" ", ", ", x), ")", sep = "")
z <- eval(parse(text = y))
# now you assign 1 according to the indexes in z
dtf_ins[1, z] <- 1
And that's pretty much it... Basically, you should reconsider how you create the data.frame with the _M variables, so you can write a function that does this insertion automatically. Avoid for loops!
Or, even better, create a matrix of logicals and just do dtf[m] <- 1, where dtf is your multiple-choice data.frame and m is the matrix of logicals (see the sketch below).
I would like to help you more on this one, but I'm recuperating after a looong night! =) Hope that I've helped a bit! =)
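To illustrate that last logical-matrix idea, a small sketch; it reuses b, the list of numeric response vectors from the earlier snippet, and builds everything else from it (column names follow the question's Q4_M example):
# observed alternatives across all respondents
alts <- sort(unique(unlist(b)))
# logical matrix: one row per respondent, one column per alternative
m <- t(vapply(b, function(resp) alts %in% resp, logical(length(alts))))
dtf <- as.data.frame(matrix(0, nrow = length(b), ncol = length(alts)))
names(dtf) <- paste("Q4_M_", alts, sep = "")
dtf[m] <- 1  # flip the selected alternatives to 1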
Thanks for all the responses. I agree with most of you that this format is kind of silly, but it is what I have to work with (the survey is coded and going into use next week). This is what I came up with from all the responses. I am sure it is not the most elegant or efficient way to do it, but I think it should work.
colnums <- grep("_M", colnames(dat))
responses <- nrow(dat)
for (i in colnums) {
  vec <- as.character(dat[, i])                # turn into character vector
  b <- lapply(strsplit(vec, " "), as.numeric)  # split up and turn into numeric
  vals <- sort(unique(unlist(b)))              # which values were used
  newcolnames <- paste(colnames(dat)[i], "_", vals, sep = "")  # column names
  e <- matrix(nrow = responses, ncol = length(vals))  # new matrix for indicators
  colnames(e) <- newcolnames
  # next loop looks for responses and puts indicators in the correct places
  # (the inner loop uses j so it does not shadow the outer i)
  for (j in 1:responses) {
    e[j, ] <- ifelse(vals %in% b[[j]], 1, 0)
  }
  dat <- cbind(dat, e)
}
Suggestions for improvement are welcome.
