What analysis would need to be conducted for a 0-100 scale in SPSS?

Most guidance on scale analysis in SPSS that I can find focuses solely on Likert scales. My study didn't use one, however, and instead used a 0-100 scale. Which analysis is best suited for this?
Any help would be massively appreciated.

Related

Sample size in k-means clustering with 3D data

I want to run an experiment in which I calculate people's scores on 3 variables (features). It is an unsupervised learning experiment, meaning that I need to use exploratory methods like k-means to find clusters in the data. Yet I don't know how to define a suitable sample size for this experiment.
I have had about 50 participants so far, but I am not sure whether that is enough or I need more data.
I would appreciate any help in determining the number of participants I am going to need.

Specific amount of training data required for fine-tuning Resnet-50 model

For decent generalization, how many images per class are needed for fine-tuning the ResNet-50 model for ASL hand-sign classification (24 classes)? I have around 600 images per class and the model is overfitting very badly.
I can't give you a number, but I can give you a method to find it yourself. The technique is plotting a graph called a "learning curve", where the x-axis is the number of training samples and the y-axis is the score. You start at 1 training sample and increase to 600. You plot two curves: the training error and the test error. You can then see how much influence more data, without any other change, will have on the result.
More details and the referenced figure are in my master's thesis, section 2.5.4.
In that example you can see that, up to about 20 training samples, each new example improves the test score a lot (the green curve drops steeply). After that, just throwing more data at the problem will not help much.
The curve will look different in your case, but the principle should be the same.
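If it helps, here is a minimal sketch of producing such a curve with scikit-learn's learning_curve; the LogisticRegression model and the digits dataset are only placeholders, so substitute your own fine-tuned model (or the features it produces) and your ASL images:

    # Sketch of a learning curve: error vs. number of training samples.
    # The model and dataset below are placeholders, not the original setup.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import learning_curve

    X, y = load_digits(return_X_y=True)
    clf = LogisticRegression(max_iter=1000)

    sizes, train_scores, test_scores = learning_curve(
        clf, X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 8))

    plt.plot(sizes, 1 - train_scores.mean(axis=1), label="training error")
    plt.plot(sizes, 1 - test_scores.mean(axis=1), label="test error")
    plt.xlabel("number of training samples")
    plt.ylabel("error")
    plt.legend()
    plt.show()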
Other analysis
Look at chapters 2.5 and 2.6 of my master's thesis. I especially recommend having a look at the confusion matrix and confusion matrix ordering. This will give you an idea of which classes are confused with each other. Maybe the classes are just inherently difficult to distinguish? Maybe one can add more features? Maybe there are labeling errors? Have a look at chapter 2.5 for more of those "maybes".
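For the confusion matrix part, a small sketch with scikit-learn (y_true and y_pred are placeholders for your held-out labels and your model's predictions over the 24 ASL classes):

    from sklearn.metrics import confusion_matrix

    # Placeholder labels and predictions; replace with your validation data.
    y_true = ["A", "B", "C", "A", "B", "C"]
    y_pred = ["A", "C", "C", "A", "B", "B"]

    cm = confusion_matrix(y_true, y_pred, labels=["A", "B", "C"])
    print(cm)  # rows = true class, columns = predicted class;
               # large off-diagonal entries show which classes get confused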

DBSCAN in one dimension, finding core points

One of the practice quiz questions (not homework) asks how many core points there are in a set of one-dimensional points, given eps and MinPts. I thought DBSCAN was supposed to be used only for two-dimensional data. Any guidance is much appreciated.
Yes, DBSCAN can be used on 1-dimensional data.
It's not particularly smart (just use kernel density estimation instead), but why do you think this would not work?
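As a quick sketch (the eps and min_samples values are just examples), scikit-learn's DBSCAN will happily take 1-dimensional data as long as each point is passed as a one-element row:

    import numpy as np
    from sklearn.cluster import DBSCAN

    points = np.array([1.0, 1.5, 2.0, 8.0, 8.2, 8.4, 25.0]).reshape(-1, 1)

    db = DBSCAN(eps=1.0, min_samples=3).fit(points)
    print("labels:", db.labels_)                          # -1 marks noise
    print("core points:", points[db.core_sample_indices_].ravel())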

Gaussian Weighting for point re-distribution

I am working with some points that are packed very closely together, so forming clusters among them is proving very difficult. Since I am new to this concept, I read in a paper about Gaussian weighting the points randomly, or rather resampling using Gaussian weights.
My question is: how are Gaussian weights applied to the data points? Is it the actual normal distribution, where I have to compute the mean, variance, and standard deviation and then randomly sample, or are there other ways to do it? I am confused about this concept.
Can I get some hints on the concept, please?
I think you should look at this book:
http://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738
There are good chapters on modeling point distributions.
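One common reading of "Gaussian weighting" (only one interpretation; the paper you read may mean something else) is to fit a mean and covariance to the points, weight each point by the corresponding normal density, and resample with probability proportional to those weights, optionally adding a little Gaussian jitter. A sketch under those assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    points = rng.normal(loc=0.0, scale=0.2, size=(200, 2))  # placeholder: a very compact cluster

    # Gaussian weights: fit mean/covariance, then weight each point by the
    # (unnormalized) multivariate normal density at that point.
    mean = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    inv_cov = np.linalg.inv(cov)
    diff = points - mean
    weights = np.exp(-0.5 * np.sum(diff @ inv_cov * diff, axis=1))
    weights /= weights.sum()

    # Resample points in proportion to their Gaussian weight, adding a small
    # Gaussian jitter to spread out the very compact points (illustrative).
    idx = rng.choice(len(points), size=len(points), p=weights)
    resampled = points[idx] + rng.normal(scale=0.05, size=points.shape)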

Q-learning - Defining states and rewards

I need some help with solving a problem that uses the Q-learning algorithm.
Problem description:
I have a rocket simulator where the rocket takes random paths and also crashes sometimes. The rocket has 3 different engines that can each be either on or off. Depending on which engine(s) are activated, the rocket flies in different directions.
Functions for turning the engines on/off are available.
The task:
Construct a Q-learning controller that will turn the rocket to face up at all times.
A sensor that reads the angle of the rocket is available as input.
My solution:
I have 16 states, one for each discretized reading of the angle sensor.
I also have the following actions:
all engines off
left engine on
right engine on
middle engine on
left and right on
left and middle on
right and middle on
And the following rewards:
Angle = 0, Reward = 100
All other angles, reward = 0
Question:
Now to the question: is this a good choice of rewards and states? Can I improve my solution? Is it better to have rewards for other angles as well?
Thanks in advance
16 states x 7 actions is a very small problem.
Rewards for other angles will help you learn faster, but can create odd behaviors later depending on your dynamics.
If you don't have momentum, you may decrease the number of states, which will speed up learning and reduce memory usage (which is already tiny). To find the optimal number of states, try decreasing the number of states while analyzing a metric such as reward per timestep over multiple games, or mean error (normalized by starting angle) over multiple games. Some state representations may perform much better than others; if not, choose the one that converges fastest. This should be relatively cheap with your small Q-table.
If you want to learn quickly, you may also try Q(λ) or some other modified reinforcement learning algorithm to make fuller use of temporal-difference learning.
Edit: Depending on your dynamics this problem may not actually be suitable as a Markov Decision Process. For example, you may need to include the current rotation rate.
Try putting smaller rewards on the states next to the desired state. This will get your agent to learn to face up more quickly.
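To make the setup concrete, here is a minimal tabular Q-learning sketch with 16 discretized angle states and the 7 engine actions, including the shaped rewards for the states next to "facing up". The step dynamics, reward values, and hyperparameters are illustrative stand-ins, not the original simulator:

    import numpy as np

    N_STATES, N_ACTIONS = 16, 7              # 16 angle bins x 7 engine combinations
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # assumed hyperparameters

    Q = np.zeros((N_STATES, N_ACTIONS))

    def reward(state):
        # State 0 = facing up; neighbouring states get smaller shaped rewards.
        if state == 0:
            return 100.0
        if state in (1, N_STATES - 1):
            return 10.0
        return 0.0

    def step(state, action):
        # Toy stand-in for the rocket simulator: each engine combination
        # nudges the discretized angle by a fixed amount (purely illustrative).
        drift = [0, -1, +1, 0, 0, -1, +1][action]
        return (state + drift) % N_STATES

    state = np.random.randint(N_STATES)
    for t in range(10000):
        if np.random.rand() < EPSILON:
            action = np.random.randint(N_ACTIONS)   # explore
        else:
            action = int(np.argmax(Q[state]))       # exploit
        next_state = step(state, action)
        # Standard Q-learning update
        Q[state, action] += ALPHA * (reward(next_state)
                                     + GAMMA * Q[next_state].max()
                                     - Q[state, action])
        state = next_state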
