How do I modify Part 4 of the g.shell.GGIR function in R if I do not have a sleep log?

I am trying to analyse physical activity data collected from a GENEActiv accelerometer using the GGIR shell function in R (code copied from ). The Part 4 section of the code compares the sleep data generated by the accelerometer against a self-reported sleep log, and the "loglocation" argument expects the CSV filename of that manual sleep log. However, my study does not include a self-reported sleep log, as I am mainly interested in physical activity levels rather than sleep analysis. How do I modify the code to exclude the sleep log so that it runs successfully and produces the necessary visual output? Please find the R code for the physical activity analysis below.
library(GGIR)
g.shell.GGIR(#=======================================
# INPUT NEEDED:
mode=c(1,2,3,4,5),
datadir="C:/hadiza/mydata",
outputdir="D:/myresults",
f0=1, f1=2,
#-------------------------------
# Part 1:
#-------------------------------
# Key functions: reading file, auto-calibration, and extracting features
do.enmo = TRUE, do.anglez=TRUE,
chunksize=1, printsummary=TRUE,
#-------------------------------
# Part 2:
#-------------------------------
strategy = 2, ndayswindow=7,
hrs.del.start = 0, hrs.del.end = 0,
maxdur = 9, includedaycrit = 16,
winhr = c(5,10),
qlevels = c(c(1380/1440),c(1410/1440)),
qwindow=c(0,24),
ilevels = c(seq(0,400,by=50),8000),
mvpathreshold =c(100,120),
bout.metric = 4,
closedbout=FALSE,
#-------------------------------
# Part 3:
#-------------------------------
# Key functions: Sleep detection
timethreshold= c(5), anglethreshold=5,
ignorenonwear = TRUE,
#-------------------------------
# Part 4:
#-------------------------------
# Key functions: Integrating sleep log (if available) with sleep detection
# storing day and person specific summaries of sleep
excludefirstlast = TRUE,
includenightcrit = 16,
def.noc.sleep = c(),
loglocation= "C:/mydata/sleeplog.csv",
outliers.only = TRUE,
criterror = 4,
relyonsleeplog = FALSE,
sleeplogidnum = TRUE,
colid=1,
coln1=2,
do.visual = TRUE,
nnights = 9,
#-------------------------------
# Part 5:
# Key functions: Merging physical activity with sleep analyses
#-------------------------------
threshold.lig = c(30), threshold.mod = c(100), threshold.vig = c(400),
boutcriter = 0.8, boutcriter.in = 0.9, boutcriter.lig = 0.8,
boutcriter.mvpa = 0.8, boutdur.in = c(1,10,30), boutdur.lig = c(1,10),
boutdur.mvpa = c(1), timewindow = c("WW"),
#-----------------------------------
# Report generation
#-------------------------------
# Key functions: Generating reports based on meta-data
do.report=c(2,4,5),
visualreport=TRUE, dofirstpage = TRUE,
viewingwindow=1)

Commenting out this part, loglocation= "C:/mydata/sleeplog.csv", worked for me.
#-------------------------------
# Part 4:
#-------------------------------
# Key functions: Integrating sleep log (if available) with sleep detection
# storing day and person specific summaries of sleep
excludefirstlast = TRUE,
includenightcrit = 16,
def.noc.sleep = c(),
#loglocation= "C:/mydata/sleeplog.csv",
outliers.only = TRUE,
criterror = 4,
relyonsleeplog = FALSE,
sleeplogidnum = TRUE,
colid=1,
coln1=2,
do.visual = TRUE,
nnights = 9,
This is the same solution shown in this video: https://youtu.be/S8YPTrYNWdU?t=219
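For reference, my understanding of the GGIR documentation (treat this as an assumption rather than a confirmed recipe) is that loglocation defaults to c(), which means "no sleep log", and Part 4 then relies solely on the accelerometer-based sleep detection from Part 3. So the Part 4 block can also be written without any of the sleep-log-specific arguments:

# Part 4 without a sleep log -- a sketch based on my reading of the GGIR docs;
# values other than loglocation are unchanged from the call above.
excludefirstlast = TRUE,
includenightcrit = 16,
def.noc.sleep = c(),
loglocation = c(),   # explicitly "no sleep log"; same effect as omitting the argument
do.visual = TRUE,
# outliers.only, criterror, relyonsleeplog, sleeplogidnum, colid, coln1 and nnights
# only apply when a sleep log is supplied, so they can be dropped.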

Related

How to split LaserScan data from a lidar into sections and view them in RViz

I was trying to split the laser scan range data into subcategories and would like to publish each category on a different laser topic.
To be more specific: the script should take one topic as input (/scan) and publish three topics: scan1, scan2, and scan3.
Is there a way to split the laser scan, publish the parts back, and view them in RViz?
I tried the following:
#!/usr/bin/env python3
# (imports and the publisher/message setup are implied by the original snippet)
import rospy
from sensor_msgs.msg import LaserScan

regions = {}
l = LaserScan()
left_fork = rospy.Publisher('/scan1', LaserScan, queue_size=10)

def callback(laser):
    current_time = rospy.Time.now()
    # split the incoming ranges into three regions
    regions["l_f_fork"] = laser.ranges[0:288]
    regions["l_f_s"] = laser.ranges[288:576]
    regions["stand"] = laser.ranges[576:864]
    l.header.stamp = current_time
    l.header.frame_id = 'laser'
    l.angle_min = 0
    l.angle_max = 1.57
    l.angle_increment = 0
    l.time_increment = 0
    l.range_min = 0.0
    l.range_max = 100.0
    l.ranges = regions["l_f_fork"]
    l.intensities = [0]
    left_fork.publish(l)
    # l.ranges = regions["l_f_s"]
    # left_side.publish(l)
    # l.ranges = regions["stand"]
    # left_side.publish(l)
    rospy.loginfo("publishing new info")
I can see the different topics in RViz, but they all lie on the same line.
Tutorial
The following code splits the LaserScan data into three equal sections. Each published scan keeps the original header, angle, and range metadata and pads the indices outside its own section with inf, so every point keeps its real angle and the three topics render in their correct positions in RViz instead of collapsing onto one line:
#!/usr/bin/env python3
"""
Program to split a LaserScan into three parts.
"""
import rospy
from sensor_msgs.msg import LaserScan


class LaserScanSplit():
    """
    Class for splitting a LaserScan into three parts.
    """
    def __init__(self):
        self.update_rate = 50
        self.freq = 1./self.update_rate

        # Initialize variables
        self.scan_data = []

        # Subscribers
        rospy.Subscriber("/scan", LaserScan, self.lidar_callback)

        # Publishers
        self.pub1 = rospy.Publisher('/scan1', LaserScan, queue_size=10)
        self.pub2 = rospy.Publisher('/scan2', LaserScan, queue_size=10)
        self.pub3 = rospy.Publisher('/scan3', LaserScan, queue_size=10)

        # Timers
        rospy.Timer(rospy.Duration(self.freq), self.laserscan_split_update)

    def lidar_callback(self, msg):
        """
        Callback function for the scan topic
        """
        self.scan_data = msg

    def laserscan_split_update(self, event):
        """
        Function to update the split scan topics
        """
        if not self.scan_data:  # no scan received yet
            return

        scan1 = LaserScan()
        scan2 = LaserScan()
        scan3 = LaserScan()

        scan1.header = self.scan_data.header
        scan2.header = self.scan_data.header
        scan3.header = self.scan_data.header

        scan1.angle_min = self.scan_data.angle_min
        scan2.angle_min = self.scan_data.angle_min
        scan3.angle_min = self.scan_data.angle_min

        scan1.angle_max = self.scan_data.angle_max
        scan2.angle_max = self.scan_data.angle_max
        scan3.angle_max = self.scan_data.angle_max

        scan1.angle_increment = self.scan_data.angle_increment
        scan2.angle_increment = self.scan_data.angle_increment
        scan3.angle_increment = self.scan_data.angle_increment

        scan1.time_increment = self.scan_data.time_increment
        scan2.time_increment = self.scan_data.time_increment
        scan3.time_increment = self.scan_data.time_increment

        scan1.scan_time = self.scan_data.scan_time
        scan2.scan_time = self.scan_data.scan_time
        scan3.scan_time = self.scan_data.scan_time

        scan1.range_min = self.scan_data.range_min
        scan2.range_min = self.scan_data.range_min
        scan3.range_min = self.scan_data.range_min

        scan1.range_max = self.scan_data.range_max
        scan2.range_max = self.scan_data.range_max
        scan3.range_max = self.scan_data.range_max

        # LiDAR range: start each output scan as all-inf, same length as the input
        n = len(self.scan_data.ranges)
        scan1.ranges = [float('inf')] * n
        scan2.ranges = [float('inf')] * n
        scan3.ranges = [float('inf')] * n

        # Splitting block [three equal parts]
        scan1.ranges[0 : n//3] = self.scan_data.ranges[0 : n//3]
        scan2.ranges[n//3 : 2*n//3] = self.scan_data.ranges[n//3 : 2*n//3]
        scan3.ranges[2*n//3 : n] = self.scan_data.ranges[2*n//3 : n]

        # Publish the LaserScans
        self.pub1.publish(scan1)
        self.pub2.publish(scan2)
        self.pub3.publish(scan3)

    def kill_node(self):
        """
        Function to kill the ROS node
        """
        rospy.signal_shutdown("Done")


if __name__ == '__main__':
    rospy.init_node('laserscan_split_node')
    LaserScanSplit()
    rospy.spin()
The following are screenshots of the robot and obstacles in the environment in Gazebo and RViz:
References:
ROS1 Python Boilerplate
atreus

Does creating a data loader inside another data loader in pytorch slow things down (during meta-learning)?

I was trying to create a data loader for meta-learning, but found that my code is extremely slow and I can't figure out why. I am doing this because meta-learning works on a set of data sets, so I need a data loader for each of them.
I am wondering if it's because I have a collate function generating data loaders.
Here is the collate function that generates data loaders (and receives ALL the data sets):
import random

import torch
from torch.utils import data


class GetMetaBatch_NK_WayClassTask:
    def __init__(self, meta_batch_size, n_classes, k_shot, k_eval, shuffle=True, pin_memory=True, original=False, flatten=True):
        self.meta_batch_size = meta_batch_size
        self.n_classes = n_classes
        self.k_shot = k_shot
        self.k_eval = k_eval
        self.shuffle = shuffle
        self.pin_memory = pin_memory
        self.original = original
        self.flatten = flatten

    def __call__(self, all_datasets, verbose=False):
        NUM_WORKERS = 0  # no need to change
        get_data_loader = lambda data_set: iter(data.DataLoader(data_set, batch_size=self.k_shot+self.k_eval, shuffle=self.shuffle, num_workers=NUM_WORKERS, pin_memory=self.pin_memory))
        #assert( len(meta_set) == self.meta_batch_size*self.n_classes )
        # generate M N,K-way classification tasks
        batch_spt_x, batch_spt_y, batch_qry_x, batch_qry_y = [], [], [], []
        for m in range(self.meta_batch_size):
            n_indices = random.sample(range(0, len(all_datasets)), self.n_classes)
            # create N-way, K-shot task instance
            spt_x, spt_y, qry_x, qry_y = [], [], [], []
            for i, n in enumerate(n_indices):
                data_set_n = all_datasets[n]
                dataset_loader_n = get_data_loader(data_set_n)  # get data set for class n
                data_x_n, data_y_n = next(dataset_loader_n)  # get all data from current class
                spt_x_n, qry_x_n = data_x_n[:self.k_shot], data_x_n[self.k_shot:]  # [K, CHW], [K_eval, CHW]
                # get labels
                if self.original:
                    #spt_y_n = torch.tensor([n]).repeat(self.k_shot)
                    #qry_y_n = torch.tensor([n]).repeat(self.k_eval)
                    spt_y_n, qry_y_n = data_y_n[:self.k_shot], data_y_n[self.k_shot:]
                else:
                    spt_y_n = torch.tensor([i]).repeat(self.k_shot)
                    qry_y_n = torch.tensor([i]).repeat(self.k_eval)
                # form K-shot task for current label n
                spt_x.append(spt_x_n); spt_y.append(spt_y_n)  # list of length N with tensors of size [K, CHW]
                qry_x.append(qry_x_n); qry_y.append(qry_y_n)  # list of length N with tensors of size [K, CHW]
            # form N-way, K-shot task with tensor size [N, K, CHW]
            spt_x, spt_y, qry_x, qry_y = torch.stack(spt_x), torch.stack(spt_y), torch.stack(qry_x), torch.stack(qry_y)
            # form N-way, K-shot task with tensor size [N*K, CHW]
            if verbose:
                print(f'spt_x.size() = {spt_x.size()}')
                print(f'spt_y.size() = {spt_y.size()}')
                print(f'qry_x.size() = {qry_x.size()}')
                print(f'qry_y.size() = {qry_y.size()}')
                print()
            if self.flatten:
                CHW = qry_x.shape[-3:]
                spt_x, spt_y, qry_x, qry_y = spt_x.reshape(-1, *CHW), spt_y.reshape(-1), qry_x.reshape(-1, *CHW), qry_y.reshape(-1)
            ## append the N-way, K-shot task to the meta-batch of tasks
            batch_spt_x.append(spt_x); batch_spt_y.append(spt_y)
            batch_qry_x.append(qry_x); batch_qry_y.append(qry_y)
        ## get a meta-set of M N-way, K-shot classification tasks [M, K*N, C, H, W]
        batch_spt_x, batch_spt_y, batch_qry_x, batch_qry_y = torch.stack(batch_spt_x), torch.stack(batch_spt_y), torch.stack(batch_qry_x), torch.stack(batch_qry_y)
        return batch_spt_x, batch_spt_y, batch_qry_x, batch_qry_y
That collate function is then passed to another DataLoader here:
def get_meta_set_loader(meta_set, meta_batch_size, n_episodes, n_classes, k_shot, k_eval, pin_mem=True, n_workers=4):
    """[summary]

    Args:
        meta_set ([type]): the meta-set
        meta_batch_size ([type]): [description]
        n_classes ([type]): [description]
        pin_mem (bool, optional): [Since returning cuda tensors from dataloaders is not recommended due to cuda subtleties with multithreading, set pin=True instead for fast transferring of the data to cuda]. Defaults to True.
        n_workers (int, optional): [description]. Defaults to 4.

    Returns:
        [type]: [description]
    """
    if n_classes > len(meta_set):
        raise ValueError(f'Do you really want an N larger than the number of classes in the meta-set? n_classes, len(meta_set) = {n_classes, len(meta_set)}')
    collator_nk_way = GetMetaBatch_NK_WayClassTask(meta_batch_size, n_classes, k_shot, k_eval)
    episodic_sampler = EpisodicSampler(total_classes=len(meta_set), n_episodes=n_episodes)
    episodic_metaloader = data.DataLoader(
        meta_set,
        num_workers=n_workers,
        pin_memory=pin_mem,  # to make moving to cuda more efficient
        collate_fn=collator_nk_way,  # does the collecting to return M N,K-shot tasks
        batch_sampler=episodic_sampler  # for keeping track of the episode
    )
    return episodic_metaloader
(will generate a smaller example)
related:
https://discuss.pytorch.org/t/what-does-runtimeerror-cuda-driver-error-initialization-error-mean/87505/6
Conceptually, PyTorch DataLoaders should have no problem being fast even if one is nested inside the other. One way to debug your issue is to use the line_profiler package to get a better idea of where the slowdown happens.
If you cannot resolve the issue after using line_profiler, please update your question with the profiler's output to help us understand what might be wrong. Let the profiler run for a while so it gathers enough statistics about the execution of your dataloader. The @profile decorator works for both free functions and methods, so it should work for your dataloader functions too.
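As a minimal sketch of that workflow (assuming line_profiler is installed via pip install line_profiler; the function body here is a stand-in, not your actual collate function):

# profile_sketch.py -- a minimal line_profiler sketch.
# The @profile decorator is injected by kernprof at runtime, so it needs no import.
import time


@profile  # noqa: F821 -- provided by kernprof
def slow_pipeline():
    total = 0
    for _ in range(100):
        time.sleep(0.001)   # stand-in for "create a DataLoader and pull a batch"
        total += 1          # stand-in for the cheap bookkeeping around it
    return total


if __name__ == '__main__':
    slow_pipeline()

# Run with:
#   kernprof -l -v profile_sketch.py
# The report shows time spent on each line, which tells you whether DataLoader
# construction, next(), or the tensor stacking dominates in your real collate function.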

Is it possible to use parallel assignment for keys in Ruby hash?

Given that 0 is an immutable object, Ruby allows chained assignment like this:
sample[:alpha] = sample[:beta] = sample[:gamma] = 0
But is there any other simple way to do this? Something like:
sample[alpha:, beta:, gamma: => 0]
or:
sample[:alpha, :beta, :gamma] => 0, 0, 0
Firstly, this does not work as you expect:
sample = {}
sample[:alpha], sample[:beta], sample[:gamma] = 0
This will result in:
sample == { alpha: 0, beta: nil, gamma: nil }
To get the desired result, you could instead use parallel assignment:
sample[:alpha], sample[:beta], sample[:gamma] = 0, 0, 0
Or, loop through the keys to assign each one separately:
[:alpha, :beta, :gamma].each { |key| sample[key] = 0 }
Or, merge the original hash with your new attributes:
sample.merge!(alpha: 0, beta: 0, gamma: 0)
Depending on what you're actually trying to do here, you may wish to consider giving your hash a default value. For example:
sample = Hash.new(0)
puts sample[:alpha] # => 0
sample[:beta] += 1 # Valid since this defaults to 0, not nil
puts sample # => {:beta=>1}
What about this one?
sample.merge!(alpha: 0, beta: 0, gamma: 0)
There is nothing like what you've described, but you can run a loop over all the keys and assign the value:
keys = [:alpha, :beta, :gamma, :theta, ...]
keys.each {|key| sample[key] = 0}
This reduces the number of extra keystrokes, and it's very easy to change the keys array.

Increment by one in a Ruby string interpolation loop

I am trying to make the data variable increment by 1 every time it gets collected into the barcodes array.
def generate_barcode
  batch_number = params[:batch_number].to_i
  business_partner_id = params[:business_partner].to_i
  current_business_partner = BusinessPartner.find(business_partner_id)
  serial_number = "00000000"
  final_value = current_business_partner.partner_code << serial_number
  barcodes = batch_number.times.collect {
    data = "#{final_value + '1'}"
    Barby::EAN13.new(data) # currently collecting the same object batch_number of times, with the data value always being the same
  }
end
How do I increment data by 1 each time the collect happens?
Ruby has a handy helper method for "incrementing" strings, called succ/succ!. Observe:
serial_number = "00000000"
15.times { puts serial_number.succ! }
# >> 00000001
# >> 00000002
# >> 00000003
# >> 00000004
# >> 00000005
# >> 00000006
# >> 00000007
# >> 00000008
# >> 00000009
# >> 00000010
# >> 00000011
# >> 00000012
# >> 00000013
# >> 00000014
# >> 00000015
Meditate on this:
5.times { |i| i } # => 5
5.times.collect{ |i| i * 2 } # => [0, 2, 4, 6, 8]
The documentation for times shows it passes a value each time it iterates.
Iterates the given block int times, passing in values from zero to int - 1.
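Putting succ! together with the question's collect block, a minimal sketch could look like this (Barby::EAN13, batch_number and current_business_partner are assumed from the question; treat it as an illustration, not a tested fix):

# Sketch: build one barcode per iteration, incrementing the serial each time.
serial_number = "00000000"

barcodes = batch_number.times.collect do
  data = current_business_partner.partner_code + serial_number.succ!
  Barby::EAN13.new(data) # a fresh object per iteration, each with a new serial
end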
Your question isn't easy to understand because it isn't clear what you want to accomplish. From what I understood, you want to increment "00000000" to "00000001", and so on.
For that you could use String#rjust:
number_of_digits = 8
serial_number = 0 # use an integer!
barcodes = batch_number.times.collect {
  serial_number += 1
  data = serial_number.to_s.rjust(number_of_digits, '0')
  # Do what you want with data
}
Best wishes!
Update: Sergio Tulentsev's answer is very handy too and even more elegant for this problem, but if you want more control over the serial number, this would be the way.

How to make an HTTP request in a thread and keep the call order?

I want to write a function which calls a remote service each second. To do this, I have something like this:
stop = false
text = ""
while stop == false
  r = RestClient.post 'http://example.com'
  text += r.to_str
  sleep 1
  # after a treatment, the value of stop will be set to true
end
The problem is that the program is blocked until the HTTP request is done, and I don't want that. I could put this code in a subprocess, but I want to keep the results in call order. For example, I might have these requests:
time | answer
--------------
  10 | Happy
 100 | New
  10 | Year
The second request takes longer than the third, so with threads I will get the third result before the second, and the value of the variable text will be HappyYearNew when I want HappyNewYear.
Is there a way to have multiple processes and keep the original order? It's a very small program, and I don't want to have to install a server like Redis if I can avoid it.
Using hash
Since Ruby 1.9, hash key (insertion) order is guaranteed. A simple solution here is to take advantage of that by putting your requests in a hash and storing each result under its key:
requests = {
  foo: [ 'a', 1 ],
  bar: [ 'b', 5 ],
  foobar: [ 'c', 2 ]
}

requests.each do |name, config|
  Thread.new( name, config ) do |name, config|
    sleep config[1]
    requests[ name ] = config[0]
  end
end

sleep 6

requests.each do |name, result|
  puts "#{name} : #{result}"
end
Produces:
foo : a
bar : b
foobar : c
Thus, to match your provided code:
stop, i, text, requests = false, 0, '', {}

until stop
  i += 1
  requests[ i ] = nil
  Thread.new( i ) do |key|
    r = RestClient.post 'http://example.com'
    requests[ key ] = r.to_str
    sleep 1
    # after a treatment, the value of stop will be set to true
  end
end

# you will have to join the threads here
text = requests.values.join
Using array
If the last example works for you, you can simplify it further using an array. Array order is of course guaranteed too, and you can take advantage of the dynamic size of Ruby arrays:
a = []
a[5] = 1
p a
=> [nil, nil, nil, nil, nil, 1]
So the previous example can be rewritten as:
stop, i, text, requests = false, 0, '', []

until stop
  i += 1
  Thread.new( i ) do |key|
    r = RestClient.post 'http://example.com'
    requests[ key ] = r.to_str
    sleep 1
    # after a treatment, the value of stop will be set to true
  end
end

# you will have to join the threads here
text = requests.join
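For completeness, here is a small self-contained sketch of the "join the threads" step those comments refer to (the sleep stands in for the HTTP call from the question):

# Hedged sketch of joining the worker threads before reading the results.
threads = []
requests = []

5.times do |i|
  threads << Thread.new(i) do |key|
    sleep(rand)                    # simulate a request of varying duration
    requests[key] = "part#{key} "  # store the result at its original position
  end
end

threads.each(&:join)  # wait for every thread before reading the results
text = requests.join  # array index order preserves the call order
puts text             # => "part0 part1 part2 part3 part4 "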
Here's a pretty simple solution with threads. I keep results and rmutex as thread-local variables on the calling thread (that is what Thread.current[...] stores); you could make them globals, instance variables, or a lot of other things:
stop = false
Thread.current[:results] = {}
Thread.current[:rmutex] = Mutex.new
counter = 0

while !stop
  Thread.new(counter, Thread.current) do |idex, parent|
    r = RestClient.post 'http://example.com'
    parent[:rmutex].lock
    parent[:results][idex] = r.to_str
    parent[:rmutex].unlock
  end
  counter += 1 # advance the index so each request keeps its own slot
  sleep 1
end
text = Thread.current[:results].to_a.sort_by {|o| o[0]}.map {|o| o[1]}.join
This works by recording the index at which each request was issued, storing each thread's result under that index, and then joining everything after sorting by index at the end.
