How to structure data for an exercise app [closed] - ios

This seems like it should be simple to create, but I've been turning over how exactly to set it up. Basically I want to create a way for users to save and update their exercise data as they use the app. They should also be able to go back to any day and review their exercise results for that day. I am currently considering using property lists for this, but is there a better way?
Example:
04-28-12
-- Exercise: Bench Press; Set 1: 115 lbs x 12 reps; Set 2: 125 lbs x 10 reps; Set 3: 130 lbs x 8 reps
-- Exercise: Squats; Set 1: 215 lbs x 10 reps; Set 2: ...etc.
I really appreciate any input that you guys have on this!
Thanks.

One way to do this is to use an NSArray that contains other arrays and NSDictionaries, all of which can be stored in a property list.
Then your structure would be like:
People (NSArray)
  --> Exercises (NSArray)
    --> Sets (NSArray)
      --> 0 (NSDictionary)
        --> Weight (key): 115 (value)
        --> Repetitions (key): 12 (value)
        --> Type (key): 1 (value, an integer mapped to an exercise type like Squats)
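To make that nesting concrete, here is a minimal Swift sketch of the same layout (the original answer predates Swift and names nothing beyond the keys above, so the file name and the use of PropertyListSerialization here are illustrative, not the answer's own code):

import Foundation

// Each set is a dictionary of plist-compatible values; "Type" is the integer
// mapped to an exercise type, as described above.
let benchPressSets: [[String: Any]] = [
    ["Type": 1, "Weight": 115, "Repetitions": 12],
    ["Type": 1, "Weight": 125, "Repetitions": 10],
    ["Type": 1, "Weight": 130, "Repetitions": 8]
]

// Sets nest inside an Exercises array, which nests inside a People array.
let exercises = [benchPressSets]
let people = [exercises]

// Because every element is a plist type, the whole structure can be written
// straight out to disk (error handling omitted for brevity).
let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("workouts.plist")
if let data = try? PropertyListSerialization.data(fromPropertyList: people,
                                                  format: .xml,
                                                  options: 0) {
    try? data.write(to: url)
}

Reading it back in is the reverse: load the Data and call PropertyListSerialization.propertyList(from:options:format:) to get the nested arrays again.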
This can be pretty cumbersome though when you get a lot of data in this one property list, so you might want to consider using Core Data instead (http://www.raywenderlich.com/934/core-data-on-ios-5-tutorial-getting-started)

Related

How to profile CPU used by method/function in Swift [duplicate]

This question already has answers here:
How to measure total time spent in a function?
(5 answers)
Closed 6 years ago.
The app I'm developing does some intensive processing and I'd like to understand where the time is being used. The Time Profiler in Instruments shows the tree of calls but I can't see how to get the information I need.
My app has a structure like this:
A
  B
    B1
      E1
    B2
    B3
      E1
  C
    C1
    C2
      E1
    C3
  D
    D1
    D2
      E1
Now, method E1 is called from a number of places and I'd like to see how much CPU it is using. However, from the profiler output I can only see the time taken in E1 down each branch of the tree. Is there any way of getting a report by method/function regardless of where it is called from? e.g. Sum the total time spent in E1?
Thanks,
Julian
The link posted by naglerrr is the answer - How to measure total time spent in a function?
You need to find the function in the profiler tree, right click and select "Focus on Calls Made By". Check the linked answer for more information.
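As a complement, and not part of either answer, a quick programmatic cross-check of that number is sometimes handy. This sketch just accumulates wall-clock time (not CPU time, which is what the Time Profiler samples) spent in one function across all of its call sites; E1 and the variable name are placeholders:

import Foundation

// Hypothetical helper, not from the linked answer: sum the wall-clock time
// spent in one function, regardless of which caller invoked it.
var totalTimeInE1: TimeInterval = 0

func e1() {
    let start = Date()
    defer { totalTimeInE1 += Date().timeIntervalSince(start) }
    // ... the real work of E1 goes here ...
}

// After a run, inspect the accumulated total, e.g.
// print("Total time spent in E1: \(totalTimeInE1) s")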

Add data series to highlight cases on a box plot (Excel, SPSS or R)

First-time user of this forum, so guidance on how to provide enough information is very much appreciated. I am trying to replicate a presentation of data used in the medical education field, which should help improve the quality of examiners' marking of trainees in a clinical exam. What I would like to communicate is similar to what the College of General Practitioners already publishes for one of its own exams; please see www.gp10.com.au/slides/thursday/slide29.pdf for the kind of presentation I am after.

I have access to Excel, SPSS and R, so help with any of these would be great. As a first attempt I have used SPSS and created three variables: a dummy variable, a "station score" (ST) and a "global rating score" (GRS). The station score is a value between 0 and 10 (non-integers) and sits on the y-axis, like the "Candidate Final Marks" in the PDF. The x-axis is the global rating scale, an integer from 1 to 6, represented in the PDF as the "Overall Performance Scale". When I use SPSS's boxplot I get the plot depicted below.
[SPSS boxplot of station score (y-axis) by global rating score (x-axis)]
What I would like to do is overlay a single examiner's own scoring of a number of examinees. For example, one examiner (examiner A) provided the following marks:
ST: 5.53,7.38,7.38,7.44,6.81
GRS: 3,4,4,5,3
(this is transposed into two columns).
Whether in SPSS, Excel or R, how would I be able to overlay the box-and-whisker plots with the individual data points provided by the one examiner? This would help show the degree to which an examiner's marking style is in concordance with the expected distribution of ST scores across GRS. Any help greatly appreciated!

I like Excel graphics, but I have found it very difficult to add the examiner's data as a separate series: somehow the examiner's GRS scores do not line up nicely on the x-axis. I am very new to R but also very interested in it, and would put in the time if a good result is achievable there. I understand JMP may be preferable for this type of thing, but access to it may not be possible.

Stata: replace dummy for multiple observations

Title might be misleading.
I have a longitudinal dataset with a dummy variable (dummy1) indicating whether a condition is met in a certain year, for a given category. I want this event to be taken into account for the next twenty years as well. Hence, I want to create a new dummy (dummy2) that takes the value 1 for the observation where dummy1 is 1 and for the 19 observations that follow it. For example, if dummy1 is 1 for a category in 1980, dummy2 should be 1 from 1980 through 1999 for that category.
I was trying to create a loop with lag operators, but failed to get it to work so far.
Even code that failed might be close to a good solution. Not giving code that failed means that we can't explain your mistakes. Furthermore, questions focusing on how to use software to do something are widely considered marginal or off-topic on SO.
One approach is
bysort category (year) : gen previous = year if dummy1
by category : replace previous = previous[_n-1] if missing(previous)
gen byte dummy2 = (year - previous) < 20
The trick here is to create a variable holding the last year that the dummy (indicator) was 1, and the trick in that is spelled out in How can I replace missing values with previous or following nonmissing values or within sequences?
Note that this works independently of:
- whether the panel identifier is numeric (it could be a string here, on the evidence given)
- whether you have tsset or xtset the data
- what happens before the first event; for such years, previous is born missing and remains missing (however, in general, watch for problems with code at the ends of time series).
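For concreteness, here is a toy illustration (invented values, not from the original post) of how those variables evolve for one category when the event occurs in 1980:

year  dummy1  previous  dummy2
1978    0        .        0
1979    0        .        0
1980    1      1980       1
1981    0      1980       1
 ...   ...      ...      ...
1999    0      1980       1
2000    0      1980       0

Before the event, previous is missing, and a missing value never counts as less than 20, so dummy2 is 0 there; from 2000 onward, year - previous reaches 20 and dummy2 drops back to 0.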

Examples of very concise Forth applications? [closed]

In this talk, Chuck Moore (the creator of Forth) makes some very bold, sweeping claims, such as:
"Every application that I have seen that I didn't code has ten times as much code in it as it needs"
"About a thousand instructions seems about right to me to do about anything"
"If you are writing code that needs [local variables] you are writing non-optimal code. Don't use local variables."
I'm trying to figure out whether Mr. Moore is a) an absolutely brilliant genius or b) a crackpot. But that's a subjective question, and I'm not looking for the answer to that question here. What I am looking for are examples of complex, real-world problems which can be solved in "1000 instructions or less" using Forth, and source code demonstrating how to do so. An example showing just one non-trivial piece of a real-world system would be fine, but no "toy" code samples which could be duplicated in 5 or 10 lines of another high-level language, please.
If you have written real-world systems in Forth, using just a small amount of source code, but aren't at liberty to show the source (because it is proprietary), I'd still like to hear about it.
You need to understand that Chuck Moore is a little different from you and me. He was trained in an era when a mainframe computer might have 16 KB of core memory or its equivalent, and he was able to do quite a lot with the computers of the time. Perhaps the biggest success for Forth, outside of his OKAD-II chip design package (that's not a typo), was a multi-user, multi-tasking Forth system at NRAO that concurrently controlled data acquisition instruments and data analysis/visualization software, on a fairly modest computer barely able to compile Fortran source code on its own.
What he calls an "application", we might consider to be a "component" of a larger, more nebulous thing called an application. More generally, it's good to keep in mind that one Moore "application" is more or less equivalent to one "view" in an MVC triad today. To keep memory consumption small, he relies heavily on overlays and just-in-time compilation techniques. Switching from one program interface to another typically involves recompiling the entire application/view from source. This happens so fast you don't know it's happening, kind of like how Android today recompiles Dalvik code to native ARM code every time you activate an application.
At any given time, OKAD-II has no more than about 2.5 KB of its code loaded into memory and running. However, the on-disk source for OKAD-II is considerably larger than 2.5 KB. Though, it is still significantly more compact than its nearest competitor, SPICE.
I'm often curious about Chuck Moore's views and find his never-ending striving for simplicity fascinating. So, in MythBusters fashion, I put his claims to the test by trying to design my own system as minimally as I could make it. I'm happy to report he's very nearly spot-on, on both the hardware and the software side. Case in point: during last September's Silicon Valley Forth Interest Group (SVFIG) meeting, I used my Kestrel-2 itself to generate the video for the slide deck. That required writing a slide presentation program for it, which took 4 KB of memory for the code and 4 KB for the slide deck data structures. With an average of six bytes per Forth word (for reasons I won't go into here), the figure of "about 1000 (Forth) instructions" for the application is just about spot on with what Chuck Moore estimates his own "applications" to be.
If you're interested in speaking to real-world Forth coders (or people who have coded Forth in the past, as increasingly seems to be the case), and you happen to be in the Bay Area, the Silicon Valley Forth Interest Group still meets every fourth Saturday of the month, except for November and December, when it's the third Saturday. If you're interested in attending a meeting, even if only to interview Forth coders and get a taste of what "real-world" Forth is like, check us out on meetup.com and tag along. We also now stream our meetings on YouTube, but we're not very good at it; we're abusing inappropriate hardware and software to do our bidding, since we have a budget of zero for this sort of thing. :)
Forth is indeed amazingly compact! Words without formal parameters (and zero-operand instructions at the hardware level, e.g. the GA144) save a lot. The other main contributor to its compactness is the absolutely relentless factoring of redundant code that the calling convention and concatenative nature afford.
I don't know if it qualifies as a non-toy example, but the Turtle Graphics implementation for the Fignition (in FigForth) is just 307 bytes compiled and fits in a single source block! This includes the fixed-point trig and all the normal turtle commands. It isn't the most readable Forth, because it was squeezed into a single source block with single-character names and such:
\ 8.8 fixed point sine table lookup
-2 var n F9F2 , E9DD , CEBD , AA95 , 7F67 , 4E34 , 1A c,
: s abs 3C mod dup 1D > if 3C swap - then dup E > if
-1 1E rot - else 1 swap then n + c# 1+ * ;
0 var x 0 var y 0 var a
0 var q 0 var w
: c 9380 C80 0 fill ; \ clear screen
: k >r 50 + 8 << r> ! ;
: m dup q # * x +! w # * y +! ; \ move n-pixels (without drawing)
: g y k x k ; \ go to x,y coord
: h dup a ! dup s w ! 2D + s q ! ; \ heading
: f >r q # x # y # w # r 0 do >r r + >r over + \ forward n-pixels
dup 8 >> r 8 >> plot r> r> loop o y ! x ! o r> o ;
: e key 0 vmode cls ; \ end
: b 1 vmode 1 pen c 0 0 g 0 h ; \ begin
: t a # + h ; \ turn n-degrees
Using it is extremely concise as well.
: sin 160 0 do i i s 4 / 80 + plot loop ;
: burst 60 0 do 0 0 g i h 110 f loop ;
: squiral -50 50 g 20 0 do 100 f 21 t loop ;
: circle 60 0 do 4 f 1 t loop ;
: spiral 15 0 do circle 4 t loop ;
: star 5 0 do 80 f 24 t loop ;
: stars 3 0 do star 20 t loop ;
: rose 0 50 0 do 2 + dup f 14 t loop ;
: hp 15 0 do 5 f 1 t loop 15 0 do 2 f -1 t loop ;
: petal hp 30 t hp 30 t ;
: flower 15 0 do petal 4 t loop ;
(shameless blog plug: http://blogs.msdn.com/b/ashleyf/archive/2012/02/18/turtle-graphics-on-the-fignition.aspx)
What is not well understood today is the way Forth anticipated an approach to coding that became popular early in the 21st century in association with agile methods. Specifically:
Forth introduced the notion of tiny method coding -- the use of small objects with small methods. You could make a case for Smalltalk and Lisp here too, but in the late 1980s both Smalltalk and Lisp practice tended toward larger and more complex methods. Forth always embraced very small methods, if only because it encouraged doing so much on the stack.
Forth, even more than Lisp, popularized the notion that the interpreter was just a little software pattern, not a dissertation-sized brick. Got a problem that's hard to code? The Forth solution had to be, "write a little language", because that's what Forth programming was.
Forth was very much a product of memory and time constraints, of an era where computers were incredibly tiny and terribly slow. It was a beautiful design that lets you build an operating system and a compiler in a matchbox.
An example of just how compact Forth can be is Samuel Falvo's screencast Over the Shoulder 1 - Text Preprocessing in Forth (1 h 06 min 25 s, 101 MB, MPEG-1 format; at least VLC can play it). Alternative source ("Links and Resources" -> "Videos").
Forth Inc's polyFORTH/32 VAX/VMS assembler definitions took some 8 blocks of source. A VAX assembler, in 8K of source. Commented. I'm still amazed, 30 years later.
I can't verify it at the moment, but I'm guessing the instruction count to parse those CODE definitions would be in the low hundreds. And when I said it "took some 8 blocks", it still does: the application using that nucleus is live and in production, 30 years later.

Huffman coding proof on an 8-bit sequence [closed]

A data file contains a sequence of 8-bit characters such that all 256 characters are about equally common: the maximum character frequency is less than twice the minimum character frequency. Prove that Huffman coding in this case is no more efficient than using an ordinary 8-bit fixed-length code.
The proof is direct. Assume w.l.o.g. that the characters are sorted in ascending order of frequency. We know that f(1) and f(2) will be joined first into f'(1); since f(2) >= f(1) and 2*f(1) > f(256), this combined node won't be joined again until after f(256) has been joined with something. By the same token, f(3) and f(4) will be joined into f'(2), with f'(2) >= f'(1) > f(256). Continuing in this way, we get f(253) and f(254) joined into f'(127) >= ... >= f'(1) > f(256). Finally, f(255) and f(256) are joined into f'(128) >= f'(127) >= ... >= f'(1). We now observe that since f(256) < 2*f(1) <= f'(1) and f'(128) <= 2*f(256), we have f'(128) <= 2*f(256) < 4*f(1) <= 2*f'(1). Ergo, f'(128) < 2*f'(1), the same condition that held for the first round of the Huffman algorithm.
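To make the induction step explicit (this is just a generalization of the chain above, not new machinery): suppose a round begins with 2k node weights $w_1 \le w_2 \le \dots \le w_{2k}$ satisfying $w_{2k} < 2 w_1$. Since $w_1 + w_2 \ge 2 w_1 > w_{2k}$, no merged node is picked before every node of the current round has been merged, so the round pairs consecutive nodes into $w'_i = w_{2i-1} + w_{2i}$, giving

$$ w'_k = w_{2k-1} + w_{2k} \le 2 w_{2k} < 4 w_1 \le 2 (w_1 + w_2) = 2 w'_1 , $$

i.e. $w'_k < 2 w'_1$, so the same condition holds at the start of the next round with k nodes.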
Since the condition holds on this round, it is straightforward to argue that it will similarly hold on all rounds. Huffman will perform 8 rounds until all nodes are joined to one, the root (128, 64, 32, 16, 8, 4, 2, 1), at which point the algorithm will terminate. Since at each stage each node is joined to another one which has, to that point, received the same treatment by the Huffman algorithm, each branch of the tree will have the same length: 8.
This is somewhat informal, more of a sketch than a proof, really, but it should be more than enough for you to write something more formal.
