I have to draw a DFA that accepts the set of all strings containing 1101 as a substring. I tried one myself and wanted to make sure it's correct, but I can't attach the image as I'm a new user.
Thanks
It's an easy DFA. It requires 5 states.
state 0 (start; no progress toward 1101):
On receiving 1, move from state 0 to state 1.
On receiving 0, stay in state 0.
state 1 (seen 1):
On receiving 1, move from state 1 to state 2.
On receiving 0, move from state 1 to state 0.
state 2 (seen 11):
On receiving 0, move from state 2 to state 3.
On receiving 1, stay in state 2 (the last two symbols are still 11).
state 3 (seen 110):
On receiving 1, move from state 3 to state 4.
On receiving 0, move from state 3 to state 0.
state 4 (accepting; 1101 has been seen):
On receiving 1, stay in state 4.
On receiving 0, stay in state 4.
So as a transition table (state 0 is the start state, state 4 is the only accepting state) it will look like:
State On0 On1
0 0 1
1 0 2
2 3 2
3 0 4
4 4 4
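If it helps to check the construction, here is a small Python sketch (not part of the original answer) that simulates this DFA and compares it against a direct substring test:

# Transition table from the answer: state -> {input symbol: next state}.
TRANSITIONS = {
    0: {"0": 0, "1": 1},
    1: {"0": 0, "1": 2},
    2: {"0": 3, "1": 2},
    3: {"0": 0, "1": 4},
    4: {"0": 4, "1": 4},
}

def accepts(s):
    state = 0                      # state 0 is the start state
    for ch in s:
        state = TRANSITIONS[state][ch]
    return state == 4              # state 4 is the only accepting state

# Cross-check against the naive definition.
for s in ("1101", "111010", "0110", "11011", ""):
    assert accepts(s) == ("1101" in s)

Each state simply remembers the longest prefix of 1101 matched so far, which is why five states suffice.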
I have read some documents, and all of them say that standard CAN has higher priority than extended CAN because the SRR bit is always recessive in an extended frame when both have the same ID, but from my understanding, it depends.
https://copperhilltech.com/blog/controller-area-network-can-bus-tutorial-extended-can-protocol/
To simplify, let's say a message with ID 0x1 (standard CAN) and one with ID 0x1 (extended CAN) are sent simultaneously on the same bus.
The arbitration fields of the standard and extended frames compare like this:
Std CAN: 0 0 0 0 0 0 0 0 0 0 1 0 (the 12th bit is RTR)
Ext CAN: 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 (the 12th and 13th bits are SRR and IDE; the last bit is RTR)
At the 11th bit, the node sending the standard frame transmits 1 (recessive) while the node sending the extended frame transmits 0 (dominant), so the extended frame wins bus access. The standard node switches to listening and stops transmitting, so the SRR and IDE bits are never reached to decide whether the message is extended or standard.
Is my above understanding correct?
Thank you in advance,
Yes. In your example the extended frame wins arbitration because its 29-bit identifier places a dominant bit at position 11, where the standard 11-bit identifier is recessive. Only when the first 11 bits are identical does arbitration reach the SRR bit, and there the standard frame wins, since its dominant RTR bit (or, for a remote frame, its dominant IDE bit) beats the always-recessive SRR. So saying that standard frames have higher priority than extended ones is a simplification.
RTR frames are a bit of an oddball case overall, as the DLC field may hold any value even though there is no data at all in the frame.
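To make the comparison concrete, here is a rough Python sketch (not from the original posts; the helper functions are made up for illustration) that builds the leading bits each node transmits, as laid out above, and replays arbitration, where a dominant 0 beats a recessive 1:

# Leading bits of a standard frame: 11-bit identifier, then RTR.
def std_bits(can_id, rtr=0):
    return [(can_id >> i) & 1 for i in range(10, -1, -1)] + [rtr]

# Leading bits of an extended frame: 11-bit base ID, SRR=1, IDE=1,
# 18-bit ID extension, then RTR.
def ext_bits(can_id, rtr=0):
    base = [(can_id >> i) & 1 for i in range(28, 17, -1)]
    extension = [(can_id >> i) & 1 for i in range(17, -1, -1)]
    return base + [1, 1] + extension + [rtr]

def winner(std, ext):
    for s, e in zip(std, ext):
        if s != e:
            return "standard" if s == 0 else "extended"
    return "still tied"

# The example from the question: both IDs are 0x1.
print(winner(std_bits(0x1), ext_bits(0x1)))                # -> extended
# Same first 11 bits: the standard frame's dominant RTR beats SRR.
print(winner(std_bits(0x1), ext_bits((0x1 << 18) | 0x1)))  # -> standard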
I have a dataset where each ID has visited a website and recorded their risk level, which is coded 0-3. They have then returned to the website at a future date and recorded their risk level again. I want to calculate the difference between each of an ID's risk levels and that ID's first recorded risk level.
For example my dataset looks like this:
ID Timestamp RiskLevel
1 20-Jan-21 2
1 04-Apr-21 2
2 05-Feb-21 1
2 12-Mar-21 2
2 07-May-21 3
3 09-Feb-21 2
3 14-Mar-21 1
3 18-Jun-21 0
And I would like it to look like this:
ID Timestamp RiskLevel DifFromFirstRiskLevel
1 20-Jan-21 2 .
1 04-Apr-21 2 0
2 05-Feb-21 1 .
2 12-Mar-21 2 1
2 07-May-21 3 2
3 09-Feb-21 2 .
3 14-Mar-21 1 -1
3 18-Jun-21 0 -2
What should I do?
One way to approach this is with the strategy in my answer here, but I will use a different approach:
sort cases by ID timestamp.
* Carry the first risk level of each ID forward through its later cases.
compute firstRisk=risklevel.
if $casenum>1 and ID=lag(ID) firstRisk=lag(firstRisk).
execute.
* Compute the difference only from the second case of each ID onwards,
* so the first case of each ID stays system-missing in the new variable.
if ID=lag(ID) DifFromFirstRiskLevel=risklevel-firstRisk.
execute.
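If you want to sanity-check the logic outside SPSS, something like this in Python/pandas (a rough sketch, assuming the column names from the question) computes the same result:

import pandas as pd

df = pd.DataFrame({
    "ID": [1, 1, 2, 2, 2, 3, 3, 3],
    "Timestamp": ["20-Jan-21", "04-Apr-21", "05-Feb-21", "12-Mar-21",
                  "07-May-21", "09-Feb-21", "14-Mar-21", "18-Jun-21"],
    "RiskLevel": [2, 2, 1, 2, 3, 2, 1, 0],
})
df["Timestamp"] = pd.to_datetime(df["Timestamp"], format="%d-%b-%y")
df = df.sort_values(["ID", "Timestamp"])

# Difference from each ID's first recorded risk level.
first = df.groupby("ID")["RiskLevel"].transform("first")
df["DifFromFirstRiskLevel"] = df["RiskLevel"] - first
# Leave the first record of each ID missing, as in the desired output.
df.loc[df.groupby("ID").cumcount() == 0, "DifFromFirstRiskLevel"] = None
print(df)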
I'd like to know whether synchronized looping is supported for AKPlayers whose durations are multiples of one another.
It seems that it is not supported, or, if it is intended to work, that this is a bug. I found a similar report here (How to use the loop if the track was not started from the beginning (with buffering type = .always in AKPlayer)), where I thought I was providing a solution, but after plenty of tests I found that the solution provided does not work either. See the attachment (*).
I planned to record some loops whose durations are equal to or multiples of the smallest loop. First, I found that synchronization failed when trying to start .play for several AKPlayers at the same AVAudioTime start point. After a few attempts, I fixed this by sticking to buffering .always, among other things such as the .prepare method. So, hopefully, that's out of the way...
The problem is that I expect to hear a bunch of loops play synchronously, even if some are 2x or 4x longer in duration...
So, while expecting looping to work for the main requirement, where:
- Loop1 of duration 2.5 [looping]
- Loop2 of duration 2.5 [looping]
- Loop3 of duration 5 [looping]
I noticed that Loop3 behaves badly: its last half repeats a few times. Say the loops are in 4/4; looking at the beat numbers, we'd hear the following:
- Loop1: 1 2 3 4, 1 2 3 4, 1 2 3 4, 1 2 3 4
- Loop2: 1 2 3 4, 1 2 3 4, 1 2 3 4, 1 2 3 4
- Loop3: 1 2 3 4 5 6 7 8, 5 6 7 8, 5 6 7 8
Is this expected to fail? Is looping separate players whose durations are multiples of one another a supported feature?
After a few more tests, I found that this happens after adding a third track. For example:
- Loop1: 1 2 3 4
- Loop2: 1 2 3 4 5 6 7 8
This seems to work fine so far, but now I add a new track:
Loop1: 1 2 3 4
Loop2: 1 2 3 4 5 6 7 8
Loop3: 1 2 3 4
And what I hear is:
Loop1: 1 2 3 4 1 2 3 4 1 2 3 4
Loop2: 1 2 3 4 1 2 3 4 5 6 7 8
Loop3: 1 2 3 4 1 2 3 4 1 2 3 4
I'd try AKClipRecorder, but I just found that I need to declare the length ahead of recording time, which breaks the main requirement :)
(*) Audio file exposing the issue; this test was done with AKWaveTable, but it seems to be the same problem. I'll look into rewriting some code that is easier to share, to see whether it's related to my implementation, but there's the link I shared at the top, where someone else exposes the same problem.
https://drive.google.com/open?id=1zxIJgFFvTwGsve11RFpc-_Z94gEEzql7
I believe I found the problem, and it is related to scheduling the play start time for newer loops.
Before, I'd record a loop and then play it at the currentTime taken from a master player. The problem with that is the startTime the player holds in its state: it ends up at more or less the end-point of the master loop, which is the mid-point of a recorded loop that is twice the length (or another multiple) of the master loop, so the longer loop keeps restarting from there.
To solve this I've scheduled the player items differently, as follows:
// Loop the whole recorded file.
player.startTime = 0
player.endTime = audioFile.duration
// Time left until the master loop (the looper clock) wraps around.
let offsetCurrentTime = ((beatLength * 4.0) - currentTime)
// Schedule playback from the start of this loop at the next wrap.
player.play(at: AVAudioTime.now() + offsetCurrentTime)
The .startTime defines the loop's start point, and I've set the file's duration as the .endTime. Finally, I compute the remaining length of the master bar (the master loop I use as a reference, or looper clock) and pass that as the offset to the play method. This schedules the player to start playing from its startTime, not from the currentTime, which would cause the issues I described before!
To summarize: use the at parameter of the .play method to schedule when the loop starts from its own starting point, NOT from the current playback time.
Consider this data table
NumberOfAccidents MeanDistance
1 5
3 0
0 NA
0 NA
6 1.2
2 0
The first feature is the number of accidents and the second is the average distance of these accidents from a certain point. Obviously, for a record with zero accidents there won't be a value for MeanDistance. However, imputing these missing values is not logical!
MY SOLUTION: I have decided to discretize MeanDistance, with the NAs being their own level (bin) and the rest of the data falling into bins like [0,1), [1,2.5), [2.5,Inf). The final table will look like this:
NumberOfAccidents NAs first_bin sec_bin third_bin
1 0 0 0 1
3 0 1 0 0
0 1 0 0 0
0 1 0 0 0
6 0 0 1 0
2 0 1 0 0
What is your view on these types of missing values that cannot be imputed?
What is your solution to this problem?
It really depends on the domain and what you are trying to predict. Even though your solution is fine, I wouldn't bin the rest of the data as you did. Given that the NumberOfAccidents feature already tells you which MeanDistance values are NA, I would probably just impute 0 for the NA values (for computations) and leave the rest of the data as it is.
Nevertheless, there is no need to limit yourself; just try different approaches and keep the one that boosts your KPI (Key Performance Indicator).
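For illustration, here is a small Python/pandas sketch (not from the original posts) of both options on the table above: imputing 0 versus binning with NA as its own level:

import pandas as pd

df = pd.DataFrame({
    "NumberOfAccidents": [1, 3, 0, 0, 6, 2],
    "MeanDistance": [5, 0, None, None, 1.2, 0],
})

# Option 1 (this answer): impute 0; NumberOfAccidents == 0 already
# flags which rows were NA.
df["MeanDistanceFilled"] = df["MeanDistance"].fillna(0)

# Option 2 (the question): bin MeanDistance, with NA as its own level.
bins = pd.cut(df["MeanDistance"], [0, 1, 2.5, float("inf")], right=False)
df["DistanceBin"] = bins.cat.add_categories("NA").fillna("NA")
dummies = pd.get_dummies(df["DistanceBin"], prefix="bin")
print(pd.concat([df[["NumberOfAccidents"]], dummies], axis=1))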
An MP neuron for NAND can be constructed using the truth table below:
P Q P(and not)Q
1 1 0
1 0 1
0 1 0
0 0 0
The neuron that implements this:
Inputs:
P +2
Q -1
If the threshold is 2,
this gives an output of Y=1 exactly when P=1 and Q=0.
My professor seemed confused and didn't clarify why this isn't correct when it is (to the best of my knowledge). Did he make a mistake, or have I got this wrong?
A solution would be great.
Side note: I have sketched out this neuron but cannot draw on this page (new to SO).
First of all, NAND is not "and not" but "not and"; the truth table is:
P Q NAND(P,Q)
1 1 0
1 0 1
0 1 1
0 0 1
Second of all, there is nothing hard about NAND or your gate. The "only" problematic one is XOR (and XNOR).
P Q XOR(P,Q)
1 1 0
1 0 1
0 1 1
0 0 0
So:
a single perceptron can easily represent both NAND(p,q) = NOT(AND(p,q)) and AND(p, NOT(q)) (which is what you call NAND);
the impossible-to-represent gates are XOR and its negation.
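To make this concrete, here is a small Python sketch (not part of the original answer; the NAND weights are one standard choice, and the AND(p, NOT(q)) weights are the ones from the question) that checks both gates with a threshold neuron:

# McCulloch-Pitts style neuron: fires iff the weighted sum of the
# inputs reaches the threshold.
def neuron(weights, threshold, inputs):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

for p in (0, 1):
    for q in (0, 1):
        # AND(p, NOT(q)): weights +2, -1 and threshold 2, as in the question.
        assert neuron([2, -1], 2, [p, q]) == (1 if p == 1 and q == 0 else 0)
        # NAND(p, q): weights -1, -1 and threshold -1 (one standard choice).
        assert neuron([-1, -1], -1, [p, q]) == (0 if p == 1 and q == 1 else 1)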