I have a program written in Delphi 7 that appears to be experiencing some logic issues. The following line never gives a true value, even when my watch window says it should.
Seq^.step[1] :=
(PlcStart^ and (not Seq^.Step[2])) or
(RetryDelay^.Done and (not Seq^.Step[2])) or
(Seq^.Step[1] and (not Seq^.Step[reset_]));
My watch window shows that (PlcStart^ and (not Seq^.Step[2])) or (RetryDelay^.Done and (not Seq^.Step[2])) or (Seq^.Step[1] and (not Seq^.Step[reset_])) is true, but the value of Seq^.Step[1] never gets set to true.
The real strange part is that I have a number of programs with the exact same line that all appear to be working correctly.
I'm not familiar with Delphi, but I am familiar with boolean logic. If I'm reading this right, you're saying:
(A ∧ ¬B) ∨ (C ∧ ¬B) ∨ (D ∧ ¬E)
In JavaScript that's:
(a && !b) || (c && !b) || (d && !e)
Using http://mustpax.github.io/Truth-Table-Generator/ to generate a truth table and converting "false" to "0" and "true" to "1", we get the truth table:
a b c d e (a & !b) | (c & !b) | (d & !e)
1 1 1 1 1 0
0 1 1 1 1 0
1 0 1 1 1 1
0 0 1 1 1 1
1 1 0 1 1 0
0 1 0 1 1 0
1 0 0 1 1 1
0 0 0 1 1 0
1 1 1 0 1 0
0 1 1 0 1 0
1 0 1 0 1 1
0 0 1 0 1 1
1 1 0 0 1 0
0 1 0 0 1 0
1 0 0 0 1 1
0 0 0 0 1 0
1 1 1 1 0 1
0 1 1 1 0 1
1 0 1 1 0 1
0 0 1 1 0 1
1 1 0 1 0 1
0 1 0 1 0 1
1 0 0 1 0 1
0 0 0 1 0 1
1 1 1 0 0 0
0 1 1 0 0 0
1 0 1 0 0 1
0 0 1 0 0 1
1 1 0 0 0 0
0 1 0 0 0 0
1 0 0 0 0 1
0 0 0 0 0 0
This table may or may not be correct; I haven't verified it by hand. You can go through it and decide for yourself. Anyway, assuming it is correct, you could check the expected output for your given input and verify whether your expectations are correct.
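Alternatively, a minimal Python sketch like this regenerates the table, with the unpacking order chosen so the rows come out in the same order as above (a varying fastest, e slowest):
from itertools import product

# enumerate all 32 assignments; unpacking as (e, d, c, b, a) makes `a` the
# fastest-varying variable, matching the row order of the table above
for e, d, c, b, a in product([1, 0], repeat=5):
    value = (a and not b) or (c and not b) or (d and not e)
    print(a, b, c, d, e, int(bool(value)))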
I'm trying to do the LeetCode puzzle https://leetcode.com/problems/max-area-of-island/, which requires labelling components connected by sides (not corners).
How can I transform something like
0 0 1 0 0
0 0 0 0 0
0 1 1 0 1
0 1 0 0 1
0 1 0 0 1
into
0 0 1 0 0
0 0 0 0 0
0 2 2 0 3
0 2 0 0 3
0 2 0 0 3
I've played with the stencil operator ⌺ and also tried using scan operators, but I'm still not quite there. Can somebody help?
We can start off by enumerating the ones. We do this by applying the function ⍸ (where; but since all the elements are 1s, it is equivalent to 1,2,3,…) @ at the subset masked by ⊢ the bits themselves, i.e. ⍸@⊢:
⍸@⊢m
0 0 1 0 0
0 0 0 0 0
0 2 3 0 4
0 5 0 0 6
0 7 0 0 8
Now we need to flood-fill the lowest number in each component. We do this with repeated application until the fix-point ⍣≡ of processing Moore neighbourhoods ⌺3 3. To get the von Neumann neighbours, we reshape the 9 elements of the Moore neighbourhood into a 4-row, 2-column matrix with 4 2⍴ and use ⊢/ to select the right column. We remove any 0s with 0~⍨, prepend , the original value ⍵[2;2] (even if it is 0), and have ⌊/ select the smallest value:
{⌊/⍵[2;2],0~⍨⊢/4 2⍴⍵}⌺3 3⍣≡⍸@⊢m
0 0 1 0 0
0 0 0 0 0
0 2 2 0 4
0 2 0 0 4
0 2 0 0 4
We map the values to indices by finding ⊢ their indices ⍳⍨ in the unique elements ∘∪ of 0 followed by , the ravelled matrix ,:
(⊢⍳⍨∘∪0,,){⌊/⍵[2;2],0~⍨⊢/4 2⍴⍵}⌺3 3⍣≡⍸@⊢m
1 1 2 1 1
1 1 1 1 1
1 3 3 1 4
1 3 1 1 4
1 3 1 1 4
Finally, we decrement with ¯1+, which adjusts the numbering back to begin at zero:
¯1+(⊢⍳⍨∘∪0,,){⌊/⍵[2;2],0~⍨⊢/4 2⍴⍵}⌺3 3⍣≡⍸@⊢m
0 0 1 0 0
0 0 0 0 0
0 2 2 0 3
0 2 0 0 3
0 2 0 0 3
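For readers who don't speak APL, here is a rough Python sketch of the same three-step pipeline (enumerate the 1s, propagate the minimum over non-zero von Neumann neighbours to a fix point, then renumber densely); it illustrates the idea rather than the APL specifics:
def label_components(m):
    rows, cols = len(m), len(m[0])
    # step 1: replace each 1 by its running count (the ⍸@⊢ step)
    g, n = [row[:] for row in m], 0
    for i in range(rows):
        for j in range(cols):
            if g[i][j]:
                n += 1
                g[i][j] = n
    # step 2: fix point of "minimum of self and non-zero 4-neighbours" (⌺3 3⍣≡)
    changed = True
    while changed:
        changed = False
        for i in range(rows):
            for j in range(cols):
                if g[i][j]:
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < rows and 0 <= nj < cols and 0 < g[ni][nj] < g[i][j]:
                            g[i][j] = g[ni][nj]
                            changed = True
    # step 3: renumber the surviving labels densely, keeping 0 as 0
    # (for this construction, sorted order matches the ⍳⍨∘∪ first-occurrence order)
    rank = {v: k for k, v in enumerate(sorted({0} | {v for row in g for v in row}))}
    return [[rank[v] for v in row] for row in g]

m = [[0, 0, 1, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 1, 1, 0, 1],
     [0, 1, 0, 0, 1],
     [0, 1, 0, 0, 1]]
for row in label_components(m):
    print(*row)   # prints the desired 0/1/2/3 labelling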
Given a lengthy sequence of integers in the range 0-1, I would like to be able to predict the next likely integer.
Example dataset:
1 1 1 0 0 0 0 1 1 0 0 1 0 1 1 0 0 0 1 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 1 0 0 1 0 1 1 0 1 0 1 0 1 0 1 0 0 1 0 0 0 0 1 1 1 1 0 0 0 1 0 0 1 1 0 0 0 1 0 1 1 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0
A quick look at the above perhaps shows some obvious patterns which may be recognised by an ML model.
I do have other features available in the dataset, but I don't think they correlate with the integer result, so the prediction should be based purely on the statistical relevance of the supplied integer dataset.
I'm unsure how to approach this using ML.NET. I have successfully built classification models previously, but those predictions were all made based on multiple features. In this case, if I just supply a 0 or 1, there's no relevant historical sequence to aid the prediction.
How do I train an ML.NET model to return a prediction based on a range of previous data?
Working theory: the above dataset has 100 integers. I could create a class which has 100 properties (Integer0..Integer99), painstakingly map each field, and submit that, but it seems really clunky.
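One common alternative to the 100-property class is a sliding window: turn the raw sequence into (last-N-values, next-value) pairs so an ordinary classifier can be trained on them. Here is a minimal Python sketch of the idea (not ML.NET, and the window size 5 is an arbitrary choice), just to illustrate the shape of the training data:
# first ten values from the example dataset, and an arbitrary window size
seq = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]
window = 5

# each training example is the previous `window` values plus the value
# that followed them; any binary classifier can then be fit on (X, y)
X = [seq[i:i + window] for i in range(len(seq) - window)]
y = [seq[i + window] for i in range(len(seq) - window)]

print(X[0], '->', y[0])   # [1, 1, 1, 0, 0] -> 0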
I am trying to iterate through a Dask dataframe and compare the values in one of its columns to a column in another Dask dataframe with the same name. If the columns match, I would like to update the value in the target Dask dataframe. The code below runs, but the values are not updated to '1' where I expected, or anywhere. I am new to Dask and suspect I am missing some crucial step or am not understanding the framework.
def populateSymptomsDDF(row):
    for vac in row['vac_codes']:
        if vac in symptoms_ddf.columns:
            symptoms_ddf[vac] = symptoms_ddf[vac].where(symptoms_ddf['dog'] == row['dog'], 1)

with ProgressBar():
    x = vac_ddf.apply(lambda x: populateSymptomsDDF(x), meta=('int64'), axis=1)
    x.compute(scheduler='processes')
    symptoms_ddf.compute()
Head of icd_ddf:
dog vac_codes
0 1 [G35, E11.40, R53.1, Z79.899, I87.2]
1 2 [G35, R53.83, G47.00]
2 3 [G35, G95.9, R53.83, F41.9]
3 4 [G35, N53.9, E55.9, Z74.09]
4 5 [G35, M51.26, R53.1, M47.816, R25.2, G82.50, R...
Head of symptoms_ddf (before running code):
dog W19 W10 W05.0 V00.811 R53.83 R53.8 R53.1 R47.9 R47.89 ... G81.12 G81.11 G81.10 G50.0 G31.84 F52.8 F52.31 F52.22 F52.0 F03
0 1 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
1 2 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
2 3 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
3 4 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
4 5 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
Thank you for any insights you can provide!
Dask dataframes don't have the same in-place behavior as pandas. Generally every operation should be a bulk parallel operation. Otherwise there isn't much reason to use Dask.
Also, iterating through dataframes will generally be quite slow. This is also true with Pandas.
Fortunately, I think that you're maybe just looking for a join or merge operation. I would encourage you to look up the documentation for pandas merge:
https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html
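As a rough illustration of that idea (plain pandas here, with made-up values shaped like the question's data; dask.dataframe supports the same merge), you could explode the code lists, pivot them into 0/1 indicators, and then do a single bulk merge on dog:
import pandas as pd

# hypothetical stand-in for the question's per-dog code lists
vac_df = pd.DataFrame({
    'dog': [1, 2],
    'vac_codes': [['G35', 'R53.1'], ['G35', 'G47.00']],
})

# one row per (dog, code), then pivot to a wide 0/1 indicator table
long = vac_df.explode('vac_codes')
indicators = pd.crosstab(long['dog'], long['vac_codes']).clip(upper=1).reset_index()
print(indicators)

# a single bulk merge then attaches the indicators to the symptoms table
# (symptoms_df standing in for the question's symptoms_ddf):
# symptoms_df = symptoms_df.merge(indicators, on='dog', how='left')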
I'm new to R and I was trying to run a detrended correspondence analysis (DCA), which is a multivariate statistical analysis for ordination of species; I have four sites. I keep getting the error message:
> Error in rowSums(x) : 'x' must be numeric
Species Haasfontein Mini Pit Vlaklaagte Mini Pit Vlaklaagte Block 3 Mini Pit Block 10 Mini Pit
Agrostis lachnantha 1 0 0 0
Aristida congesta subsp. Congesta 0 0 0 0
Brachiaria nigropedata 0 0 0 0
Cynodon dactylon 0 12 2 3
Cyperus esculentus 0 5 0 0
Digitaria eriantha 0 1 6 20
Elionurus muticus 0 0 0 0
Eragrostis acraea De Winter 0 0 1 0
Eragrostis chloromelas 35 0 12 4
Eragrostis curvula 6 0 0 0
Eragrostis lehmanniana 5 0 0 0
Eragrostis rigidior 3 0 1 0
Eragrostis rotifer 3 0 0 0
Eragrostis trichophora 10 1 2 2
Hyparrhenia hirta 0 0 9 1
Melinis repens 0 0 2 0
Panicum coloratum 0 4 0 0
Panicum deustum 3 0 0 0
Paspalum dilatatum 0 0 0 0
Setaria sphacelata var. sphacelata 0 1 0 0
Sporobolus africanus 0 0 2 0
Sporobolus centrifuges 1 0 1 0
Sporobolus fimbriatus 0 0 0 0
Sporobolus ioclados 2 0 5 1
Themeda triandra 0 0 0 0
Trachypogon spicatus 0 0 0 0
Tragus berteronianus 0 0 0 1
Verbena bonariensis 16 0 2 0
Cirsium vulgare 0 0 0 0
Eucalyptus cameldulensis 1 0 0 0
Xanthium strumarium 0 0 0 0
Argemone ochroleuca 0 0 0 0
Solanum sisymbriifolium 0 0 0 0
Campuloclinium macrocephalum 7 0 0 0
Paspalum dilatatum 0 0 0 0
Senecio ilicifolius 0 0 0 0
Pseudognaphalium luteoalbum (L.) 8 0 0 0
Cyperus esculentus 0 0 0 0
Foeniculum vulgare 0 0 0 0
Conyza canadensis 0 0 0 1
Tagetes minuta 0 0 0 0
Hypochaeris radicata 0 0 0 0
Solanum incanum 0 0 0 0
Asclepias fruticosa 11 0 0 0
Hypochaeris radicata 0 0 0 0
My data is organised as shown above, and I'm not sure whether my data is organised correctly or there is some other error. Can someone please assist me?
You're still fighting to get your data into R. That is your first problem. After you tackle it and manage to read in your data, you have the following problems:
You should not have empty (all-zero) rows in your data, as they will give an error (empty columns are removed and only give a warning).
DCA treats rows and columns non-symmetrically, and you should have species as columns and sampling units as rows. You should transpose your data (function t(); see the sketch after this list).
You really should not use DCA with only four sampling units. It will be meaningless.
I think the last point is most important.
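Not R, but here is a minimal Python/pandas sketch of the reshaping in point 2, using a made-up three-species, two-site slice of the question's table (in R the transpose itself is simply t(your_data)):
import pandas as pd

# toy stand-in: species as rows, sites as columns, like the question's table
df = pd.DataFrame(
    {'Haasfontein Mini Pit': [1, 0, 35],
     'Vlaklaagte Mini Pit': [0, 12, 0]},
    index=['Agrostis lachnantha', 'Cynodon dactylon', 'Eragrostis chloromelas'])

community = df.T                                        # sites as rows, species as columns
community = community.loc[:, (community != 0).any()]    # drop all-zero species columns
print(community)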
I have been trying to solve the Titanic survival problem, where I split x to be the passengers and y to be the survived column. But the problem is that I couldn't get sensible y_pred (i.e. prediction) results: it is 0 for all values. It would be helpful if anyone could solve this, as it is my first classifier problem as a beginner.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('C:/Users/Umer/train.csv')
x = df['PassengerId'].values.reshape(-1,1)
y = df['Survived']
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,
random_state = 0)
from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
x_train = sc_x.fit_transform(x_train)
x_test = sc_x.transform(x_test)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(x_train,y_train)
#predicting the test set results
y_pred = classifier.predict(x_test)
I couldn't reproduce the same result. In fact, I copy-pasted your code and did not get all zeros as you described; instead I got:
[0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0]
Nevertheless, there are a few things I noticed in your approach that you may want to know about:
The default separator in pandas' read_csv is a comma (,), so if your dataset's variables are separated by a tab (like the one I have), you should specify the separator like this:
df = pd.read_csv('titanic.csv', sep='\t')
PassengerId contains no useful information that your model can learn from in order to predict the Survived value; it's just a continuous number that increments with each new passenger. Generally speaking, in classification you need to make use of all the features that your model can learn from (unless of course there are redundant features that add no information), especially in a multivariate dataset like yours.
There is no point in scaling PassengerId, because feature scaling is usually used when features vary highly in magnitude, unit and range (e.g. 5 kg vs. 5000 g), and in your case, as I mentioned, it's just an incremental integer that carries no real information for the model.
One last thing: you should get your data as type float for StandardScaler, to avoid warnings like the following:
DataConversionWarning: Data with input dtype int64 was converted to float64 by StandardScaler.
So you should convert like this from the beginning:
x = df['PassengerId'].values.astype(float).reshape(-1,1)
Finally, if you're still getting the same result, then please add a link to your dataset.
Update
After the dataset was provided, it turned out that the result you're getting is correct; that's again because of reason number 2 that I mentioned above (namely, PassengerId provides no useful information to the model, so it cannot predict correctly!).
You can test it yourself by comparing the log loss before and after adding more features from the dataset:
from sklearn.metrics import log_loss
df = pd.read_csv('train.csv', sep=',')
x = df['PassengerId'].values.reshape(-1,1)
y = df['Survived']
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,
random_state = 0)
classifier = LogisticRegression()
classifier.fit(x_train,y_train)
y_pred_train = classifier.predict(x_train)
# calculate and print the loss function using only the PassengerId
print(log_loss(y_train, y_pred_train))
#predicting the test set results
y_pred = classifier.predict(x_test)
print(y_pred)
Output
13.33982681120802
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0]
Now, using some supposedly useful features:
from sklearn.metrics import log_loss
df = pd.read_csv('train.csv', sep=',')
# denote the words female and male as 0 and 1
df['Sex'].replace(['female','male'], [0,1], inplace=True)
# try three features that you think they are informative to the model
# so it can learn from them
x = df[['Fare', 'Pclass', 'Sex']].values.reshape(-1,3)
y = df['Survived']
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,
random_state = 0)
classifier = LogisticRegression()
classifier.fit(x_train,y_train)
y_pred_train = classifier.predict(x_train)
# calculate and print the loss function with the above 3 features
print(log_loss(y_train, y_pred_train))
#predicting the test set results
y_pred = classifier.predict(x_test)
print(y_pred)
Output
7.238735137632405
[0 0 0 1 1 0 1 1 0 1 0 1 0 1 1 1 0 0 0 0 0 1 0 0 1 1 0 1 1 1 0 1 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 1 0 1 0 1 1 1 0 0 0
0 1 1 0 0 0 0 0 1 0 0 1 1 1 1 0 0 0 0 1 1 0 1 0 0 0 0 0 0 0 1 1 1 1 0 1 0
1 0 1 0 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 1 1 0 1
1 0 0 1 1 0 1 0 1 0 1 1 0 0 1 1 0 0 0 0 0 0 0 1 0 0 1 0 1 0 0 1 0 0 0 0 0
0 1 0 0 1 1 0 1 1 0 0 0 1 0 0 0 1 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 1 0 1
1]
In conclusion: as you can see, the loss now has a better (lower) value and the predictions are more reasonable!