gridExtra: colour different columns with tableGrob

I have a question regarding tableGrob/grid.table from the gridExtra package. Is there a way to customize a different color for each column? So far, including in this Stack Overflow link, I have only found how to customize different rows or specific cells.
Much obliged for any suggestions!

You can pass a vector of colours (fills), one per cell. Cells are filled down each column in turn, so repeating each colour nrow times colours whole columns:
library(gridExtra)
fills <- rep(blues9, each = nrow(iris[1:4, 1:3])) # each colour repeated nrow times, one column per colour
tt <- ttheme_default(core = list(bg_params = list(fill = fills)))
grid.table(iris[1:4, 1:3], theme = tt)

grid.table column color/fill: this example applies a gradient fill to a single column.
library(grid)
library(gridExtra)
library(scales)
library(dplyr)
# build a fill vector for the first two columns
blkz <- rep("NA", 8) # the string "NA" gives a transparent fill
# generate a continuous color scale from a vector of colors, following https://themockup.blog/posts/2020-05-16-gt-a-grammer-of-tables/
red_color_generator <- scales::col_numeric(c("red", "white"), domain = NULL)
redz2 <- red_color_generator(seq(10, 60, by = 10))[1:4] # pipe into scales::show_col() to preview
# combine the two vectors
blkz_redz <- c(blkz, redz2)
tt <- ttheme_default(core = list(bg_params = list(fill = blkz_redz, col = "gray56")))
grid.newpage() # start a fresh page before drawing
grid.table(iris[1:4, 1:3], theme = tt)
#~~~~~~
To make the color fill conditional on the value of a variable, follow these steps.
# conditional color mapper function
clrize <- function(df, x) {
  df %>%
    mutate(cc = ifelse(x == 1.3, "#FFB299",
                ifelse(x == 1.4, "#FF8969",
                ifelse(x == 1.5, "#FF5B3A",
                       "#FF0000"))))
}
# map this over the column to build a color vector
dt <- iris[1:4, 1:3] %>% as.data.frame()
# apply a color based on the value of the Petal.Length variable
clrize(dt, dt$Petal.Length) -> redz3
# combine the two vectors
blkz_redz <- c(blkz, redz3$cc) # cc is the variable added inside the function
tt <- ttheme_default(core = list(bg_params = list(fill = blkz_redz, col = "gray56")))
grid.newpage() # start a fresh page before drawing
grid.table(iris[1:4, 1:3], theme = tt)

Related

R-Leaflet Map - Help me to Combine 2 legends in R leaflet

I am making an R Leaflet map and I have 2 legends. How can I combine them?
Thanks
Understanding the structure of your map object in R (str(mapObject)) can be a helpful starting point. This can be useful for making "aftermarket" edits to legends.
I tried this as a solution to your problem:
# Concatenate the vectors that define each set of colors and their corresponding values:
require(spData)
require(leaflet)
require(sf)
# load the country shapes from the spData package
data(world)
world <- st_read(system.file("shapes/world.gpkg", package = "spData"))
africa <- world[world$continent == "Africa", ]
asia <- world[world$continent == "Asia", ]
asiaPal <- colorNumeric("Reds", domain = asia$pop)
africaPal <- colorNumeric("Blues", domain = africa$pop)
map <- leaflet() %>%
  addProviderTiles(providers$CartoDB.Positron) %>%
  addPolygons(data = asia,
              color = ~asiaPal(asia$pop)) %>%
  addPolygons(data = africa,
              color = ~africaPal(africa$pop)) %>%
  addLegend("bottomright", pal = asiaPal, values = asia$pop, title = "Asian Population") %>%
  addLegend("bottomright", pal = africaPal, values = africa$pop, title = "African Population")
# Colors
map$x$calls[[5]]$args[[1]]$colors <-
  c(map$x$calls[[5]]$args[[1]]$colors, map$x$calls[[4]]$args[[1]]$colors)
# Labels
map$x$calls[[5]]$args[[1]]$labels <-
  c(map$x$calls[[5]]$args[[1]]$labels, map$x$calls[[4]]$args[[1]]$labels)
# Get rid of the old legend:
map$x$calls[[4]] <- NULL
where your legends result from elements 4 & 5 of map$x$calls.
This doesn't work very nicely. I suspect that is because these list elements are not the end result; the elements of the map object are handed off to JavaScript/HTML when the map is rendered. That said, I don't know whether it's easily possible to do what you are trying to achieve without poking around in the actual HTML that results.

Citing within an RMarkdown table

I am attempting to create a table which has citations built into the table. Here is a visual of what I am trying to achieve.
As far as I know, you can only add footnotes to row names or column names in kableExtra (love that package).
# Create a dataframe called df
Component <- c('N2','P3')
Latency <- c('150 to 200ms', '625 to 800ms')
Location <- c('FCz, Fz, Cz', 'Pz, Oz')
df <- data.frame(Component, Latency, Location)
Below is my attempt after reading through kableExtra's GitHub page.
# Trying some code taken from the kableExtra guide
library(knitr)
library(kableExtra)
row.names(df) <- df$Component
df[1] <- NULL
dt_footnote <- df
names(dt_footnote)[1] <- paste0(names(dt_footnote)[1],
                                footnote_marker_symbol(1))
row.names(dt_footnote)[2] <- paste0(row.names(dt_footnote)[2],
                                    footnote_marker_alphabet(1))
kable(dt_footnote, align = "c",
      # Remember this escape = F
      escape = F, format = "latex", longtable = T, booktabs = T,
      caption = "My Table Name") %>%
  kable_styling(full_width = F) %>%
  footnote(alphabet = "Jones, 2013",
           symbol = "Footnote Symbol 1; ",
           footnote_as_chunk = T)
But this code only works on the headers. The ultimate goal would be to use a BibTeX reference such as @JonesFunctionalMixedEffectModels2013, so that the final part of the code would look like
footnote(alphabet = @davidsonFunctionalMixedEffectModels2009,
         symbol = "Footnote Symbol 1; ",
         footnote_as_chunk = T)
Anyone have any ideas?
Thanks
What I did in the end was generate a temporary table with pander, then copy the reference numbers manually into my kable:
library(pander)
pander(
  df,
  caption = "Temporal",
  style = "simple",
  justify = "left")

How to get class labels from TensorFlow prediction

I have a classification model in TF and can get a list of probabilities for the next class (preds). Now I want to select the highest element (argmax) and display its class label.
This may seem silly, but how can I get the class label that matches a position in the predictions tensor?
feed_dict={g['x']: current_char}
preds, state = sess.run([g['preds'],g['final_state']], feed_dict)
prediction = tf.argmax(preds, 1)
preds gives me a vector of predictions for each class. Surely there must be an easy way to just output the most likely class (label)?
Some info about my model:
x = tf.placeholder(tf.int32, [None, num_steps], name='input_placeholder')
y = tf.placeholder(tf.int32, [None, 1], name='labels_placeholder')
batch_size = tf.shape(x)[0]
x_one_hot = tf.one_hot(x, num_classes)
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in
              tf.split(x_one_hot, num_steps, 1)]
tmp = tf.stack(rnn_inputs)
print(tmp.get_shape())
tmp2 = tf.transpose(tmp, perm=[1, 0, 2])
print(tmp2.get_shape())
rnn_inputs = tmp2
with tf.variable_scope('softmax'):
    W = tf.get_variable('W', [state_size, num_classes])
    b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))
rnn_outputs = rnn_outputs[:, num_steps - 1, :]
rnn_outputs = tf.reshape(rnn_outputs, [-1, state_size])
y_reshaped = tf.reshape(y, [-1])
logits = tf.matmul(rnn_outputs, W) + b
predictions = tf.nn.softmax(logits)
A prediction is an array of probabilities, one per class (label). It represents the model's "confidence" that the input corresponds to each of its classes (labels). You can check which label has the highest confidence value by using:
import numpy as np
prediction = np.argmax(preds, 1)
After getting the index of this highest element (via argmax), you use that index into your class labels to find the exact class name associated with it:
class_names[prediction]
Please refer to this link for more understanding.
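To make that concrete, here is a minimal, self-contained sketch; the preds values and class_names entries below are made up for illustration:
import numpy as np

preds = np.array([[0.1, 0.7, 0.2]])             # hypothetical softmax output for one input
class_names = np.array(["cat", "dog", "bird"])  # hypothetical label names

prediction = np.argmax(preds, 1)  # index of the most probable class per row -> [1]
print(class_names[prediction])    # -> ['dog']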
You can use tf.reduce_max() for this. I would refer you to this answer.
Let me know if it works - will edit if it doesn't.
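Note that tf.reduce_max() returns the maximum probability itself, whereas tf.argmax() returns its position. A tiny TF 1.x-style sketch (with a made-up tensor) contrasting the two:
import tensorflow as tf

preds = tf.constant([[0.1, 0.7, 0.2]])  # made-up probabilities
top_value = tf.reduce_max(preds, 1)     # the highest probability: [0.7]
top_index = tf.argmax(preds, 1)         # its position, i.e. the class index: [1]

with tf.Session() as sess:
    print(sess.run([top_value, top_index]))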
Mind that there are sometimes several ways to load a dataset. For instance, with Fashion-MNIST the tutorial could lead you to use load_data() and then to create your own structure to interpret a prediction. However, you can also load the data by using tensorflow_datasets.load(...) like here, after installing tensorflow-datasets, which gives you access to a DatasetInfo object. So, for instance, if your prediction is 9, you can tell it's an ankle boot with:
import tensorflow_datasets as tfds
_, ds_info = tfds.load('fashion_mnist', with_info=True)
print(ds_info.features['label'].names[9])
When you use softmax, the labels you train the model on are either the numbers 0..n or one-hot encoded values. So if the original labels of your data are, say, string names, you must map them to integers first and keep the mapping as a variable (such as 0 -> "apple", 1 -> "orange", 2 -> "pear" ...).
When using integers (with loss='sparse_categorical_crossentropy'), you get predictions as an array of probabilities; you just find the array index with the max value. You can use this predicted index to reverse-map to your label:
predictedIndex = np.argmax(predictions)          # e.g. 2
predictedLabel = indexToLabelMap[predictedIndex] # e.g. "pear"
If you use one-hot encoded labels (with loss='categorical_crossentropy'), the predicted index corresponds to the "hot" index of your label.
Just for reference, I needed this info when I was working with the MNIST dataset used in Google's Machine Learning Crash Course. There is also a good classification tutorial in the TensorFlow docs.
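For completeness, here is a small sketch of the mapping described above; the label names are made up, and indexToLabelMap mirrors the variable used in the snippet:
import numpy as np

# forward mapping used when preparing integer labels for training
labelToIndexMap = {"apple": 0, "orange": 1, "pear": 2}
# reverse mapping used to turn a predicted index back into a label
indexToLabelMap = {v: k for k, v in labelToIndexMap.items()}

predictions = np.array([0.1, 0.2, 0.7])          # hypothetical model output
predictedIndex = int(np.argmax(predictions))     # 2
predictedLabel = indexToLabelMap[predictedIndex] # "pear"
print(predictedLabel)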

How to join DecisionTreeRegressor predict output to the original data

I am developing a model that uses DecisionTreeRegressor. I have built and fit the tree using training data, and predicted the results from more recent data to confirm the model's accuracy.
To build and fit the tree:
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.matrix(pre_x)
y = np.matrix(pre_y)
regr_b = DecisionTreeRegressor(max_depth=4)
regr_b.fit(X, y)
To predict new data:
X = np.matrix(pre_test_x)
trial_pred = regr_b.predict(X, check_input=True)
trial_pred is an array of the predicted values. I need to join it back to pre_test_x so I can see how well the prediction matches what actually happened.
I have tried merges:
all_pred = pre_pre_test_x.merge(predictions, left_index=True, right_index=True)
and
all_pred = pd.merge(pre_pre_test_x, predictions, how='left', left_index=True, right_index=True)
and either get no results or a second copy of the columns appended to the bottom of the DataFrame with NaN in all the existing columns.
Turns out it was simple. Leave the predict output as an array, then run:
w_pred = pre_pre_test_x.copy(deep=True)
w_pred['pred_val'] = trial_pred
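For reference, here is a minimal end-to-end sketch of this pattern; the toy data and column names are made up for illustration. Direct column assignment works because predict() returns a plain array aligned with the rows of the input frame, so the values line up by row order rather than by index labels:
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# made-up training and test data
train = pd.DataFrame({"a": [1, 2, 3, 4], "b": [10, 20, 30, 40], "y": [1.5, 2.5, 3.5, 4.5]})
test = pd.DataFrame({"a": [2, 3], "b": [25, 35]})

regr = DecisionTreeRegressor(max_depth=4)
regr.fit(train[["a", "b"]], train["y"])

trial_pred = regr.predict(test)  # NumPy array, one value per row of `test`
w_pred = test.copy(deep=True)
w_pred['pred_val'] = trial_pred  # attached by row order
print(w_pred)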

Total sum from a set (logic)

I have a logic problem for an iOS app but I don't want to solve it using brute-force.
I have a set of integers, the values are not unique:
[3,4,1,7,1,2,5,6,3,4........]
How can I get a subset from it with these 3 conditions:
I can only pick a defined amount of values.
The sum of the picked elements is equal to a given value.
The selection must be random, so if there is more than one solution for the value, it will not always return the same one.
Thanks in advance!
This is the subset sum problem, it is a known NP-Complete problem, and thus there is no known efficient (polynomial) solution to it.
However, if you are dealing only with relatively small integers, there is a pseudo-polynomial time solution using Dynamic Programming.
The idea is to build a matrix bottom-up that follows the next recursive formulas:
D(x, i) = false                if x < 0
D(0, i) = true
D(x, 0) = false                if x != 0
D(x, i) = D(x, i-1) OR D(x - arr[i], i-1)
The idea is to mimic an exhaustive search: at each point you "guess" whether the element is chosen or not.
To get the actual subset, you need to trace back through your matrix. Starting from D(SUM, n) (assuming its value is true), after the matrix has already been filled up, you do the following:
if D(x - arr[i-1], i-1) == true:
    add arr[i-1] to the set
    modify x <- x - arr[i-1]
    modify i <- i - 1
else:  # that means D(x, i-1) must be true
    just modify i <- i - 1
To get a random subset each time: whenever both D(x - arr[i-1], i-1) == true AND D(x, i-1) == true, choose randomly which course of action to take.
Python code (if you don't know Python, read it as pseudo-code; it is very easy to follow):
from random import randint

arr = [1, 2, 4, 5]
n = len(arr)
SUM = 6
# pre-processing:
D = [[True] * (n + 1)]
for x in range(1, SUM + 1):
    D.append([False] * (n + 1))
# DP solution to populate D:
for x in range(1, SUM + 1):
    for i in range(1, n + 1):
        D[x][i] = D[x][i - 1]
        if x >= arr[i - 1]:
            D[x][i] = D[x][i] or D[x - arr[i - 1]][i - 1]
print(D)
# get a random solution:
if D[SUM][n] == False:
    print('no solution')
else:
    sol = []
    x = SUM
    i = n
    while x != 0:
        possibleVals = []
        if D[x][i - 1] == True:
            possibleVals.append(x)
        if x >= arr[i - 1] and D[x - arr[i - 1]][i - 1] == True:
            possibleVals.append(x - arr[i - 1])
        # by here possibleVals contains one or two choices, depending on how many are valid;
        # choose one of them at random
        r = possibleVals[randint(0, len(possibleVals) - 1)]
        # if we decided to add the element:
        if r != x:
            sol.append(x - r)
        # modify x and i accordingly
        x = r
        i = i - 1
    print(sol)
P.S.
The above gives you a random choice, but NOT with a uniform distribution over the possible subsets.
To achieve a uniform distribution, you need to count the number of possible choices that build each sum.
The formulas will be:
D(x, i) = 0                if x < 0
D(0, i) = 1
D(x, 0) = 0                if x != 0
D(x, i) = D(x, i-1) + D(x - arr[i], i-1)
And when generating the subset, you apply the same logic, but you decide to add element i with probability D(x - arr[i], i-1) / D(x, i).
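A sketch of that counting variant, adapted from the boolean version above (same example input; variable names mirror the earlier code):
from random import random

arr = [1, 2, 4, 5]
n = len(arr)
SUM = 6

# D[x][i] = number of subsets of the first i elements summing to x
D = [[0] * (n + 1) for _ in range(SUM + 1)]
for i in range(n + 1):
    D[0][i] = 1  # the empty subset is the only way to reach sum 0

for x in range(1, SUM + 1):
    for i in range(1, n + 1):
        D[x][i] = D[x][i - 1]  # solutions that skip element i-1
        if x >= arr[i - 1]:
            D[x][i] += D[x - arr[i - 1]][i - 1]  # solutions that take it

# sample one solution uniformly at random
if D[SUM][n] == 0:
    print('no solution')
else:
    sol = []
    x, i = SUM, n
    while x != 0:
        take = D[x - arr[i - 1]][i - 1] if x >= arr[i - 1] else 0
        # take element i-1 with probability (#solutions taking it) / (#solutions here)
        if random() < take / D[x][i]:
            sol.append(arr[i - 1])
            x -= arr[i - 1]
        i -= 1
    print(sol)
Each run prints one valid subset, and over many runs every valid subset appears with equal frequency.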
