R Leaflet Map - How to combine 2 legends in R Leaflet

I am making an R Leaflet map and I have two legends. How can I combine them?
Thanks

Understanding the structure of your map object in R (str(mapObject)) can be a helpful starting point. This can be useful for making "aftermarket" edits to legends.
I tried this as a solution to your problem:
require(spData)
require(leaflet)
require(sf)

# load country shapes from the spData package
world <- st_read(system.file("shapes/world.gpkg", package = "spData"))
africa <- world[world$continent == "Africa", ]
asia <- world[world$continent == "Asia", ]

asiaPal <- colorNumeric("Reds", domain = asia$pop)
africaPal <- colorNumeric("Blues", domain = africa$pop)

map <- leaflet() %>%
  addProviderTiles(providers$CartoDB.Positron) %>%
  addPolygons(data = asia,
              color = ~asiaPal(pop)) %>%
  addPolygons(data = africa,
              color = ~africaPal(pop)) %>%
  addLegend("bottomright", pal = asiaPal, values = asia$pop, title = "Asian Population") %>%
  addLegend("bottomright", pal = africaPal, values = africa$pop, title = "African Population")
# Concatenate the vectors that define each set of colors and their corresponding values:
# Colors
map$x$calls[[5]]$args[[1]]$colors <-
  c(map$x$calls[[5]]$args[[1]]$colors, map$x$calls[[4]]$args[[1]]$colors)
# Labels
map$x$calls[[5]]$args[[1]]$labels <-
  c(map$x$calls[[5]]$args[[1]]$labels, map$x$calls[[4]]$args[[1]]$labels)
# Get rid of the old legend:
map$x$calls[[4]] <- NULL
…where your legends correspond to elements 4 and 5 of map$x$calls.
This doesn't work very nicely. I suspect that is because these list elements are not the end result; the elements of the map object are handed to JavaScript/HTML when the map is rendered. That said, I don't know whether what you are trying to achieve is easily possible without poking around in the actual HTML that results.
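One alternative worth sketching (not part of the original answer): rather than editing map$x$calls afterwards, addLegend() also accepts explicit colors and labels vectors, so you can build a single combined legend up front. The break values below are illustrative assumptions, not numbers from the question:
# A minimal sketch: one combined legend built from explicit colours and labels.
# The bin values are made up for illustration.
asiaBins   <- seq(min(asia$pop, na.rm = TRUE), max(asia$pop, na.rm = TRUE), length.out = 4)
africaBins <- seq(min(africa$pop, na.rm = TRUE), max(africa$pop, na.rm = TRUE), length.out = 4)

combinedColors <- c(asiaPal(asiaBins), africaPal(africaBins))
combinedLabels <- c(paste("Asia:", format(round(asiaBins), big.mark = ",")),
                    paste("Africa:", format(round(africaBins), big.mark = ",")))

leaflet() %>%
  addProviderTiles(providers$CartoDB.Positron) %>%
  addPolygons(data = asia, color = ~asiaPal(pop)) %>%
  addPolygons(data = africa, color = ~africaPal(pop)) %>%
  addLegend("bottomright",
            colors = combinedColors,
            labels = combinedLabels,
            title = "Population")
Because the combined legend is created in a single addLegend() call, there is nothing to merge afterwards.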

Related

Question about dask output when using dask.array.map_overlap

I would like to use dask.array.map_overlap with a SciPy interpolation function. However, I keep running into errors that I cannot understand, and I am hoping someone can explain them to me.
Here is the error message I receive when I run .compute():
ValueError: could not broadcast input array from shape (1070,0) into shape (1045,0)
To investigate, I used .to_delayed() to check the output of each partition, and this is what I found.
Here is my Python code.
Step 1. Load the netCDF file through xarray, then convert it to dask.array with chunk size (400, 400).
import xarray as xr
import dask.array as da

df = xr.open_dataset('./Brazil Sentinal2 Tile/' + data_file + '.nc')
lon, lat = df['lon'].data, df['lat'].data
slon = da.from_array(df['lon'], chunks=(400, 400))
slat = da.from_array(df['lat'], chunks=(400, 400))
data = da.from_array(df.isel(band=0).__xarray_dataarray_variable__.data, chunks=(400, 400))
Step 2. Declare a function for use with da.map_overlap.
import numpy as np
import pandas as pd
import h3
from shapely.geometry import box, mapping
from shapely.ops import transform
from scipy.interpolate import LinearNDInterpolator as lNDI  # assumed: lNDI is scipy's LinearNDInterpolator

def sumsum2(lon, lat, data, hex_res=10):
    hex_col = 'hex' + str(hex_res)
    # bounding box of this block, flipped to (lat, lon) order for h3.polyfill
    lon_max, lon_min = lon.max(), lon.min()
    lat_max, lat_min = lat.max(), lat.min()
    b = box(lon_min, lat_min, lon_max, lat_max, ccw=True)
    b = transform(lambda x, y: (y, x), b)
    b = mapping(b)
    # hexagon centres covering the bounding box
    target_df = pd.DataFrame(h3.polyfill(b, hex_res), columns=[hex_col])
    target_df['lat'] = target_df[hex_col].apply(lambda x: h3.h3_to_geo(x)[0])
    target_df['lon'] = target_df[hex_col].apply(lambda x: h3.h3_to_geo(x)[1])
    tlon, tlat = target_df[['lon', 'lat']].values.T
    # interpolate this block's data onto the hexagon centres
    abc = lNDI(points=(lon.ravel(), lat.ravel()),
               values=data.ravel())(tlon, tlat)
    target_df['out'] = abc
    out = np.stack([tlon, tlat, abc], axis=1)
    print(out.shape)
    return out
Step 3. Apply the da.map_overlap
b = da.map_overlap(sumsum2, slon[:1200, :1200], slat[:1200, :1200], data[:1200, :1200],
                   depth=10, trim=True, boundary=None, align_arrays=False, dtype='float64')
Step 4. Use to_delayed() to test the output shapes.
print(b.to_delayed().flatten()[0].compute().shape)
print(b.to_delayed().flatten()[1].compute().shape)
(1065, 3)
(1045, 0)
(1090, 3)
(1070, 0)
This says that the chunks returned by da.map_overlap are effectively empty, with shapes (1045, 0) and (1070, 0), while the arrays I prepare inside the function are 2-D, with shapes (1065, 3) and (1090, 3).
In addition, if I turn off the trim argument:
c = da.map_overlap(sumsum2,
                   slon[:1200, :1200],
                   slat[:1200, :1200],
                   data[:1200, :1200],
                   depth=10,
                   trim=False,
                   boundary=None,
                   align_arrays=False,
                   dtype='float64')
print(c.to_delayed().flatten()[0].compute().shape)
print(c.to_delayed().flatten()[1].compute().shape)
The output becomes
(1065, 3)
(1065, 3)
(1090, 3)
(1090, 3)
This seems to say that with trim=True, everything gets trimmed away?
because...
#-- print out the values
b.to_delayed().flatten()[0].compute()[:10,:]
(1065, 3)
array([], shape=(1045, 0), dtype=float64)
while...
#-- print out the values
c.to_delayed().flatten()[0].compute()[:10,:]
array([[ -47.83683837, -18.98359832, 1395.01848583],
[ -47.8482856 , -18.99038681, 2663.68391094],
[ -47.82800624, -18.99207069, 1465.56517187],
[ -47.81897323, -18.97919009, 2769.91556363],
[ -47.82066663, -19.00712956, 1607.85927095],
[ -47.82696896, -18.97167714, 2110.7516765 ],
[ -47.81562653, -18.98302933, 2662.72112163],
[ -47.82176881, -18.98594465, 2201.83205114],
[ -47.84567 , -18.97512514, 1283.20631652],
[ -47.84343568, -18.97270783, 1282.92117225]])
Any thoughts on this?
Thank you.
I think I have found the answer. Please let me know if I am wrong.
The reason I cannot use trim=True is that I change the shape of the output array (after searching online, I noticed that the output array is expected to have the same shape as the input array). Since I change the shape, dask has no idea how to trim it, so it returns an empty array (which looks weird).
With trim=False instead, since I am not asking dask to cut out the buffer zone, the return values come through fine (although I still don't know why dask cannot concatenate the chunked arrays, I believe that is also related to the shape).
The solution is to apply the delayed function to da.concatenate:
from dask import delayed

blocks = c.to_delayed().flatten()
result = delayed(da.concatenate)([blocks[idx] for idx in range(len(blocks))])
In this case, we are not relying on the concatenation done inside map_overlap, but combine the chunk outputs ourselves.

Index values from table into another table

I want to store values selected by key from one table in another table, for example:
polyline = {color="blue", thickness=2, npoints=4}
stuff = {"polyline.color":[polyline.thickness]}
print(stuff)
Should produce:
blue 2
However, I get the following error:
input:3: '}' expected near ':'
If you just want those two values in order, you can store them positionally and unpack them:
local polyline = {color="blue", thickness=2, npoints=4}
local stuff = {polyline.color, polyline.thickness}
print(table.unpack(stuff))
I believe you're mixing in some Python syntax. Notice that you are using two different (wrong) ways of accessing the values.
I guess this is what you meant with your snippet of Lua code:
polyline = {color = "blue", thickness = 2, npoints = 4}
stuff = {[polyline.color] = polyline.thickness}

for key, val in pairs(stuff) do
    print(key, val)
end

gridExtra colour different columns with tableGrob

I have a question regarding tableGrob/grid.table from the gridExtra package. Is there a way to customize a different color for each column? So far, including in this Stack Overflow link, I have only found how to customize different rows or specific cells.
Much obliged for any suggestions!
You can pass a vector of colours (fills), one for each individual cell:
library(grid)
library(gridExtra)

# fills are applied cell by cell down each column, so each column gets one colour from blues9
fills <- rep(blues9, each = nrow(iris[1:4, 1:3]))
tt <- ttheme_default(core = list(bg_params = list(fill = fills)))
grid.table(iris[1:4, 1:3], theme = tt)
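Building on that, if you want each column to be a single solid colour of your choosing rather than a gradient, the same pattern works because the fills are applied down each column in turn. A minimal sketch; the colour names here are my own illustrative picks, not from the original answer:
library(grid)
library(gridExtra)

d <- iris[1:4, 1:3]
# one colour per column, repeated once per row, applied column by column
colFills <- rep(c("lightblue", "lightpink", "lightyellow"), each = nrow(d))
tt2 <- ttheme_default(core = list(bg_params = list(fill = colFills)))
grid.newpage()
grid.table(d, theme = tt2)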
grid.table column color/fill: this example applies a gradient fill to a single column.
library(grid)
library(gridExtra)
library(scales)
library(dplyr)

# build a vector of colour/fill choices for the first two columns
blkz <- rep(c("NA", "NA"), times = c(4, 4))  # NA is for transparent

# generate a continuous colour scale based off a vector of colours,
# from https://themockup.blog/posts/2020-05-16-gt-a-grammer-of-tables/
red_color_generator <- scales::col_numeric(c("red", "white"), domain = NULL)
redz2 <- red_color_generator(seq(10, 60, by = 10))[1:4]  # %>% scales::show_col()

# combine the two vectors
blkz_redz <- c(blkz, redz2)

tt <- ttheme_default(core = list(bg_params = list(fill = blkz_redz, col = "gray56")))
dev.off()
grid.table(iris[1:4, 1:3], theme = tt)
#~~~~~~
To make the colour fill conditional on the value of a variable, follow these steps.
# conditional colour mapper function
clrize <- function(df, x) {
  df %>%
    mutate(cc = ifelse(x == 1.3, "#FFB299",
                ifelse(x == 1.4, "#FF8969",
                ifelse(x == 1.5, "#FF5B3A",
                                 "#FF0000"))))
}

# map this over the column to build a vector
dt <- iris[1:4, 1:3] %>% as.data.frame()

# apply colour based on the value of the Petal.Length variable
clrize(dt, dt$Petal.Length) -> redz3

# combine the two vectors
blkz_redz <- c(blkz, redz3$cc)  # cc is the variable added inside the function

tt <- ttheme_default(core = list(bg_params = list(fill = blkz_redz, col = "gray56")))
dev.off()
grid.table(iris[1:4, 1:3], theme = tt)

Citing within an RMarkdown table

I am attempting to create a table which has citations built into the table. Here is a visual of what I am trying to achieve.
As far as I know you can only add footnotes in rowvars or colvars in kableExtra (love that package).
# Create a dataframe called df
Component <- c('N2','P3')
Latency <- c('150 to 200ms', '625 to 800ms')
Location <- c('FCz, Fz, Cz', 'Pz, Oz')
df <- data.frame(Component, Latency, Location)
Below is my attempt after reading through kableExtra's GitHub page.
# Trying some code taken from the kableExtra guide
library(knitr)
library(kableExtra)

row.names(df) <- df$Component
df[1] <- NULL
dt_footnote <- df
names(dt_footnote)[1] <- paste0(names(dt_footnote)[2],
                                footnote_marker_symbol(1))
row.names(dt_footnote)[2] <- paste0(row.names(dt_footnote)[2],
                                    footnote_marker_alphabet(1))
kable(dt_footnote, align = "c",
      # Remember this escape = F
      escape = F, format = "latex", longtable = T, booktabs = T,
      caption = "My Table Name") %>%
  kable_styling(full_width = F) %>%
  footnote(alphabet = "Jones, 2013",
           symbol = "Footnote Symbol 1; ",
           footnote_as_chunk = T)
But this code only works on the headers. The ultimate goal would be to use a BibTeX reference such as @JonesFunctionalMixedEffectModels2013, so that the final part of the code would look like:
footnote(alphabet = #davidsonFunctionalMixedEffectModels2009,
symbol = "Footnote Symbol 1; ", footnote_as_chunk = T)
Anyone have any ideas?
Thanks
What I did in the end was to generate a temporary table with pander, then copy the reference numbers manually into my kable.
library(pander)

pander(
  df,
  caption = "Temporal",
  style = "simple",
  justify = "left")
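Another route that may be worth trying (my own suggestion, not part of the answer above): if the table is emitted as pandoc markdown rather than LaTeX, citation keys placed directly in the cells are resolved by pandoc when the R Markdown document is knitted, assuming the key exists in the bibliography listed in the YAML header. A rough sketch:
# A minimal sketch; @JonesFunctionalMixedEffectModels2013 is assumed to be a key in your .bib file
library(knitr)
Component <- c('N2', 'P3')
Latency <- c('150 to 200ms', '625 to 800ms')
Location <- c('FCz, Fz, Cz [@JonesFunctionalMixedEffectModels2013]', 'Pz, Oz')
df <- data.frame(Component, Latency, Location)
kable(df, format = "pandoc", caption = "My Table Name")
The trade-off is that a pandoc table does not support the LaTeX-specific kableExtra styling used above.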

PC-Stable from pcalg

I am using PC-Stable from the package 'pcalg' (version 2.0-10) to learn the structure. As I understand it, this algorithm should not be affected by the order of the input data, because it is order-independent. However, when I run it with different column orders, I get different graphs. Can anyone help me with this issue? Here is my code.
library(pracma)
library(pcalg)

randindexMatriax <- matrix(0, 10, ncol(TrainData))
numberUnique_val_col <- vector()

pdf("Graph for Test PC Stable with random order.pdf")
par(mfrow = c(2, 1))

for (i in 1:10)
{
  randindex <- randperm(1:ncol(TrainData))
  randindexMatriax[i, ] <- randindex
  TrainDataRandOrder <- TrainData[, randindex]
  V <- colnames(TrainDataRandOrder)

  UD <- data.frame(TrainDataRandOrder)
  # number of levels of each (discrete) variable, in the permuted column order
  numberUnique_val_col <- sapply(UD, function(x) length(unique(x)))

  suffStat <- list(dm = TrainDataRandOrder,
                   nlev = numberUnique_val_col,
                   adaptDF = FALSE)

  pc.fit <- pc(suffStat, indepTest = disCItest, alpha = 0.01, labels = V,
               fixedGaps = NULL, fixedEdges = NULL, NAdelete = TRUE, m.max = Inf,
               skel.method = "stable", conservative = TRUE, solve.confl = TRUE,
               verbose = TRUE)
  # plot each fitted graph into the PDF (requires Rgraphviz)
  plot(pc.fit, main = paste("Column ordering", i))
}
dev.off()
The "Stable" part of PC-Stable only affects the Skeleton phase of the algorithm. The Orientation phase is still order-dependent. Do the two graphs have identical "skeletons"? That is, if you convert all directed edges into undirected edges, are the two graphs identical?
If not, you may have uncovered a bug in pcalg! Please post a sample dataset and two orderings of the columns that produce graphs with different skeletons.
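To check that concretely, one option (my own sketch, not from the answer above) is to coerce each fitted graph to an adjacency matrix, symmetrise it, and compare. Here fit1 and fit2 are assumed names for pc() results obtained from two different column orderings:
# A minimal sketch for comparing the skeletons of two pc() fits
library(pcalg)
adj1 <- as(fit1@graph, "matrix")
adj2 <- as(fit2@graph, "matrix")
# bring the second matrix into the first one's variable order
adj2 <- adj2[rownames(adj1), colnames(adj1)]
# the skeleton keeps an edge if it appears in either direction
skel1 <- (adj1 + t(adj1)) > 0
skel2 <- (adj2 + t(adj2)) > 0
all(skel1 == skel2)  # TRUE means the skeletons agree and only orientations differ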
