st_buffer multipoint with different distance - buffer

I have an sfc_multipoint object and want to use st_buffer, but with a different distance for every single point in the multipoint object.
Is that possible?
The multipoint object is built from coordinates stored in a table called data: every coordinate point (columns "lon" and "lat") should get a buffer of a different size, and that buffer size is contained in the column "dist".
This is my code:
library(sf)
coords <- matrix(c(data$lon,data$lat), ncol = 2)
tt <- st_multipoint(coords)
sfc <- st_sfc(tt, crs = 4326)
dt <- st_sf(data.frame(geom = sfc))
web <- st_transform(dt, crs = 3857)
geom <- st_geometry(web)
buf <- st_buffer(geom, dist = data$dist)
But it uses just the first dist value (0.100).
This is the result (the "small buffer" screenshot): just really small buffers.
For visualization see this picture ("example result"); it's just an example to show that the buffers should get bigger.

I think the problem here is in how you are "creating" the points dataset.
Replicating your code with dummy data, doing this:
library(sf)
data <- data.frame(lat = c(0,1,2,3), lon = c(0,1,2,3), dist = c(0.1,0.2,0.3, 0.4))
coords <- matrix(c(data$lon,data$lat), ncol = 2)
tt <- st_multipoint(coords)
does not give you multiple points, but a single MULTIPOINT feature:
tt
#> MULTIPOINT (0 0, 1 1, 2 2, 3 3)
Therefore, only a single buffer distance can be "passed" to it and you get:
plot(sf::st_buffer(tt, data$dist))
To solve the problem, you probably need to build the point dataset differently. For example, using:
tt <- st_as_sf(data, coords = c("lon", "lat"))
gives you:
tt
#> Simple feature collection with 4 features and 1 field
#> geometry type: POINT
#> dimension: XY
#> bbox: xmin: 0 ymin: 0 xmax: 3 ymax: 3
#> epsg (SRID): NA
#> proj4string: NA
#> dist geometry
#> 1 0.1 POINT (0 0)
#> 2 0.2 POINT (1 1)
#> 3 0.3 POINT (2 2)
#> 4 0.4 POINT (3 3)
You see that tt is now a simple feature collection made of 4 points, on which buffering with multiple distances will indeed work:
plot(sf::st_buffer(tt, data$dist))
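For completeness, here is a minimal sketch of the full pipeline with the CRS handling from your original code (assuming data has "lon", "lat" and "dist" columns, and that "dist" is meant to be in metres after projecting to EPSG:3857):
library(sf)
# one POINT feature per row, keeping dist as an attribute column
pts <- st_as_sf(data, coords = c("lon", "lat"), crs = 4326)
# project to a metric CRS so the buffer distances are interpreted in metres
pts_web <- st_transform(pts, crs = 3857)
# one buffer per point, each with its own distance
buf <- st_buffer(pts_web, dist = pts_web$dist)
plot(st_geometry(buf))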
HTH!

Related

Constrain axis limits in chordDiagram (circlize) when making gifs

I hope somebody will be able to help me with this chordDiagram visualisation I am trying to create. I am well aware that this visualization type may not be ideal for this particular data, but it is what I had in my head (how I wanted to visualize this data) and what I wanted to create, and now I think it is too late to give it up :) I am too curious how one can fix it. It is my first real post here, though I am an active user of Stack Overflow and I genuinely admire the audience here.
So I have data on the change in the size of an area in km2 over time (d0), and I am trying to create a GIF out of it using the example here: https://guyabel.com/post/animated-directional-chord-diagrams/
The data "d0":
Time <- as.numeric(c(10,10,10,100,100,100,200,200,200,5,5,5,50,50,50,0,0,0))
Year <- as.character(c(2050,2100,2200,2050,2100,2200,2050,2100,2200,2050,2100,2200,2050,2100,2200,2050,2100,2200))
Area_km2 <- as.numeric(c(4.3075211,7.1672926,17.2780622,5.9099250,8.2909189,16.9748961,6.5400554,8.9036313,16.5627228,3.0765610,6.3929883,18.0708108,5.3520782,8.4503856,16.7938196,0.5565978,1.8415855,12.5089476))
(d0 <- as.data.frame(cbind(Time,Year,Area_km2)))
I also have the color codes stored in a separate dataframe (d1) following the above mentioned example.
The data "d1":
year <- as.numeric(c(2050,2100,2200))
order1 <- as.character(c(1,2,3))
col1 <- c("#40A4D8","#33BEB7","#0C5BCE")
(d1 <- as.data.frame(cbind(year,order1,col1)))
So the idea was to have self-linking flows within each sector increasing in size over time, which would look like growing segments (or growing pie slices) in the final animated GIF. But no matter how hard I try, I can't seem to constrain the axis of each segment to the limits of that particular year in each single frame. The axis values seem to keep adding on over time, which is not what I want.
Like, for example, in the first figure (figure0), the "starting frame", the size of the links matches the data frame well:

orig_year  Area_km2  .frame
2050       0.557     0
2100       1.84      0
2200       12.5      0
But when one plots the next figure (figure1), the axis seems to have taken the values from the starting frame and added the current values on top of them (giving 4, 7.4 and 19 respectively), instead of showing 3.08, 6.39 and 18.1, which are the values in the data frame:

orig_year  Area_km2  .frame
2050       3.08      1
2100       6.39      1
2200       18.1      1
And it keeps on doing so as one loops through the data and creates new plots for the next frames. I wonder whether it is possible to constrain the axis and create the visualization in such a way that the links just gradually increase over time and the axis follows that increase, i.e. also grows gradually with the data.
Any help is highly appreciated!
Thanks.
My code:
# Sort the data by Time
library(dplyr)
(d0 <- arrange(d0,Time))
# Copy the Year column into Dest_year
(d0$Dest_year <- d0$Year)
# Re-arrange the data and interpolate between frames
library(tidyr)
library(tweenr)
(d2 <- d0 %>%
mutate(corridor=paste(Year,Dest_year,sep="->")) %>%
dplyr::select(Time,corridor,Area_km2) %>%
mutate(ease="linear") %>%
tweenr::tween_elements('Time','corridor','ease',nframes=30) %>%
tibble::as_tibble())
(d2 <- d2 %>%
separate(col=.group,into=c("orig_year","dest_year"),sep="->") %>%
dplyr::select(orig_year,dest_year,Area_km2,everything()))
d2$Time <- NULL
# Create a directory to store the individual plots
dir.create("./plot-gif/")
# Fixing scales
scale_gap <- function(Area_km2_m,Area_km2_max,gap_at_max=1,gaps=NULL) {
p <- Area_km2_m/Area_km2_max
if(length(gap_at_max)==1 & !is.null(gaps)) {
gap_at_max <- rep(gap_at_max,gaps)
}
gap_degree <- (360-sum(gap_at_max))*(1-p)
gap_m <- (gap_degree + sum(gap_at_max))/gaps
return(gap_m)
}
# Derive the size of the gaps in each frame for the animated GIF
(d3 <- d2 %>% group_by(orig_year) %>% mutate(gaps=scale_gap(Area_km2_m=Area_km2,Area_km2_max=max(.$Area_km2),gap_at_max=4,gaps=9)))
library(magrittr)
# Get the values for the axis limits
(axmax <- d2 %>% group_by(orig_year,.frame) %>% mutate(max=mean(Area_km2)))
# Create a unique chordDiagram for each frame
library(circlize)
for(f in unique(d2$.frame)){
png(file=paste0("./plot-gif/figure",f,".png"),height=7,width=7,units="in",res=500)
circos.clear()
par(mar=rep(0,4),cex=1)
circos.par(start.degree=90,track.margin=c(-0.1,0.1),
gap.degree=filter(d3,.frame==f)$gaps,
points.overflow.warning=FALSE)
chordDiagram(x=filter(d2,.frame==f),directional=2,order=d1$year,
grid.col=d1$col1,annotationTrack=c("grid","name","axis"),
transparency=0.25,annotationTrackHeight=c(0.05,0.1),
direction.type=c("diffHeight"),
diffHeight=-0.04,link.sort=TRUE,
xmax=axmax$max)
dev.off()
}
# Now make a GIF
library(magick)
img <- image_read(path="./plot-gif/figure0.png")
for(f in unique(d2$.frame)[-1]){
img0 <- image_read(path=paste0("./plot-gif/figure",f,".png"))
img <- c(img,img0)
message(f)
}
img1 <- image_scale(image=img,geometry="720x720")
ani0 <- image_animate(image=img1,fps=10)
image_write(image=ani0,path="./plot-gif/figure.gif")
I will start with your d0 object. I first construct it without converting everything to characters, keeping the values in their original numeric format. I also reorder d0 by Time and Year:
Time = c(10,10,10,100,100,100,200,200,200,5,5,5,50,50,50,0,0,0)
Year = c(2050,2100,2200,2050,2100,2200,2050,2100,2200,2050,2100,2200,2050,2100,2200,2050,2100,2200)
Area_km2 = c(4.3075211,7.1672926,17.2780622,5.9099250,8.2909189,16.9748961,6.5400554,8.9036313,16.5627228,3.0765610,6.3929883,18.0708108,5.3520782,8.4503856,16.7938196,0.5565978,1.8415855,12.5089476)
d0 = data.frame(Time = Time,
Year = Year,
Area_km2 = Area_km2,
Dest_year = Year)
d0 = d0[order(d0$Time, d0$Year), ]
The key thing is to calculate proper values for "gaps" between sectors so that the same unit from data corresponds to the same degree in different plots.
We first calculate the maximal total width of the circular plot:
width = tapply(d0$Area_km2, d0$Time, sum)
max_width = max(width)
We assume there are n sectors (n = 3 in d0). We let the first n-1 gaps be 2 degrees and dynamically adjust the last gap according to the total value in each plot. For the plot with the largest total value, the last gap is also set to 2 degrees.
n = 3
degree_per_unit = (360 - n*2)/max_width
Now degree_per_unit can be shared between multiple plots. Every time we calculate the value for last_gap:
for(t in sort(unique(Time))) {
l = d0$Time == t
d0_current = d0[l, c("Year", "Dest_year", "Area_km2")]
last_gap = 360 - (n-1)*2 - sum(d0_current$Area_km2)*degree_per_unit
circos.par(gap.after = c(rep(2, n-1), last_gap))
chordDiagram(d0_current, grid.col = c("2050" = "red", "2100" = "blue", "2200" = "green"))
circos.clear()
title(paste0("Time = ", t, ", Sum = ", sum(d0_current$Area_km2)))
Sys.sleep(1)
}
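As a quick sanity check of the gap arithmetic (a small sketch reusing the objects defined above): for the frame with the largest total area, the last gap also comes out as 2 degrees, so every frame uses the same degrees-per-unit scale.
last_gap_max = 360 - (n - 1)*2 - max_width*degree_per_unit
last_gap_max
#> [1] 2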

Transferring 2d boundaries onto its 1d grid

I have an m x n matrix of size 128x128, and I have translated my 2D x,y positions onto this 1D matrix grid. My 2D coordinates accept positions in the range 0 to 127, i.e. any combination from {x=0,y=0} to {x=127,y=127}. I'm implementing algorithms that take the neighbouring positions of these nodes, specifically the 8 surrounding positions at distance i (let's say i=1). So considering node={0,0}, my neighbours are generated by adding these vectors to said node:
two_d_nodes={
{0,i*1},{0,-i*1},{-i*1,0},{i*1,0},
{i*1,i*1},{i*1,-i*1},{-i*1,-i*1},{-i*1,i*1}
}
In 2D, though, I am excluding neighbours outside the boundary. So in the above, for node={0,0}, only the neighbours {0,1},{1,1},{1,0} are generated. Setting the boundary is basically just implementing some form of:
if x>=0 and y>=0 and x<=127 and y<=127 then...
The 1d translation of node={0,0} is node={0} and my vector additions translated to 1d are:
one_d_nodes={{128},{-128},{-1},{1},{129},{-127},{-129},{127}}
However the relationship with the 2d boundary expressions doesn't hold true here. Or at least I don't know how to translate it. In response I tried generating all the loose cases of the grid:
{0,127,16256,16383} --the 4 corner positions
node%128==0 --right-side boundary
node%128==1 --left-side boundary
node>1 and node<128 --top-side boundary
node>127*128 and node<128*128 --bottom-side boundary
Then I tried implementing special cases, where I just ignored generating the specific out-of-bounds neighbours. That was messy, and didn't even work for some reason. Regardless, I feel I am missing a much cleaner method.
So my question is: How do I translate my 2d boundaries onto my 1d grid for the purposes of only generating neighbours within the boundary?
The following is in regards to the answer below:
function newmatrix(node) --node={x=0,y=0}
local matrix={}
add(matrix,{(node.y<<8)+node.x}) --matrix= {{0},...}
--lets say [1 2 3] is a width=3; height=1 matrix,
--then the above line maps my 2d coord to a matrix of width=256, height=128
matrix.height, matrix.width = #node,#node[1] --1,1
return matrix
end
function indexmatrix(matrix, r,c)
if r > 1 and r <= matrix.height and c > 1 and c <= matrix.width then
return matrix[matrix.width * r + c]
else
return false
end
end
function getneighbors(matrix, r, c)
local two_d_nodes={
{0,1},{0,-1},{-1,0},{1,0},
{1,1},{1,-1},{-1,-1},{-1,1}
}
local neighbors = {}
for index, node in ipairs(two_d_nodes) do
table.insert(neighbors, indexmatrix(matrix, r + node[1], c + node[2]))
end
return neighbors
end
--Usage:
m={x=0,y=0}
matrix=newmatrix(m) --{{0}}
--here's where I'm stuck, cause idk what r and c are
--normally I'd grab my neighbors next....
neighbors=getneighbors(matrix)
--then I have indexmatrix for...?
--my understanding is that I am using indexmatrix to
--check if the nieghbors are within the bounds or not, is that right?
--can you illustrate how it would work for my code here, it should
--toss out anything with x less than 0 and y less than 0. Same as in OP's ex
indexmatrix(matrix) ---not sure what to do here
Attempt 2 in regards to the comment sections below:
function indexmatrix(matrix, x ,y)
if x > 1 and x <= matrix['height'] and y > 1 and y <= matrix['width'] then
return matrix[matrix['width'] * x + y]
else
return false
end
end
function getneighbors(matrix, pos_x, pos_y)
local two_d_nodes={
{0,1},{0,-1},{-1,0},{1,0},
{1,1},{1,-1},{-1,-1},{-1,1}
}
local neighbors = {}
for _, node in ipairs(two_d_nodes) do
add(neighbors, indexmatrix(matrix, pos_x + node[1], pos_y + node[2]))
end
return neighbors
end
matrix={} --128 columns/width, 128 rows/height
for k=1,128 do
add(matrix,{}) ----add() is same as table.insert()
for i=1,128 do
matrix[k][i]=i
end
end
id_matrix={{}} --{ {1...16k}}
for j=1,128*128 do
id_matrix[1][j]=j
end
id_matrix.height, id_matrix.width = 128,128
position={x=0,y=0}
neighbors = getNeighbors(matrix, position.x, position.y)
Attempt 3: A working dumbed down version of the code given. Not what I wanted at all.
function indexmatrix(x,y)
if x>=0 and y>=0 and x<127 and y<127 then
return 128 * x + y
else
return false
end
end
function getneighbors(posx,posy)
local two_d_nodes={
{0,1},{0,-1},{-1,0},{1,0},
{1,1},{1,-1},{-1,-1},{-1,1}
}
local neighbors = {}
for _, node in pairs(two_d_nodes) do
add(neighbors, indexmatrix(posx+node[1], posy + node[2]))
end
return neighbors
end
pos={x=0,y=10}
neighbors = getneighbors(pos.x,pos.y)
Edit: The equation that maps 2D coordinates to 1D, index = width*x + y, is a function of two independent variables, so the 2D boundary conditions cannot be collapsed into a single condition on the 1D index alone. The short answer to the question is therefore: no.
Instead, the constraints on x and y must be used to ensure the integrity of the 1D coordinates.
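As a concrete illustration, using the 128-wide grid from the question and the row-major mapping 1D position = x*128 + y: the out-of-bounds neighbour (x, y) = (1, -1) and the perfectly valid point (x, y) = (0, 127) both map to 1D position 127 (1*128 - 1 = 0*128 + 127), so the 1D index alone cannot tell them apart. The bounds check has to happen on the two coordinates before the mapping is applied.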
What follows is an example of how to work with a 1D array as if it were a 2D matrix.
Let's say we have a constructor that maps a 2D table to a 1D matrix
local function newMatrix(m) -- m is 128x128 Matrix
local Matrix = {}
--logic to map m to 1D array
-- ...
return Matrix -- Matrix is m 1x16384 Array
end
The numeric indices are reserved, but we can add non-numeric keys to store information about the matrix. Let's store the number of rows and columns as height and width. We can do this in the constructor
local function newMatrix(m)
local Matrix = {}
--logic to map to 1D array
-- ...
-- Store row and column info in the matrix
Matrix.height, Matrix.width = #m, #m[1] -- Not the best way
return Matrix
end
Although the matrix is now a 1x16384 array, we can create a function that allows us to interact with the 1D array like it's still a 2D matrix. This function will get the value of a position in the matrix, but we return false/nil if the indices are out of bounds.
To be clear, this is the standard formula to map 2D coordinates to a 1D coordinate for a matrix:
1D position = 2D.x * Matrix-Width + 2D.y
And here's what that function could look like:
local function indexMatrix(Matrix, r,c)
if r >= 1 and r <= Matrix.height and c >= 1 and c <= Matrix.width then
return Matrix[Matrix.width * (r - 1) + c] -- the formula above, shifted because Lua tables (and r, c here) are 1-based
else
return false -- out of bounds
end
end
We can now index our Matrix with any bounds without fear of returning an incorrect element.
Finally, we can make a function to grab the neighbors given a 2D position. In this function, we add vectors to the given 2D position to get surrounding positions, and then index the matrix using the indexMatrix function. Because indexMatrix checks if a 2D position is within the bounds of the original Matrix (before it was converted), we only get neighbors that exist.
local function getNeighbors(Matrix, r, c) -- r,c = row, column (2D position)
local two_d_nodes={
{0,1},{0,-1},{-1,0},{1,0},
{1,1},{1,-1},{-1,-1},{-1,1}
}
local neighbors = {}
for index, node in ipairs(two_d_nodes) do
-- Add each vector to the given position and get the node from the Matrix
table.insert(neighbors, indexMatrix(Matrix, r + node[1], c + node[2]))
end
return neighbors
end
You can either skip elements that return false from indexMatrix or remove them after the fact. Or anything else that sounds better to you (this code is not great, it's just meant to be an example). Wrap it in a for i ... do loop and you can go out an arbitrary distance.
I hope I haven't assumed too much and that this is helpful. Just know it's not foolproof (the # operator stops counting at the first nil, for instance)
Edit: Usage
Matrix = {
{1,2,3...128}, -- row 1
{1,2,3...128},
...
{1,2,3...128}, -- row 128
}
Array = newMatrix(Matrix) -- Convert to 1D Array ({1,2,3,...,16384})
--Array.width = 128, Array.height = 128
position = {x=0, y=0}
neighbors = getNeighbors(Array, position.x, position.y)
-- neighbors is: {{0,1}, false, false, {1,0}, {1,1}, false, false, false}

Is there a way to determine the major orientation of polygon in sf?

I would like to know if there is an off-the-shelf tool, or if anyone has developed a method, for determining the main-axis geographic orientation of spatial shapes. In general, I would like to be able to determine whether a shape is oriented east-west or north-south, but ideally there would be an angle or degree measurement associated with each shape.
ArcGIS offers the 'calculate main angle tool' but it is designed for orthogonal shapes and I am working with wildfire perimeters which are blob-like or at least not very orthogonal. At first glance, the Arc tool provides very coarse measurements.
I would like to do this using an sf object, so for an example perhaps use the North Carolina data in the sf package. What is the geographic orientation of each of the 100 counties in North Carolina?
nc <- st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE)
Thanks for your help!
The flightplanning-R package has a function that calculates the minimum bounding rectangle, its angle of orientation, height, and width (https://github.com/caiohamamura/flightplanning-R).
I've adjusted it slightly and used it below in another function to return an sf object with the angle of orientation and a POLYGON geometry column. The angle runs from 0 (east-west) to 180 (also east-west), with 90 being north-south.
# Copied function getMinBBox()
# from https://github.com/caiohamamura/flightplanning-R/blob/master/R/utils.R
# credit there given to: Daniel Wollschlaeger <https://github.com/ramnathv>
library(tidyverse)
library(sf)
library(sfheaders)
nc <- st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE) %>%
st_geometry() %>% st_as_sf()
getMinBBox <- function(x) {
stopifnot(is.matrix(x), is.numeric(x), nrow(x) >= 2, ncol(x) == 2)
## rotating calipers algorithm using the convex hull
H <- grDevices::chull(x) ## hull indices, vertices ordered clockwise
n <- length(H) ## number of hull vertices
hull <- x[H, ] ## hull vertices
## unit basis vectors for all subspaces spanned by the hull edges
hDir <- diff(rbind(hull, hull[1, ])) ## hull vertices are circular
hLens <- sqrt(rowSums(hDir^2)) ## length of basis vectors
huDir <- diag(1/hLens) %*% hDir ## scaled to unit length
## unit basis vectors for the orthogonal subspaces
## rotation by 90 deg -> y' = x, x' = -y
ouDir <- cbind(-huDir[ , 2], huDir[ , 1])
## project hull vertices on the subspaces spanned by the hull edges, and on
## the subspaces spanned by their orthogonal complements - in subspace coords
projMat <- rbind(huDir, ouDir) %*% t(hull)
## range of projections and corresponding width/height of bounding rectangle
rangeH <- matrix(numeric(n*2), ncol=2) ## hull edge
rangeO <- matrix(numeric(n*2), ncol=2) ## orthogonal subspace
widths <- numeric(n)
heights <- numeric(n)
for(i in seq(along=numeric(n))) {
rangeH[i, ] <- range(projMat[ i, ])
## the orthogonal subspace is in the 2nd half of the matrix
rangeO[i, ] <- range(projMat[n+i, ])
widths[i] <- abs(diff(rangeH[i, ]))
heights[i] <- abs(diff(rangeO[i, ]))
}
## extreme projections for min-area rect in subspace coordinates
## hull edge leading to minimum-area
eMin <- which.min(widths*heights)
hProj <- rbind( rangeH[eMin, ], 0)
oProj <- rbind(0, rangeO[eMin, ])
## move projections to rectangle corners
hPts <- sweep(hProj, 1, oProj[ , 1], "+")
oPts <- sweep(hProj, 1, oProj[ , 2], "+")
## corners in standard coordinates, rows = x,y, columns = corners
## in combined (4x2)-matrix: reverse point order to be usable in polygon()
## basis formed by hull edge and orthogonal subspace
basis <- cbind(huDir[eMin, ], ouDir[eMin, ])
hCorn <- basis %*% hPts
oCorn <- basis %*% oPts
pts <- t(cbind(hCorn, oCorn[ , c(2, 1)]))
## angle of longer edge pointing up
dPts <- diff(pts)
e <- dPts[which.max(rowSums(dPts^2)), ] ## one of the longer edges
eUp <- e * sign(e[2]) ## rotate upwards 180 deg if necessary
deg <- atan2(eUp[2], eUp[1])*180 / pi ## angle in degrees
return(list(pts=pts, width=heights[eMin], height=widths[eMin], angle=deg))
}
##############
## Use getMinBBox in a custom function to return an sf object
##############
min_box_sf <- function(x){
crs <- st_crs(x)
x_as_matrix <- st_coordinates(x)[,1:2]
min_box <- getMinBBox(x_as_matrix)
box <- sfheaders::sf_polygon(min_box$pts) %>%
st_set_crs(crs)
box$angle <- min_box$angle
box
}
# Testing on a county in the nc dataset with an unusual shape and orientation:
min_box_sf(nc[56,])
#> Simple feature collection with 1 feature and 2 fields
#> Geometry type: POLYGON
#> Dimension: XY
#> Bounding box: xmin: -76.19819 ymin: 35.11926 xmax: -75.31058 ymax: 36.23016
#> Geodetic CRS: NAD27
#> id geometry angle
#> 1 1 POLYGON ((-76.19819 36.0092... 117.4866
#Plotting county 56 & the associated minimum bounding box
ggplot() +
geom_sf(data = nc[56,],
fill = 'red',
alpha = .2) +
geom_sf(data = min_box_sf(nc[56,]),
fill = NA)
The unusually shaped Dare County, NC has a minimum bounding box with a 'long' orientation of about 117 degrees, or north-north-west to south-south-east.
# Using the function on each row of an sf object.
# note the crs is not retained.
pmap_dfr(nc, min_box_sf)
#> Simple feature collection with 100 features and 2 fields
#> Geometry type: POLYGON
#> Dimension: XY
#> Bounding box: xmin: -84.32385 ymin: 33.86573 xmax: -75.31058 ymax: 36.87134
#> CRS: NA
#> First 10 features:
#> id angle geometry
#> 1 1 177.0408464 POLYGON ((-81.74847 36.2486...
#> 2 1 179.0078231 POLYGON ((-81.3505 36.36728...
#> 3 1 178.4492784 POLYGON ((-80.97202 36.2365...
#> 4 1 136.8896308 POLYGON ((-75.59489 36.2906...
#> 5 1 149.5889916 POLYGON ((-77.71197 36.8713...
#> 6 1 179.5157854 POLYGON ((-77.21774 36.2322...
#> 7 1 147.1227419 POLYGON ((-75.90195 36.2792...
#> 8 1 0.1751954 POLYGON ((-76.95329 36.2937...
#> 9 1 0.1759289 POLYGON ((-78.32017 36.1949...
#> 10 1 179.0809855 POLYGON ((-80.02092 36.5467...
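If you want the CRS back on the combined result, one option (a small sketch, reusing the nc object from above) is to set it afterwards:
pmap_dfr(nc, min_box_sf) %>%
  st_set_crs(st_crs(nc))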
Plotting all the counties' minimum bounding boxes together:
pmap_dfr(nc, min_box_sf) %>%
ggplot() +
geom_sf(alpha = .2)
Created on 2021-08-20 by the reprex package (v2.0.1)

Linear Interpolation - shrinking a line

Suppose we have a 1D array named Source that consists of 9 elements:
Source[0 to 8].
Using "linear interpolation" we want to shrink it into a smaller 4-point array: Destination[0 to 3].
This is how I understand the Algorithm:
Calculate the ratio between both array lengths: 9/4 = 2.5
Iterate over the destination coordinates and find the appropriate source coordinate:
Destination [0] = 0 * 2.5 = Source [0] -> Success! use this exact value.
Destination [1] = 1 * 2.5 = Source [2.5] -> No such element! Calculate the average of Source[2] and Source[3].
Destination [2] = 2 * 2.5 = Source [5] -> Success! use this exact value.
Destination [3] = 3 * 2.5 = Source [7.5] -> No such element! Calculate the average of Source[7] and Source[8].
Is this correct ?
Almost correct. 9/4 = 2.25. ;-)
Anyway, if you want to preserve the endpoint values, you should calculate the ratio as (9-1)/(4-1) = 2.666... (Between points 0, 1, 2, 3 there are only three segments, thus the length equals to 3. The same refers to 0...8).
If you don't hit an exact value, remember to compute a weighted mean, e.g.
Destination[1] = 1 * 2.667 -> (3-2.667)*Source[2] + (2.667-2)*Source[3]
This comes from the linear interpolation equation
y = (y0*(x1 - x) + y1*(x - x0)) / (x1 - x0)
where the denominator is 1 here because x1 - x0 = 1, and in this case,
x=2.66
x0=2
x1=3
y0=Source[2]
y1=Source[3]
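For a quick numerical check, here is a small sketch in base R (the Source values are made-up example data); approx() performs exactly this endpoint-preserving linear interpolation:
Source <- c(10, 12, 15, 11, 9, 14, 18, 16, 13)   # 9 points, indices 0..8
Destination <- approx(x = 0:8, y = Source, xout = seq(0, 8, length.out = 4))$y
Destination
#> [1] 10.00000 12.33333 15.33333 13.00000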

combine time series plot by using R

I want to combine three graphs in one plot. The data is the built-in R dataset nottem. Can someone help me write code that puts the seasonal-mean model, the harmonic (cosine) model and the original time series together in one plot, using different colors for comparison? I have already written the model code; I just don't know how to combine them.
Code:
library(TSA)
nottem
month.=season(nottem)
model=lm(nottem~month.-1)
summary(nottem)
har.=harmonic(nottem,1)
model1=lm(nottem~har.)
summary(model1)
plot(nottem,type="l",ylab="Average monthly temperature at Nottingham castle")
points(y=nottem,x=time(nottem), pch=as.vector(season(nottem)))
Just put your time series inside a matrix:
x = cbind(serie1 = ts(cumsum(rnorm(100)), freq = 12, start = c(2013, 2)),
serie2 = ts(cumsum(rnorm(100)), freq = 12, start = c(2013, 2)))
plot(x)
Or configure the plot region:
par(mfrow = c(2, 1)) # 2 rows, 1 column
serie1 = ts(cumsum(rnorm(100)), freq = 12, start = c(2013, 2))
serie2 = ts(cumsum(rnorm(100)), freq = 12, start = c(2013, 2))
require(zoo)
plot(serie1)
lines(rollapply(serie1, width = 10, FUN = mean), col = 'red')
plot(serie2)
lines(rollapply(serie2, width = 10, FUN = mean), col = 'blue')
hope it helps.
PS.: zoo package is not needed in this example, you could use the filter function.
You can extract the seasonal mean with:
s.mean = tapply(serie, cycle(serie), mean)
# January, assuming serie is monthly data
print(s.mean[1])
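Applied to the nottem data from the question, a minimal sketch for overlaying those monthly means on the raw series could be:
s.mean <- tapply(nottem, cycle(nottem), mean)   # 12 monthly means
plot(nottem, type = "l", ylab = "Average monthly temperature at Nottingham castle")
lines(ts(rep(s.mean, 20), start = start(nottem), frequency = 12), col = "blue")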
This graph is pretty hard to read, because your three sets of values are so similar. Still, if you simply want to graph all of these on the same plot, you can do it pretty easily by using the coefficients generated by your models.
Step 1: Plot the raw data. This comes from your original code.
plot(nottem,type="l",ylab="Average monthly temperature at Nottingham castle")
Step 2: Set up x-values for the mean and cosine plots.
x <- seq(1920, (1940 - 1/12), by=1/12)
Step 3: Plot the seasonal means by repeating the coefficients from the first model.
lines(x=x, y=rep(model$coefficients, 20), col="blue")
Step 4: Calculate the y-values for the cosine function using the coefficients from the second model, and then plot.
y <- model1$coefficients[2] * cos(2 * pi * x) + model1$coefficients[1]
lines(x=x, y=y, col="red")
ggplot variant: If you decide to switch to the popular 'ggplot2' package for your plot, you would do it like so:
library(ggplot2)
library(reshape2)  # provides melt()
x <- seq(1920, (1940 - 1/12), by=1/12)
y.seas.mean <- rep(model$coefficients, 20)
y.har.cos <- model1$coefficients[2] * cos(2 * pi * x) + model1$coefficients[1]
plot_Data <- melt(data.frame(x=x, temp=nottem, seas.mean=y.seas.mean, har.cos=y.har.cos), id="x")
ggplot(plot_Data, aes(x=x, y=value, col=variable)) + geom_line()
