Hmisc and LaTeX multicolumn - latex

I am trying to create a multicolumn in a LaTeX table. It's easy to do this manually, but I create these tables from data frames generated for many reports, so doing it manually is time consuming. What I have is:
cat(cfrm, file="table2.tex")  # this writes cfrm out to the file
t2 <- latex(tabl2, file="table2.tex", cellTexCmds=cellTex, rowname=NULL,
            colheads=c(paste("Recent Average Catch (", minyear, " - ", maxyear, ")", sep=""),
                       "Skipjack", "Yellowfin", "Bigeye", "SP Albacore"),
            caption=paste("Key fishery statistics averaged from ", minyear, " to C$_{latest}$ for each of the four main target tunas. Where C$_{latest}$ is the last year in each species assessment model. \\label{tab2}", sep=""),
            caption.lot=NULL, caption.loc='top',
            longtable=TRUE, collabel.just=c("c","c","c","c","c"), size="small",
            col.just=c("l","c","c","c","c"), colnamesTexCmd=c("apple"), append=TRUE)
which gives me the table attached:
What I would like to do is set the blue rows to be a multicolumn.
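One approach that might work, if the blue rows are really section headings rather than data: Hmisc's latex() can emit spanning heading rows itself through its rgroup/n.rgroup arguments, so they never have to live inside the data frame. A minimal sketch with made-up data and group labels (the real tabl2, cellTex, colheads and caption would go in their place):

library(Hmisc)

# hypothetical stand-in for tabl2; the real data frame and cellTex
# commands would be used instead
tabl2 <- data.frame(Statistic = rep(c("Catch (mt)", "Effort", "CPUE"), 2),
                    Skipjack = runif(6), Yellowfin = runif(6),
                    Bigeye = runif(6), Albacore = runif(6))

t2 <- latex(tabl2, file = "table2.tex",
            rowname  = rep("", nrow(tabl2)),          # rgroup needs a rowname column
            rgroup   = c("Purse seine", "Longline"),  # made-up labels for the spanning ("blue") rows
            n.rgroup = c(3, 3),                       # number of data rows under each label
            longtable = TRUE, size = "small", append = TRUE)

If the styling that latex() applies to rgroup labels isn't what's wanted, another option is to post-process table2.tex and wrap the relevant rows in a \multicolumn spanning all columns yourself (e.g. with readLines()/gsub()/writeLines()).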

how to select SpatRaster layers from their names?

I've got a SpatRaster (150 x 150 x 1377) that shows the temporal evolution of precipitation. Each layer is a given hour in a two-month interval, but some hours are missing and the dataset isn't continuous. The layer names are strings of the form "YYYYMMDDhhmm".
I need to compute the mean value over every three-hour interval, whether the interval is complete or has missing data. On complete intervals I want to average three layers; on intervals with one missing layer I would like to average the remaining two, and if two are missing, to take the single remaining value as the average.
How can I use the layer names to decide how to act?
I've already tried the code below, but it averages three consecutive layers by index rather than by hour. How can I convert the names to date-times (e.g. with "tidyverse"/lubridate) so I can use rollapply() to check whether the layer two steps back has the date-time I expect? Is there any other way to check this?
files <- paste0(resfolder,
                c("HSAF_final1_5.tif",    "HSAF_final6_10.tif",   "HSAF_final11_15.tif",
                  "HSAF_final16_20.tif",  "HSAF_final21_25.tif",  "HSAF_final26_30.tif",
                  "HSAF_final31_N04.tif", "HSAF_finalN05_N08.tif","HSAF_finalN09_N13.tif",
                  "HSAF_finalN14_N18.tif","HSAF_finalN19_N23.tif","HSAF_finalN24_N28.tif",
                  "HSAF_finalN29_N30.tif"))
HSAF <- rast(files)
index <- names(HSAF)
j <- 2
# first block of three layers
for (i in seq(1, 3, by=3)) {
  third_el  <- HSAF[[index[i+j]]]
  second_el <- HSAF[[index[i+j-1]]]
  first_el  <- HSAF[[index[i+j-2]]]
  newraster <- c(first_el, second_el, third_el)
  newraster <- mean(newraster, filename=paste0(tempfile(), ".tif"))
  names(newraster) <- paste0(index[i+j-2], index[i+j-1], index[i+j])
}
# remaining blocks, appended to the result
for (i in seq(4, 1374, by=3)) {
  third_el  <- HSAF[[index[i+j]]]
  second_el <- HSAF[[index[i+j-1]]]
  first_el  <- HSAF[[index[i+j-2]]]
  subraster <- c(first_el, second_el, third_el)
  subraster <- mean(subraster, filename=paste0(tempfile(), ".tif"))
  names(subraster) <- paste0(index[i+j-2], index[i+j-1], index[i+j])
  add(newraster) <- subraster
}
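This isn't the asker's solution, just a sketch of one way the layer names could drive the grouping, assuming they really are "YYYYMMDDhhmm" strings: parse them to POSIXct, assign every layer to a three-hour bin, and let terra::tapp() average whatever is present in each bin (three, two, or one layer), which avoids the rollapply() bookkeeping altogether:

library(terra)

# parse the "YYYYMMDDhhmm" layer names into date-times
tt <- as.POSIXct(names(HSAF), format = "%Y%m%d%H%M", tz = "UTC")

# put every layer into a three-hour bin; bins with missing hours
# simply end up holding one or two layers instead of three
bin <- floor(as.numeric(tt) / (3 * 3600))

# one output layer per bin, averaging whatever layers the bin contains
# (tapp returns one layer per unique index value, in ascending order)
HSAF3h <- tapp(HSAF, index = as.integer(factor(bin)), fun = mean, na.rm = TRUE,
               filename = paste0(tempfile(), ".tif"))

# label each three-hour mean with the name of the first layer in its bin
names(HSAF3h) <- tapply(names(HSAF), bin, function(x) x[1])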

[Google Data Studio]: Can't create histogram as bin dimension is interpreted as metric

I'd like to make a histogram of some data that is saved in a nested BigQuery table. In a simplified form, the table can be created in the following way:
CREATE TEMP TABLE Sessions (
  id int,
  hits ARRAY<STRUCT<pagename STRING>>
);
INSERT INTO Sessions (id, hits)
VALUES
( 1,[STRUCT('A'),STRUCT('A'),STRUCT('A')]),
( 2,[STRUCT('A')]),
( 3,[STRUCT('A'),STRUCT('A')]),
( 4,[STRUCT('A'),STRUCT('A')]),
( 5,[]),
( 6,[STRUCT('A')]),
( 7,[]),
( 8,[STRUCT('A')]),
( 9,[STRUCT('A')]),
(10,[STRUCT('A'),STRUCT('A')]);
and it looks like this:

id   hits.pagename
 1   A
     A
     A
 2   A
 3   A
     A
and so on for the other ids. My goal is to obtain a histogram showing the distribution of A-occurrences per id in Data Studio. The report for the MWE can be seen here: link
So far I created a calculated field called pageviews that evaluates the wanted occurrences for each session via SUM(IF(hits.pagename="A",1,0)). Looking at a table showing id and pageviews I get the expected result:
table showing the number of occurrences of page A for each id
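Just as a sanity check outside Data Studio (this is a hypothetical verification query, not part of the report), the expected pageviews per id can be computed directly in BigQuery against the Sessions table above:

SELECT
  id,
  -- count the 'A' hits inside the nested array of each session
  (SELECT COUNTIF(h.pagename = 'A') FROM UNNEST(hits) AS h) AS pageviews
FROM Sessions
ORDER BY id;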
However, the output of the calculated field is a metric, which might cause trouble in what follows. In the next step I wanted to follow the procedure presented in this post. Therefore, I created another field, bin, to assign my sessions to bins according to the pageviews:
CASE
WHEN pageviews = 0 OR pageviews = 1 THEN "bin 1"
WHEN pageviews = 2 OR pageviews = 3 THEN "bin 2"
END
According to this bin definition I hope to obtain a histogram having 6 counts in bin 1 and 4 counts in bin 2. Well, in this particular example it will actually have 4 counts in bin 1, as ids 5 and 7 had "null" entries, but never mind. This won't happen in my real-world table.
As you can see in the next image, showing the same table as above but now with a bin column, this assignment works as well - each id is assigned the correct bin - but now the output field is a metric of type text. Therefore, the bar chart won't let me use it (it needs it as a dimension).
Assignment of each id to a bin
Somewhere I read about the workaround of creating a self-joined blend, which exposes metrics as dimensions. This works only by name: my field is now a dimension and I can use it as such for the bar chart, but the bar chart won't load and shows a configuration error of the data source, which can be seen in this picture:
bar chart of id count over bin. In the configuration of the chart one can see that "bin" is now a dimension. The chart won't plot, however, as it shows a data configuration error (sorry for the German version of Data Studio).

SPSS - How to create a 'Totals' row (not a column)

I have a dataset like this:
Program Timely_Count Total_Count
PROG1 51,761 53,356
PROG2 232,371 235,769
PROG3 100,756 110,859
PROG4 25,713 36,309
PROG5 17,985 18,995
PROG6 24,673 24,732
I want to create a "Total" row (not a column) so when I save this into Excel I will have a table that looks like this:
Program Timely_Count Total_Count
PROG1 51,761 53,356
PROG2 232,371 235,769
PROG3 100,756 110,859
PROG4 25,713 36,309
PROG5 17,985 18,995
PROG6 24,673 24,732
TOTAL 453,259 480,020
I know I can use the AGGREGATE function to add a TOTALS column, but that does not format the dataset the way I need for this report.
I also need this in syntax since it is run multiple times per day on multiple datasets. I have SPSS version 22, if any of that helps.
First you aggregate, then add the aggregated results back to your original data.
First let's recreate your sample data:
* thousands separators removed so the freefield read does not treat the commas as delimiters.
data list list/Program (a20) Timely_Count Total_Count (2f8).
begin data
PROG1 51761 53356
PROG2 232371 235769
PROG3 100756 110859
PROG4 25713 36309
PROG5 17985 18995
PROG6 24673 24732
end data.
Now run this:
dataset name OrigData.
dataset declare tot.
* aggregate the whole file (empty BREAK) into the "tot" dataset.
aggregate /outfile=tot /break= /Timely_Count Total_Count = sum(Timely_Count Total_Count).
* append the totals row to the original data and label it.
add files /file=* /file=tot.
recode program (""="TOTAL").

Data Warehouse Design of Fact Tables

I'm pretty new to data warehouse design and am struggling with how to design the fact table given very similar, but somewhat different, metrics. Let's say you were evaluating the metrics below; how would you break up the fact tables (in this case company is a subset of client)? Would you be able to use one table for all of this, or would each metric being measured warrant its own fact table, or would each part of the metric being measured be its own column in one fact table?
Total company daily/monthly/yearly # of files processed
Total company daily/monthly/yearly file sizes processed
Total company daily/monthly/yearly # files errored
Total company daily/monthly/yearly # files failed
Total client daily/monthly/yearly # of files processed
Total client daily/monthly/yearly file sizes processed
Total client daily/monthly/yearly # files errored
Total client daily/monthly/yearly # files failed
By the looks of the measure names, I think you'll be served with a single fact table with a record for each file and a link back to a date_dim:
create table date_dim (
    date_sk       int,
    calendar_date date,
    month_ordinal int,
    month_name    nvarchar(20),
    year          int
    -- ...etc.; you've got your own one of these...
)
create table fact_file_measures (
    date_sk      int,     -- ref the date_dim
    file_sk      int,     -- ref the file_dim with additional file info
    company_sk   int,     -- ref the company_dim with the client details
    processed    int,     -- should always be one; optional, depending on how your reporting team like to work
    size_Kb      decimal, -- specify a size measurement, ambiguity is bad
    error_count  int,     -- 1 if the file had an error, 0 if fine
    failed_count int      -- 1 if the file failed, 0 if fine
)
So now you should be able to construct queries to get everything you asked for. For example, for your monthly stats:
select
    c.company_name,
    c.client_name,
    sum(f.processed)    as total_files,
    sum(f.size_Kb)      as total_files_size_Kb,
    sum(f.error_count)  as total_files_errored,
    sum(f.failed_count) as total_files_failed
from
    fact_file_measures f
    inner join company_dim c on f.company_sk = c.company_sk
    inner join date_dim d    on f.date_sk = d.date_sk
where
    d.month_name = 'January' and d.year = 1984
group by
    c.company_name, c.client_name
If you need the side-by-side day/month/year stuff, you can construct year and month fact tables to do the roll-ups and join back via date_dim's month/year fields. (You could include month and year fields in the daily fact table, but these values may end up being misused by less experienced report builders.) It all comes back to what your users actually want: design your fact tables to their requirements, and don't be afraid to have separate fact tables. Data warehousing is not about normalization; it's about presenting the data in a way that it can be used.
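For what it's worth, a pre-aggregated monthly roll-up along those lines might look roughly like this (same column names as above; fact_file_measures_monthly is a made-up name, and the exact create-table-as-select / select-into syntax depends on your platform):

create table fact_file_measures_monthly as
select
    d.year,
    d.month_ordinal,
    f.company_sk,
    sum(f.processed)    as files_processed,
    sum(f.size_Kb)      as total_size_Kb,
    sum(f.error_count)  as files_errored,
    sum(f.failed_count) as files_failed
from fact_file_measures f
inner join date_dim d on f.date_sk = d.date_sk
group by d.year, d.month_ordinal, f.company_sk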
Good luck

Reading Informix-SE audit trail log tables

INFORMIX-SQL 7.32 (SE):
I've created an audit trail "a_trx" for my transaction table to know who/when has added or updated rows in this table, with a snapshot of the row's contents. According to the documentation, an audit table is created with the same schema as the table being audited, plus the following audit info header columns prefixed:
table a_trx
  a_type       char(2)   {record type: aa = added, dd = deleted,
                          rr = before-update image, ww = after-update image}
  a_time       integer   {internal time value}
  a_process_id smallint  {process ID that changed record}
  a_usr_id     smallint  {user ID that changed record}
  a_rowid      integer   {original rowid}
  [...]                  {same columns as table being audited}
So then I proceeded to generate a default perform screen for a_trx, but could not locate a_trx in my table selection. I aborted and ls'd the .dbs directory; I did not see a_trx.dat or a_trx.idx, but found a_trx, which appears to be in .dat format according to my disk editor utility. Is there any other method for accessing this .dat clone, or do I have to trick the engine by renaming it to a_trx.dat, creating an .idx companion for it, and tweaking SYSTABLES, SYSCOLUMNS, etc. to be able to access this audit table like any other table? And what is the internal time value of a_time - the number of seconds since 12/31/1899?
The audit logs are not C-ISAM files; they are plain log files. IIRC, they are created with '.aud' as a suffix. If you get to choose the suffix, then you would create it with a '.dat' suffix, making sure the name does not conflict with any table name.
You should be able to access them as if they were a table, but you would have to create a table (data file) and the index file to match the augmented schema, and then arrange for the '.aud' file to refer to the same location as the '.dat' file - presumably via a link or possibly a symbolic link. You can specify where the table is stored in the CREATE TABLE statement in SE.
The time is a Unix time stamp - the number of seconds since 1970-01-01T00:00:00Z.
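A rough sketch of that idea, with a made-up table name and path (the column list comes from the audit header above; check the SE manual for the exact form of the IN clause):

-- ordinary table matching the audit schema, stored at a known path
CREATE TABLE a_trx_tab
(
    a_type       CHAR(2),
    a_time       INTEGER,
    a_process_id SMALLINT,
    a_usr_id     SMALLINT,
    a_rowid      INTEGER
    -- ... plus the same columns as the audited trx table ...
) IN "/path/to/dbname.dbs/a_trx_tab";
-- then arrange (at the OS level, e.g. with ln) for this table's .dat
-- file to refer to the same data as the existing audit file.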
