How to use multiple data sources in Eleventy
I would like to be able to call more than one JSON file in an Eleventy template page (gallery.njk).
I've tried putting everything into a nested JSON file 'collections.json', but it's not working, and I'd rather have it separated out anyway for manageability.
I'm trying something like this, but it's not working:
---
pagination:
  data: "collection1", "collection2"
  size: 1
  alias: image
permalink: "/gallery/{{ image.title | slug }}/"
---
I have also tried:
data: collection1, collection2
data: [ collection1, collection2 ]
This is what does work, but it obviously only gives me one collection:
---
pagination:
  data: collection1
  size: 1
  alias: image
permalink: "/gallery/{{ image.title | slug }}/"
---
Related
Benthos grok log parse
So I have this log and I was trying to parse it using Benthos grok. What I need to do is return five elements as JSON:

• Timestamp
• Connection direction (inbound/outbound)
• Source IP
• Destination IP
• Source Port

The log:

<134>1 2023-01-21T17:18:05Z CHKPGWMGMT CheckPoint 16575 - [action:"Accept"; flags:"411908"; ifdir:"outbound"; ifname:"eth0"; logid:"0"; loguid:"{0x80c5f24,0x273f572f,0x1a6c6aae,0x5f835b6e}"; origin:"10.21.10.2"; originsicname:"cn=cp_mgmt,o=CHKPGWMGMT..f6b99b"; sequencenum:"4"; time:"1674314285"; version:"5"; __policy_id_tag:"product=VPN-1 & FireWall-1[db_tag={F7CAC520-C428-484E-8004-06A1FAC151A3};mgmt=CHKPGWMGMT;date=1667399823;policy_name=Standard]"; dst:"10.21.10.2"; inzone:"Local"; layer_name:"Network"; layer_uuid:"8a994dd3-993e-4c0c-92a1-a8630b153f4c"; match_id:"1"; parent_rule:"0"; rule_action:"Accept"; rule_uid:"102f52bf-da21-49cd-b2e2-6affe347215d"; outzone:"Local"; product:"VPN-1 & FireWall-1"; proto:"6"; s_port:"46540"; service:"1433"; service_id:"https"; src:"10.21.9.1"]

My config:

input:
  type: file
  file:
    paths: [./intput.txt]
    codec: lines
pipeline:
  processors:
    - grok:
        expressions:
          - '%{NGFWLOGFILE}'
        pattern_definitions:
          NGFWLOGFILE: '%{NOTSPACE:interfaceid} %{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:Letters} %{NOTSPACE:Mhm} %{NOTSPACE:Skaicius} %{NOTSPACE:AA} %{NOTSPACE:Action}'
#    - mapping: |
#        root.timestamp = this.timestamp
#        root.Action = this.Action
output:
  stdout: {}
#output:
#  label: ""
#  file:
#    path: "Output.txt"
#    codec: lines

So I tried using grok to parse the log into JSON and a mapping to filter out the parts I want. Where I got stuck is pattern_definitions: how do I extract data from the bracketed list, whose fields already have names in the log file? Or should I use a better approach for this task?
Grok translates to a regular expression under the covers, so I don't think it has any notion of lists and such. Try this:

input:
  generate:
    count: 1
    interval: 0s
    mapping: |
      root = """<134>1 2023-01-21T17:18:05Z CHKPGWMGMT CheckPoint 16575 - [action:"Accept"; flags:"411908"; ifdir:"outbound"; ifname:"eth0"; logid:"0"; loguid:"{0x80c5f24,0x273f572f,0x1a6c6aae,0x5f835b6e}"; origin:"10.21.10.2"; originsicname:"cn=cp_mgmt,o=CHKPGWMGMT..f6b99b"; sequencenum:"4"; time:"1674314285"; version:"5"; __policy_id_tag:"product=VPN-1 & FireWall-1[db_tag={F7CAC520-C428-484E-8004-06A1FAC151A3};mgmt=CHKPGWMGMT;date=1667399823;policy_name=Standard]"; dst:"10.21.10.2"; inzone:"Local"; layer_name:"Network"; layer_uuid:"8a994dd3-993e-4c0c-92a1-a8630b153f4c"; match_id:"1"; parent_rule:"0"; rule_action:"Accept"; rule_uid:"102f52bf-da21-49cd-b2e2-6affe347215d"; outzone:"Local"; product:"VPN-1 & FireWall-1"; proto:"6"; s_port:"46540"; service:"1433"; service_id:"https"; src:"10.21.9.1"]"""
pipeline:
  processors:
    - grok:
        expressions:
          - "%{NGFWLOGFILE}"
        pattern_definitions:
          NGFWLOGFILE: |-
            %{NOTSPACE:interfaceid} %{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:Letters} %{NOTSPACE:Mhm} %{NOTSPACE:Skaicius} %{NOTSPACE:AA} \[%{GREEDYDATA}; ifdir:"%{DATA:connectionDirection}"; %{GREEDYDATA}; dst:"%{DATA:destinationIP}"; %{GREEDYDATA}; s_port:"%{DATA:sourcePort}"; %{GREEDYDATA}; src:"%{DATA:sourceIP}"\]
output:
  stdout: {}
Eleventy, frontmatter data not being evaluated
I have a template that looks like this:

---
date: "2016-01-01T06:00-06:00"
value: "/{{ page.date | date: '%Y/%m/%d' }}/index.html"
---
Value prints: {{ value }} <br/>
But we expect: {{ page.date | date: '%Y/%m/%d' }}/index.html <br/>

When I render the site, it looks like this:

Value prints: /{{ page.date | date: '%Y/%m/%d' }}/index.html
But we expect: 2016/01/01/index.html

I really want the value parameter to have the expected value. As far as I can tell, this sort of thing should work; I want to use this technique to calculate permalinks. My thinking is based on https://www.11ty.dev/docs/permalinks/ and I'm running Eleventy 0.12.1.

Things I've tried: YAML, JSON and JS frontmatter; Markdown and njk templates; literally copy-pasting sample code from the docs. At this point I think Eleventy might have a bug.
At the moment of writing, Eleventy doesn't support template syntax in any frontmatter field except the permalink field:

    permalink: Change the output target of the current template. Normally, you cannot use template syntax to reference other variables in your data, but permalink is an exception. (Source)

Instead, you can use computed data, which allows you to set frontmatter data based on other frontmatter fields. Something like this should work:

date: "2016-01-01T06:00-06:00"
eleventyComputed:
  value: "/{{ page.date | date: '%Y/%m/%d' }}/index.html"
Need guidance to use .map, .group, .pluck, etc. to make a multi-series line chart in a Ruby on Rails app
I'm trying to display a multi-series line chart using Chartkick in a Ruby on Rails app. The chart should display paper_types and the weight for each type during some time period. [screenshot added] This is my latest try:

<%= line_chart [
  { name: @pappi.map { |p| [p.paper_type] },
    data: @pappi.map { |t| [t.date, t.paper_weight] },
    'interpolateNulls': true }
] %>

where @pappi = Paper.all. The code above outputs as the picture below, where every paper_type ends up on one single line, instead of showing separate lines for each paper_type. What I'm looking for is a chart similar to the second screenshot, where each paper_type has its own line. Can someone please help me get the outcome I want?
I did not test this; I only read the docs and concluded the following. line_chart expects an argument structured like this (from the JavaScript documentation):

line_chart [
  { name: 'Line1', data: { '2017-01-01' => 2, '2017-01-08' => 3 } },
  { name: 'Line2', data: { '2017-01-01' => 1, '2017-01-08' => 7 } },
]
# try it in your view to make sure it works as described below

This will create a chart with two lines (Line1 and Line2); the horizontal axis will contain the two values 2017-01-01 and 2017-01-08, and the vertical axis will be a scale, probably from 1 to 7 (data minimum to data maximum). Following this data structure in your context:

Specs (correct me if I am wrong): one line for each different paper_type, and a weight value for a given paper_type and a given date.

Object mapping to match the desired data structure:

# controller
all_paper_type_values = Paper.all.pluck(:paper_type).uniq
@data_for_chart = all_paper_type_values.map do |paper_type|
  { name: paper_type,
    data: Paper.where(paper_type: paper_type).group_by_month(:created_at).sum(:weight) }
end

# view
<%= line_chart(@data_for_chart) %>

This is not scoped to any user or date range; you will have to add that to the code above. Try this and let me know if it fails.
What happens when you put any of these options in the Rails console? Do you get multiple series of data? Have you tried?

<%= line_chart [
  name: paper.paper_type,
  data: current_user.papers.group(:paper_type).group_by_week(:created_at).count
] %>
Grafana not reading results properly
My problem is the following. I have this text:

ID:
Origem: 4
Mensagem: Parametro invalido: CHASSI_INVALIDO: chassi
Gateway: 2
Versao: v20170130
Layout: BASERNS2
Data: 2017-04-10 10:00:04.592

And Grafana (it doesn't matter which panel) reads it like this:

ID: Origem: 4 Mensagem: Parametro invalido: CHASSI_INVALIDO: chassi Gateway: 2 Versao: v20170130 Layout: BASERNS2 Data: 2017-04-10 10:00:04.592

How can I get Grafana to show the result properly, with the new lines? Is that even possible? Thanks
Ruby: How can I read a CSV file that contains two headers in Ruby?
I have a ".CSV" file that I'm trying to parse using the CSV library in Ruby. The file has two rows of headers, though, and I've never encountered this before and don't know how to handle it. Below is an example of the headers and rows.

Row 1
"Institution ID","Institution","Game Date","Uniform Number","Last Name","First Name","Rushing","","","","","Passing","","","","","","Total Off.","","Receiving","","","Pass Int","","","Fumble Ret","","","Punting","","Punt Ret","","","KO Ret","","","Total TD","Off xpts","","","","Def xpts","","","","FG","","Saf","Points"

Row 2
"","","","","","","Rushes","Gain","Loss","Net","TD","Att","Cmp","Int","Yards","TD","Conv","Plays","Yards","No.","Yards","TD","No.","Yards","TD","No.","Yards","TD","No.","Yards","No.","Yards","TD","No.","Yards","TD","","Kicks Att","Kicks Made","R/P Att","R/P Made","Kicks Att","Kicks Made","Int/Fum Att","Int/Fum Made","Att","Made"

Row 3
"721","AirForce","09/01/12","19","BASKA","DAVID","","","","","","","","","","","","0","0","","","","","","","","","","2","85","","","","","","","","","","","","","","","","","","","0"

There are no line breaks in the example above; I just added them so it would be easier to read. Does CSV have methods available to handle this structure, or will I have to write my own methods to handle it? Thanks!
It looks like your CSV file was produced from an Excel spreadsheet that has columns grouped like this:

... |          Rushing          |       Passing        | ...
... |Rushes|Gain|Loss|Net|TD|Att|Cmp|Int|Yards|TD|Conv| ...

(Not sure if I restored the groups properly.) There are no standard tools to work with this kind of CSV file, AFAIK. You have to do the job manually:

Read the first line and treat it as the first header line.
Read the second line and treat it as the second header line.
Read the third line and treat it as the first data line.
...
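The manual approach above can be sketched in a few lines of Ruby. This is only an illustration, assuming the group header (row 1) spans blank cells over the detail header (row 2); the inline sample data is a shortened stand-in for the real file:

```ruby
require 'csv'

# Shortened stand-in for the real file: two header rows, one data row.
rows = CSV.parse(<<~CSV)
  "Institution ID","Institution","Rushing","",""
  "","","Rushes","Gain","Loss"
  "721","AirForce","3","10","2"
CSV

group_row, detail_row, *data = rows

# Carry each group name forward across the blank cells it spans,
# then join group + detail into one combined header per column.
current_group = nil
headers = group_row.zip(detail_row).map do |group, detail|
  current_group = group if group && !group.empty?
  [current_group, detail].reject { |h| h.nil? || h.empty? }.join(" ")
end

# Build one hash per data row, keyed by the combined headers.
records = data.map { |row| headers.zip(row).to_h }
```

Here `headers` comes out as ["Institution ID", "Institution", "Rushing Rushes", "Rushing Gain", "Rushing Loss"], so each record can be addressed by a readable combined key.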
I'd recommend using the smarter_csv gem and manually providing the correct headers:

require 'smarter_csv'

options = { :user_provided_headers => ["Institution ID","Institution","Game Date","Uniform Number","Last Name","First Name", ... provide all headers here ... ],
            :headers_in_file => false }
data = SmarterCSV.process(filename, options)
data.shift # to ignore the first header line
data.shift # to ignore the second header line
# data now contains an array of hashes with your data

Please check the GitHub page for the options and examples: https://github.com/tilo/smarter_csv

The option you should use is :user_provided_headers; simply specify the headers you want in an array. This way you can work around cases like this. Because the file's own header lines are then read as data, you will have to call data.shift twice to discard them.
You'll have to write your own logic. CSV is really just rows and columns, and by itself has no inherent idea of what each column or row really is; it's just raw data. CSV thus has no concept that a file has two header rows — that's a human thing — so you'll need to build your own heuristics. Given that your data rows look like:

"721","Air Force","09/01/12", ...

when you start parsing, if the first column represents an integer and converting it to an int gives a value > 0, then you know you're dealing with a valid data row and not a header.
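The heuristic described above fits in one line of Ruby. A minimal sketch with inline sample rows standing in for a parsed file:

```ruby
# Keep a row only when its first column converts to a positive integer;
# header cells like "Institution ID" or "" convert to 0 and are skipped.
rows = [
  ["Institution ID", "Institution", "Game Date"],  # first header row
  ["", "", ""],                                    # second header row
  ["721", "Air Force", "09/01/12"],                # real data row
]

data_rows = rows.select { |row| row.first.to_i > 0 }
```

Note that `String#to_i` returns 0 for non-numeric strings rather than raising, which is exactly what makes this filter terse; it would misfire only if a legitimate first column were non-numeric.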
Read the CSV in and skip the first two lines when iterating:

require 'csv'

arr_of_arrs = CSV.read("path/to/file.csv")
arr_of_arrs[2..arr_of_arrs.length].each do |x|
  # operation here
end
It's really easy to do this with CSV. Just watch the current line number that's been read, and skip rows until you've read past the headers:

require 'csv'

CSV.foreach('test.csv') do |row|
  next unless $. > 2
  puts "'" + row.join("', '") + "'"
end

When run, this is what is output:

'721', 'Air Force', '09/01/12', '19', 'BASKA', 'DAVID', '', '', '', '', '', '', '', '', '', '', '', '0', '0', '', '', '', '', '', '', '', '', '', '2', '85', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '0'

$. is the line number of the last line read from the currently open file, so this skips rows until two lines have been read.