I have a large complex XML file containing a pattern like the following
<?xml version="1.0" encoding="UTF-8"?>
<records>
<record>
<field1>field1</field1>
<field2>field2</field2>
<field2>field2</field2>
<field3>field3</field3>
<field4>field4</field4>
<field4>field4</field4>
</record>
<record>
<field1>field1</field1>
<field1>field1</field1>
<field3>field3</field3>
<field4>field4</field4>
<field4>field4</field4>
</record>
</records>
I would like to use xmlstarlet to convert it to tab delimited with repeated fields subdelimited with a semicolon, e.g.
field1\tfield2;field2\tfield3\tfield4;field4
field1;field1\t\tfield3\tfield4;field4
I can do what I need by collapsing repeated fields with a string processing routine before feeding the file to xmlstarlet but that feels hacky. Is there an elegant way to do it all in xmlstarlet?
It's been a while since you asked. Nevertheless...
Using xmlstarlet version 1.6.1 to extract and sort field names
to determine field order, and then produce tab-separated values:
xmlstarlet sel \
-N set="http://exslt.org/sets" -N str="http://exslt.org/strings" \
-T -t \
--var fldsep="'$(printf "\t")'" \
--var subsep="';'" \
--var allfld \
-m '*/record/*' \
-v 'name()' --nl \
-b \
-b \
--var uniqfld='set:distinct(str:split(normalize-space($allfld)))' \
-m '*/record' \
--var rec='.' \
-m '$uniqfld' \
--sort 'A:T:-' '.' \
-v 'substring($fldsep,1,position()!=1)' \
-m '$rec/*[local-name() = current()]' \
-v 'substring($subsep,1,position()!=1)' \
-v '.' \
-b \
-b \
--nl < file.xml
EDIT: --sort moved from $allfld to -m $uniqfld.
where:
all field names in input are collected in the $allfld variable
the exslt functions set:distinct
and str:split
are used to create a nodeset of unique field names from $allfld
the $uniqfld nodeset determines field output order
repeated fields are output in document order here but -m '$rec/*[…]'
accepts a --sort clause
the substring expression emits a separator when position() != 1
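The separator idiom can be mimicked in plain shell, as a hedged analogue rather than anything xmlstarlet-specific: print the separator before every item except the first.

```shell
# Plain-shell analogue of substring($fldsep,1,position()!=1):
# emit the separator before every item except the first.
join_with() {
  sep=$1; shift
  first=1; out=""
  for item in "$@"; do
    [ "$first" -eq 1 ] || out="$out$sep"
    out="$out$item"
    first=0
  done
  printf '%s\n' "$out"
}

join_with ';' r1-f4A r1-f4B   # prints r1-f4A;r1-f4B
```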
Given the following input, which is different from OP's,
<?xml version="1.0" encoding="UTF-8"?>
<records>
<record>
<field2>r1-f2A</field2>
<field2>r1-f2B</field2>
<field3>r1-f3</field3>
<field4>r1-f4A</field4>
<field4>r1-f4B</field4>
<field6>r1-f6</field6>
</record>
<record>
<field1>r2-f1A</field1>
<field1>r2-f1B</field1>
<field3/>
<field4>r2-f4A</field4>
<field4>r2-f4B</field4>
<field5>r2-f5</field5>
<field5>r2-f5</field5>
</record>
<record>
<field6>r3-f6</field6>
<field4>r3-f4</field4>
<field2>r3-f2B</field2>
<field2>r3-f2A</field2>
</record>
</records>
output becomes:
r1-f2A;r1-f2B r1-f3 r1-f4A;r1-f4B r1-f6
r2-f1A;r2-f1B r2-f4A;r2-f4B r2-f5;r2-f5
r3-f2B;r3-f2A r3-f4 r3-f6
or, piped through sed -n l to show non-printables,
\tr1-f2A;r1-f2B\tr1-f3\tr1-f4A;r1-f4B\t\tr1-f6$
r2-f1A;r2-f1B\t\t\tr2-f4A;r2-f4B\tr2-f5;r2-f5\t$
\tr3-f2B;r3-f2A\t\tr3-f4\t\tr3-f6$
Using a custom field output order, things can be reduced to:
xmlstarlet sel -T -t \
--var fldsep="'$(printf "\t")'" \
--var subsep="';'" \
-m '*/record' \
--var rec='.' \
-m 'str:split("field6 field4 field2 field1 field3 field5")' \
-v 'substring($fldsep,1,position()!=1)' \
-m '$rec/*[local-name() = current()]' \
-v 'substring($subsep,1,position()!=1)' \
-v '.' \
-b \
-b \
--nl < file.xml
again emitting repeated fields in document order in the absence of --sort.
Note that using an EXSLT function in a --var clause requires the
corresponding namespace to be declared explicitly with -N to avoid
a runtime error (why?).
To list the generated XSLT 1.0 / EXSLT code add -C before the -t option.
To make one-liners of the formatted xmlstarlet commands above -
stripping line continuation chars and leading whitespace - pipe
them through this sed command:
sed -e ':1' -e 's/^[[:blank:]]*//;/\\$/!b;$b;N;s/\\\n[[:blank:]]*//;b1'
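As a quick sanity check, here is that sed command applied to an assumed two-line continuation (a made-up fragment, not one of the commands above):

```shell
# Join an assumed two-line continuation into a one-liner:
# the sed loop strips leading blanks and splices backslash-newline pairs.
printf 'xmlstarlet sel \\\n  -T -t\n' | \
  sed -e ':1' -e 's/^[[:blank:]]*//;/\\$/!b;$b;N;s/\\\n[[:blank:]]*//;b1'
# prints: xmlstarlet sel -T -t
```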
To list elements in the input file, xmlstarlet's el command can be
useful (-u for unique):
xmlstarlet el -u file.xml
Output:
records
records/record
records/record/field1
records/record/field2
records/record/field3
records/record/field4
records/record/field5
records/record/field6
You can use the following xmlstarlet command:
xmlstarlet sel -t -m "/records/record" -m "*[starts-with(local-name(),'field')]" -i "position()=1" -v "." --else -i "local-name() = local-name(preceding-sibling::*[1])" -v "concat(';',.)" --else -v "concat('\t',.)" -b -b -b -n input.xml
In pseudo-code, it represents something like this
For-each /records/record
For-each element-name that starts with field
If it's the first element, output the item
Else
If check if the current element name equals the preceding one
Then output ;Item
Else output \tItem
-b means "break": it closes the enclosing loop or if clause
-n outputs a newline
Its output is
field1\tfield2;field2\tfield3\tfield4;field4
field1;field1\tfield3\tfield4;field4
The above code differentiates based on element names. If you want to differentiate based on element values instead, use the following command:
xmlstarlet sel -t -m "/records/record" -m "*[starts-with(local-name(),'field')]" -i "position()=1" -v "." --else -i ". = preceding-sibling::*[1]" -v "concat(';',.)" --else -v "concat('\t',.)" -b -b -b -n a.xml
With your example XML the output is the same (because field names and field values are identical).
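For reference, the grouping rule the commands implement (a ';' when the element name repeats, a tab otherwise) can be sketched in plain shell; the name=value pairs below are hypothetical stand-ins for the XML elements:

```shell
# Shell sketch of the grouping rule: join a value to the previous one
# with ';' when the (hypothetical) element name repeats, otherwise start
# a new tab-separated column.
group_fields() {
  tab=$(printf '\t')
  prev=""; line=""
  for pair in "$@"; do                  # each pair is name=value
    name=${pair%%=*}; value=${pair#*=}
    if [ -z "$prev" ]; then line=$value
    elif [ "$name" = "$prev" ]; then line="$line;$value"
    else line="$line$tab$value"
    fi
    prev=$name
  done
  printf '%s\n' "$line"
}

group_fields field1=a field2=b field2=c field3=d   # prints a<TAB>b;c<TAB>d
```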
I'm looking for a way to list all publicly available versions of an image from Dockerhub. Is there a way this could be achieved?
Specifically, I'm interested in the openjdk:8-jdk-alpine images.
Dockerhub typically only lists the latest version of each image, and there is no link to historic versions. For openjdk, it's currently 8u191-jdk-alpine3.8:
However, it is possible to pull older versions if we know their image digest ID:
openjdk:8-jdk-alpine@sha256:1fd5a77d82536c88486e526da26ae79b6cd8a14006eb3da3a25eb8d2d682ccd6
openjdk:8-jdk-alpine@sha256:c5c705b462abab858066d412b3f871865684d8f837571c98b68e78c505dc7549
With some luck, I was able to find these digests for OpenJDK 8 (Java versions 1.8.0_171 and 1.8.0_151, respectively) by googling openjdk8 alpine digest and looking at GitHub tickets that included the image digest.
But, is there a systematic way for listing all publicly available digests?
Looking at the docker search documentation, there doesn't seem to be an option for listing image versions, only searching by name.
You don't need digests to pull "old" images, you would rather use their tags (even if they are not displayed in Docker Hub).
I use the following command to retrieve tags of a particular image, parsing the output of https://registry.hub.docker.com/v1/repositories/$REPOSITORY/tags:
REPOSITORY=openjdk # can be "<registry>/<image_name>" ("google/cloud-sdk" for example)
wget -q https://registry.hub.docker.com/v1/repositories/$REPOSITORY/tags -O - | \
jq -r '.[].name'
Result for REPOSITORY=openjdk (1593 tags at the time of writing) looks like:
latest
10
10-ea
10-ea-32
10-ea-32-experimental
10-ea-32-jdk
10-ea-32-jdk-experimental
10-ea-32-jdk-slim
10-ea-32-jdk-slim-experimental
10-ea-32-jre
[...]
If you can't or don't want to install jq (a tool to manipulate JSON), then you could use:
wget -q https://registry.hub.docker.com/v1/repositories/$REPOSITORY/tags -O - | \
sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | \
tr '}' '\n' | \
awk -F: '{print $3}'
(I'm pretty sure I got this command from another question, but I can't find where)
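To see what the pipeline actually does, here it is fed an assumed two-entry sample of the v1 payload instead of the live registry:

```shell
# Assumed two-entry sample of the v1 tags payload; the pipeline strips
# brackets, quotes and spaces, splits records on '}', then prints the
# third ':'-separated field (the tag name).
json='[{"layer": "ef83896b", "name": "latest"}, {"layer": "463ff6be", "name": "raring"}]'
echo "$json" | \
  sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | \
  tr '}' '\n' | \
  awk -F: '{print $3}'
# prints the tag names (latest, raring), one per line
```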
You can of course filter the output of this command and keep only tags you're interested in :
wget -q https://registry.hub.docker.com/v1/repositories/$REPOSITORY/tags -O - | \
jq -r '.[].name | select(match("^8.*jdk-alpine"))'
or :
wget -q https://registry.hub.docker.com/v1/repositories/$REPOSITORY/tags -O - | \
jq -r '.[].name' | \
grep -E '^8.*jdk-alpine'
I am trying to locate one specific tag for a Docker image. How can I do it on the command line? I want to avoid downloading all the images and then removing the unneeded ones.
In the official Ubuntu release, https://registry.hub.docker.com/_/ubuntu/, there are several tags (release for it), while when I search it on the command line,
user@ubuntu:~$ docker search ubuntu | grep ^ubuntu
ubuntu Official Ubuntu base image 354
ubuntu-upstart Upstart is an event-based replacement for ... 7
ubuntufan/ping 0
ubuntu-debootstrap 0
The command-line search documentation, https://docs.docker.com/engine/reference/commandline/search/, gives no clue how this could work.
Is it possible in the docker search command?
If I use a raw command to search via the Docker registry API, then the information can be fetched:
$ curl https://registry.hub.docker.com/v1/repositories/ubuntu/tags | python -mjson.tool
[
{
"layer": "ef83896b",
"name": "latest"
},
.....
{
"layer": "463ff6be",
"name": "raring"
},
{
"layer": "195eb90b",
"name": "saucy"
},
{
"layer": "ef83896b",
"name": "trusty"
}
]
When using CoreOS, jq is available to parse JSON data.
So like you were doing before, looking at library/centos:
$ curl -s -S 'https://registry.hub.docker.com/v2/repositories/library/centos/tags/' | jq '."results"[]["name"]' |sort
"6"
"6.7"
"centos5"
"centos5.11"
"centos6"
"centos6.6"
"centos6.7"
"centos7.0.1406"
"centos7.1.1503"
"latest"
The cleaner v2 API is available now, and that's what I'm using in the example. I built a simple script, docker_remote_tags:
#!/usr/bin/bash
curl -s -S "https://registry.hub.docker.com/v2/repositories/$1/tags/" | jq '."results"[]["name"]' |sort
This enables:
$ ./docker_remote_tags library/centos
"6"
"6.7"
"centos5"
"centos5.11"
"centos6"
"centos6.6"
"centos6.7"
"centos7.0.1406"
"centos7.1.1503"
"latest"
Reference:
jq: https://stedolan.github.io/jq/ | apt-get install jq
I didn't like any of the solutions above because (a) they required external tools that I didn't have and didn't want to install, and (b) they didn't fetch all the pages.
The Docker API limits you to 100 items per request. This loops over each "next" link and fetches them all (for Python it's seven pages; for other images it may be more or fewer; it depends).
If you really want to spam yourself, remove | cut -d '-' -f 1 from the last line, and you will see absolutely everything.
url=https://registry.hub.docker.com/v2/repositories/library/redis/tags/?page_size=100 `# Initial url` ; \
( \
while [ ! -z $url ]; do `# Keep looping until the variable url is empty` \
>&2 echo -n "." `# Every iteration of the loop prints out a single dot to show progress as it got through all the pages (this is inline dot)` ; \
content=$(curl -s $url | python -c 'import sys, json; data = json.load(sys.stdin); print(data.get("next", "") or ""); print("\n".join([x["name"] for x in data["results"]]))') `# Curl the URL and pipe the output to Python. Python will parse the JSON and print the very first line as the next URL (it will leave it blank if there are no more pages) then continue to loop over the results extracting only the name; all will be stored in a variable called content` ; \
url=$(echo "$content" | head -n 1) `# Let's get the first line of content which contains the next URL for the loop to continue` ; \
echo "$content" | tail -n +2 `# Print the content without the first line (yes +2 is counter intuitive)` ; \
done; \
>&2 echo `# Finally break the line of dots` ; \
) | cut -d '-' -f 1 | sort --version-sort | uniq;
Sample output:
...
2
2.6
2.6.17
2.8
2.8.6
2.8.7
2.8.8
2.8.9
2.8.10
2.8.11
2.8.12
2.8.13
2.8.14
2.8.15
2.8.16
2.8.17
2.8.18
2.8.19
2.8.20
2.8.21
2.8.22
2.8.23
3
3.0
3.0.0
3.0.1
3.0.2
3.0.3
3.0.4
3.0.5
3.0.6
3.0.7
3.0.504
3.2
3.2.0
3.2.1
3.2.2
3.2.3
3.2.4
3.2.5
3.2.6
3.2.7
3.2.8
3.2.9
3.2.10
3.2.11
3.2.100
4
4.0
4.0.0
4.0.1
4.0.2
4.0.4
4.0.5
4.0.6
4.0.7
4.0.8
32bit
alpine
latest
nanoserver
windowsservercore
If you want the bash_profile version:
function docker-tags () {
name=$1
# Initial URL
url=https://registry.hub.docker.com/v2/repositories/library/$name/tags/?page_size=100
(
# Keep looping until the variable URL is empty
while [ ! -z $url ]; do
# Every iteration of the loop prints out a single dot to show progress as it got through all the pages (this is inline dot)
>&2 echo -n "."
# Curl the URL and pipe the output to Python. Python will parse the JSON and print the very first line as the next URL (it will leave it blank if there are no more pages)
# then continue to loop over the results extracting only the name; all will be stored in a variable called content
content=$(curl -s $url | python -c 'import sys, json; data = json.load(sys.stdin); print(data.get("next", "") or ""); print("\n".join([x["name"] for x in data["results"]]))')
# Let's get the first line of content which contains the next URL for the loop to continue
url=$(echo "$content" | head -n 1)
# Print the content without the first line (yes +2 is counter intuitive)
echo "$content" | tail -n +2
done;
# Finally break the line of dots
>&2 echo
) | cut -d '-' -f 1 | sort --version-sort | uniq;
}
And simply call it: docker-tags redis
Sample output:
$ docker-tags redis
...
2
2.6
2.6.17
2.8
[...]
32bit
alpine
latest
nanoserver
windowsservercore
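The pagination logic in the function can be sketched offline; fetch_page below is a hypothetical stand-in for the curl-plus-Python step, returning the next page key on the first line and tag names after it:

```shell
# fetch_page is a canned stand-in for curl|python: first line is the
# "next" page key (empty when there are no more pages), remaining lines
# are tag names.
fetch_page() {
  case $1 in
    p1) printf 'p2\n2.8.23\n3.0.7\n' ;;
    p2) printf '\nlatest\n' ;;
  esac
}

list_all() {
  url=p1
  while [ -n "$url" ]; do
    content=$(fetch_page "$url")
    url=$(echo "$content" | head -n 1)   # first line: next page (empty = done)
    echo "$content" | tail -n +2         # remaining lines: tag names
  done
}

list_all   # prints 2.8.23, 3.0.7, latest -- one per line
```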
As far as I know, the CLI does not allow searching/listing tags in a repository.
But if you know which tag you want, you can pull that explicitly by adding a colon and the image name: docker pull ubuntu:saucy
This script (docker-show-repo-tags.sh) should work on any Docker-enabled host that has curl, sed, awk, grep, and sort. It was updated after the repository tag URLs changed.
This version correctly parses the "name": field without a JSON parser.
#!/bin/sh
# 2022-07-20
# Simple script that will display Docker repository tags
# using basic tools: curl, awk, sed, grep, and sort.
# Usage:
# $ docker-show-repo-tags.sh ubuntu centos
# $ docker-show-repo-tags.sh centos | cat -n
for Repo in "$@" ; do
URL="https://registry.hub.docker.com/v2/repositories/library/$Repo/tags/"
curl -sS "$URL" | \
/usr/bin/sed -Ee 's/("name":)"([^"]*)"/\n\1\2\n/g' | \
grep '"name":' | \
awk -F: '{printf("'$Repo':%s\n",$2)}'
done
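The parsing stage can be checked offline with an assumed one-line sample of the v2 payload; extract_names bundles the same sed/grep/awk steps, using awk -v in place of the quote splicing:

```shell
# Stand-in for the curl|sed|grep|awk stage of the script above: split
# each "name":"..." pair onto its own line, keep those lines, and print
# them as repo:tag.
extract_names() {
  sed -Ee 's/("name":)"([^"]*)"/\n\1\2\n/g' | \
  grep '"name":' | \
  awk -F: -v repo="$1" '{printf("%s:%s\n", repo, $2)}'
}

echo '{"count":2,"results":[{"name":"latest"},{"name":"7"}]}' | extract_names centos
# prints centos:latest and centos:7, one per line
```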
This older version no longer works. Many thanks to @d9k for pointing this out!
#!/bin/sh
# WARNING: This no longer works!
# Simple script that will display Docker repository tags
# using basic tools: curl, sed, grep, and sort.
#
# Usage:
# $ docker-show-repo-tags.sh ubuntu centos
for Repo in $* ; do
curl -sS "https://hub.docker.com/r/library/$Repo/tags/" | \
sed -e $'s/"tags":/\\\n"tags":/g' -e $'s/\]/\\\n\]/g' | \
grep '^"tags"' | \
grep '"library"' | \
sed -e $'s/,/,\\\n/g' -e 's/,//g' -e 's/"//g' | \
grep -v 'library:' | \
sort -fu | \
sed -e "s/^/${Repo}:/"
done
This older version no longer works. Many thanks to @viky for pointing this out!
#!/bin/sh
# WARNING: This no longer works!
# Simple script that will display Docker repository tags.
#
# Usage:
# $ docker-show-repo-tags.sh ubuntu centos
for Repo in $* ; do
curl -s -S "https://registry.hub.docker.com/v2/repositories/library/$Repo/tags/" | \
sed -e $'s/,/,\\\n/g' -e $'s/\[/\\\[\n/g' | \
grep '"name"' | \
awk -F\" '{print $4;}' | \
sort -fu | \
sed -e "s/^/${Repo}:/"
done
This is the output for a simple example:
$ docker-show-repo-tags.sh centos | cat -n
1 centos:5
2 centos:5.11
3 centos:6
4 centos:6.10
5 centos:6.6
6 centos:6.7
7 centos:6.8
8 centos:6.9
9 centos:7.0.1406
10 centos:7.1.1503
11 centos:7.2.1511
12 centos:7.3.1611
13 centos:7.4.1708
14 centos:7.5.1804
15 centos:centos5
16 centos:centos5.11
17 centos:centos6
18 centos:centos6.10
19 centos:centos6.6
20 centos:centos6.7
21 centos:centos6.8
22 centos:centos6.9
23 centos:centos7
24 centos:centos7.0.1406
25 centos:centos7.1.1503
26 centos:centos7.2.1511
27 centos:centos7.3.1611
28 centos:centos7.4.1708
29 centos:centos7.5.1804
30 centos:latest
I wrote a command line tool to simplify searching Docker Hub repository tags, available in my PyTools GitHub repository. It's simple to use with various command line switches, but most basically:
./dockerhub_show_tags.py repo1 repo2
It's even available as a Docker image and can take multiple repositories:
docker run harisekhon/pytools dockerhub_show_tags.py centos ubuntu
DockerHub
repo: centos
tags: 5.11
6.6
6.7
7.0.1406
7.1.1503
centos5.11
centos6.6
centos6.7
centos7.0.1406
centos7.1.1503
repo: ubuntu
tags: latest
14.04
15.10
16.04
trusty
trusty-20160503.1
wily
wily-20160503
xenial
xenial-20160503
If you want to embed it in scripts, use -q / --quiet to get just the tags, like normal Docker commands:
./dockerhub_show_tags.py centos -q
5.11
6.6
6.7
7.0.1406
7.1.1503
centos5.11
centos6.6
centos6.7
centos7.0.1406
centos7.1.1503
The v2 API seems to use some kind of pagination, so that it does not return all the available tags. This is clearly visible in projects such as python (or library/python). Even after quickly reading the documentation, I could not manage to work with the API correctly (maybe it is the wrong documentation).
Then I rewrote the script using the v1 API, and it is still using jq:
#!/bin/bash
repo="$1"
if [[ "${repo}" != */* ]]; then
repo="library/${repo}"
fi
url="https://registry.hub.docker.com/v1/repositories/${repo}/tags"
curl -s -S "${url}" | jq '.[]["name"]' | sed 's/^"\(.*\)"$/\1/' | sort
The full script is available at: https://github.com/denilsonsa/small_scripts/blob/master/docker_remote_tags.sh
I've also written an improved version (in Python) that aggregates tags that point to the same version: https://github.com/denilsonsa/small_scripts/blob/master/docker_remote_tags.py
Add this function to your .zshrc file or run the command manually:
#usage list-dh-tags <repo>
#example: list-dh-tags node
function list-dh-tags(){
wget -q https://registry.hub.docker.com/v1/repositories/$1/tags -O - | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | awk -F: '{print $3}'
}
Thanks to this -> How can I list all tags for a Docker image on a remote registry?
For anyone stumbling across this in modern times, you can use Skopeo to retrieve an image's tags from the Docker registry:
$ skopeo list-tags docker://jenkins/jenkins \
| jq -r '.Tags[] | select(. | contains("lts-alpine"))' \
| sort --version-sort --reverse
lts-alpine
2.277.3-lts-alpine
2.277.2-lts-alpine
2.277.1-lts-alpine
2.263.4-lts-alpine
2.263.3-lts-alpine
2.263.2-lts-alpine
2.263.1-lts-alpine
2.249.3-lts-alpine
2.249.2-lts-alpine
2.249.1-lts-alpine
2.235.5-lts-alpine
2.235.4-lts-alpine
2.235.3-lts-alpine
2.235.2-lts-alpine
2.235.1-lts-alpine
2.222.4-lts-alpine
Reimplementation of the previous post, using Python instead of sed/awk:
for Repo in "$@" ; do
tags=$(curl -s -S "https://registry.hub.docker.com/v2/repositories/library/$Repo/tags/")
python - <<EOF
import json
tags = [t['name'] for t in json.loads('''$tags''')['results']]
tags.sort()
for tag in tags:
    print("{}:{}".format('$Repo', tag))
EOF
done
For a script that works with OAuth bearer tokens on Docker Hub, try this:
Listing the tags of a Docker image on a Docker hub through the HTTP API
You can use Visual Studio Code to provide autocomplete for available Docker images and tags. However, this requires that you type the first letter of a tag in order to see autocomplete suggestions.
For example, when writing FROM ubuntu it offers autocomplete suggestions like ubuntu, ubuntu-debootstrap, and ubuntu-upstart. When writing FROM ubuntu:a it offers suggestions like ubuntu:artful and ubuntu:artful-20170511.1.
I am trying to download a site (with permission) and grep for a particular text in it. The problem is that I want to grep on the fly, without saving any files to the local drive. The following command does not help:
wget --mirror site.com -O - | grep TEXT
The wget manual (man page) says the usage of the command should be:
wget [option]... [URL]...
so in your case it should be:
wget --mirror -O - site.com | grep TEXT
You can use curl:
curl -s http://www.site.com | grep TEXT
How about this:
wget -qO- site.com | grep TEXT
and
curl -vs site.com 2>&1 | grep TEXT
Hi, I have not been able to set up backups to S3 using this guide:
http://www.cenolan.com/2008/12/how-to-incremental-daily-backups-amazon-s3-duplicity/
I get this error when I run the script:
Command line error: Expected 2 args, got 0
Enter 'duplicity --help' for help screen.
The duplicity log file:
2013-08-07_17:43:20: ... backing up filesystem
./duplicity-backup: line 60: --include=/home: No such file or directory
My script is like this:
#!/bin/bash
# Set up some variables for logging
LOGFILE="/var/log/backup.log"
DAILYLOGFILE="/var/log/backup.daily.log"
HOST=`hostname`
DATE=`date +%Y-%m-%d`
MAILADDR="user@domain.com"
# Clear the old daily log file
cat /dev/null > ${DAILYLOGFILE}
# Trace function for logging, don't change this
trace () {
stamp=`date +%Y-%m-%d_%H:%M:%S`
echo "$stamp: $*" >> ${DAILYLOGFILE}
}
# Export some ENV variables so you don't have to type anything
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="xxx"
export PASSPHRASE="xx"
# Your GPG key
GPG_KEY=xxxx
# How long to keep backups for
OLDER_THAN="1M"
# The source of your backup
SOURCE=/
# The destination
# Note that the bucket need not exist
# but does need to be unique amongst all
# Amazon S3 users. So, choose wisely.
DEST="s3+http://xxxx"
FULL=
if [ $(date +%d) -eq 1 ]; then
FULL=full
fi;
trace "Backup for local filesystem started"
trace "... removing old backups"
duplicity remove-older-than ${OLDER_THAN} ${DEST} >> ${DAILYLOGFILE} 2>&1
trace "... backing up filesystem"
duplicity \
${FULL} \
--encrypt-key=${GPG_KEY} \
--sign-key=${GPG_KEY} \
--volsize=250 \
# --include=/vhosts \
# --include=/etc \
--include=/home \
--include=/root \
--exclude=/** \
${SOURCE} ${DEST} >> ${DAILYLOGFILE} 2>&1
trace "Backup for local filesystem complete"
trace "------------------------------------"
# Send the daily log file by email
cat "$DAILYLOGFILE" | mail -s "Duplicity Backup Log for $HOST - $DATE" $MAILADDR
# Append the daily log file to the main log file
cat "$DAILYLOGFILE" >> $LOGFILE
# Reset the ENV variables. Don't need them sitting around
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export PASSPHRASE=
Does anyone see what is wrong?
You should remove the commented lines from the command. The trailing backslash on --volsize=250 \ joins the comment line onto the duplicity command, where the # turns the rest of that joined line into a comment; and since a comment swallows its own trailing backslash, the continuation chain stops there. As a result, duplicity runs with no source/target arguments ("Expected 2 args, got 0"), and --include=/home later executes as a separate command ("No such file or directory"). Change:
duplicity \
${FULL} \
--encrypt-key=${GPG_KEY} \
--sign-key=${GPG_KEY} \
--volsize=250 \
# --include=/vhosts \
# --include=/etc \
--include=/home \
--include=/root \
--exclude=/** \
${SOURCE} ${DEST} >> ${DAILYLOGFILE} 2>&1
to
duplicity \
${FULL} \
--encrypt-key=${GPG_KEY} \
--sign-key=${GPG_KEY} \
--volsize=250 \
--include=/home \
--include=/root \
--exclude=/** \
${SOURCE} ${DEST} >> ${DAILYLOGFILE} 2>&1
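The failure mode is easy to reproduce with a toy command: the backslash joins the comment line onto the command, the # then comments out the rest of that joined line, and the comment's own trailing backslash does not continue, so the following line runs as a separate command.

```shell
# 'two' is swallowed by the comment, and 'echo three' runs as its own
# command, exactly as --include=/home did in the question's script.
echo one \
# two \
echo three
# Output:
#   one
#   three
```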