Regular expression to fetch the Jenkins node workDir

I am trying to fetch the secret and workDir of a Jenkins node in a script:
curl -L -s -u user:password -X GET https://cje.example.com/computer/jnlpAgentTest/slave-agent.jnlp
This gives
<jnlp codebase="https://cje.example.com/computer/jnlpAgentTest/" spec="1.0+">
<information>
<title>Agent for jnlpAgentTest</title>
<vendor>Jenkins project</vendor>
<homepage href="https://jenkins-ci.org/"/>
</information>
<security>
<all-permissions/>
</security>
<resources>
<j2se version="1.8+"/>
<jar href="https://cje.example.com/jnlpJars/remoting.jar"/>
</resources>
<application-desc main-class="hudson.remoting.jnlp.Main">
<argument>b8c80148ce36de10c9358384fac9e28fbba941055a9a6ab2277e75ddc29a8744</argument>
<argument>jnlpAgentTest</argument>
<argument>-workDir</argument>
<argument>/tmp/jnlpAgenttest</argument>
<argument>-internalDir</argument>
<argument>remoting</argument>
<argument>-url</argument>
<argument>https://cje.example.com/</argument>
</application-desc>
</jnlp>
Here I want to fetch the secret and the workDir. The secret can be fetched using this command:
curl -L -s -u admin:password -H "Jenkins-Crumb:dac7ce5614f8cb32a6ce75153aaf2398" -X GET https://owood.ci.cloudbees.com/computer/jnlpAgentTest/slave-agent.jnlp | sed "s/.*<application-desc main-class=\"hudson.remoting.jnlp.Main\"><argument>\([a-z0-9]*\).*/\1/"
b8c80148ce36de10c9358384fac9e28fbba941055a9a6ab2277e75ddc29a8744
But I couldn't find a way to fetch the workDir value (here /tmp/jnlpAgenttest), which appears in the argument immediately after -workDir.

You can use either of these equivalent commands:
sed -n '/-workDir/{n;s/.*>\([^<>]*\)<.*/\1/p}'
sed -n -e '/-workDir/{n' -e 's/.*>\([^<>]*\)<.*/\1/p' -e '}'
See the online demo.
Details:
/-workDir/ - find the line with -workDir tag
n - reads the next line into pattern space discarding the previous one read
s/.*>\([^<>]*\)<.*/\1/ - matches any text up to >, then captures into Group 1 any zero or more chars other than < and >, then matches < and the rest of the string and replaces with Group 1 value
p - print the result of substitution.
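Applied to the question's curl (user:password and cje.example.com are the question's placeholders), the whole workDir fetch might look like:

```shell
# Placeholders (user:password, cje.example.com) as in the question.
workdir=$(curl -L -s -u user:password \
  "https://cje.example.com/computer/jnlpAgentTest/slave-agent.jnlp" \
  | sed -n '/-workDir/{n;s/.*>\([^<>]*\)<.*/\1/p}')
echo "$workdir"
```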


Set a curl output to a variable in a Dockerfile

Basically I'm doing a curl and grepping some stuff.
But I want to set the output of this curl to a variable, to then use it in another curl.
e.g:
curl -u asd:asd http://zzz:123/aa/aa.aaa?cmd=ls | grep -B1 -E '<bbb>[4-7]\d{8,}' | grep yyy | tail -n 1 | sed -n -e 's/.*<xxx>\(.*\)<\/xxx>.*/\1/p'
but then I want to set the output to a var and use it:
RUN aaa=$(previous curl) && curl -u asd:asd http://$aaa.com
I tried with ${aaa}, with "$aaa", etc., but it didn't work. Any solutions?
UPDATE:
Something wrong is happening in the previous curl because it doesn't return the value; probably the curl isn't being executed at all.
I fear you will not be able to achieve this, because from my understanding the RUN statement only executes a command; a shell variable set in one RUN does not persist into later instructions. To persist a value across instructions you'd have to use ENV or ARG.
For me the following workaround helped
RUN export aaa=$(previous curl); echo "$aaa";
You can add the downstream commands that use the variable aaa to the right of the semicolon.
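Since every RUN line starts a fresh shell, $aaa only exists within the RUN that set it. The principle can be sketched in plain shell (echo stands in for the real curl; the names are the question's placeholders):

```shell
# Both steps must run in the same shell invocation,
# i.e. the same RUN line when inside a Dockerfile.
aaa=$(echo '<xxx>zzz</xxx>' | sed -n -e 's/.*<xxx>\(.*\)<\/xxx>.*/\1/p')
echo "second request would target: http://$aaa.com"
```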

Use xmlstarlet to convert XML containing repeated and missing fields to tab delimited

I have a large complex XML file containing a pattern like the following
<?xml version="1.0" encoding="UTF-8"?>
<records>
<record>
<field1>field1</field1>
<field2>field2</field2>
<field2>field2</field2>
<field3>field3</field3>
<field4>field4</field4>
<field4>field4</field4>
</record>
<record>
<field1>field1</field1>
<field1>field1</field1>
<field3>field3</field3>
<field4>field4</field4>
<field4>field4</field4>
</record>
</records>
I would like to use xmlstarlet to convert it to tab delimited with repeated fields subdelimited with a semicolon, e.g.
field1\tfield2;field2\tfield3\tfield4;field4
field1;field1\t\tfield3\tfield4;field4
I can do what I need by collapsing repeated fields with a string processing routine before feeding the file to xmlstarlet but that feels hacky. Is there an elegant way to do it all in xmlstarlet?
It's been a while since you asked. Nevertheless...
Using xmlstarlet version 1.6.1 to extract and sort field names
to determine field order, and then produce tab-separated values:
xmlstarlet sel \
-N set="http://exslt.org/sets" -N str="http://exslt.org/strings" \
-T -t \
--var fldsep="'$(printf "\t")'" \
--var subsep="';'" \
--var allfld \
-m '*/record/*' \
-v 'name()' --nl \
-b \
-b \
--var uniqfld='set:distinct(str:split(normalize-space($allfld)))' \
-m '*/record' \
--var rec='.' \
-m '$uniqfld' \
--sort 'A:T:-' '.' \
-v 'substring($fldsep,1,position()!=1)' \
-m '$rec/*[local-name() = current()]' \
-v 'substring($subsep,1,position()!=1)' \
-v '.' \
-b \
-b \
--nl < file.xml
EDIT: --sort moved from $allfld to -m $uniqfld.
where:
all field names in input are collected in the $allfld variable
the exslt functions set:distinct
and str:split
are used to create a nodeset of unique field names from $allfld
the $uniqfld nodeset determines field output order
repeated fields are output in document order here but -m '$rec/*[…]'
accepts a --sort clause
the substring expression emits a separator when position() != 1
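The separator trick works because XPath coerces the boolean to a number: substring($fldsep, 1, 0) is the empty string, while substring($fldsep, 1, 1) is the first (and only) character. A quick check, assuming xmlstarlet is installed:

```shell
# position()!=1 is 0 for the first node and 1 afterwards, so
# substring() emits nothing before the first "x" and "|" before the rest.
printf '<r><a/><a/><a/></r>' \
  | xmlstarlet sel -t -m '//a' -v 'substring("|",1,position()!=1)' -o x
```

which prints x|x|x.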
Given the following input, which is different from OP's,
<?xml version="1.0" encoding="UTF-8"?>
<records>
<record>
<field2>r1-f2A</field2>
<field2>r1-f2B</field2>
<field3>r1-f3</field3>
<field4>r1-f4A</field4>
<field4>r1-f4B</field4>
<field6>r1-f6</field6>
</record>
<record>
<field1>r2-f1A</field1>
<field1>r2-f1B</field1>
<field3/>
<field4>r2-f4A</field4>
<field4>r2-f4B</field4>
<field5>r2-f5</field5>
<field5>r2-f5</field5>
</record>
<record>
<field6>r3-f6</field6>
<field4>r3-f4</field4>
<field2>r3-f2B</field2>
<field2>r3-f2A</field2>
</record>
</records>
output becomes:
r1-f2A;r1-f2B r1-f3 r1-f4A;r1-f4B r1-f6
r2-f1A;r2-f1B r2-f4A;r2-f4B r2-f5;r2-f5
r3-f2B;r3-f2A r3-f4 r3-f6
or, piped through sed -n l to show non-printables,
\tr1-f2A;r1-f2B\tr1-f3\tr1-f4A;r1-f4B\t\tr1-f6$
r2-f1A;r2-f1B\t\t\tr2-f4A;r2-f4B\tr2-f5;r2-f5\t$
\tr3-f2B;r3-f2A\t\tr3-f4\t\tr3-f6$
Using a custom field output order, things can be reduced to:
xmlstarlet sel -T -t \
--var fldsep="'$(printf "\t")'" \
--var subsep="';'" \
-m '*/record' \
--var rec='.' \
-m 'str:split("field6 field4 field2 field1 field3 field5")' \
-v 'substring($fldsep,1,position()!=1)' \
-m '$rec/*[local-name() = current()]' \
-v 'substring($subsep,1,position()!=1)' \
-v '.' \
-b \
-b \
--nl < file.xml
again emitting repeated fields in document order in the absence of --sort.
Note that using an EXSLT function in a --var clause requires the
corresponding namespace to be declared explicitly with -N to avoid
a runtime error (why?).
To list the generated XSLT 1.0 / EXSLT code add -C before the -t option.
To make one-liners of the formatted xmlstarlet commands above -
stripping line continuation chars and leading whitespace - pipe
them through this sed command:
sed -e ':1' -e 's/^[[:blank:]]*//;/\\$/!b;$b;N;s/\\\n[[:blank:]]*//;b1'
To list elements in the input file xmlstarlet's el command can be
useful, -u for unique:
xmlstarlet el -u file.xml
Output:
records
records/record
records/record/field1
records/record/field2
records/record/field3
records/record/field4
records/record/field5
records/record/field6
You can use the following xmlstarlet command:
xmlstarlet sel -t -m "/records/record" -m "*[starts-with(local-name(),'field')]" -i "position()=1" -v "." --else -i "local-name() = local-name(preceding-sibling::*[1])" -v "concat(';',.)" --else -v "concat('\t',.)" -b -b -b -n input.xml
In pseudo-code, it represents something like this
For-each /records/record
For-each element-name that starts with field
If it's the first element, output the item
Else
If the current element name equals the preceding one
Then output ;Item
Else output \tItem
-b means "break" out of the enclosing loop or if clause
-n outputs a newline
Its output is
field1\tfield2;field2\tfield3\tfield4;field4
field1;field1\tfield3\tfield4;field4
The above code differentiates on the base of element names. If you want to differentiate based on element values instead, use the following command:
xmlstarlet sel -t -m "/records/record" -m "*[starts-with(local-name(),'field')]" -i "position()=1" -v "." --else -i ". = preceding-sibling::*[1]" -v "concat(';',.)" --else -v "concat('\t',.)" -b -b -b -n a.xml
With your example XML the output is the same (because field names and field values are identical).

Makefile multiline command does not work in Docker properly

I am trying to compile HTML from Markdown. My makefile looks like:
MD = pandoc \
--from markdown --standalone
# ...
$(MD_TARGETS):$(TARGET_DIR)/%.html: $(SOURCE_DIR)/%.md
mkdir -p $(@D); \
$(MD) --to html5 $< --output $@; \
sed -i '' -e '/href="./s/\.md/\.html/g' $@
When I run this on local machine everything works.
When I run the same in Docker I get the following error:
mkdir -p /project/docs; \
pandoc --from markdown --standalone --to html5 /project/source/changelog.md --output /project/docs/changelog.html; \
sed -i '' -e '/href="./s/\.md/\.html/g' /project/docs/changelog.html
sed: can't read : No such file or directory
makefile:85: recipe for target '/project/docs/changelog.html' failed
make: *** [/project/docs/changelog.html] Error 2
Consequent call of make gives the same error but with another file:
sed: can't read : No such file or directory
makefile:85: recipe for target '/project/docs/todo.html' failed
Obviously, make somehow tries sed earlier than HTML is done.
But I use make's multiline syntax (; \) precisely so that everything runs in a single shell.
I also tried && \ but neither of them works. What should I do?
Unfortunately you have been misled by your "obvious" conclusion ("Obviously, make somehow tries sed earlier than HTML is done") :) That's not at all what the problem is. Review your error message more carefully:
sed: can't read : No such file or directory
Note the can't read :; there's supposed to be a filename there. It should say something like can't read foobar:. So clearly sed is trying to read a file whose name is the empty string. Here's the line you're running:
sed -i '' -e '/href="./s/\.md/\.html/g' /project/docs/changelog.html
The clear culprit here is the empty string argument to -i. The sed man page describes the -i option as:
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if SUFFIX supplied)
Note how (a) the SUFFIX is optional, and (b) there's no space between the argument and its option. In your example you added a space, which causes sed to believe that the SUFFIX was not provided and to treat the next argument ('') as the filename to be operated on.
I'm not sure why it worked when you ran it from the command line: I have to assume that your command line version didn't include the empty string or else it didn't include a space between the -i and the ''.
Use one of:
sed -i'' -e '/href="./s/\.md/\.html/g' $@
or
sed --in-place= -e '/href="./s/\.md/\.html/g' $@
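To see the difference concretely (assuming GNU sed, as found inside most Linux containers; BSD sed on macOS conversely requires the space before '', which is why the recipe worked locally):

```shell
f=$(mktemp)
printf 'see a.md here\n' > "$f"
# Suffix attached to -i: GNU sed edits in place with no backup file.
sed -i'' -e 's/\.md/.html/' "$f"
cat "$f"
rm -f "$f"
```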

Generate JIRA release note through a jenkins job without plugins

I know that this is possible through the JIRA-Jenkins plugin. But I'm not an administrative user in either JIRA or Jenkins. Therefore I want to know: is it possible to generate a JIRA release note through a Jenkins job without installing any plugins in JIRA or Jenkins?
Ok, I did it just now. Here is my solution (a mix of several partial solutions I found by googling):
In your deploy jobs, add a shell execution step at the end of the job and replace all parameters of the following script with the correct values:
version=<your_jira_version> ##(for example 1.0.71)
project_name=<your_jira_project_key> ##(for example PRJ)
jira_version_id=$(curl --silent -u <jira_user>:<jira_password> -X GET -H "Content-Type: application/json" "https://<your_jira_url>/rest/api/2/project/${project_name}/versions" | jq "map(select(.[\"name\"] == \"$version\")) | .[0] | .id" | sed -e 's/^"//' -e 's/"$//')
project_id=$(curl --silent -u <jira_user>:<jira_password> -X GET -H "Content-Type: application/json" "https://<your_jira_url>/rest/api/2/project/${project_name}" | jq .id | sed -e 's/^"//' -e 's/"$//')
release_notes_page="https://<your_jira_url>/secure/ReleaseNote.jspa?version=${jira_version_id}&styleName=Text&projectId=${project_id}"
release_notes=$(curl --silent -D- -u <jira_user>:<jira_password> -X GET -H "Content-Type: application/json" "$release_notes_page")
rm -rf releasenotes.txt
echo "$release_notes" | sed -n "/<textarea rows=\"40\" cols=\"120\">/,/<\/textarea>/p" | grep -v "textarea" > releasenotes.txt
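As an aside, jq's -r flag removes the need for the trailing sed quote-stripping, and the version-id lookup can be checked in isolation against made-up JSON shaped like the REST response:

```shell
# Sample JSON mirrors the /rest/api/2/project/<key>/versions shape;
# the ids and names here are invented for illustration.
echo '[{"id":"100","name":"1.0.70"},{"id":"101","name":"1.0.71"}]' \
  | jq -r 'map(select(.name == "1.0.71")) | .[0].id'
```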
You can use the maven-changes-plugin. You have to create a small maven project (doesn't need any sources) and include the plugin in the plugins section with the necessary configuration (see here: http://maven.apache.org/plugins/maven-changes-plugin/jira-report-mojo.html)
Then you create a Jenkins job, and just execute the maven goals you need (most probably just "mvn changes:jira-report").

How can I find a Docker image with a specific tag in Docker registry on the Docker command line?

I'm trying to locate one specific tag for a Docker image. How can I do it on the command line? I want to avoid downloading all the images and then removing the unneeded ones.
For the official Ubuntu release, https://registry.hub.docker.com/_/ubuntu/, there are several tags (releases of it), while when I search on the command line,
user@ubuntu:~$ docker search ubuntu | grep ^ubuntu
ubuntu Official Ubuntu base image 354
ubuntu-upstart Upstart is an event-based replacement for ... 7
ubuntufan/ping 0
ubuntu-debootstrap 0
Also, the command-line search documentation, https://docs.docker.com/engine/reference/commandline/search/, gives no clue how this could work.
Is it possible in the docker search command?
If I use a raw command to search via the Docker registry API, then the information can be fetched:
$ curl https://registry.hub.docker.com/v1/repositories/ubuntu/tags | python -mjson.tool
[
{
"layer": "ef83896b",
"name": "latest"
},
.....
{
"layer": "463ff6be",
"name": "raring"
},
{
"layer": "195eb90b",
"name": "saucy"
},
{
"layer": "ef83896b",
"name": "trusty"
}
]
When using CoreOS, jq is available to parse JSON data.
So like you were doing before, looking at library/centos:
$ curl -s -S 'https://registry.hub.docker.com/v2/repositories/library/centos/tags/' | jq '."results"[]["name"]' |sort
"6"
"6.7"
"centos5"
"centos5.11"
"centos6"
"centos6.6"
"centos6.7"
"centos7.0.1406"
"centos7.1.1503"
"latest"
The cleaner v2 API is available now, and that's what I'm using in the example. I will build a simple script docker_remote_tags:
#!/usr/bin/bash
curl -s -S "https://registry.hub.docker.com/v2/repositories/library/$@/tags/" | jq '."results"[]["name"]' |sort
Enables:
$ ./docker_remote_tags centos
"6"
"6.7"
"centos5"
"centos5.11"
"centos6"
"centos6.6"
"centos6.7"
"centos7.0.1406"
"centos7.1.1503"
"latest"
Reference:
jq: https://stedolan.github.io/jq/ | apt-get install jq
I didn't like any of the solutions above because (a) they required external libraries that I didn't have and didn't want to install, and (b) I didn't get all the pages.
The Docker API limits you to 100 items per request. This will loop over each "next" item and get them all (for Python it's seven pages; for others it may be more or fewer; it depends).
If you really want to spam yourself, remove | cut -d '-' -f 1 from the last line, and you will see absolutely everything.
url=https://registry.hub.docker.com/v2/repositories/library/redis/tags/?page_size=100 `# Initial url` ; \
( \
while [ ! -z $url ]; do `# Keep looping until the variable url is empty` \
>&2 echo -n "." `# Every iteration of the loop prints out a single dot to show progress as it got through all the pages (this is inline dot)` ; \
content=$(curl -s $url | python -c 'import sys, json; data = json.load(sys.stdin); print(data.get("next", "") or ""); print("\n".join([x["name"] for x in data["results"]]))') `# Curl the URL and pipe the output to Python. Python will parse the JSON and print the very first line as the next URL (it will leave it blank if there are no more pages) then continue to loop over the results extracting only the name; all will be stored in a variable called content` ; \
url=$(echo "$content" | head -n 1) `# Let's get the first line of content which contains the next URL for the loop to continue` ; \
echo "$content" | tail -n +2 `# Print the content without the first line (yes +2 is counter intuitive)` ; \
done; \
>&2 echo `# Finally break the line of dots` ; \
) | cut -d '-' -f 1 | sort --version-sort | uniq;
Sample output:
$ url=https://registry.hub.docker.com/v2/repositories/library/redis/tags/?page_size=100 `#initial url` ; \
> ( \
> while [ ! -z $url ]; do `#Keep looping until the variable url is empty` \
> >&2 echo -n "." `#Every iteration of the loop prints out a single dot to show progress as it got through all the pages (this is inline dot)` ; \
> content=$(curl -s $url | python -c 'import sys, json; data = json.load(sys.stdin); print(data.get("next", "") or ""); print("\n".join([x["name"] for x in data["results"]]))') `# Curl the URL and pipe the JSON to Python. Python will parse the JSON and print the very first line as the next URL (it will leave it blank if there are no more pages) then continue to loop over the results extracting only the name; all will be store in a variable called content` ; \
> url=$(echo "$content" | head -n 1) `#Let's get the first line of content which contains the next URL for the loop to continue` ; \
> echo "$content" | tail -n +2 `#Print the content with out the first line (yes +2 is counter intuitive)` ; \
> done; \
> >&2 echo `#Finally break the line of dots` ; \
> ) | cut -d '-' -f 1 | sort --version-sort | uniq;
...
2
2.6
2.6.17
2.8
2.8.6
2.8.7
2.8.8
2.8.9
2.8.10
2.8.11
2.8.12
2.8.13
2.8.14
2.8.15
2.8.16
2.8.17
2.8.18
2.8.19
2.8.20
2.8.21
2.8.22
2.8.23
3
3.0
3.0.0
3.0.1
3.0.2
3.0.3
3.0.4
3.0.5
3.0.6
3.0.7
3.0.504
3.2
3.2.0
3.2.1
3.2.2
3.2.3
3.2.4
3.2.5
3.2.6
3.2.7
3.2.8
3.2.9
3.2.10
3.2.11
3.2.100
4
4.0
4.0.0
4.0.1
4.0.2
4.0.4
4.0.5
4.0.6
4.0.7
4.0.8
32bit
alpine
latest
nanoserver
windowsservercore
If you want the bash_profile version:
function docker-tags () {
name=$1
# Initial URL
url=https://registry.hub.docker.com/v2/repositories/library/$name/tags/?page_size=100
(
# Keep looping until the variable URL is empty
while [ ! -z $url ]; do
# Every iteration of the loop prints out a single dot to show progress as it got through all the pages (this is inline dot)
>&2 echo -n "."
# Curl the URL and pipe the output to Python. Python will parse the JSON and print the very first line as the next URL (it will leave it blank if there are no more pages)
# then continue to loop over the results extracting only the name; all will be stored in a variable called content
content=$(curl -s $url | python -c 'import sys, json; data = json.load(sys.stdin); print(data.get("next", "") or ""); print("\n".join([x["name"] for x in data["results"]]))')
# Let's get the first line of content which contains the next URL for the loop to continue
url=$(echo "$content" | head -n 1)
# Print the content without the first line (yes +2 is counter intuitive)
echo "$content" | tail -n +2
done;
# Finally break the line of dots
>&2 echo
) | cut -d '-' -f 1 | sort --version-sort | uniq;
}
And simply call it: docker-tags redis
Sample output:
$ docker-tags redis
...
2
2.6
2.6.17
2.8
--trunc----
32bit
alpine
latest
nanoserver
windowsservercore
As far as I know, the CLI does not allow searching/listing tags in a repository.
But if you know which tag you want, you can pull it explicitly by adding a colon and the tag name: docker pull ubuntu:saucy
This script (docker-show-repo-tags.sh) should work for any Docker enabled host that has curl, awk, sed, grep, and sort. It was updated to reflect the fact that the repository tag URLs changed.
This version correctly parses the "name": field without a JSON parser.
#!/bin/sh
# 2022-07-20
# Simple script that will display Docker repository tags
# using basic tools: curl, awk, sed, grep, and sort.
# Usage:
# $ docker-show-repo-tags.sh ubuntu centos
# $ docker-show-repo-tags.sh centos | cat -n
for Repo in "$@" ; do
URL="https://registry.hub.docker.com/v2/repositories/library/$Repo/tags/"
curl -sS "$URL" | \
/usr/bin/sed -Ee 's/("name":)"([^"]*)"/\n\1\2\n/g' | \
grep '"name":' | \
awk -F: '{printf("'$Repo':%s\n",$2)}'
done
This older version no longer works. Many thanks to @d9k for pointing this out!
#!/bin/sh
# WARNING: This no long works!
# Simple script that will display Docker repository tags
# using basic tools: curl, sed, grep, and sort.
#
# Usage:
# $ docker-show-repo-tags.sh ubuntu centos
for Repo in $* ; do
curl -sS "https://hub.docker.com/r/library/$Repo/tags/" | \
sed -e $'s/"tags":/\\\n"tags":/g' -e $'s/\]/\\\n\]/g' | \
grep '^"tags"' | \
grep '"library"' | \
sed -e $'s/,/,\\\n/g' -e 's/,//g' -e 's/"//g' | \
grep -v 'library:' | \
sort -fu | \
sed -e "s/^/${Repo}:/"
done
This older version no longer works. Many thanks to @viky for pointing this out!
#!/bin/sh
# WARNING: This no longer works!
# Simple script that will display Docker repository tags.
#
# Usage:
# $ docker-show-repo-tags.sh ubuntu centos
for Repo in $* ; do
curl -s -S "https://registry.hub.docker.com/v2/repositories/library/$Repo/tags/" | \
sed -e $'s/,/,\\\n/g' -e $'s/\[/\\\[\n/g' | \
grep '"name"' | \
awk -F\" '{print $4;}' | \
sort -fu | \
sed -e "s/^/${Repo}:/"
done
This is the output for a simple example:
$ docker-show-repo-tags.sh centos | cat -n
1 centos:5
2 centos:5.11
3 centos:6
4 centos:6.10
5 centos:6.6
6 centos:6.7
7 centos:6.8
8 centos:6.9
9 centos:7.0.1406
10 centos:7.1.1503
11 centos:7.2.1511
12 centos:7.3.1611
13 centos:7.4.1708
14 centos:7.5.1804
15 centos:centos5
16 centos:centos5.11
17 centos:centos6
18 centos:centos6.10
19 centos:centos6.6
20 centos:centos6.7
21 centos:centos6.8
22 centos:centos6.9
23 centos:centos7
24 centos:centos7.0.1406
25 centos:centos7.1.1503
26 centos:centos7.2.1511
27 centos:centos7.3.1611
28 centos:centos7.4.1708
29 centos:centos7.5.1804
30 centos:latest
I wrote a command line tool to simplify searching Docker Hub repository tags, available in my PyTools GitHub repository. It's simple to use with various command line switches, but most basically:
./dockerhub_show_tags.py repo1 repo2
It's even available as a Docker image and can take multiple repositories:
docker run harisekhon/pytools dockerhub_show_tags.py centos ubuntu
DockerHub
repo: centos
tags: 5.11
6.6
6.7
7.0.1406
7.1.1503
centos5.11
centos6.6
centos6.7
centos7.0.1406
centos7.1.1503
repo: ubuntu
tags: latest
14.04
15.10
16.04
trusty
trusty-20160503.1
wily
wily-20160503
xenial
xenial-20160503
If you want to embed it in scripts, use -q / --quiet to get just the tags, like normal Docker commands:
./dockerhub_show_tags.py centos -q
5.11
6.6
6.7
7.0.1406
7.1.1503
centos5.11
centos6.6
centos6.7
centos7.0.1406
centos7.1.1503
The v2 API seems to use some kind of pagination, so that it does not return all the available tags. This is clearly visible in projects such as python (or library/python). Even after quickly reading the documentation, I could not manage to work with the API correctly (maybe it is the wrong documentation).
Then I rewrote the script using the v1 API, and it is still using jq:
#!/bin/bash
repo="$1"
if [[ "${repo}" != */* ]]; then
repo="library/${repo}"
fi
url="https://registry.hub.docker.com/v1/repositories/${repo}/tags"
curl -s -S "${url}" | jq '.[]["name"]' | sed 's/^"\(.*\)"$/\1/' | sort
The full script is available at: https://github.com/denilsonsa/small_scripts/blob/master/docker_remote_tags.sh
I've also written an improved version (in Python) that aggregates tags that point to the same version: https://github.com/denilsonsa/small_scripts/blob/master/docker_remote_tags.py
Add this function to your .zshrc file or run the command manually:
#usage list-dh-tags <repo>
#example: list-dh-tags node
function list-dh-tags(){
wget -q https://registry.hub.docker.com/v1/repositories/$1/tags -O - | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | awk -F: '{print $3}'
}
Thanks to this -> How can I list all tags for a Docker image on a remote registry?
For anyone stumbling across this in modern times, you can use Skopeo to retrieve an image's tags from the Docker registry:
$ skopeo list-tags docker://jenkins/jenkins \
| jq -r '.Tags[] | select(. | contains("lts-alpine"))' \
| sort --version-sort --reverse
lts-alpine
2.277.3-lts-alpine
2.277.2-lts-alpine
2.277.1-lts-alpine
2.263.4-lts-alpine
2.263.3-lts-alpine
2.263.2-lts-alpine
2.263.1-lts-alpine
2.249.3-lts-alpine
2.249.2-lts-alpine
2.249.1-lts-alpine
2.235.5-lts-alpine
2.235.4-lts-alpine
2.235.3-lts-alpine
2.235.2-lts-alpine
2.235.1-lts-alpine
2.222.4-lts-alpine
Reimplementation of the previous post, using Python over sed/AWK:
for Repo in $* ; do
tags=$(curl -s -S "https://registry.hub.docker.com/v2/repositories/library/$Repo/tags/")
python - <<EOF
import json
tags = [t['name'] for t in json.loads('''$tags''')['results']]
tags.sort()
for tag in tags:
    print("{}:{}".format('$Repo', tag))
EOF
done
For a script that works with OAuth bearer tokens on Docker Hub, try this:
Listing the tags of a Docker image on a Docker hub through the HTTP API
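A minimal sketch of that token flow (hedged: the auth and registry endpoints below are assumptions based on Docker's public v2 protocol; jq required):

```shell
repo=library/ubuntu
# 1. Get an anonymous pull token for the repository.
token=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${repo}:pull" \
  | jq -r .token)
# 2. List tags through the registry API proper.
curl -s -H "Authorization: Bearer $token" \
  "https://registry-1.docker.io/v2/${repo}/tags/list" | jq -r '.tags[]'
```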
You can use Visual Studio Code to provide autocomplete for available Docker images and tags. However, this requires that you type the first letter of a tag in order to see autocomplete suggestions.
For example, when writing FROM ubuntu it offers autocomplete suggestions like ubuntu, ubuntu-debootstrap and ubuntu-upstart. When writing FROM ubuntu:a it offers autocomplete suggestions, like ubuntu:artful and ubuntu:artful-20170511.1
