How can I export some variables out of an sh block so they can be used in later stages?
The following does not give me any errors but the values are never available as environment variables in later stages.
steps {
sh """
ASSUME_ROLE_RESPONSE=\$(aws sts assume-role --role-arn "arn:aws:iam::${env.NON_PROD_ACCOUNT_ID}:role/${env.AWS_ROLE}" --role-session-name "${env.AWS_ROLE_SESSION}" --duration-seconds 3600)
${env.ACCESS_KEY_ID}=\$(echo \$ASSUME_ROLE_RESPONSE | jq --raw-output '.Credentials.AccessKeyId')
${env.SECRET_ACCESS_KEY}=\$(echo \$ASSUME_ROLE_RESPONSE | jq --raw-output '.Credentials.SecretAccessKey')
${env.SESSION_TOKEN}=\$(echo \$ASSUME_ROLE_RESPONSE | jq --raw-output '.Credentials.SessionToken')
echo "AWS_ACCESS_KEY_ID=${ACCESS_KEY_ID},AWS_SECRET_ACCESS_KEY=${SECRET_ACCESS_KEY},AWS_SESSION_TOKEN=${SESSION_TOKEN}"
printenv | sort
"""
}
I have got this working, but I cannot say it is elegant; if someone has a better/cleaner answer, I would happily accept it.
Here is my solution:
stage("Authenticate To Non-Prod Account") {
steps {
script {
aws_credentials = sh(script: """
ASSUME_ROLE_RESPONSE=\$(aws sts assume-role --role-arn "arn:aws:iam::${env.NON_PROD_ACCOUNT_ID}:role/${env.AWS_ROLE}" --role-session-name "${env.AWS_ROLE_SESSION}" --duration-seconds 3600)
ACCESS_KEY_ID=\$(echo \$ASSUME_ROLE_RESPONSE | jq --raw-output '.Credentials.AccessKeyId')
SECRET_ACCESS_KEY=\$(echo \$ASSUME_ROLE_RESPONSE | jq --raw-output '.Credentials.SecretAccessKey')
SESSION_TOKEN=\$(echo \$ASSUME_ROLE_RESPONSE | jq --raw-output '.Credentials.SessionToken')
echo "AWS_ACCESS_KEY_ID=\$ACCESS_KEY_ID,AWS_SECRET_ACCESS_KEY=\$SECRET_ACCESS_KEY,AWS_SESSION_TOKEN=\$SESSION_TOKEN"
""", returnStdout: true)
env.ACCESS_KEY_ID = aws_credentials.split(',')[0].split('=')[1].trim()
env.AWS_SECRET_KEY = aws_credentials.split(',')[1].split('=')[1].trim()
env.SESSION_TOKEN = aws_credentials.split(',')[2].split('=')[1].trim()
}
}
}
I am answering because I have read a heap of posts suggesting ideas that didn't work for me, so this, for now, is the best option I have for authenticating to AWS and ensuring the credentials are available in subsequent stages.
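A somewhat tidier variant of the same idea (a sketch only, reusing the names from the snippet above and untested) is to print one KEY=VALUE pair per line and assign the pairs to env generically, which avoids the positional split-on-comma parsing:
script {
    // Same assume-role call as above; only the output format changes.
    def output = sh(script: """
        ASSUME_ROLE_RESPONSE=\$(aws sts assume-role --role-arn "arn:aws:iam::${env.NON_PROD_ACCOUNT_ID}:role/${env.AWS_ROLE}" --role-session-name "${env.AWS_ROLE_SESSION}" --duration-seconds 3600)
        echo "ACCESS_KEY_ID=\$(echo \$ASSUME_ROLE_RESPONSE | jq --raw-output '.Credentials.AccessKeyId')"
        echo "SECRET_ACCESS_KEY=\$(echo \$ASSUME_ROLE_RESPONSE | jq --raw-output '.Credentials.SecretAccessKey')"
        echo "SESSION_TOKEN=\$(echo \$ASSUME_ROLE_RESPONSE | jq --raw-output '.Credentials.SessionToken')"
    """, returnStdout: true).trim()
    // Each line is KEY=VALUE; split on the first '=' only and push into env.
    for (line in output.split('\n')) {
        def parts = line.split('=', 2)
        env."${parts[0]}" = parts[1]
    }
}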
To export variables out of sh, try using this:
env.var = sh (returnStdout: true, script: ''' SOME SH COMMAND ''').trim()
This assigns the shell output to a Groovy variable; because it is set on env, the value is also available as an environment variable in later stages.
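For example, a minimal two-stage sketch (the stage names and the git command are illustrative, not from the original answer):
pipeline {
    agent any
    stages {
        stage('Capture') {
            steps {
                script {
                    // Whatever the command prints becomes a pipeline-wide environment variable.
                    env.GIT_SHORT_SHA = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                }
            }
        }
        stage('Use') {
            steps {
                // Values assigned to env.* remain visible in later stages.
                sh 'echo "Building commit ${GIT_SHORT_SHA}"'
            }
        }
    }
}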
I am trying to execute ansible playbook in all EC2 AWS instances using Jenkinsfile function and assume-role.
But I am getting below error.
Obtained devops/JenkinsfileDynamic from git git@bitbucket.org:tui-uk-dev/cng-airflow-dags.git
Running in Durability level: MAX_SURVIVABILITY
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 33: illegal string body character after dollar sign;
solution: either escape a literal dollar sign "\$5" or bracket the value expression "${5}" @ line 33, column 134.
SION_TOKEN=${AWS_SESSION_TOKEN} AWS_DEFA
^
Jenkinsfile:
def Host_Verification2() {
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: 'cant_be_disclosed']]) {
sh '''
aws sts assume-role --role-arn "arn:aws:iam::12345678901:role/cant_role_jenkins" --role-session-name "connect" > assume-role-output.txt
export AWS_ACCESS_KEY_ID=`cat assume-role-output.txt | jq -c '.Credentials.AccessKeyId' | tr -d '"' | tr -d ' '`
export AWS_SECRET_ACCESS_KEY=`cat assume-role-output.txt | jq -c '.Credentials.SecretAccessKey' | tr -d '"' | tr -d ' '`
export AWS_SESSION_TOKEN=`cat assume-role-output.txt | jq -c '.Credentials.SessionToken' | tr -d '"' | tr -d ' '`
rm assume-role-output.txt
sshagent(credentials: ['tuiuki-cng-dev']) {
sh '''
cd acm/
sudo AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}" AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}" AWS_SESSION_TOKEN="${AWS_SESSION_TOKEN}" inventory/ec2.py --list --refresh-cache
sudo AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}" AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}" AWS_SESSION_TOKEN="${AWS_SESSION_TOKEN}" AWS_DEFAULT_REGION="eu-central-1" ansible-playbook -i inventory/ec2.py plays/emr/find.yml
'''
}
'''
}
}
As the exception says:
solution: either escape a literal dollar sign "\$5" or bracket the value expression "${5}"
Try this:
sudo AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}" AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}" ansible-playbook -i inventory/ec2.py plays/emr/findplaybooks.yml
sudo AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN} ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i inventory/ec2.py --limit "tag_Name_cluster" plays/emr/find.yml --private-key=${SSH_KEY} -u hadoop
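Note also that sshagent is a Pipeline (Groovy) step, not a shell command, so it cannot sit inside the body of an sh '''...''' block as in the question. A sketch of the intended nesting (IDs and paths are copied from the question, untested); since the inner string is single-quoted, literal shell dollar signs need no escaping:
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: 'cant_be_disclosed']]) {
    sshagent(credentials: ['tuiuki-cng-dev']) {
        sh '''
            aws sts assume-role --role-arn "arn:aws:iam::12345678901:role/cant_role_jenkins" --role-session-name "connect" > assume-role-output.txt
            export AWS_ACCESS_KEY_ID=$(jq -r '.Credentials.AccessKeyId' assume-role-output.txt)
            export AWS_SECRET_ACCESS_KEY=$(jq -r '.Credentials.SecretAccessKey' assume-role-output.txt)
            export AWS_SESSION_TOKEN=$(jq -r '.Credentials.SessionToken' assume-role-output.txt)
            rm assume-role-output.txt
            cd acm/
            sudo AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}" AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}" AWS_SESSION_TOKEN="${AWS_SESSION_TOKEN}" AWS_DEFAULT_REGION="eu-central-1" ansible-playbook -i inventory/ec2.py plays/emr/find.yml
        '''
    }
}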
I am trying to create a Jenkins pipeline where I need to execute multiple shell commands and use the result of one command in the next. I found that wrapping the commands in a pair of three single quotes (''') can accomplish this. However, I am facing issues when using a pipe to feed the output of one command into another. For example:
stage('Test') {
sh '''
echo "Executing Tests"
URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r '.public_url'`
echo $URL
RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r '.code'`
echo $RESULT
'''
}
Commands with pipes are not working properly. Here is the Jenkins console output:
+ echo Executing Tests
Executing Tests
+ curl -s http://localhost:4040/api/tunnels/command_line
+ jq -r .public_url
+ URL=null
+ echo null
null
+ curl -sPOST https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=null
I tried entering all these commands in the Jenkins Snippet Generator for the pipeline, and it gave the following output:
sh ''' echo "Executing Tests"
URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r \'.public_url\'`
echo $URL
RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r \'.code\'`
echo $RESULT
'''
Notice the escaped single quotes in the commands jq -r \'.public_url\' and jq -r \'.code\'. Using the code this way solved the problem.
UPDATE: After a while, even that started to give problems. Certain commands were executing prior to these: one was grunt serve and the other was ./ngrok http 9000. I added some delay after each of these commands, and it solved the problem for now.
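If a fixed delay proves flaky, a more robust alternative (a sketch; the 30-second budget is an assumption) is to poll the ngrok API until it answers before querying it:
# Wait up to 30 seconds for the tunnel API to come up instead of sleeping blindly.
for i in $(seq 1 30); do
    curl -sf "http://localhost:4040/api/tunnels/command_line" > /dev/null && break
    sleep 1
done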
The following scenario shows a real example that may need multiline shell commands: say you are using a plugin like Publish Over SSH and you need to execute a set of commands on the destination host in a single SSH session:
stage ('Prepare destination host') {
sh '''
ssh -t -t user@host 'bash -s << 'ENDSSH'
if [[ -d "/path/to/some/directory/" ]];
then
rm -f /path/to/some/directory/*.jar
else
sudo mkdir -p /path/to/some/directory/
sudo chmod -R 755 /path/to/some/directory/
sudo chown -R user:user /path/to/some/directory/
fi
ENDSSH'
'''
}
Special Notes:
The last ENDSSH' should not have any characters before it; it must be at the start of a new line.
Use ssh -t -t if you have sudo within the remote shell command.
I split the commands with &&
node {
FOO = "world"
stage('Preparation') { // for display purposes
sh "ls -a && pwd && echo ${FOO}"
}
}
The example outputs:
- ls -a (the files in your workspace)
- pwd (the workspace location)
- echo world
Problem statement:
Can sign repos from inside a normal terminal (also inside Docker). From a Jenkins job, repo creation/signing fails; the job hangs.
Configuration:
Jenkins spawns docker container to create/sign deb repository.
Private and public keys all present.
gpg-agent installed on the docker container to sign the packages.
~/.gnupg/gpg.conf file has "use-agent" enabled
Progress:
Can start gpg-agent using jenkins on the docker container.
Can use gpg-preset-passphrase to cache passphrase.
Can use [OUTSIDE JENKINS]
reprepro --ask-passphrase -Vb . includedeb ${_repo_name} ${_pkg_location}
to fetch the passphrase from gpg-agent and sign the repo.
Problem:
From inside a Jenkins job, the command "reprepro --ask-passphrase -Vb ..." hangs.
Code:
starting gpg-agent:
GPGAGENT=/usr/bin/gpg-agent
GNUPG_PID_FILE=${GNUPGHOME}/gpg-agent-info
GNUPG_CFG=${GNUPGHOME}/gpg.conf
GNUPG_CFG=${GNUPGHOME}/gpg-agent.conf
function start_gpg_agent {
GPG_TTY=$(tty)
export GPG_TTY
if [ -r "${GNUPG_PID_FILE}" ]
then
source "${GNUPG_PID_FILE}" count=$(ps lax | grep "${GPGAGENT}" | grep "$SSH_AGENT_PID" | wc -l)
if [ $count -eq 0 ]
then
if ! ${GPGAGENT} 2>/dev/null; then
$GPGAGENT --debug-all --options ${BASE_PATH}/sign/gpg-agent.options \
--daemon --enable-ssh-support \
--allow-preset-passphrase --write-env-file ${GNUPG_PID_FILE}
if [[ $? -eq 0 ]]
then
echo "INFO::agent started"
else
echo "INFO::Agent could not be started. Exit."
exit -101
fi
fi
fi
else
$GPGAGENT --debug-all --options ${BASE_PATH}/sign/gpg-agent.options \
--daemon --allow-preset-passphrase --write-env-file ${GNUPG_PID_FILE}
fi
}
options file:
default-cache-ttl 31536000
default-cache-ttl-ssh 31536000
max-cache-ttl 31536000
max-cache-ttl-ssh 31536000
enable-ssh-support
debug-all
Saving the passphrase:
/usr/lib/gnupg2/gpg-preset-passphrase -v --preset --passphrase ${_passphrase} ${_fp}
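For reference, gpg-preset-passphrase expects a keygrip (not the key fingerprint) as its cache ID; with GnuPG 2.1+ the keygrips can be listed like this (a side note, not part of the original script):
# Show secret keys together with their keygrips (GnuPG 2.1+).
gpg --list-secret-keys --with-keygrip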
Finally (for completeness), signing the repo:
reprepro --ask-passphrase -Vb . includedeb ${_repo_name} ${_pkg_location}
I am trying to locate one specific tag for a Docker image. How can I do it on the command line? I want to avoid downloading all the images and then removing the unneeded ones.
In the official Ubuntu repository, https://registry.hub.docker.com/_/ubuntu/, there are several tags (releases of it), while when I search for it on the command line:
user#ubuntu:~$ docker search ubuntu | grep ^ubuntu
ubuntu Official Ubuntu base image 354
ubuntu-upstart Upstart is an event-based replacement for ... 7
ubuntufan/ping 0
ubuntu-debootstrap 0
The help for the command-line search (https://docs.docker.com/engine/reference/commandline/search/) gives no clue how this could work. Is it possible with the docker search command?
If I use a raw command to search via the Docker registry API, then the information can be fetched:
$ curl https://registry.hub.docker.com/v1/repositories/ubuntu/tags | python -mjson.tool
[
{
"layer": "ef83896b",
"name": "latest"
},
.....
{
"layer": "463ff6be",
"name": "raring"
},
{
"layer": "195eb90b",
"name": "saucy"
},
{
"layer": "ef83896b",
"name": "trusty"
}
]
When using CoreOS, jq is available to parse JSON data.
So like you were doing before, looking at library/centos:
$ curl -s -S 'https://registry.hub.docker.com/v2/repositories/library/centos/tags/' | jq '."results"[]["name"]' |sort
"6"
"6.7"
"centos5"
"centos5.11"
"centos6"
"centos6.6"
"centos6.7"
"centos7.0.1406"
"centos7.1.1503"
"latest"
The cleaner v2 API is available now, and that's what I'm using in the example. I will build a simple script docker_remote_tags:
#!/usr/bin/bash
curl -s -S "https://registry.hub.docker.com/v2/repositories/$@/tags/" | jq '."results"[]["name"]' | sort
Enables:
$ ./docker_remote_tags library/centos
"6"
"6.7"
"centos5"
"centos5.11"
"centos6"
"centos6.6"
"centos6.7"
"centos7.0.1406"
"centos7.1.1503"
"latest"
Reference:
jq: https://stedolan.github.io/jq/ | apt-get install jq
I didn't like any of the solutions above because A) they required external libraries that I didn't have and didn't want to install. B) I didn't get all the pages.
The Docker API limits you to 100 items per request. This will loop over each "next" item and get them all (for Python it's seven pages; others may be more or fewer... it depends).
If you really want to spam yourself, remove | cut -d '-' -f 1 from the last line, and you will see absolutely everything.
url=https://registry.hub.docker.com/v2/repositories/library/redis/tags/?page_size=100 `# Initial url` ; \
( \
while [ ! -z $url ]; do `# Keep looping until the variable url is empty` \
>&2 echo -n "." `# Every iteration of the loop prints out a single dot to show progress as it got through all the pages (this is inline dot)` ; \
content=$(curl -s $url | python -c 'import sys, json; data = json.load(sys.stdin); print(data.get("next", "") or ""); print("\n".join([x["name"] for x in data["results"]]))') `# Curl the URL and pipe the output to Python. Python will parse the JSON and print the very first line as the next URL (it will leave it blank if there are no more pages) then continue to loop over the results extracting only the name; all will be stored in a variable called content` ; \
url=$(echo "$content" | head -n 1) `# Let's get the first line of content which contains the next URL for the loop to continue` ; \
echo "$content" | tail -n +2 `# Print the content without the first line (yes +2 is counter intuitive)` ; \
done; \
>&2 echo `# Finally break the line of dots` ; \
) | cut -d '-' -f 1 | sort --version-sort | uniq;
Sample output:
$ url=https://registry.hub.docker.com/v2/repositories/library/redis/tags/?page_size=100 `#initial url` ; \
> ( \
> while [ ! -z $url ]; do `#Keep looping until the variable url is empty` \
> >&2 echo -n "." `#Every iteration of the loop prints out a single dot to show progress as it got through all the pages (this is inline dot)` ; \
> content=$(curl -s $url | python -c 'import sys, json; data = json.load(sys.stdin); print(data.get("next", "") or ""); print("\n".join([x["name"] for x in data["results"]]))') `# Curl the URL and pipe the JSON to Python. Python will parse the JSON and print the very first line as the next URL (it will leave it blank if there are no more pages) then continue to loop over the results extracting only the name; all will be store in a variable called content` ; \
> url=$(echo "$content" | head -n 1) `#Let's get the first line of content which contains the next URL for the loop to continue` ; \
> echo "$content" | tail -n +2 `#Print the content with out the first line (yes +2 is counter intuitive)` ; \
> done; \
> >&2 echo `#Finally break the line of dots` ; \
> ) | cut -d '-' -f 1 | sort --version-sort | uniq;
...
2
2.6
2.6.17
2.8
2.8.6
2.8.7
2.8.8
2.8.9
2.8.10
2.8.11
2.8.12
2.8.13
2.8.14
2.8.15
2.8.16
2.8.17
2.8.18
2.8.19
2.8.20
2.8.21
2.8.22
2.8.23
3
3.0
3.0.0
3.0.1
3.0.2
3.0.3
3.0.4
3.0.5
3.0.6
3.0.7
3.0.504
3.2
3.2.0
3.2.1
3.2.2
3.2.3
3.2.4
3.2.5
3.2.6
3.2.7
3.2.8
3.2.9
3.2.10
3.2.11
3.2.100
4
4.0
4.0.0
4.0.1
4.0.2
4.0.4
4.0.5
4.0.6
4.0.7
4.0.8
32bit
alpine
latest
nanoserver
windowsservercore
If you want the bash_profile version:
function docker-tags () {
name=$1
# Initial URL
url=https://registry.hub.docker.com/v2/repositories/library/$name/tags/?page_size=100
(
# Keep looping until the variable URL is empty
while [ ! -z $url ]; do
# Every iteration of the loop prints out a single dot to show progress as it got through all the pages (this is inline dot)
>&2 echo -n "."
# Curl the URL and pipe the output to Python. Python will parse the JSON and print the very first line as the next URL (it will leave it blank if there are no more pages)
# then continue to loop over the results extracting only the name; all will be stored in a variable called content
content=$(curl -s $url | python -c 'import sys, json; data = json.load(sys.stdin); print(data.get("next", "") or ""); print("\n".join([x["name"] for x in data["results"]]))')
# Let's get the first line of content which contains the next URL for the loop to continue
url=$(echo "$content" | head -n 1)
# Print the content without the first line (yes +2 is counter intuitive)
echo "$content" | tail -n +2
done;
# Finally break the line of dots
>&2 echo
) | cut -d '-' -f 1 | sort --version-sort | uniq;
}
And simply call it: docker-tags redis
Sample output:
$ docker-tags redis
...
2
2.6
2.6.17
2.8
--trunc----
32bit
alpine
latest
nanoserver
windowsservercore
As far as I know, the CLI does not allow searching/listing tags in a repository.
But if you know which tag you want, you can pull it explicitly by adding a colon and the tag name: docker pull ubuntu:saucy
This script (docker-show-repo-tags.sh) should work on any Docker-enabled host that has curl, awk, sed, grep, and sort. It was updated to reflect the fact that the repository tag URLs changed.
This version correctly parses the "name": field without a JSON parser.
#!/bin/sh
# 2022-07-20
# Simple script that will display Docker repository tags
# using basic tools: curl, awk, sed, grep, and sort.
# Usage:
# $ docker-show-repo-tags.sh ubuntu centos
# $ docker-show-repo-tags.sh centos | cat -n
for Repo in "$#" ; do
URL="https://registry.hub.docker.com/v2/repositories/library/$Repo/tags/"
curl -sS "$URL" | \
/usr/bin/sed -Ee 's/("name":)"([^"]*)"/\n\1\2\n/g' | \
grep '"name":' | \
awk -F: '{printf("'$Repo':%s\n",$2)}'
done
This older version no longer works. Many thanks to @d9k for pointing this out!
#!/bin/sh
# WARNING: This no longer works!
# Simple script that will display Docker repository tags
# using basic tools: curl, sed, grep, and sort.
#
# Usage:
# $ docker-show-repo-tags.sh ubuntu centos
for Repo in $* ; do
curl -sS "https://hub.docker.com/r/library/$Repo/tags/" | \
sed -e $'s/"tags":/\\\n"tags":/g' -e $'s/\]/\\\n\]/g' | \
grep '^"tags"' | \
grep '"library"' | \
sed -e $'s/,/,\\\n/g' -e 's/,//g' -e 's/"//g' | \
grep -v 'library:' | \
sort -fu | \
sed -e "s/^/${Repo}:/"
done
This older version no longer works. Many thanks to @viky for pointing this out!
#!/bin/sh
# WARNING: This no longer works!
# Simple script that will display Docker repository tags.
#
# Usage:
# $ docker-show-repo-tags.sh ubuntu centos
for Repo in $* ; do
curl -s -S "https://registry.hub.docker.com/v2/repositories/library/$Repo/tags/" | \
sed -e $'s/,/,\\\n/g' -e $'s/\[/\\\[\n/g' | \
grep '"name"' | \
awk -F\" '{print $4;}' | \
sort -fu | \
sed -e "s/^/${Repo}:/"
done
This is the output for a simple example:
$ docker-show-repo-tags.sh centos | cat -n
1 centos:5
2 centos:5.11
3 centos:6
4 centos:6.10
5 centos:6.6
6 centos:6.7
7 centos:6.8
8 centos:6.9
9 centos:7.0.1406
10 centos:7.1.1503
11 centos:7.2.1511
12 centos:7.3.1611
13 centos:7.4.1708
14 centos:7.5.1804
15 centos:centos5
16 centos:centos5.11
17 centos:centos6
18 centos:centos6.10
19 centos:centos6.6
20 centos:centos6.7
21 centos:centos6.8
22 centos:centos6.9
23 centos:centos7
24 centos:centos7.0.1406
25 centos:centos7.1.1503
26 centos:centos7.2.1511
27 centos:centos7.3.1611
28 centos:centos7.4.1708
29 centos:centos7.5.1804
30 centos:latest
I wrote a command line tool to simplify searching Docker Hub repository tags, available in my PyTools GitHub repository. It's simple to use with various command line switches, but most basically:
./dockerhub_show_tags.py repo1 repo2
It's even available as a Docker image and can take multiple repositories:
docker run harisekhon/pytools dockerhub_show_tags.py centos ubuntu
DockerHub
repo: centos
tags: 5.11
6.6
6.7
7.0.1406
7.1.1503
centos5.11
centos6.6
centos6.7
centos7.0.1406
centos7.1.1503
repo: ubuntu
tags: latest
14.04
15.10
16.04
trusty
trusty-20160503.1
wily
wily-20160503
xenial
xenial-20160503
If you want to embed it in scripts, use -q / --quiet to get just the tags, like normal Docker commands:
./dockerhub_show_tags.py centos -q
5.11
6.6
6.7
7.0.1406
7.1.1503
centos5.11
centos6.6
centos6.7
centos7.0.1406
centos7.1.1503
The v2 API seems to use some kind of pagination, so that it does not return all the available tags. This is clearly visible in projects such as python (or library/python). Even after quickly reading the documentation, I could not manage to work with the API correctly (maybe it is the wrong documentation).
Then I rewrote the script using the v1 API, and it is still using jq:
#!/bin/bash
repo="$1"
if [[ "${repo}" != */* ]]; then
repo="library/${repo}"
fi
url="https://registry.hub.docker.com/v1/repositories/${repo}/tags"
curl -s -S "${url}" | jq '.[]["name"]' | sed 's/^"\(.*\)"$/\1/' | sort
The full script is available at: https://github.com/denilsonsa/small_scripts/blob/master/docker_remote_tags.sh
I've also written an improved version (in Python) that aggregates tags that point to the same version: https://github.com/denilsonsa/small_scripts/blob/master/docker_remote_tags.py
Add this function to your .zshrc file or run the command manually:
#usage list-dh-tags <repo>
#example: list-dh-tags node
function list-dh-tags(){
wget -q https://registry.hub.docker.com/v1/repositories/$1/tags -O - | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | awk -F: '{print $3}'
}
Thanks to this -> How can I list all tags for a Docker image on a remote registry?
For anyone stumbling across this in modern times, you can use Skopeo to retrieve an image's tags from the Docker registry:
$ skopeo list-tags docker://jenkins/jenkins \
| jq -r '.Tags[] | select(. | contains("lts-alpine"))' \
| sort --version-sort --reverse
lts-alpine
2.277.3-lts-alpine
2.277.2-lts-alpine
2.277.1-lts-alpine
2.263.4-lts-alpine
2.263.3-lts-alpine
2.263.2-lts-alpine
2.263.1-lts-alpine
2.249.3-lts-alpine
2.249.2-lts-alpine
2.249.1-lts-alpine
2.235.5-lts-alpine
2.235.4-lts-alpine
2.235.3-lts-alpine
2.235.2-lts-alpine
2.235.1-lts-alpine
2.222.4-lts-alpine
Reimplementation of the previous post, using Python over sed/AWK:
for Repo in $* ; do
tags=$(curl -s -S "https://registry.hub.docker.com/v2/repositories/library/$Repo/tags/")
python - <<EOF
import json
tags = [t['name'] for t in json.loads('''$tags''')['results']]
tags.sort()
for tag in tags:
    print("{}:{}".format('$Repo', tag))
EOF
done
For a script that works with OAuth bearer tokens on Docker Hub, try this:
Listing the tags of a Docker image on a Docker hub through the HTTP API
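In outline, the bearer-token flow against the registry v2 API looks like this (a sketch using the public endpoints, with an anonymous pull token for library/ubuntu):
# Request an anonymous pull token for library/ubuntu, then list its tags with it.
token=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/ubuntu:pull" | jq -r .token)
curl -s -H "Authorization: Bearer $token" "https://registry-1.docker.io/v2/library/ubuntu/tags/list" | jq -r '.tags[]'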
You can use Visual Studio Code to provide autocomplete for available Docker images and tags. However, this requires that you type the first letter of a tag in order to see autocomplete suggestions.
For example, when writing FROM ubuntu, it offers autocomplete suggestions like ubuntu, ubuntu-debootstrap, and ubuntu-upstart. When writing FROM ubuntu:a, it offers suggestions like ubuntu:artful and ubuntu:artful-20170511.1.