Push Docker image to Nexus 3

After starting a Sonatype Nexus 3 container (command 1), I tried to create a repo and push a test image to it (command 2), but got a 405 error (error 1).
command 1
$ docker run -d -p 8081:8081 --name nexus sonatype/nexus3:3.14.0
command 2
$ docker push 127.0.0.1:8081/repository/test2/image-test:0.1
error 1
error parsing HTTP 405 response body: invalid character '<' looking for beginning of value: "\n<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <title>405 - Nexus Repository Manager</title>\n <meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\"/>\n\n\n <!--[if lt IE 9]>\n <script>(new Image).src=\"http://127.0.0.1:8081/favicon.ico?3.14.0-04\"</script>\n <![endif]-->\n <link rel=\"icon\" type=\"image/png\" href=\"http://127.0.0.1:8081/favicon-32x32.png?3.14.0-04\" sizes=\"32x32\">\n <link rel=\"mask-icon\" href=\"http://127.0.0.1:8081/safari-pinned-tab.svg?3.14.0-04\" color=\"#5bbad5\">\n <link rel=\"icon\" type=\"image/png\" href=\"http://127.0.0.1:8081/favicon-16x16.png?3.14.0-04\" sizes=\"16x16\">\n <link rel=\"shortcut icon\" href=\"http://127.0.0.1:8081/favicon.ico?3.14.0-04\">\n <meta name=\"msapplication-TileImage\" content=\"http://127.0.0.1:8081/mstile-144x144.png?3.14.0-04\">\n <meta name=\"msapplication-TileColor\" content=\"#00a300\">\n\n <link rel=\"stylesheet\" type=\"text/css\" href=\"http://127.0.0.1:8081/static/css/nexus-content.css?3.14.0-04\"/>\n</head>\n<body>\n<div class=\"nexus-header\">\n \n <div class=\"product-logo\">\n <img src=\"http://127.0.0.1:8081/static/images/nexus.png?3.14.0-04\" alt=\"Product logo\"/>\n </div>\n <div class=\"product-id\">\n <div class=\"product-id__line-1\">\n <span class=\"product-name\">Nexus Repository Manager</span>\n </div>\n <div class=\"product-id__line-2\">\n <span class=\"product-spec\">OSS 3.14.0-04</span>\n </div>\n </div>\n \n</div>\n\n<div class=\"nexus-body\">\n <div class=\"content-header\">\n <img src=\"http://127.0.0.1:8081/static/rapture/resources/icons/x32/exclamation.png?3.14.0-04\" alt=\"Exclamation point\" aria-role=\"presentation\"/>\n <span class=\"title\">Error 405</span>\n <span class=\"description\">Method Not Allowed</span>\n </div>\n <div class=\"content-body\">\n <div class=\"content-section\">\n HTTP method POST is not supported by this URL\n </div>\n </div>\n</div>\n</body>\n</html>\n\n"

Explanation
After some research I found out that Nexus 3 Docker repositories are designed to work with an individual port for each repository (hosted, group, or proxy).
https://issues.sonatype.org/browse/NEXUS-9960
Solution
So I destroyed my previous Docker container, since it held nothing I needed to keep, and launched the same command, but with an extra port published (update: port 8082 also needs to be open for Docker):
$ docker run -d -p 8081:8081 -p 8082:8082 --name nexus sonatype/nexus3:3.14.0
When you create a new Docker repository, you need to define at least an HTTP connector port, which I set to 8082 in this setup.
After that, log in to the service with the default admin account (admin / admin123):
$ docker login 127.0.0.1:8082
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /home/user/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
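Before pushing, the image also needs a tag that includes the registry host, the connector port, and the repository path. A sketch, assuming the local image is named image-test:0.1:
$ docker tag image-test:0.1 127.0.0.1:8082/repository/test2/image-test:0.1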
Then I pushed the new tag to that URL, and it worked.
$ docker push 127.0.0.1:8082/repository/test2/image-test:0.1
The push refers to repository [127.0.0.1:8082/repository/test2/image-test]
cd76d43ec36e: Pushed
8ad8344c7fe3: Pushed
b28ef0b6fef8: Pushed
0.1: digest: sha256:315f00bd7986508cb0984130bbe3f7f26b2ec477122c9bf7459b0b64e443a232 size: 948
Extra - Dockerfile
Because I needed to create a custom Nexus 3 image for my production environment, I started the Dockerfile like this:
FROM sonatype/nexus3:3.14.0
ENV NEXUS_DATA=/nexus-data/
EXPOSE 8090-8099
I will be using ports 8090 through 8099 for the different Docker image repositories instead of 8082; if I ever need more ports, I can just change the values or add a new range.
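To build the image and run it with the whole range published, something like the following should work (custom-nexus3 is a hypothetical tag for the built image):
$ docker build -t custom-nexus3 .
$ docker run -d -p 8081:8081 -p 8090-8099:8090-8099 --name nexus custom-nexus3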
Hope it was useful!!

Nexus Documentation Says:
Sharing an image can be achieved by publishing it to a hosted repository. This is completely private and requires you to tag and push the image. When tagging an image, you can use the image identifier (imageId). It is listed when showing the list of all images with docker images. Syntax and an example (using imageId) for creating a tag are:
docker tag <imageId or imageName> <nexus-hostname>:<repository-port>/<image>:<tag>
docker tag af340544ed62 nexus.example.com:18444/hello-world:mytag
Once the tag, which can be equivalent to a version, is created successfully, you can confirm its creation with docker images and issue the push with the syntax:
docker push <nexus-hostname>:<repository-port>/<image>:<tag>
Note that the port needs to be the repository connector port configured for the hosted repository to which you want to push. You cannot push to a repository group or a proxy repository.
Hope it helps you!

Related

Docker in Docker (dind): connecting to a ddev container within a compose network from the host machine

I'm currently working on setting up a ddev TYPO3 webpage running in an Ubuntu dind container, to get around the installation requirements for ddev on Windows.
I have previously tested connecting to an nginx container within the dind container, which worked as expected. With this, nginx was served on localhost:80 on the host.
#host
docker run --rm -it --privileged -p 80:8080 ubuntu-dind
#container
docker run -it --rm -d -p 8080:80 --name web nginx
After successfully setting up and starting ddev, the following containers are now running inside my dind container:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cf695c8e2ed9 drud/ddev-router:v1.21.4-built "/app/docker-entrypo…" 3 minutes ago Up 3 minutes (healthy) 127.0.0.1:80->80/tcp, 127.0.0.1:443->443/tcp, 127.0.0.1:8025-8026->8025-8026/tcp, 127.0.0.1:8036-8037->8036-8037/tcp ddev-router
6d26ecd91adf drud/ddev-webserver:20230207_fix_nvm-recruiting-built "/start.sh" 3 minutes ago Up 3 minutes (healthy) 8025/tcp, 127.0.0.1:32772->80/tcp, 127.0.0.1:32771->443/tcp ddev-recruiting-web
5e10c98eb2e7 phpmyadmin:5 "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 80/tcp ddev-recruiting-dba
8e3a5254605d drud/ddev-dbserver-mariadb-10.4:v1.21.4-recruiting-built "/docker-entrypoint.…" 3 minutes ago Up 3 minutes (healthy) 127.0.0.1:32768->3306/tcp ddev-recruiting-db
68f7527750ab drud/ddev-ssh-agent:v1.21.4-built "/entry.sh ssh-agent" 4 minutes ago Up 4 minutes (healthy) ddev-ssh-agent
The next step would now be to connect to http://recruiting.ddev.site:8036/. The site is reachable from within the dind container, but I'm unsure how to connect to this address from the host.
I have fired up my dind container as follows:
docker run --rm -it --privileged -v ${PWD}/project:/usr/src/project -p 80:8036 dindu
Attempting to map port 8036 of the container to port 80 on the host.
Connecting to port 8036 from inside the container reaches:
root@57fcae15337a:/usr/src/project# curl 127.0.0.1:8036
<!DOCTYPE html>
<html>
<head>
<title>503: No ddev back-end site available</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>503: No ddev back-end site available.</h1>
<p>This is the ddev-router container: There is no back-end webserver at the URL you specified. You may want to use "ddev start" to start the site.</p>
</body>
</html>
From the host I only get ERR_EMPTY_RESPONSE, so there must be some additional step I'm missing. I don't believe my problem is ddev-specific; it has more to do with me being somewhat inexperienced with Docker networking.
How do I forward an address, like http://recruiting.ddev.site, instead of a simple port to the host machine?
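One detail stands out in the PORTS column above: inside the dind container, ddev-router publishes everything on 127.0.0.1 only, so a -p mapping on the host can never reach it. A hedged sketch of one way around this, assuming a ddev version that supports the router-bind-all-interfaces global option:
#inside the dind container: make the router listen on all interfaces
ddev config global --router-bind-all-interfaces
ddev restart
#on the host: point the site name at localhost (hypothetical hosts entry)
echo "127.0.0.1 recruiting.ddev.site" | sudo tee -a /etc/hosts
#then start dind publishing the router port (e.g. -p 8036:8036) and browse to http://recruiting.ddev.site:8036/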

SSL (curl) connection error in Elasticsearch setup

I have set up a 3-node Elasticsearch cluster using docker-compose, following the steps below.
One of the master nodes, es11, gets the error below; the same curl command works fine on the other two nodes, es12 and es13:
Error:
curl -X GET 'https://localhost:9316'
curl: (35) Encountered end of file
Below error in logs:
"stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [es13][SOMEIP:9316][internal:cluster/coordination/join]",
"Caused by: org.elasticsearch.transport.ConnectTransportException: [es11][SOMEIP:9316] handshake failed. unexpected remote node {es13}{SOMEVALUE}{SOMEVALUE
"at org.elasticsearch.transport.TransportService.lambda$connectionValidator$6(TransportService.java:468) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:95) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.transport.TransportService.lambda$handshake$9(TransportService.java:577) ~[elasticsearch-7.17.6.jar:7.17.6]",
Opening https://localhost:9316 in a browser gives a "site can't be reached" error as well. It seems the SSL certificate created in step 4 below has some issue on es11.
Any leads, please? Or, if I repeat step 4, do I need to copy the certs to es12 and es13 again?
Below is my elasticsearch.yml:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
Ports as defined in all three nodes' docker-compose.yml:
environment:
  - node.name=es11
  - transport.port=9316
ports:
  - 9216:9200
  - 9316:9316
1. Initialize a Docker swarm: on es11, run docker swarm init, then follow the instructions to join es12 and es13 to the swarm.
2. Create an overlay network: docker network create -d overlay --attachable elastic
3. If necessary, bring down the current cluster and remove all the associated volumes by running docker-compose down -v
4. Create SSL certificates for ES with docker-compose -f create-certs.yml run --rm create_certs
5. Copy the certs for es12 and es13 to the respective servers
6. Use a busybox container to create the overlay network on es12 and es13: sudo docker run -itd --name containerX --net [network name] busybox
7. Configure certs on es12 and es13 with docker-compose -f config-certs.yml run --rm config_certs
8. Start the cluster with docker-compose up -d on each server
9. Set the passwords for the built-in ES accounts by logging into the cluster (docker exec -it es11 sh), then running bin/elasticsearch-setup-passwords interactive --url localhost:9316
(as per your https://discuss.elastic.co thread)
You cannot talk HTTP to the transport protocol port, which you have defined in transport.port. You need to talk to port 9200 in the container, which you have mapped to 9216 outside the container. The transport port runs a binary protocol and is not HTTP-accessible.
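So, from the host, the cluster's HTTP API should be queried on the mapped port 9216 instead; a sketch (-k skips verification of the self-signed certificate, -u elastic prompts for the built-in account's password):
curl -k -u elastic https://localhost:9216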

Airflow: pull a Docker image from a private Google Container Registry

I am using the https://github.com/puckel/docker-airflow image to run Airflow. I had to add pip install docker in order for it to support DockerOperator.
Everything seems OK, but I can't figure out how to pull an image from a private Google container registry.
I tried adding a connection of type Google Cloud in the admin section and running the DockerOperator as:
t2 = DockerOperator(
    task_id='docker_command',
    image='eu.gcr.io/project/image',
    api_version='2.3',
    auto_remove=True,
    command="/bin/sleep 30",
    docker_url="unix://var/run/docker.sock",
    network_mode="bridge",
    docker_conn_id="google_con"
)
But I always get an error:
[2019-11-05 14:12:51,162] {{taskinstance.py:1047}} ERROR - No Docker registry URL provided
I also tried the dockercfg_path option:
t2 = DockerOperator(
    task_id='docker_command',
    image='eu.gcr.io/project/image',
    api_version='2.3',
    auto_remove=True,
    command="/bin/sleep 30",
    docker_url="unix://var/run/docker.sock",
    network_mode="bridge",
    dockercfg_path="/usr/local/airflow/config.json",
)
I get the following error:
[2019-11-06 13:59:40,522] {{docker_operator.py:194}} INFO - Starting docker container from image eu.gcr.io/project/image
[2019-11-06 13:59:40,524] {{taskinstance.py:1047}} ERROR - ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
I also tried using only dockercfg_path="config.json" and got the same error.
I can't really use a BashOperator to try to docker login either, as it does not recognize the docker command:
t3 = BashOperator(
    task_id='print_hello',
    bash_command='docker login -u _json_key -p /usr/local/airflow/config.json eu.gcr.io'
)
line 1: docker: command not found
What am I missing?
airflow.hooks.docker_hook.DockerHook uses the docker_default connection when one isn't configured.
In your first attempt, you set google_con for docker_conn_id, and the error thrown shows that the host (i.e. the registry name) isn't configured.
Here are a couple of changes to make:
The image argument passed to DockerOperator should be set to the image name without the registry name prefixing it:
DockerOperator(
    api_version='1.21',
    # docker_url='tcp://localhost:2375',  # Set your docker URL
    command='/bin/ls',
    image='image',
    network_mode='bridge',
    task_id='docker_op_tester',
    docker_conn_id='google_con',
    dag=dag,
    # added this to map to host path in MacOS
    host_tmp_dir='/tmp',
    tmp_dir='/tmp',
)
Provide the registry name, username, and password in your google_con connection for the underlying DockerHook to authenticate to the registry.
You can obtain long-lived credentials for authentication from a service account key. For the username, use _json_key, and in the password field paste the contents of the JSON key file.
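For illustration, a connection like this could also be created from the Airflow 1.10-era CLI; a sketch, assuming key.json is the downloaded service account key file (flag names differ in Airflow 2.x):
airflow connections --add --conn_id google_con --conn_type docker --conn_host eu.gcr.io --conn_login _json_key --conn_password "$(cat key.json)"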
Here are logs from running my task:
[2019-11-16 20:20:46,874] {base_task_runner.py:110} INFO - Job 443: Subtask docker_op_tester [2019-11-16 20:20:46,874] {dagbag.py:88} INFO - Filling up the DagBag from /Users/r7/OSS/airflow/airflow/example_dags/example_docker_operator.py
[2019-11-16 20:20:47,054] {base_task_runner.py:110} INFO - Job 443: Subtask docker_op_tester [2019-11-16 20:20:47,054] {cli.py:592} INFO - Running <TaskInstance: docker_sample.docker_op_tester 2019-11-14T00:00:00+00:00 [running]> on host 1.0.0.127.in-addr.arpa
[2019-11-16 20:20:47,074] {logging_mixin.py:89} INFO - [2019-11-16 20:20:47,074] {local_task_job.py:120} WARNING - Time since last heartbeat(0.01 s) < heartrate(5.0 s), sleeping for 4.989537 s
[2019-11-16 20:20:47,088] {logging_mixin.py:89} INFO - [2019-11-16 20:20:47,088] {base_hook.py:89} INFO - Using connection to: id: google_con. Host: gcr.io/<redacted-project-id>, Port: None, Schema: , Login: _json_key, Password: XXXXXXXX, extra: {}
[2019-11-16 20:20:48,404] {docker_operator.py:209} INFO - Starting docker container from image alpine
[2019-11-16 20:20:52,066] {logging_mixin.py:89} INFO - [2019-11-16 20:20:52,066] {local_task_job.py:99} INFO - Task exited with return code 0
I know the question is about GCR but it's worth noting that other container registries may expect the config in a different format.
For example, GitLab expects you to pass the fully qualified image name to the DAG and only put the GitLab container registry hostname in the connection:
DockerOperator(
    task_id='docker_command',
    image='registry.gitlab.com/group/project/image:tag',
    api_version='auto',
    docker_conn_id='gitlab_registry',
)
Then set up your gitlab_registry connection like:
docker://gitlab+deploy-token-1234:ABDCtoken1234@registry.gitlab.com
Based on recent Cloud Composer documentation, it's recommended to use KubernetesPodOperator instead, like this:
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

KubernetesPodOperator(
    task_id='docker_op_tester',
    name='docker_op_tester',
    dag=dag,
    namespace="default",
    image="eu.gcr.io/project/image",
    cmds=["ls"],
)
Further to @Tamlyn's answer, we can also skip creating a connection (docker_conn_id) in Airflow and use GitLab as follows:
On your development machine:
https://gitlab.com/yourgroup/yourproject/-/settings/repository (create a deploy token here and note the login details)
docker login registry.gitlab.com (log in to Docker from the machine that will push the image; enter your GitLab credentials when prompted)
docker build -t registry.gitlab.com/yourgroup/yourproject . && docker push registry.gitlab.com/yourgroup/yourproject (builds the image and pushes it to your project repo's container registry)
On your airflow machine:
https://gitlab.com/yourgroup/yourproject/-/settings/repository (you can use the token created above for logging in)
docker login registry.gitlab.com (log in to Docker from the machine that will pull the image; this skips the need for a Docker registry connection. Entering your GitLab credentials when prompted generates ~/.docker/config.json, which is required; see the Docker docs for reference)
In your DAG:
dag = DAG(
    "dag_id",
    default_args=default_args,
    schedule_interval="15 1 * * *",
)

docker_trigger = DockerOperator(
    task_id="task_id",
    api_version="auto",
    network_mode="bridge",
    image="registry.gitlab.com/yourgroup/yourproject",
    auto_remove=True,  # use if required
    force_pull=True,  # use if required
    xcom_all=True,  # use if required
    # tty=True,  # turning this on screws up the log rendering
    # command="",  # use if required
    environment={  # use if required
        "envvar1": "envvar1value",
        "envvar2": "envvar2value",
    },
    dag=dag,
)
This works on Ubuntu 20.04.2 LTS (tried and tested), with Airflow installed on the instance.
You will need to install the Cloud SDK on your workstation, which includes the gcloud command-line tool.
After installing the Cloud SDK and Docker version 18.03 or newer, you can pull from Container Registry, according to the documentation, with the command:
docker pull [HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG]
or
docker pull [HOSTNAME]/[PROJECT-ID]/[IMAGE]@[IMAGE_DIGEST]
where:
[HOSTNAME] is listed under Location in the console. It's one of four options: gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io.
[PROJECT-ID] is your Google Cloud Platform Console project ID.
[IMAGE] is the image's name in Container Registry.
[TAG] is the tag applied to the image. In a registry, tags are unique to an image.
[IMAGE_DIGEST] is the sha256 hash value of the image contents. In the console, click on the specific image to see its metadata. The digest is listed as the Image digest.
To get the pull command for a specific image:
1. Click on the name of an image to go to the specific registry.
2. In the registry, check the box next to the version of the image that you want to pull.
3. Click SHOW PULL COMMAND at the top of the page.
4. Copy the pull command, which identifies the image using either the tag or the digest.
* Also check that you have push and pull permissions for the registry.
** Docker must be configured to use gcloud as a credential helper, or you must be using another authentication method. To use gcloud as the credential helper, run the command:
gcloud auth configure-docker
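This registers gcloud as a credential helper for the gcr.io hosts in ~/.docker/config.json, roughly as follows (the exact list of registries may vary):
$ cat ~/.docker/config.json
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "eu.gcr.io": "gcloud"
  }
}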

Installing Zookeeper on an offline OpenShift

I have an OpenShift Origin cluster running offline on three CentOS 7 VMs. It's working fine; I have a registry where I push my images like this:
docker login -u <username> -e <any_email_address> -p <token_value> <registry_ip>:<port>
Login is successful, then:
oc tag <image-id> <docker-registry-IP>:<port>/<project-name>/<image>
So, for nginx, for example:
oc tag 49011ce3b713 172.30.222.111:5000/test/nginx
Then I push it to the internal registry:
docker push 172.30.222.111:5000/test/nginx
And finally:
oc new-app nginx --name="nginx"
With nginx, everything works fine. Now, my problem:
I want to put Zookeeper on it, so I do the same steps as above. I also install jboss/base-jdk:7, which is a dependency of Zookeeper. The problem is:
docker push 172.30.222.111:5000/test/jboss/base-jdk:7
Giving :
[root@master 994089]# docker push 172.30.222.111:5000/test/jboss/base-jdk:7
The push refers to a repository [172.30.222.111:5000/test/jboss/base-jdk]
c4c6a9114a05: Layer already exists
3bf2c105669b: Layer already exists
85c6e373d858: Layer already exists
dc1e2dcdc7b6: Layer already exists
Received unexpected HTTP status: 500 Internal Server Error
The problem seems to be the "/" in jboss/base-jdk:7.
I also tried to push it like this:
docker push 172.30.222.111:5000/test/base-jdk:7
This works, but Zookeeper is looking for exactly jboss/base-jdk:7, not just base-jdk:7.
Finally, I'm blocked here when trying this command: oc new-app zookeeper --name="zookeeper" --loglevel=8 --insecure-registry --allow-missing-images
I0628 14:31:54.009713 53407 dockerimagelookup.go:92] checking local Docker daemon for "jboss/base-jdk:7"
I0628 14:31:54.030546 53407 dockerimagelookup.go:380] partial match on "172.30.222.111:5000/test/base-jdk:7" with 0.375000
I0628 14:31:54.030571 53407 dockerimagelookup.go:346] exact match on "jboss/base-jdk:7"
I0628 14:31:54.030578 53407 dockerimagelookup.go:107] Found local docker image match "172.30.222.111:5000/test/base-jdk:7" with score 0.375000
I0628 14:31:54.030589 53407 dockerimagelookup.go:107] Found local docker image match "jboss/base-jdk:7" with score 0.000000
I0628 14:31:54.032799 53407 componentresolvers.go:59] Error from resolver: [can't look up Docker image "jboss/base-jdk:7": Internal error occurred: Get http://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 10.253.158.90:53: no such host]
I0628 14:31:54.032831 53407 dockerimagelookup.go:169] Added missing image match for jboss/base-jdk:7
F0628 14:31:54.032882 53407 helpers.go:110] error: can't look up Docker image "jboss/base-jdk:7": Internal error occurred: Get http://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 10.253.158.90:53: no such host
We can see that 172.30.222.111:5000/test/base-jdk:7 is found, but it's not exactly what the command is looking for, so it doesn't use it.
So, if you have any idea how to solve this, let me know! :)
Resolved by upgrading to OpenShift 1.5.1; the previous version was 1.3.1.

How do I resolve Docker issues with ice login?

I am using the ice command-line interface for IBM Container Services, and I am seeing a couple of different problems on a couple of different boxes I am testing with. Here is one example:
[root@cds-legacy-monitor ~]# ice --verbose login --org chrisr@ca.ibm.com --space dev --user chrisr@ca.ibm.com --registry registry-ice.ng.bluemix.net
#2015-11-26 01:38:26.092288 - Namespace(api_key=None, api_url=None, cf=False, cloud=False, host=None, local=False, org='chrisr@ca.ibm.com', psswd=None, reg_host='registry-ice.ng.bluemix.net', skip_docker=False, space='dev', subparser_name='login', user='chrisr@ca.ibm.com', verbose=True)
#2015-11-26 01:38:26.092417 - Executing: cf login -u chrisr@ca.ibm.com -o chrisr@ca.ibm.com -s dev -a https://api.ng.bluemix.net
API endpoint: https://api.ng.bluemix.net
Password>
Authenticating...
OK
Targeted org chrisr@ca.ibm.com
Targeted space dev
API endpoint: https://api.ng.bluemix.net (API version: 2.40.0)
User: chrisr@ca.ibm.com
Org: chrisr@ca.ibm.com
Space: dev
#2015-11-26 01:38:32.186204 - cf exit level: 0
#2015-11-26 01:38:32.186340 - config.json path: /root/.cf/config.json
#2015-11-26 01:38:32.186640 - Bearer: <long string omitted>
#2015-11-26 01:38:32.186697 - cf login succeeded. Can access: https://api-ice.ng.bluemix.net/v3/containers
Authentication with container cloud service at https://api-ice.ng.bluemix.net/v3/containers completed successfully
You can issue commands now to the container service
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
#2015-11-26 01:38:32.187317 - using bearer token
#2015-11-26 01:38:32.187350 - config.json path: /root/.cf/config.json
#2015-11-26 01:38:32.187489 - Bearer: <long pw string omitted>
#2015-11-26 01:38:32.187517 - Org Guid: dae00d7c-1c3d-4bfd-a207-57a35a2fb42b
#2015-11-26 01:38:32.187551 - docker login -u bearer -p '<long pw string omitted>' -e a@b.c registry-ice.ng.bluemix.net
FATA[0012] Error response from daemon: </html>
#2015-11-26 01:38:44.689721 - docker call exit level: 256
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
#2015-11-26 01:38:44.689842 - Exit err level = 2
On the other box, it also fails, but the final error is slightly different.
#2015-11-26 01:44:48.916034 - docker login -u bearer -p '<long pw string omitted>' -e a@b.c registry-ice.ng.bluemix.net
Error response from daemon: Unexpected status code [502] : <html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
#2015-11-26 01:45:02.582753 - docker call exit level: 256
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
#2015-11-26 01:45:02.582868 - Exit err level = 2
Any thoughts on what might be causing these issues?
The errors refer to the same problem: ice isn't finding any Docker environment locally.
This doesn't prevent working remotely on Bluemix, but without a local Docker environment ice cannot work with local containers.
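A quick sanity check before retrying the login is to confirm the Docker client and daemon respond at all, for example:
docker version
docker ps
If either command fails, fixing the local Docker install (or the DOCKER_HOST variable, if set) should clear the "docker is not available" part of the error.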
