Bluemix cannot list docker images - docker

I've successfully logged in to the Bluemix container service via the command ice login, with the following output:
Authentication with container cloud service at https://api-ice.ng.bluemix.net/v1.0/containers completed successfully
You can issue commands now to the container service
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
Login Succeeded
ice ps -a works as well, but ice images fails with:
$ ice --verbose images
#2015-05-07 13:59:29.221306 - Namespace(cloud=False, local=False, subparser_name='images', verbose=True)
#2015-05-07 13:59:29.221370 - request url: https://api-ice.ng.bluemix.net/v1.0/containers/images/json
#2015-05-07 13:59:30.012412 - Return code: 404 Return reason: NOT FOUND
#2015-05-07 13:59:30.012439 - Req-ID: a382f2f79d54b157
#2015-05-07 13:59:30.012451 - Exit err level = 1
Here's the command-line tool version:
$ ice version
ICE CLI Version : 2.0.1 000 2015-03-26T19:51:27
Note that ice images worked last week.
Has anything changed on the server side?

Try logging in to ice with this:
ice login -a https://api.ng.bluemix.net -H https://api-ice.ng.bluemix.net/v2/containers -R registry-ice.ng.bluemix.net
This is what I get when running ice --verbose images
bash-3.2$ ice --verbose images
#2015-05-08 14:54:49.692386 - Namespace(cloud=False, local=False, subparser_name='images', verbose=True)
#2015-05-08 14:54:49.692455 - request url: https://api-ice.ng.bluemix.net/v2/containers/images/json
#2015-05-08 14:54:49.692466 - using bearer token and space id
#2015-05-08 14:54:49.692482 - config.json path: /Users/stanli/.cf/config.json
It seems that your ice command was pointing to v1 of the API.
-Stan

Related

Error Pulling image from ACR in Azure Function

I am trying to deploy an image from Azure Container Registry (ACR) to my Function App, but I am not able to do it.
I pushed the latest image from my PC to the ACR after creating it, and the admin user under Access keys is enabled. Please advise how to resolve this. The log output can be found below.
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2020-11-02T15:05:03.452Z INFO - Pulling image: myacrqa.azurecr.io/myimage:v1
2020-11-02T15:05:03.462Z ERROR - DockerApiException: Docker API responded with status code=InternalServerError, response={"message":"Get https://ifckpacrqa.azurecr.io/v2/: read tcp IP:17045->IP:443: read: connection reset by peer"}
2020-11-02T15:05:03.462Z ERROR - Pulling docker image myacrqa.azurecr.io/offlinekpqa:v1 failed:
2020-11-02T15:05:03.462Z INFO - Pulling image from Docker hub: myacrqa.azurecr.io/offlinekpqa:v1
2020-11-02T15:05:03.470Z ERROR - DockerApiException: Docker API responded with status code=InternalServerError, response={"message":"Get https://ifckpacrqa.azurecr.io/v2/: read tcp IP:17047->IP:443: read: connection reset by peer"}
2020-11-02T15:05:03.471Z ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
2020-11-02T15:05:08.847Z INFO - Stopping site ifc-kp-ml-qa because it failed during startup.
2020-11-02T15:10:01.023Z INFO - Starting container for site
2020-11-02T15:10:01.024Z INFO - docker run -d -p 8356:8081 --name ifc-kp-func_app_0_22558c6b_msiProxy -e WEBSITE_CORS_ALLOWED_ORIGINS=https://functions.azure.com,https://functions-staging.azure.com,https://functions-next.azure.com,https://storage.z13.web.core.windows.net -e WEBSITE_CORS_SUPPORT_CREDENTIALS=False -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=IFC-KP-ML-QA -e WEBSITE_AUTH_ENABLED=True -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=xxxx -e WEBSITE_INSTANCE_ID=65631c3af46c684539e2d9f55e37247be307daaa00f59cdf3231284117e30b40 appsvc/msitokenservice:2007200210
2020-11-02T15:10:01.025Z INFO - Logging is not enabled for this container.
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2020-11-02T15:10:04.362Z INFO - Pulling image: ifckpacrqa.azurecr.io/offlinekpqa:v1
2020-11-02T15:10:04.372Z ERROR - DockerApiException: Docker API responded with status code=InternalServerError, response={"message":"Get https://myacrqa.azurecr.io/v2/: read tcp IP:17489->IP:443: read: connection reset by peer"}
2020-11-02T15:10:04.373Z ERROR - Pulling docker image myacrqa.azurecr.io/myimage:v1 failed:
2020-11-02T15:10:04.373Z INFO - Pulling image from Docker hub: myacrqa.azurecr.io/myimage:v1
2020-11-02T15:10:04.398Z ERROR - DockerApiException: Docker API responded with status code=InternalServerError, response={"message":"Get https://ifckpacrqa.azurecr.io/v2/: read tcp 10.168.216.12:17491->52.168.114.2:443: read: connection reset by peer"}
2020-11-02T15:10:04.401Z ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
2020-11-02T15:10:09.853Z INFO - Stopping site ifc-kp-ml-qa because it failed during startup.
2020-11-02T15:15:02.120Z INFO - Starting container for site
2020-11-02T15:15:02.121Z INFO - docker run -d -p 7603:8081 --name ifc-kp-ml-qa_0_969b061e_msiProxy -e WEBSITE_CORS_ALLOWED_ORIGINS=https://functions.azure.com,https://functions-staging.azure.com,https://functions-next.azure.com,https://storage.z13.web.core.windows.net -e WEBSITE_CORS_SUPPORT_CREDENTIALS=False -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=IFC-KP-ML-QA -e WEBSITE_AUTH_ENABLED=True -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=ifc-kp-ml-qa.aseqa.ifc.org -e WEBSITE_INSTANCE_ID=65631c3af46c684539e2d9f55e37247be307daaa00f59cdf3231284117e30b40 appsvc/msitokenservice:2007200210
2020-11-02T15:15:02.122Z INFO - Logging is not enabled for this container.
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2020-11-02T15:15:04.469Z INFO - Pulling image: myacrqa.azurecr.io/myimage:v1
2020-11-02T15:15:04.479Z ERROR - DockerApiException: Docker API responded with status code=InternalServerError, response={"message":"Get https://myacrqa.azurecr.io/v2/: read tcp IP:17953->IP:443: read: connection reset by peer"}
2020-11-02T15:15:04.479Z ERROR - Pulling docker image myacrqa.azurecr.io/myimage:v1 failed:
2020-11-02T15:15:04.479Z INFO - Pulling image from Docker hub: myacrqa.azurecr.io/myimage:v1
2020-11-02T15:15:04.487Z ERROR - DockerApiException: Docker API responded with status code=InternalServerError, response={"message":"Get https://myacrqa.azurecr.io/v2/: read tcp IP:17955->IP:443: read: connection reset by peer"}
2020-11-02T15:15:04.490Z ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
2020-11-02T15:15:09.926Z INFO - Stopping site ifc-kp-ml-qa because it failed during startup.
To pull Docker images from ACR or another private registry, you need to set the registry environment variables (DOCKER_REGISTRY_SERVER_URL, DOCKER_REGISTRY_SERVER_USERNAME, DOCKER_REGISTRY_SERVER_PASSWORD).
You can set these in the Azure portal under the function app's settings, or use the Azure CLI command az functionapp create with the parameters below (see the sketch after the list):
--deployment-container-image-name
--docker-registry-server-password
--docker-registry-server-user
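A minimal sketch of the CLI route, assuming hypothetical resource names (my-func-app, my-rg, mystorageacct, my-premium-plan) and the admin credentials taken from the ACR's Access keys blade:
az functionapp create \
  --name my-func-app \
  --resource-group my-rg \
  --storage-account mystorageacct \
  --plan my-premium-plan \
  --deployment-container-image-name myacrqa.azurecr.io/myimage:v1 \
  --docker-registry-server-user <acr-admin-username> \
  --docker-registry-server-password <acr-admin-password>
For an existing app, az functionapp config container set accepts the same --docker-registry-server-* parameters.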

Airflow pull docker image from private Google container repository

I am using the https://github.com/puckel/docker-airflow image to run Airflow. I had to add pip install docker in order for it to support DockerOperator.
Everything seems OK, but I can't figure out how to pull an image from a private Google Docker container repository.
I tried adding a connection of type Google Cloud in the Admin section and running the DockerOperator as:
t2 = DockerOperator(
    task_id='docker_command',
    image='eu.gcr.io/project/image',
    api_version='2.3',
    auto_remove=True,
    command="/bin/sleep 30",
    docker_url="unix://var/run/docker.sock",
    network_mode="bridge",
    docker_conn_id="google_con"
)
But I always get an error:
[2019-11-05 14:12:51,162] {{taskinstance.py:1047}} ERROR - No Docker registry URL provided
I also tried the dockercfg_path option:
t2 = DockerOperator(
    task_id='docker_command',
    image='eu.gcr.io/project/image',
    api_version='2.3',
    auto_remove=True,
    command="/bin/sleep 30",
    docker_url="unix://var/run/docker.sock",
    network_mode="bridge",
    dockercfg_path="/usr/local/airflow/config.json",
)
I get the following error:
[2019-11-06 13:59:40,522] {{docker_operator.py:194}} INFO - Starting docker container from image eu.gcr.io/project/image
[2019-11-06 13:59:40,524] {{taskinstance.py:1047}} ERROR - ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
I also tried using only dockercfg_path="config.json" and got the same error.
I can't really use a BashOperator to try docker login, as it does not recognize the docker command:
t3 = BashOperator(
    task_id='print_hello',
    bash_command='docker login -u _json_key -p /usr/local/airflow/config.json eu.gcr.io'
)
line 1: docker: command not found
What am I missing?
airflow.hooks.docker_hook.DockerHook uses the docker_default connection when one isn't configured.
In your first attempt, you set google_con for docker_conn_id, and the error thrown shows that the host (i.e. the registry name) isn't configured.
Here are a couple of changes to make:
The image argument passed to DockerOperator should be set to the image name without the registry name prefixing it.
DockerOperator(
    api_version='1.21',
    # docker_url='tcp://localhost:2375', # Set your docker URL
    command='/bin/ls',
    image='image',
    network_mode='bridge',
    task_id='docker_op_tester',
    docker_conn_id='google_con',
    dag=dag,
    # added this to map to host path in macOS
    host_tmp_dir='/tmp',
    tmp_dir='/tmp',
)
Provide the registry name, username and password in your google_con connection for the underlying DockerHook to authenticate to the registry.
You can obtain long-lived credentials for authentication from a service account key. For the username, use _json_key, and in the password field paste the contents of the JSON key file.
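As a minimal sketch, assuming Airflow 1.10's CLI syntax (Airflow 2 uses airflow connections add) and a hypothetical key file path, the same connection can be created from the command line:
airflow connections --add \
    --conn_id=google_con \
    --conn_type=docker \
    --conn_host=eu.gcr.io/<your-project-id> \
    --conn_login=_json_key \
    --conn_password="$(cat /path/to/service-account-key.json)"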
Here are logs from running my task:
[2019-11-16 20:20:46,874] {base_task_runner.py:110} INFO - Job 443: Subtask docker_op_tester [2019-11-16 20:20:46,874] {dagbag.py:88} INFO - Filling up the DagBag from /Users/r7/OSS/airflow/airflow/example_dags/example_docker_operator.py
[2019-11-16 20:20:47,054] {base_task_runner.py:110} INFO - Job 443: Subtask docker_op_tester [2019-11-16 20:20:47,054] {cli.py:592} INFO - Running <TaskInstance: docker_sample.docker_op_tester 2019-11-14T00:00:00+00:00 [running]> on host 1.0.0.127.in-addr.arpa
[2019-11-16 20:20:47,074] {logging_mixin.py:89} INFO - [2019-11-16 20:20:47,074] {local_task_job.py:120} WARNING - Time since last heartbeat(0.01 s) < heartrate(5.0 s), sleeping for 4.989537 s
[2019-11-16 20:20:47,088] {logging_mixin.py:89} INFO - [2019-11-16 20:20:47,088] {base_hook.py:89} INFO - Using connection to: id: google_con. Host: gcr.io/<redacted-project-id>, Port: None, Schema: , Login: _json_key, Password: XXXXXXXX, extra: {}
[2019-11-16 20:20:48,404] {docker_operator.py:209} INFO - Starting docker container from image alpine
[2019-11-16 20:20:52,066] {logging_mixin.py:89} INFO - [2019-11-16 20:20:52,066] {local_task_job.py:99} INFO - Task exited with return code 0
I know the question is about GCR but it's worth noting that other container registries may expect the config in a different format.
For example, GitLab expects you to pass the fully qualified image name to the DAG and only put the GitLab container registry host name in the connection:
DockerOperator(
    task_id='docker_command',
    image='registry.gitlab.com/group/project/image:tag',
    api_version='auto',
    docker_conn_id='gitlab_registry',
)
Then set up your gitlab_registry connection like:
docker://gitlab+deploy-token-1234:ABDCtoken1234@registry.gitlab.com
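As an aside, a sketch using Airflow's standard AIRFLOW_CONN_<CONN_ID> environment-variable mechanism (same placeholder token as above), which supplies the connection without touching the metadata DB:
export AIRFLOW_CONN_GITLAB_REGISTRY='docker://gitlab+deploy-token-1234:ABDCtoken1234@registry.gitlab.com'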
Based on recent Cloud Composer documentation, it's recommended to use KubernetesPodOperator instead, like this:
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

KubernetesPodOperator(
    task_id='docker_op_tester',
    name='docker_op_tester',
    dag=dag,
    namespace="default",
    image="eu.gcr.io/project/image",
    cmds=["ls"],
)
Further to @Tamlyn's answer, we can also skip the creation of the connection (docker_conn_id) from Airflow and use it with GitLab as follows.
On your development machine:
https://gitlab.com/yourgroup/yourproject/-/settings/repository (create a deploy token here and note the login details)
docker login registry.gitlab.com (log in to Docker from the machine that will push the image; enter your GitLab credentials when prompted)
docker build -t registry.gitlab.com/yourgroup/yourproject . && docker push registry.gitlab.com/yourgroup/yourproject (builds and pushes the image to your project repo's container registry)
On your Airflow machine:
https://gitlab.com/yourgroup/yourproject/-/settings/repository (you can use the token created above for logging in)
docker login registry.gitlab.com (log in to Docker from the machine that will pull the image; this skips the need for creating a Docker registry connection. Enter your GitLab credentials when prompted; this generates the ~/.docker/config.json that is required, per the Docker docs.)
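To verify the login and image path before wiring up the DAG, a quick manual pull is a reasonable sanity check:
docker pull registry.gitlab.com/yourgroup/yourproject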
In your DAG:
dag = DAG(
    "dag_id",
    default_args=default_args,
    schedule_interval="15 1 * * *",
)

docker_trigger = DockerOperator(
    task_id="task_id",
    api_version="auto",
    network_mode="bridge",
    image="registry.gitlab.com/yourgroup/yourproject",
    auto_remove=True,  # use if required
    force_pull=True,  # use if required
    xcom_all=True,  # use if required
    # tty=True,  # turning this on screws up the log rendering
    # command="",  # use if required
    environment={  # use if required
        "envvar1": "envvar1value",
        "envvar2": "envvar2value",
    },
    dag=dag,
)
This works on Ubuntu 20.04.2 LTS (tried and tested) with Airflow installed directly on the instance.
You will need to install the Cloud SDK on your workstation, which includes the gcloud command-line tool, along with Docker version 18.03 or newer.
According to their documentation, to pull from Container Registry, use the command:
docker pull [HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG]
or
docker pull [HOSTNAME]/[PROJECT-ID]/[IMAGE]@[IMAGE_DIGEST]
where:
[HOSTNAME] is listed under Location in the console. It's one of four options: gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io.
[PROJECT-ID] is your Google Cloud Platform Console project ID.
[IMAGE] is the image's name in Container Registry.
[TAG] is the tag applied to the image. In a registry, tags are unique to an image.
[IMAGE_DIGEST] is the sha256 hash value of the image contents. In the console, click on the specific image to see its metadata. The digest is listed as the Image digest.
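For example, with hypothetical values filled in (the eu.gcr.io host, a project ID of my-project, and an image named my-image):
docker pull eu.gcr.io/my-project/my-image:latest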
To get the pull command for a specific image:
Click on the name of an image to go to the specific registry.
In the registry, check the box next to the version of the image that you want to pull.
Click SHOW PULL COMMAND on the top of the page.
Copy the pull command, which identifies the image using either the tag or the digest.
Also check that you have push and pull permissions on the registry, and that Docker is configured to use gcloud as a credential helper (or that you are using another authentication method). To use gcloud as the credential helper, run the command:
gcloud auth configure-docker
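After running it, the Docker client config should reference gcloud for the GCR hosts. A quick way to confirm the helper is in place (output abridged from a typical setup; your file may list additional registries):
cat ~/.docker/config.json
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud"
  }
}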

Installing Zookeeper on offline OpenShift

I have an OpenShift Origin cluster running offline on 3 CentOS 7 VMs. It's working fine; I have a registry where I push my images like this:
docker login -u <username> -e <any_email_address> -p <token_value> <registry_ip>:<port>
Login is successful, then :
oc tag <image-id> <docker-registry-IP>:<port>/<project-name>/<image>
So, for nginx for example :
oc tag 49011ce3b713 172.30.222.111:5000/test/nginx
Then I push it to the internal registry :
docker push 172.30.222.111:5000/test/nginx
And finally:
oc new-app nginx --name="nginx"
With nginx, everything works fine. Now my problem:
I want to put Zookeeper on it, so I follow the same steps as above. I also install "jboss/base-jdk:7", which is a dependency of Zookeeper. The problem is:
docker push 172.30.222.111:5000/test/jboss/base-jdk:7
Giving :
[root@master 994089]# docker push 172.30.222.111:5000/test/jboss/base-jdk:7
The push refers to a repository [172.30.222.111:5000/test/jboss/base-jdk]
c4c6a9114a05: Layer already exists
3bf2c105669b: Layer already exists
85c6e373d858: Layer already exists
dc1e2dcdc7b6: Layer already exists
Received unexpected HTTP status: 500 Internal Server Error
The problem seems to be the "/" in jboss/base-jdk:7.
I also tried to push it like this:
docker push 172.30.222.111:5000/test/base-jdk:7
This works, but Zookeeper is looking for exactly "jboss/base-jdk:7", not just "base-jdk:7".
Finally, I'm blocked here when trying this command: oc new-app zookeeper --name="zookeeper" --loglevel=8 --insecure-registry --allow-missing-images
I0628 14:31:54.009713 53407 dockerimagelookup.go:92] checking local Docker daemon for "jboss/base-jdk:7"
I0628 14:31:54.030546 53407 dockerimagelookup.go:380] partial match on "172.30.222.111:5000/test/base-jdk:7" with 0.375000
I0628 14:31:54.030571 53407 dockerimagelookup.go:346] exact match on "jboss/base-jdk:7"
I0628 14:31:54.030578 53407 dockerimagelookup.go:107] Found local docker image match "172.30.222.111:5000/test/base-jdk:7" with score 0.375000
I0628 14:31:54.030589 53407 dockerimagelookup.go:107] Found local docker image match "jboss/base-jdk:7" with score 0.000000
I0628 14:31:54.032799 53407 componentresolvers.go:59] Error from resolver: [can't look up Docker image "jboss/base-jdk:7": Internal error occurred: Get http://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 10.253.158.90:53: no such host]
I0628 14:31:54.032831 53407 dockerimagelookup.go:169] Added missing image match for jboss/base-jdk:7
F0628 14:31:54.032882 53407 helpers.go:110] error: can't look up Docker image "jboss/base-jdk:7": Internal error occurred: Get http://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 10.253.158.90:53: no such host
We can see that 172.30.222.111:5000/test/base-jdk:7 is found, but it's not exactly what the command is looking for, so it doesn't use it...
So, if you have any idea how to solve this! :)
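One hedged workaround sketch, untested: since the internal registry path is <registry>/<project>/<image>, a project literally named jboss would make the pushed path match the jboss/base-jdk:7 reference that Zookeeper expects (mirroring the tag-and-push flow above):
oc new-project jboss
oc tag 49011ce3b713 172.30.222.111:5000/jboss/base-jdk:7
docker push 172.30.222.111:5000/jboss/base-jdk:7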
Resolved by upgrading to OpenShift 1.5.1; the previous version was 1.3.1.

Push docker image to Google Container Registry failure on Mac

I was trying to upload my image to Google Container Registry, but it returned an error and I don't know how to troubleshoot it.
$> gcloud docker -- push asia.gcr.io/dtapi-1314/web
The push refers to a repository [asia.gcr.io/dtapi-1314/web]
53ccd4e59f47: Retrying in 1 second
32ca8635750d: Retrying in 1 second
e5363ba7dd4d: Retrying in 1 second
d575d439624a: Retrying in 1 second
5c1cba20b78d: Retrying in 1 second
7198e99c156d: Waiting
6ca37046de16: Waiting
b8f2f07b3eab: Waiting
16681562a534: Waiting
92ea1d98cb79: Waiting
97ca462ad9ee: Waiting
unable to decode token response: read tcp 10.0.2.10:54718->74.125.23.82:443: read: connection reset by peer
I checked permissions on my Mac.
$> gsutil acl get gs://asia.artifacts.dtapi-1314.appspot.com
It returned a list of correct permissions.
I tested pushing from the Cloud Console, and it worked.
Does anyone have a clue?
Thanks a lot if anyone could help. :)
Other troubleshooting I tried:
gcloud auth login
gcloud docker -- login -p $(gcloud auth print-access-token) -u _token https://asia.gcr.io
gsutil acl get gs://asia.artifacts.{%PROJECT_ID}.appspot.com
Adding insecure-registry to the dockerd startup command:
--insecure-registry asia.gcr.io
This might have the same cause:
gcloud docker -- pull google/python
The error was
Error response from daemon: Get https://registry-1.docker.io/v2/google/python/manifests/latest: read tcp 10.0.2.15:37762->52.45.33.149:443: read: connection reset by peer
Docker daemon log:
DEBU[0499] Increasing token expiration to: 60 seconds
ERRO[0500] Error trying v2 registry: Get https://registry-1.docker.io/....../python/manifests/latest: read tcp 10.0.2.15:37762->52.45.33.149:443: read: connection reset by peer
ERRO[0500] Attempting next endpoint for pull after error: Get https://registry-1.docker.io/....../python/manifests/latest: read tcp 10.0.2.15:37762->52.45.33.149:443: read: connection reset by peer
DEBU[0500] Skipping v1 endpoint https://index.docker.io because v2 registry was detected
ERRO[0500] Handler for POST /v1.24/images/create returned error: Get https://registry-1.docker.io/....../python/manifests/latest: read tcp 10.0.2.15:37762->52.45.33.149:443: read: connection reset by peer
Environment
MacOS: 10.11.6
Docker Toolbox (on Mac)
Docker 1.12.3 (Git commit: 6b644ec, Built: Wed Oct 26 23:26:11 2016)
The root cause was stupid, but I'd like to update this for anyone who sees this question: I found that when I attached my computer to the company's WiFi, it would work (still with some resets). The company's wired network is mysteriously broken to Google Container Registry; it works for all the other services we use (Google/YouTube/mobile services) but not for Google Container Registry.
Seems like a permission issue. Try running
gcloud auth login
I remember running into a similar issue and this helped.

How do I resolve docker issues with ice login?

I am using the ice command-line interface for IBM Container Services, and I am seeing a couple of different problems from a couple of different boxes I am testing with. Here is one example:
[root@cds-legacy-monitor ~]# ice --verbose login --org chrisr@ca.ibm.com --space dev --user chrisr@ca.ibm.com --registry registry-ice.ng.bluemix.net
#2015-11-26 01:38:26.092288 - Namespace(api_key=None, api_url=None, cf=False, cloud=False, host=None, local=False, org='chrisr@ca.ibm.com', psswd=None, reg_host='registry-ice.ng.bluemix.net', skip_docker=False, space='dev', subparser_name='login', user='chrisr@ca.ibm.com', verbose=True)
#2015-11-26 01:38:26.092417 - Executing: cf login -u chrisr@ca.ibm.com -o chrisr@ca.ibm.com -s dev -a https://api.ng.bluemix.net
API endpoint: https://api.ng.bluemix.net
Password>
Authenticating...
OK
Targeted org chrisr@ca.ibm.com
Targeted space dev
API endpoint: https://api.ng.bluemix.net (API version: 2.40.0)
User: chrisr@ca.ibm.com
Org: chrisr@ca.ibm.com
Space: dev
#2015-11-26 01:38:32.186204 - cf exit level: 0
#2015-11-26 01:38:32.186340 - config.json path: /root/.cf/config.json
#2015-11-26 01:38:32.186640 - Bearer: <long string omitted>
#2015-11-26 01:38:32.186697 - cf login succeeded. Can access: https://api-ice.ng.bluemix.net/v3/containers
Authentication with container cloud service at https://api-ice.ng.bluemix.net/v3/containers completed successfully
You can issue commands now to the container service
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
#2015-11-26 01:38:32.187317 - using bearer token
#2015-11-26 01:38:32.187350 - config.json path: /root/.cf/config.json
#2015-11-26 01:38:32.187489 - Bearer: <long pw string omitted>
#2015-11-26 01:38:32.187517 - Org Guid: dae00d7c-1c3d-4bfd-a207-57a35a2fb42b
#2015-11-26 01:38:32.187551 - docker login -u bearer -p '<long pw string omitted>' -e a@b.c registry-ice.ng.bluemix.net
FATA[0012] Error response from daemon: </html>
#2015-11-26 01:38:44.689721 - docker call exit level: 256
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
#2015-11-26 01:38:44.689842 - Exit err level = 2
On the other box, it also fails, but the final error is slightly different.
#2015-11-26 01:44:48.916034 - docker login -u bearer -p '<long pw string omitted>' -e a@b.c registry-ice.ng.bluemix.net
Error response from daemon: Unexpected status code [502] : <html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
#2015-11-26 01:45:02.582753 - docker call exit level: 256
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
#2015-11-26 01:45:02.582868 - Exit err level = 2
Any thoughts on what might be causing these issues?
The errors refer to the same problem: ice isn't finding any local Docker environment.
This doesn't prevent working remotely on Bluemix, but without a local Docker environment ice cannot work with local containers.
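As a rough local check (assuming a standard Docker install; the DOCKER_HOST value below is a placeholder), confirm the Docker CLI can reach a daemon before running ice login:
docker version    # should report both Client and Server sections
# if the daemon runs elsewhere, point the client at it first:
export DOCKER_HOST=tcp://<docker-host>:2375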
