Error creating Docker container in Bluemix

To create a Docker container in Bluemix we need to install the container plug-in and container extension. After installing the container extension, Docker should be running, but it shows the following error:
[root@oc0608248400 Desktop]# cf ic login
** Retrieving client certificates from IBM Containers
** Storing client certificates in /root/.ice/certs
Successfully retrieved client certificates
** Checking local docker configuration
Not OK
Docker local daemon may not be running. You can still run IBM Containers on the cloud
There are two ways to use the CLI with IBM Containers:
Option 1) This option allows you to use `cf ic` for managing containers on IBM Containers while still using the docker CLI directly to manage your local docker host.
Leverage this Cloud Foundry IBM Containers plugin without affecting the local docker environment:
Example Usage:
cf ic ps
cf ic images
Option 2) Leverage the docker CLI directly. In this shell, override local docker environment to connect to IBM Containers by setting these variables, copy and paste the following:
Notice: only commands with an asterisk(*) are supported within this option
export DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
export DOCKER_CERT_PATH=/root/.ice/certs
export DOCKER_TLS_VERIFY=1
Example Usage:
docker ps
docker images
exec: "docker": executable file not found in $PATH
Please suggest what I should do next.

The error is already telling you what to do:
exec: "docker": executable file not found in $PATH
means the docker executable cannot be found.
Thus the following should tell you where it is located, and that directory then needs to be appended to the PATH environment variable:
dockerpath=$(dirname "$(find / -name docker -type f -perm /a+x 2>/dev/null | head -n 1)")
export PATH="$PATH:$dockerpath"
What this does is search the filesystem from the root for a file named 'docker' with the executable bit set (ignoring error messages), take the directory of the first match as $dockerpath, and export the extended PATH for the current shell session.
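A quick way to confirm the fix and, optionally, persist it for future shells (a minimal sketch assuming a bash shell; ~/.bashrc is the usual per-user startup file but may differ on your system):
which docker                                             # should now print the docker binary found above
docker version                                           # confirms the client runs (the daemon may still be down)
echo "export PATH=\"\$PATH:$dockerpath\"" >> ~/.bashrc   # optional: persist the PATH change for future shells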

The problem seems to be that your docker daemon isn't running.
Try restarting it:
sudo service docker restart
If you've just installed docker you may need to reboot your machine first.
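On a systemd-based distribution, checking and starting the daemon explicitly is often more informative than a blind restart (a minimal sketch; the service name is assumed to be docker):
sudo systemctl status docker    # is the daemon running? the log excerpt here often shows why it failed
sudo systemctl start docker     # start it if it is stopped
sudo systemctl enable docker    # optional: start it automatically on boot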

Related

Jenkins Docker plugin volume/mount what syntax to use

I have a Linux VM on which I installed Docker. I have several Docker containers with the different programs I have to use. Here's my architecture (diagram not shown):
Everything is working fine except for the part in the red box of that diagram.
What I am trying to do is dynamically provision a Jenkins docker-in-docker agent with the cloud functionality, in order to build my Docker images and push them to the Docker registry I set up.
I have been looking for documentation to create a docker in docker container and I found this:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
This article states that in order to avoid problems with my main docker installation I have to create a volume:
-v /var/run/docker.sock:/var/run/docker.sock
I tested my image locally and had no problem running:
docker run -d --name test -v /var/run/docker.sock:/var/run/docker.sock <my-image>
docker exec -it test /bin/bash
docker run hello-world
The container is using the linux vm docker installation to build and run the docker images so everything is fine.
However, I face problems when it comes to the Jenkins docker cloud configuration.
From what I gather, since the #826 build, the docker Jenkins plugin has changed its syntax for volumes.
This is the configuration I tried (screenshot not shown):
And this is the error message I get when trying to launch the agent:
Reason: Template provisioning failed.
com.github.dockerjava.api.exception.BadRequestException: {"message":"create
/var/run/docker.sock: \"/var/run/docker.sock\" includes invalid characters for a local
volume name, only \"[a-zA-Z0-9][a-zA-Z0-9_.-]\" are allowed. If you intended to pass a
host directory, use absolute path"}
I also tried another configuration (screenshot not shown), which gives:
Reason: Template provisioning failed.
com.github.dockerjava.api.exception.BadRequestException: {"message":"invalid mount config for type \"volume\": invalid mount path: './var/run/docker.sock' mount path must be absolute"}
I do not get what that means, as on my Linux VM the absolute path of docker.sock is /var/run/docker.sock, and it is the same path inside the docker-in-docker container I ran locally...
I tried to check the source code to find what I did wrong, but it's unclear to me what the code is doing (https://github.com/jenkinsci/docker-plugin/blob/master/src/main/java/com/nirima/jenkins/plugins/docker/DockerTemplateBase.java, from line 884 onward). I also tried with backslashes, etc. Nothing worked.
Has anyone any idea what the expected syntax is in that configuration panel for setting up a simple volume?
Change the configuration to this:
type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock
It is not a volume; it is a bind mount.
This worked for me:
type=bind,source=/sys/fs/cgroup,target=/sys/fs/cgroup,readonly
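The plugin's mount string appears to follow essentially the same syntax as the Docker CLI's --mount flag, so you can sanity-check the bind locally before putting it into the Jenkins configuration (a sketch; my-jenkins-agent is a placeholder for an image that has the docker CLI installed):
docker run -d --name agent-test \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  my-jenkins-agent
docker exec -it agent-test docker ps   # the agent sees the host's containers through the mounted socket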

docker: error during connect: In the default daemon configuration on Windows, the docker client must be run with elevated privileges to connect

I'm trying to run Docker in process isolation mode on Windows Server 2019 (Docker Desktop does not work here, my VPS does not support Hyper-V).
I run this in PowerShell (all in Administrator mode)
docker run -it --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd.exe /c ping 127.0.0.1 -t
Then I get error:
docker: error during connect: In the default daemon configuration on Windows, the docker client must be run with elevated privileges to connect.: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/create: open //./pipe/docker_engine: The system cannot find the file specified.
See 'docker run --help'.
I ran the command & 'C:\Program Files\Docker\DockerCli.exe' -SwitchDaemon, as suggested here: Docker cannot start on Windows.
However, DockerCli.exe does not exist in a clean Docker install (screenshot not shown).
As suggested here, I tried copying the file DockerCli.exe from my local Windows 10 Docker Desktop installation and reran, but then I get:
Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'Docker.Core, Version=3.0.0.50646, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.
at Docker.Cli.MainBackendCli.Run(IReadOnlyCollection`1 args)
at Docker.Cli.MainBackendCli.Main(String[] args)
Regardless, copying files from Docker Desktop does not feel like the right approach.
I then ran dockerd in PowerShell, since that's the only other executable in that folder (output not shown).
Since I'm a newbie, I'm not sure whether I just started a container and, if so, which one. I only see start., but I have no idea where that comes from or how I can configure it.
UPDATE 1
Based on Peter Wishart's suggestion I tried uninstall-Package -Name docker, but it fails with "No package found for 'docker'". Here's the full transcript of what I tried:
PS C:\Users\Administrator> uninstall-Package -Name docker
uninstall-Package : No package found for 'docker'.
At line:1 char:1
+ uninstall-Package -Name docker
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Microsoft.Power...ninstallPackage:UninstallPackage) [Uninstall-Package]
, Exception
+ FullyQualifiedErrorId : NoMatchFound,Microsoft.PowerShell.PackageManagement.Cmdlets.UninstallPackage
PS C:\Users\Administrator> docker
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
Options:
--config string Location of client config files (default
"C:\\Users\\Administrator\\.docker")
-c, --context string Name of the context to use to connect to the
daemon (overrides DOCKER_HOST env var and
default context set with "docker context use")
-D, --debug Enable debug mode
-H, --host list Daemon socket(s) to connect to
-l, --log-level string Set the logging level
("debug"|"info"|"warn"|"error"|"fatal")
(default "info")
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA (default
"C:\\Users\\Administrator\\.docker\\ca.pem")
--tlscert string Path to TLS certificate file (default
"C:\\Users\\Administrator\\.docker\\cert.pem")
--tlskey string Path to TLS key file (default
"C:\\Users\\Administrator\\.docker\\key.pem")
--tlsverify Use TLS and verify the remote
-v, --version Print version information and quit
Management Commands:
app* Docker Application (Docker Inc., v0.8.0)
builder Manage builds
cluster* Manage Mirantis Container Cloud clusters (Mirantis Inc., v1.9.0)
config Manage Docker configs
container Manage containers
context Manage contexts
image Manage images
manifest Manage Docker image manifests and manifest lists
network Manage networks
node Manage Swarm nodes
plugin Manage plugins
registry* Manage Docker registries (Docker Inc., 0.1.0)
secret Manage Docker secrets
service Manage services
stack Manage Docker stacks
swarm Manage Swarm
system Manage Docker
trust Manage trust on Docker images
volume Manage volumes
Commands:
attach Attach local standard input, output, and error streams to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
login Log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save one or more images to a tar archive (streamed to STDOUT by default)
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
version Show the Docker version information
wait Block until one or more containers stop, then print their exit codes
Run 'docker COMMAND --help' for more information on a command.
To get more help with docker, check out our guides at https://docs.docker.com/go/guides/
PS C:\Users\Administrator> Get-PackageProvider -ListAvailable
Name Version DynamicOptions
---- ------- --------------
DockerMsftProvider 1.0.0.8 Update
msi 3.0.0.0 AdditionalArguments
msu 3.0.0.0
NuGet 2.8.5.208 Destination, ExcludeVersion, Scope, SkipDependencies, Headers, FilterOnTag...
PowerShellGet 1.0.0.1 PackageManagementProvider, Type, Scope, AllowClobber, SkipPublisherCheck, ...
Programs 3.0.0.0 IncludeWindowsInstaller, IncludeSystemComponent
PS C:\Users\Administrator> Get-Package -Name Docker -ProviderName DockerMsftProvider
Name Version Source ProviderName
---- ------- ------ ------------
docker 20.10.0 DockerDefault DockerMsftProvider
PS C:\Users\Administrator> Install-Package -Name docker -ProviderName DockerMsftProvider
The package(s) come(s) from a package source that is not marked as trusted.
Are you sure you want to install software from 'DockerDefault'?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"): y
PS C:\Users\Administrator> Install-Package -Name docker -ProviderName DockerMsftProvider
The package(s) come(s) from a package source that is not marked as trusted.
Are you sure you want to install software from 'DockerDefault'?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"): A
PS C:\Users\Administrator> uninstall-Package -Name docker
WARNING: Docker Service is not available.
uninstall-Package : The property 'Status' cannot be found on this object. Verify that the property exists.
At line:1 char:1
+ uninstall-Package -Name docker
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (Microsoft.Power...ninstallPackage:UninstallPackage) [Uninstall-Package],
Exception
+ FullyQualifiedErrorId : PropertyNotFoundStrict,Microsoft.PowerShell.PackageManagement.Cmdlets.UninstallPackage
PS C:\Users\Administrator>
The pipe access issue that the error message mentions is a (probably unrelated) problem that occurs when the docker client is run by non-admin users (see here).
I think the most likely explanation is that the docker service has failed to start.
When you ran dockerd you were actually starting an instance of the daemon, and the line API listen on //./pipe/docker_engine means that the system service hadn't started previously, since the instance you started was able to create the pipe.
If you stop the running dockerd instance and run:
Get-Service docker | Restart-Service
Get-WinEvent -logname application | where ProviderName -eq docker | sort TimeCreated
You should be able to compare the log output with your manual start of dockerd, and see if any errors are blocking the service from starting.
If the event log records API listen on //./pipe/docker_engine then Get-Service docker should show the service as running, and your docker commands should be ok.
[Edit]
Looks like the uninstall of docker was failing because the service doesn't exist.
Yet, the install is succeeding except for the service installation.
You can re-register the service with &'C:\Program Files\Docker\dockerd.exe' --register-service
Maybe this will fail if the VPS provider is somehow stopping services from being registered?
Another option is to run docker interactively in one shell with &'C:\Program Files\Docker\dockerd.exe' --run-service, and run your docker commands in another shell.
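Putting those suggestions together, a typical recovery sequence in an elevated PowerShell looks like this (a sketch; the path assumes the default install location used by the DockerMsftProvider package):
& 'C:\Program Files\Docker\dockerd.exe' --register-service   # re-register dockerd as a Windows service
Start-Service docker                                         # start the freshly registered service
Get-Service docker                                           # should now report Status Running
docker version                                               # both client and engine sections should respond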
To resolve the issue, I just ran & 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon in PowerShell, quit Docker Desktop, and ran Docker Desktop again as administrator.
Now open Command Prompt or PowerShell and run docker images or docker ps. It should work.
Go to PowerShell > run as administrator and run this code:
cd "C:\Program Files\Docker\Docker"
./DockerCli.exe -SwitchDaemon
To resolve the issue, I just ran two steps in PowerShell.
First,
cd 'C:\Program Files\Docker\Docker'
and then,
.\DockerCli.exe -SwitchDaemon
After that, install the WSL kernel on your machine and restart the WSL machine; then it is resolved.
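A minimal sketch of those WSL steps from an elevated PowerShell, assuming WSL 2 is already enabled on the machine:
wsl --update     # install or update the WSL kernel
wsl --shutdown   # stop the WSL VM so Docker Desktop can restart it with the new kernel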
This solution helped me
https://github.com/docker/for-win/issues/5919#issuecomment-658815006
cd "C:\Program Files\Docker\Docker\resources"
.\dockerd.exe
Now execute docker version in a new PowerShell window running as administrator.
OK, so I followed two steps and my problem was resolved:
Update the WSL kernel. Seemingly it is a must, because Docker needs it and it is not installed by default. So just close all Docker-related stuff and run the following command in cmd:
wsl --update
Then I did what others have suggested here:
C:\Program Files\Docker\Docker\DockerCli.exe -SwitchDaemon
and voila!
My solution was removing the latest version and installing 4.9.1.
One other thing to check is whether you have started the Desktop Client for Windows and accepted the License Agreement. I had restored from an image that hadn't had Docker started and was seeing this error. Nothing worked until I accepted the license.
I checked these boxes and it worked for me (screenshot not shown).

dockerd --max-concurrent-downloads 1 command not found [duplicate]

I'm working with a poor internet connection and trying to pull and run an image.
I wanted to download one layer at a time, and per the documentation tried adding a flag, --max-concurrent-downloads, like so:
docker run --rm -p 8787:8787 -e PASSWORD=blah --max-concurrent-downloads=1 rocker/verse
But this gives an error:
unknown flag: --max-concurrent-downloads See 'docker run --help'.
I tried typing docker run --help and, interestingly, did not see the option --max-concurrent-downloads.
I'm using Docker Toolbox since I'm on an old Mac.
In the documentation linked here there's an option for --max-concurrent-downloads; however, it doesn't appear in my terminal when typing docker run --help.
How can I change the default of downloading 3 layers at a time to just one?
From the official documentation (https://docs.docker.com/engine/reference/commandline/pull/#concurrent-downloads):
The --max-concurrent-downloads setting controls how many layers are downloaded in parallel during a pull operation.
You can set --max-concurrent-downloads with the dockerd command.
If you're using the Docker Desktop GUI for Mac or Windows, you can edit the daemon configuration JSON directly in the Docker Engine settings (screenshot not shown):
This setting needs to be passed to dockerd when starting the daemon, not to the docker client CLI. The dockerd process is running inside of a VM with docker-machine (and other docker desktop environments).
With docker-machine, which is used in Toolbox, you typically pass the engine flags on the docker-machine create command line, e.g.
docker-machine create --engine-opt max-concurrent-downloads=1
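A fuller form of that command, assuming the VirtualBox driver that Toolbox normally uses and a machine named default (both are assumptions; adjust to your setup):
docker-machine create --driver virtualbox \
  --engine-opt max-concurrent-downloads=1 \
  default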
Once you have a created machine, you can follow the steps from these answers to modify the config of an already running machine, mainly:
SSH into your local docker VM (note: if 'default' is not the name of your docker machine, substitute your machine name):
docker-machine ssh default
Open the Docker profile:
sudo vi /var/lib/boot2docker/profile
Then in that profile, add your max-concurrent-downloads=1 engine option, as sketched below.
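Inside that profile, daemon flags typically go into the EXTRA_ARGS variable; a sketch of what the relevant lines might look like (your file may already contain other flags, which should be kept):
EXTRA_ARGS='
--label provider=virtualbox
--max-concurrent-downloads=1
'
Afterwards restart the daemon inside the VM, for example with sudo /etc/init.d/docker restart, or run docker-machine restart default from the host.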
Newer versions of Docker Desktop make this much easier with a configuration menu (Daemon -> Advanced) where you can specify your daemon.json entries; on a Linux install the same entries go into /etc/docker/daemon.json:
{
"max-concurrent-downloads": 1
}
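On Linux the daemon also has to be restarted for a daemon.json change to take effect; a minimal sketch, assuming a systemd-based install and no pre-existing /etc/docker/daemon.json (if one exists, merge the key by hand instead of overwriting):
echo '{ "max-concurrent-downloads": 1 }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker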

How to list the published container images in the Google Container Registry using gcloud or another CLI

Is there a gcloud API or other command line interface (CLI) to access the list of published container images in the private Google Container Registry? (That is the container registry inside a Google Cloud Platform project)
gcloud container does not seem to help:
$ gcloud container
Usage: gcloud container [optional flags] <group | command>
group may be clusters | operations
command may be get-server-config
Deploy and manage clusters of machines for running containers.
flags:
--zone ZONE, -z ZONE The compute zone (e.g. us-central1-a) for the cluster
global flags:
Run `gcloud -h` for a description of flags available to all commands.
command groups:
clusters Deploy and teardown Google Container Engine clusters.
operations Get and list operations for Google Container Engine
clusters.
commands:
get-server-config Get Container Engine server config.
I also don't want to use gcloud docker to list images because this wants to connect to a particular docker daemon that I don't have. Unless there is a way to tell gcloud docker to connect to a remote public docker daemon that can read the private containers pushed to the registry through my project.
We just released a new command to list the images in your repository! You can try it out with:
gcloud alpha container images list --repository=gcr.io/$MYREPOSITORY
If you want to see the specific tags for an image you can use:
gcloud alpha container images list-tags gcr.io/$MYREPOSITORY/$MYIMAGE
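If you only need to know whether a specific tag exists, the usual gcloud list flags can narrow the output (a sketch; --filter and --format are the standard gcloud list options, and v1.0.0 is a placeholder tag):
gcloud alpha container images list-tags "gcr.io/$MYREPOSITORY/$MYIMAGE" \
  --filter="tags:v1.0.0" --format="get(digest)"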
The answer given by Robert Bailey is good for certain tasks, but might be missing what you specifically want to do. Nonetheless, the failures you describe in your comments on his answer are not so much faults of the answer as misunderstandings of what the "failing" commands actually do.
As far as your second comment,
Using docker I get the following error (for the reasons mentioned
above; I also edited the question): Cannot connect to the Docker daemon. Is the docker daemon running on this host?
This is a result of the docker daemon not running. Check if it's running via ps aux | grep docker. You can refer to the Docker documentation to determine how to properly install and run it.
As far as your first comment,
Using curl I get: {"errors":[{"code":"DENIED","message":"Failed to read tags for repository '<my_project>/<my_image>'"}]}. I have to
authenticate somehow to access the images in a private registry. I
don't want to use docker because that means I have to have a docker
daemon available. I only want to see if a container image with a
particular version is in the Container Registry. So what I need is an
API to the Container Registry in the Google Developer Console.
You wouldn't be able to curl the image unless it was public, as mentioned in Robert's latest comment, or unless you somehow provided the appropriate OAuth headers during the curl invocation.
You should use gcloud docker to attempt to list the images in the registry, as you would for other docker registries. The gcloud container command group is the wrong one for your desired task. You can see below an output from gcloud version 96.0.0 (latest as of this comment) for the docker command group:
$ gcloud docker
Usage: docker [OPTIONS] COMMAND [arg...]
docker daemon [ --help | ... ]
docker [ --help | -v | --version ]
A self-sufficient runtime for containers.
Options:
--config=~/.docker Location of client config files
-D, --debug=false Enable debug mode
--disable-legacy-registry=false Do not contact legacy registries
-H, --host=[] Daemon socket(s) to connect to
-h, --help=false Print usage
-l, --log-level=info Set the logging level
--tls=false Use TLS; implied by --tlsverify
--tlscacert=~/.docker/ca.pem Trust certs signed only by this CA
--tlscert=~/.docker/cert.pem Path to TLS certificate file
--tlskey=~/.docker/key.pem Path to TLS key file
--tlsverify=false Use TLS and verify the remote
-v, --version=false Print version information and quit
Commands:
attach Attach to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on a container or image
kill Kill a running container
load Load an image from a tar archive or STDIN
login Register or log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
network Manage Docker networks
pause Pause all processes within a container
port List port mappings or a specific mapping for the CONTAINER
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart a container
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save an image(s) to a tar archive
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop a running container
tag Tag an image into a repository
top Display the running processes of a container
unpause Unpause all processes within a container
version Show the Docker version information
volume Manage Docker volumes
wait Block until a container stops, then print its exit code
Run 'docker COMMAND --help' for more information on a command.
You should use gcloud docker search gcr.io/project-id to check which images are in the repository. gcloud has your credentials, so it can talk to the private registry as long as you're authenticated as an appropriate user on the project.
Finally, as an added resource: The Cloud Platform docs have a whole article about working with Google Container Registry.
If you know the project that is hosting the images (e.g. google-containers) you can list images with
gcloud docker search gcr.io/google_containers
For an individual image (e.g. the pause image in the google-containers project), you can check the versions with
curl https://gcr.io/v2/google-containers/pause/tags/list
I've just found a far simpler way to check for specific images. Once you have authenticated gcloud, use it to generate access tokens for reading from your private registry:
curl -u "oauth2accesstoken:$(gcloud auth print-access-token)" https://gcr.io/v2/<projectName>/<imageName>/tags/list
My best solution so far, without having a local docker daemon available and without being able to connect to a remote one (which would still require at least the local docker client, though not a running local daemon), is to SSH into a Container Cluster instance that runs docker, do the search there, and get the result back in my original script:
gcloud compute ssh <container_cluster_instance> -C "sudo gcloud docker search ..."
Of course, to avoid all verbose output (like SSH/terminal welcome messages), I use some arguments to silence the execution a bit:
gcloud compute ssh --ssh-flag="-q" "$INSTANCE_NAME" -o LogLevel=quiet -C "sudo gcloud docker search ..."
