docker pull or docker run doesn't actually do anything

I'm trying to run something called Traildash via its Docker container on a VM via Chef (once I get it running I'll move it to an AWS instance). So I've installed Docker onto the VM, and I tell Chef to run
docker run -i -d -p 80:80 \
appliedtrust/traildash
or even
docker pull appliedtrust/traildash
on the VM and all it does is:
Unable to find image 'appliedtrust/traildash' locally
Pulling repository appliedtrust/traildash
2015/03/16 12:40:38 Get https://index.docker.io/v1/repositories/appliedtrust/traildash/images: x509: certificate is valid for ssl7302.cloudflare.com, *.archeagemall.com, *.astrubbank.com, *.billhr2847.com, *.dallasjuniorforum.org, *.goudportal.nl, *.habbinfo.info, *.hoistandcrane.com, *.jlfresno.org, *.jlknoxville.org, *.jlsantabarbara.org, *.jlwichita.org, *.jrleagueabilene.com, *.okaygoods.com, *.pbajf.org, *.stansberryonline.com, *.unfairmovie.com, *.usepnd.com, *.vaccineinjuryhelpcenter.com, archeagemall.com, astrubbank.com, billhr2847.com, dallasjuniorforum.org, goudportal.nl, habbinfo.info, hoistandcrane.com, jlfresno.org, jlknoxville.org, jlsantabarbara.org, jlwichita.org, jrleagueabilene.com, okaygoods.com, pbajf.org, stansberryonline.com, unfairmovie.com, usepnd.com, vaccineinjuryhelpcenter.com, not index.docker.io
and then nothing: the container doesn't actually start, and I don't see any files pulled (unless Docker pulls the files into a different directory?).
What do I do to get this running?

You're doing everything right. But if you're running it outside of EC2 (where an IAM role would be set up), you have to explicitly pass AWS credentials and, optionally, other parameters. For more information, take a look at https://github.com/AppliedTrust/traildash#quickstart
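As a minimal sketch, assuming the environment variable names shown in the linked quickstart (the placeholder values are yours to fill in), that would look something like:
docker run -i -d -p 80:80 \
  -e "AWS_ACCESS_KEY_ID=<your_access_key_id>" \
  -e "AWS_SECRET_ACCESS_KEY=<your_secret_access_key>" \
  -e "AWS_SQS_URL=<your_traildash_sqs_url>" \
  appliedtrust/traildash
Once the container is moved to an EC2 instance with an IAM role attached, the two credential variables should no longer be needed.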

Related

Problem running docker image from my own public repo from docker hub

I have a repo in Docker Hub named shaktidocker and it is public. I have an image in the repo.
When I try to run that image from my local Docker development host using the following command:
docker run -P -d shaktidocker/docker-spring-boot-demo
It gives me the error below:
e75c891fa5403b0bb6ed1aa3b5e6a6760d4707219ecaff22727632cca741fa25
/usr/bin/docker-current: Error response from daemon: linux spec user:
unable to find user shaktidocker: no matching entries in passwd file.
When I try to run a different image from a different public repo, it works perfectly fine.
Please advise.
The Dockerfile you used most likely contains the line:
USER shaktidocker
This line defines the Linux user that runs commands inside the container, not your user ID on Docker Hub. Most likely you want to delete this line from your Dockerfile, then rebuild, push, and pull your image before trying to run it again.
It looks like, for some reason, the repository name is being used as the default username when the container starts. That username does not exist in the underlying system, hence the container cannot start.
You can try to define a user with which you will start the image:
docker run -P -d --user nobody shaktidocker/docker-spring-boot-demo
This way you should be able to start your container.
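If you want to confirm which user the image is configured to run as before changing anything, a quick check with docker inspect (using its standard Go-template syntax) is:
docker inspect --format '{{.Config.User}}' shaktidocker/docker-spring-boot-demo
If this prints shaktidocker, the USER line described above is the culprit; an empty result means the image defaults to root.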

Docker IBM WebSphere Base 9 for Windows - admin console not working (Docker on Windows 10)

I have built a Docker image of IBM WAS 9 Base for Windows. My image is named was9_new. After the image is successfully built, I use the docker run command as follows:
docker run --name was_test -h was_test -p 9043:9043 -p 9443:9443 -d was9_new
It outputs a container ID and then exits.
After that when I try to open the admin console -
https://localhost:9043/ibm/console/login.do?action=secure
I get an error
This site cannot be reached
localhost refused to connect
Is it because the docker run command exits after outputting a container ID?
Or does something else need to be done to make the admin console work?
I have referred to the instructions here: https://hub.docker.com/r/ibmcom/websphere-traditional/
The only difference is that I have built my own image for Windows.
Printing the container ID and returning to the shell is normal behavior, because you specified -d, which runs the container in the background. You should be able to see your container with docker ps.
How long after startup did you wait before trying to access the admin console? WAS Base can take several minutes to start, depending on system load and other factors; Docker printing the ID only means the container was created, not that it has finished initializing.
Check that 9043 is the adminhost_secure port, or try using just http:// instead of https:// in the admin console URL.
Can you enter the container with docker exec -it was_test bash and attempt to access the URL from within the container (wget https://localhost:9043/ibm/console)? If you get a message about not trusting the certificate, the server is accepting connections, but for some reason Docker isn't forwarding your browser's requests into the container.
These steps should help you narrow down whether it is WAS, or docker, that is not cooperating.
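A short diagnostic sequence along those lines (container name taken from the question) might look like:
# Is the container still running, or did it exit immediately?
docker ps -a --filter "name=was_test"
# If it exited, the logs usually say why; if it is running,
# follow them until WAS reports it has finished starting
docker logs -f was_test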

Jenkins not getting launched after docker start

I started my Jenkins Docker container that I had saved earlier.
docker start -ai <my_container_ID>
I can see in the console that Jenkins has started, but it doesn't come up in the browser:
(screenshot)
The first time, I had started it using the docker run command, after which Jenkins came up in the browser; I also added some jobs to it and then did docker commit.
Any help would be appreciated!
From your comments I see that you started a new container from the Jenkins image, made some changes, and then ran docker commit to create a new image based on that.
In order to run a new container from that image, you need to use docker run with the image hash returned by the docker commit command. Example:
$ docker commit cc79f8ec407d #hash of the container you want to commit
sha256:227efd2e30a9033e6ce288084c6452aa5a5112974ea833b559429a9ae78697a8 # new image hash return by docker commit
$ docker run 227efd2e30a9033e6ce288084c6452aa5a5112974ea833b559429a9ae78697a8 # hash of the new image
But bear in mind that when you run a new container from that image, Jenkins might not treat it as a new installation, because the initialization process was already done before the committing.
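A sketch of the same flow using a named tag instead of a raw hash (the tag and container name here are made up), publishing Jenkins' default UI port so the browser can reach it:
# Commit the stopped container to a named, tagged image
docker commit <my_container_ID> my-jenkins:snapshot
# Run it with the UI port published (Jenkins listens on 8080 by default)
docker run -d -p 8080:8080 --name jenkins-snapshot my-jenkins:snapshot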
The easiest way is to point your browser at the IP address of the virtual machine that hosts Docker.
You could monkey around with ifconfig or ipconfig to figure it out, but thankfully Docker Toolbox comes with a handy command-line option via docker-machine:
docker-machine ip default
That's the IP of your host, and it's where your web services will be listening! Granted, this IP is on your local machine and not externally reachable; if you want outside services to hit your machine, you'll need to set up port forwarding.
Now try docker run -p 8080:8080 --name=jenkins-master jenkins
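Putting the two together (the address shown is just an example of what docker-machine typically reports):
docker-machine ip default    # e.g. 192.168.99.100
# then browse to http://<that-ip>:8080 for the Jenkins UI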

Jenkins mesosphere/jenkins-dind:0.3.1 and proxy

All,
I am using DCOS and the associated Jenkins.
My company has a proxy for any external traffic.
Jenkins is running properly and can access the internal network as well as any external network.
I can get jobs to curl a URL on the internet if I set the HTTP proxy. I can pass this proxy to the mesosphere/jenkins-dind:0.3.1 container as an environment variable; however, I can't run any docker pull or docker run while in docker-in-docker mode.
I managed to reproduce the issue on one of the agent box.
sudo docker run hello-world
Hello from Docker!
This works!!
However, sudo docker run --privileged mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world" will fail with
docker: Error while pulling image: Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate is valid for FG3K6C3A13800607, not index.docker.io.
This typically shows that the Docker daemon does not have access to the proxy.
Do you know how to ensure that the docker-in-docker daemon gets access to the proxy settings?
Antoine
This error can also manifest itself if the Docker daemon is unauthenticated against your registry, but it looks like you're running against the public image, so that's not likely to be the problem.
You could try creating a new Parameter on the Jenkins node (see the instructions here for an example of how to set an environment variable called DOCKER_EXTRA_OPTS: https://docs.mesosphere.com/1.8/usage/service-guides/jenkins/advanced-configuration/).
In this case, we want to do the same (with Name env) but with the contents of Value set to something like HTTP_PROXY=http://proxy.example.com:80/.
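At the Docker level, that amounts to something like the following (proxy URL copied from above; whether the image's wrapper script forwards these variables to the inner daemon is an assumption worth verifying):
sudo docker run --privileged \
  -e HTTP_PROXY=http://proxy.example.com:80/ \
  -e HTTPS_PROXY=http://proxy.example.com:80/ \
  mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world"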

How to list the published container images in the Google Container Registry using gcloud or another CLI

Is there a gcloud API or other command-line interface (CLI) to access the list of published container images in the private Google Container Registry (that is, the Container Registry inside a Google Cloud Platform project)?
gcloud container does not seem to help:
$ gcloud container
Usage: gcloud container [optional flags] <group | command>
  group may be           clusters | operations
  command may be         get-server-config

Deploy and manage clusters of machines for running containers.

flags:
  --zone ZONE, -z ZONE   The compute zone (e.g. us-central1-a) for the cluster

global flags:
  Run `gcloud -h` for a description of flags available to all commands.

command groups:
  clusters           Deploy and teardown Google Container Engine clusters.
  operations         Get and list operations for Google Container Engine clusters.

commands:
  get-server-config  Get Container Engine server config.
I also don't want to use gcloud docker to list images, because it wants to connect to a particular Docker daemon that I don't have. Unless there is a way to tell gcloud docker to connect to a remote public Docker daemon that can read the private containers pushed to the registry through my project?
We just released a new command to list the images in your repository! You can try it out with:
gcloud alpha container images list --repository=gcr.io/$MYREPOSITORY
If you want to see the specific tags for an image you can use:
gcloud alpha container images list-tags gcr.io/$MYREPOSITORY/$MYIMAGE
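Since the underlying goal in this thread is to check whether an image with a particular version exists, one simple way on top of this command is to grep the tag listing (the tag v1.2.3 here is hypothetical):
gcloud alpha container images list-tags gcr.io/$MYREPOSITORY/$MYIMAGE | grep v1.2.3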
The answer given by Robert Bailey is good for certain tasks, but might be missing what you specifically want to do. Nonetheless, your comments in reply to his answer are not so much faults of his answer as of your own understanding of what the commands which "fail" actually mean to do.
As far as your second comment,
Using docker I get the following error (for the reasons mentioned
above; I also edited the question): Cannot connect to the Docker daemon. Is the docker daemon running on this host?
This is a result of the docker daemon not running. Check if it's running via ps aux | grep docker. You can refer to the Docker documentation to determine how to properly install and run it.
As far as your first comment,
Using curl I get: {"errors":[{"code":"DENIED","message":"Failed to read tags for repository '<my_project>/<my_image>'"}]}. I have to
authenticate somehow to access the images in a private registry. I
don't want to use docker because that means I have to have a docker
daemon available. I only want to see if a container image with a
particular version is in the Container Registry. So what I need is an
API to the Container Registry in the Google Developer Console.
You wouldn't be able to curl the image unless it was public, as mentioned in Robert's latest comment, or unless you somehow provided the right OAuth headers during the curl invocation.
You should use gcloud docker to attempt to list the images in the registry, as you would for other Docker registries. The gcloud container command group is the wrong one for your desired task. Below is the output from gcloud version 96.0.0 (latest as of this comment) for the docker command group:
$ gcloud docker
Usage: docker [OPTIONS] COMMAND [arg...]
       docker daemon [ --help | ... ]
       docker [ --help | -v | --version ]

A self-sufficient runtime for containers.

Options:
  --config=~/.docker               Location of client config files
  -D, --debug=false                Enable debug mode
  --disable-legacy-registry=false  Do not contact legacy registries
  -H, --host=[]                    Daemon socket(s) to connect to
  -h, --help=false                 Print usage
  -l, --log-level=info             Set the logging level
  --tls=false                      Use TLS; implied by --tlsverify
  --tlscacert=~/.docker/ca.pem     Trust certs signed only by this CA
  --tlscert=~/.docker/cert.pem     Path to TLS certificate file
  --tlskey=~/.docker/key.pem       Path to TLS key file
  --tlsverify=false                Use TLS and verify the remote
  -v, --version=false              Print version information and quit

Commands:
  attach     Attach to a running container
  build      Build an image from a Dockerfile
  commit     Create a new image from a container's changes
  cp         Copy files/folders between a container and the local filesystem
  create     Create a new container
  diff       Inspect changes on a container's filesystem
  events     Get real time events from the server
  exec       Run a command in a running container
  export     Export a container's filesystem as a tar archive
  history    Show the history of an image
  images     List images
  import     Import the contents from a tarball to create a filesystem image
  info       Display system-wide information
  inspect    Return low-level information on a container or image
  kill       Kill a running container
  load       Load an image from a tar archive or STDIN
  login      Register or log in to a Docker registry
  logout     Log out from a Docker registry
  logs       Fetch the logs of a container
  network    Manage Docker networks
  pause      Pause all processes within a container
  port       List port mappings or a specific mapping for the CONTAINER
  ps         List containers
  pull       Pull an image or a repository from a registry
  push       Push an image or a repository to a registry
  rename     Rename a container
  restart    Restart a container
  rm         Remove one or more containers
  rmi        Remove one or more images
  run        Run a command in a new container
  save       Save an image(s) to a tar archive
  search     Search the Docker Hub for images
  start      Start one or more stopped containers
  stats      Display a live stream of container(s) resource usage statistics
  stop       Stop a running container
  tag        Tag an image into a repository
  top        Display the running processes of a container
  unpause    Unpause all processes within a container
  version    Show the Docker version information
  volume     Manage Docker volumes
  wait       Block until a container stops, then print its exit code

Run 'docker COMMAND --help' for more information on a command.
You should use gcloud docker search gcr.io/project-id to check which images are in the repository. gcloud has your credentials, so it can talk to the private registry as long as you're authenticated as an appropriate user on the project.
Finally, as an added resource: The Cloud Platform docs have a whole article about working with Google Container Registry.
If you know the project that is hosting the images (e.g. google-containers) you can list images with
gcloud docker search gcr.io/google_containers
For an individual image (e.g. the pause image in the google-containers project), you can check the versions with
curl https://gcr.io/v2/google-containers/pause/tags/list
I've just found a far simpler way to check for specific images. Once you have authenticated gcloud, use it to generate access tokens for reading from your private registry:
curl -u "oauth2accesstoken:$(gcloud auth print-access-token)" https://gcr.io/v2/<projectName>/<imageName>/tags/list
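To make the JSON easier to scan (or to script against), you can pipe the same request through a pretty-printer:
curl -s -u "oauth2accesstoken:$(gcloud auth print-access-token)" \
  https://gcr.io/v2/<projectName>/<imageName>/tags/list | python -m json.tool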
My best solution so far, without having a local Docker daemon available and without being able to connect to a remote one (which would still require at least the local Docker client, though not the local daemon), is to SSH into a Container Cluster instance that runs Docker, run my search there, and capture the result in my original script:
gcloud compute ssh <container_cluster_instance> -C "sudo gcloud docker search ..."
Of course, to avoid verbose output (like SSH/terminal welcome messages), I use some arguments to silence the execution a bit:
gcloud compute ssh --ssh-flag="-q" "$INSTANCE_NAME" -o LogLevel=quiet -C "sudo gcloud docker search ..."
