I'm trying to get a private registry running on my EC2 instance so that my other Docker hosts, created by docker-machine, can pull from it. I've disabled SSL and put up a firewall to compensate, which allows my test server (the one I'm trying to pull on) to connect to my main EC2 instance (the private registry). So far I can push to the private registry hosted on my main EC2 instance (I was getting an EOF error before disabling SSL), but I get the following error when I run this on my test server:
docker pull ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000/scoredeploy
This is the error it spits out:
Error response from daemon: Get https://ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000/v1/_ping: EOF
Googling this error yields results from people with similar issues, but no fixes.
Anybody have any idea of what's going on here?
You might need to set the --insecure-registry <registry-ip>:5000 flag on the Docker daemon's startup command on the machine that does the pulling (not on the registry host). In your case: --insecure-registry ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000
If you want to keep using your already-running Docker machine, this should help you set the flag: https://docs.docker.com/registry/insecure/#/deploying-a-plain-http-registry
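On newer Docker versions the same setting can also go in the daemon's config file instead of the startup command. A minimal /etc/docker/daemon.json sketch, using the registry host from the question:

{
  "insecure-registries": ["ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000"]
}

Restart the daemon afterwards (e.g. sudo systemctl restart docker) for it to take effect.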
If you're using boot2docker, the file location and format is slightly different. Give this a shot if this is the case: http://www.developmentalmadness.com/2016/03/09/docker-configure-insecure-registry-in-boot2docker/
I've had issues with my docker machines not saving this setting on reboots. If you run into that issue, I'd recommend you make a new machine including the flag --engine-insecure-registry <registry-ip>:5000 in the docker-machine create command.
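For example (the machine name and the virtualbox driver below are just placeholders; use whatever matches your setup):

docker-machine create --driver virtualbox --engine-insecure-registry ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000 test-host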
Best of luck!
Related
I'm trying to create a Sitecore 10 image using Docker on Windows 10 Enterprise locally, but I'm getting unhealthy containers. Please help me out, as I have tried the various steps suggested in the forums.
I'm getting the errors below:
Creating network "sitecore-xp0_default" with the default driver
Creating sitecore-xp0_solr_1 ... done
Creating sitecore-xp0_mssql_1 ... done
Creating sitecore-xp0_id_1 ... done
Creating sitecore-xp0_solr-init_1 ... done
Creating sitecore-xp0_xconnect_1 ... done
Creating sitecore-xp0_cm_1 ... done
ERROR: for cortexprocessingworker Container "992574e988e3" is unhealthy.
ERROR: for xdbautomationworker Container "992574e988e3" is unhealthy.
ERROR: for xdbsearchworker Container "992574e988e3" is unhealthy.
ERROR: for traefik Container "933b548fc2f9" is unhealthy.
ERROR: Encountered errors while bringing up the project.
I've checked the following things:
docker-compose stop in PowerShell.
docker-compose down in PowerShell.
iisreset /stop in PowerShell to make sure the required ports are free.
docker-compose up -d in PowerShell.
I also stopped and removed the containers and ran docker-compose.exe up --detach multiple times, but no luck.
Check the .env file and make sure SITECORE_LICENSE has a value.
You may need to run the init.ps1 file.
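To see why a container is flagged unhealthy in the first place, you can also dump its health-check log (using the container ID from the error output above):

docker inspect --format "{{json .State.Health}}" 992574e988e3
docker logs 992574e988e3

The first command prints the recent health-probe results; the second shows the container's own log output.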
Based on the logs now provided in the comments above, my suggestion would be to check the collection SQL connection string to the shardsmanager database.
You can inspect the SQL container in Docker for Windows and find the IP address of the SQL Server instance. Connect to it using SSMS and try the credentials you have in the current connection string.
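For example, something along these lines prints the container's IP address (container name taken from the compose output above; adjust to match yours):

docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" sitecore-xp0_mssql_1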
Edit: looking again at the exception, it seems it can't find the SQL Server. Yet the CM server appears to have no problem finding the same server, so compare the web/master/core connection strings to the collection one. I'm guessing the server portion will be different?
I'm trying to use TestContainers to run JUnit tests.
However, I'm getting an InternalServerErrorException: Status 500: {"message":"Get https://registry-1.docker.io/v2/: Forbidden"} error.
Please note that I am on a secure network.
I can replicate this by doing docker pull testcontainers/ryuk on the command line.
$ docker pull testcontainers/ryuk
Using default tag: latest
Error response from daemon: Get https://registry-1.docker.io/v2/: Forbidden
However, I need it to pull from our Nexus service: https://nexus.company.com:18443.
Inside the docker-compose file, I'm already using the correct Nexus image path (verified by manually starting it with docker-compose). However, TestContainers also pulls in additional images which are outside the docker-compose file. It is these images that are causing the failure.
I'd be glad for either a Docker Desktop or TestContainers configuration change that would fix this for me.
Note: I've already tried adding the host URL for Nexus to the Docker Engine JSON configuration in the dashboard, with no change to the resulting error when doing docker pull.
Since version 1.15.1, Testcontainers can automatically prepend a prefix to every Docker image it pulls. If your private registry is configured as a Docker Hub mirror, this functionality should help with the issue mentioned.
Quote from the documentation:
You can then configure Testcontainers to apply the prefix registry.mycompany.com/mirror/ to every image that it tries to pull from Docker Hub. This can be done in one of two ways:
Setting the environment variable TESTCONTAINERS_HUB_IMAGE_NAME_PREFIX=registry.mycompany.com/mirror/
Via config file, setting hub.image.name.prefix in either:
the ~/.testcontainers.properties file in your user home directory, or
a file named testcontainers.properties on the classpath
Basically, set the same prefix you used for the images in your docker-compose file.
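For example, with the Nexus registry from the question (assuming it is set up as a Docker Hub mirror), ~/.testcontainers.properties would contain a single line:

hub.image.name.prefix=nexus.company.com:18443/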
If you're stuck with older versions for some reason, a deprecated solution would be to override just the ryuk.container.image property. Read about it here.
The process is described on this page:
Add the following to your Docker daemon config:
{
  "registry-mirrors": ["https://nexus.company.com:18443"]
}
Make sure to restart the daemon to apply the changes.
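After the restart you can check that the mirror is active; docker info lists it (output abridged here):

$ docker info
...
Registry Mirrors:
 https://nexus.company.com:18443/
...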
I have a problem with Docker that seems to happen when I change the machine type of a Google Compute Engine VM instance. Images that were fine fail to run, fail to delete, and fail to pull, all with various obscure messages about missing keys (this is on Linux), duplicate or missing layers, and others I don't recall.
The errors don't always happen. One that occurred just now, with an image that ran a couple hundred times yesterday on the same setup, though before a restart, was:
$ docker run --rm -it mbloore/model:conda4.3.1-aq0.1.9
docker: Error response from daemon: layer does not exist.
$ docker pull mbloore/model:conda4.3.1-aq0.1.9
conda4.3.1-aq0.1.9: Pulling from mbloore/model
Digest: sha256:4d203b18fd57f9d867086cc0c97476750b42a86f32d8a9f55976afa59e699b28
Status: Image is up to date for mbloore/model:conda4.3.1-aq0.1.9
$ docker rmi mbloore/model:conda4.3.1-aq0.1.9
Error response from daemon: unrecognized image ID sha256:8315bb7add4fea22d760097bc377dbc6d9f5572bd71e98911e8080924724554e
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$
So it thinks it has no images, but the Docker folders are full of files, and it does know some hashes. It looks like some index has been damaged.
I restarted that instance, and then Docker seemed to be normal again without any special action on my part.
The only workarounds I have found so far are to restart and hope, or to delete several large Docker directories and recreate them empty. Then, after a restart, pull and run work again. But I'm now not sure that they always will.
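For reference, the delete-and-recreate workaround I mean amounts to something like this (warning: it wipes all local images, containers and volumes; /var/lib/docker is the default data root):

$ sudo systemctl stop docker
$ sudo rm -rf /var/lib/docker
$ sudo systemctl start docker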
I am running with Docker version 17.05.0-ce on Debian 9. My images were built with Docker version 17.03.2-ce on Amazon Linux, and are based on the official Ubuntu image.
Has anyone had this kind of problem, or know a way to reset the state of Docker without deleting almost everything?
Two points:
1) It seems that changing the VM had nothing to do with it. On some boots Docker worked, on others not, with no change in configuration or contents.
2) At Google's suggestion I installed Stackdriver monitoring and logging agents, and I haven't had a problem through seven restarts so far.
My first guess is that there is a race condition on startup, and adding those agents altered it in my favour. Of course, I'd like to have a real fix, but for now I don't have the time to pursue the problem.
I'm trying to follow the Docker Get Started guide. Currently I'm at part 4. Everything up until the point
docker stack deploy -c docker-compose.yml getstartedlab
worked well. However, after trying to deploy the services, when I run docker stack ps getstartedlab, I see that the swarm manager keeps trying to restart the containers, since every time they get the error "No such image: username/get-st…" and have their state as "Rejected 6 seconds ago" etc.
I searched for solutions for a bit, but surprisingly it seems that nobody has encountered this error before. The issue here and a similar section in the Get Started guide talk about situations where one wants to pull from a private registry. However, throughout the tutorial I've been working with the default public registry. All previous steps (e.g. launching the swarm locally, without using VirtualBox) worked fine.
Versions:
Docker version 18.02.0-ce, build fc4de447b5
Virtualbox 5.2.8 r120774
System Kernel: 4.14.25-1-MANJARO
Any idea what might have been the problem?
Surprisingly, passing the --with-registry-auth flag worked, even though my repo is apparently on Docker Hub. I'm not sure what the problem was, but the claim that you only need this flag for a private registry seems a bit inaccurate then.
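For reference, the full command that worked was simply:

docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab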
I've been trying to set up a Continuous Delivery server with Bamboo. I've got everything going nicely up to the deployment: Bamboo builds and tests my C# project as it should.
Then I created a "deployment plan", installed Docker, added the server capability to use Docker, and set up the Docker tasks to build and deploy to Docker Hub.
When I try to deploy, I get this error:
An error occurred trying to connect: Post http://127.0.0.1:2375/v1.22/build?buildargs=%7B%7D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&forcerm=1&memory=0&memswap=0&rm=1&shmsize=0&t=srgskiri%2Fresttest&ulimits=null: dial tcp 127.0.0.1:2375: connectex: No connection could be made because the target machine actively refused it.
01-mrt-2016 13:19:03 Failing task since return code of [C:\Program Files\Docker Toolbox\docker.exe build --force-rm=true --tag="srgskiri/resttest" C:\Users\Srg\bamboo-home\xml-data\build-dir\2129921-2195457] was 1 while expected 0
I think this means that the Bamboo process issuing the build command can't communicate with my Docker engine.
At first I thought it was because docker-machine wasn't running, so I started it and ran the deploy again, but I still got the same error.
This is what I have:
Server capability: path to docker
Docker task: building into an Image
Is there something I'm missing?
PS: Docker works perfectly on its own, both with the Docker UI and the Docker terminal. It's Bamboo that can't interact with Docker.
UPDATE: I didn't mention this, but I ran Bamboo in a console, not as a service. Maybe that's the problem: Bamboo can't access Docker from the console. I can't try this myself now because I can't install Bamboo as a service; it keeps hanging when I try to start it that way.
I'll ask Bamboo support about it.
I figured it out... If you're on Windows, Bamboo has to start the docker-machine itself.
So you have to add Command tasks to:
1) create a docker-machine (if you don't have one yet)
2) start it (if you start the machine in Bamboo, you can't access it from Windows, and vice versa)
Only then are you able to use Docker in Bamboo on Windows.
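As a sketch, the two Command tasks would run something like this (the machine name 'default' and the virtualbox driver are just examples; use whatever matches your setup):

docker-machine create --driver virtualbox default
docker-machine start default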
I feel silly now
-EDIT- To use the Docker tasks after starting the docker-machine, you must also specify the environment variables for the tasks (like DOCKER_TLS_VERIFY=1).
Otherwise you'll get the error mentioned above.
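You can get the exact values to set from docker-machine itself; on a Windows command prompt it prints SET statements you can mirror in the task configuration (the machine name, IP, and paths below are examples):

$ docker-machine env default
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.100:2376
SET DOCKER_CERT_PATH=C:\Users\Srg\.docker\machine\machines\default
SET DOCKER_MACHINE_NAME=default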