Anchore CLI stuck at "not_analyzed" and other things - docker

When I try to anchore-cli image add ... it fails with:
Error: failed post url=http://engine-catalog:8228/v1/images
HTTP Code: 500
Detail: {'error_codes': []}
I then do a docker-compose ps and I see aevolume_engine-catalog_1 /docker-entrypoint.sh anch ... Up (unhealthy)
I try to fix the above with a docker-compose up -d but it just says everything is up to date.
So I have to restart my computer, then run docker-compose up -d again, and it starts everything up.
I then run the anchore-cli image add ... again, but it gets stuck on Status: not_analyzed
Waiting 5.0 seconds for next retry. It does this for about 10 minutes, and then it says Error: Requested image not found in system ... I'm then stuck back at square 1.
Anyone know what is wrong here? I'm using anchore-cli, version 0.4.1
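One thing worth trying before a full reboot (not from the original thread, just standard docker-compose debugging; the service name is taken from the compose output above) is to look at the catalog container's logs and restart only that service:
docker-compose logs --tail=100 engine-catalog
docker-compose restart engine-catalog
docker-compose ps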

Related

Get full text for warning in docker service update

When running the following command:
docker service update captain-captain --force
I am briefly seeing a warning:
no suitable node (scheduling constraints not satisfied on 2 nodes; host-mo…".
But I can't see the full text to understand this properly. Nor is there a task ID e.g. mqo2k39bax94y6fiq7boxxtge which I've seen in the past for similar warnings/errors, which I can inspect with docker inspect mqo2k39bax94y6fiq7boxxtge.
The warning does disappear after a short time and the update seems to complete OK, so it's clearly not fatal, but I want to understand a bit more about why it is showing in the first place.
The key was to add --detach to unblock the terminal (so that it doesn't wait for the task to finish):
docker service update captain-captain --force --detach
Then quickly (while the truncated message would still be showing had the terminal not been unblocked), run:
docker service ps captain-captain
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
1ejhe98ozrdn captain-captain.1 caprover/caprover:1.10.1 Ready Pending 2 seconds ago "no suitable node (host-mode p…"
o9y4cfwlsqy7 \_ captain-captain.1 caprover/caprover:1.10.1 mercian-31 Shutdown Running 2 seconds ago
Then quickly grab the ID of the task showing the error, and inspect it before the error is resolved:
docker inspect 1ejhe98ozrdn
Within that output you'll find the full error, in this case:
"Err": "no suitable node (scheduling constraints not satisfied on 2 nodes; host-mode port already in use on 1 node)"
Quite why docker can't just show the full error message in the first place, without having to jump through these hoops, I'm not sure.
As an aside, docker seems not to be stopping the old instance before it schedules the new one even though we're definitely using stop-first.
Credit for this answer goes here.

How to fix this annoying docker error? (failed to register layer)

This error appears when I try to pull ANY docker image.
This is a fresh installation of Docker on kernel 5.0.21-rt14-MANJARO.
Unable to find image 'ubuntu:16.04' locally
16.04: Pulling from library/ubuntu
35b42117c431: Extracting [==================================================>] 43.84MB/43.84MB
ad9c569a8d98: Download complete
293b44f45162: Download complete
0c175077525d: Download complete
docker: failed to register layer: Error processing tar file(exit status 1): Error cleaning up after pivot: remove /.pivot_root336598748: device or resource busy.
See 'docker run --help'.
I had the same error with the 5.0.xxx Kernel. Switching back to 4.19.59-1-MANJARO solved the problem...
EDIT:
you might try:
sudo tee /etc/modules-load.d/loop.conf <<< "loop"
sudo modprobe loop
then reboot and try again.
I'm now on 5.2.4-1-MANJARO and everything works.
I followed these instructions here:
https://linuxhint.com/docker_arch_linux/
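After the reboot, two standard checks (not from the original answer) will confirm that the loop module is loaded and which kernel you are actually running:
lsmod | grep loop
uname -r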
Yes, the problem is with your kernel version. I installed version 5.2.4 and it works very well.
Version with the problem: 5.0.21

Why does dockerized ZAP hang at the end of a baseline scan?

Fresh image and container of
owasp/zap2docker-stable:latest
The command:
docker exec zap1 ./zap-baseline.py
Hangs or processes forever after:
FAIL-NEW: 0 FAIL-INPROG: 0 WARN-NEW: 4 WARN-INPROG: 0 INFO: 0 IGNORE: 0 PASS: 12
Earlier (2-3 months ago) it executed properly. BTW, when I execute the same command inside the container, it executes and shuts down properly. How do I fix this so that the Jenkins job won't be stuck forever at the summary?
BTW, why does zap-baseline.py always print out the help section if I add '-r report.html' at the end? (EDIT: a typo, -t instead of -r, but the problem stays)
That command doesn't look right to me.
The recommended command is:
docker run -t owasp/zap2docker-stable zap-baseline.py -t https://www.example.com
As per https://github.com/zaproxy/zaproxy/wiki/ZAP-Baseline-Scan
It's always printing out the help because '-t report-html' isn't valid. Look at the help shown to see the valid arguments. For an html report you should be using '-r report.html'.
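Putting that together, a full run that also writes the report might look like the following, sketched from the ZAP baseline docs linked above; the volume mount is what lets the report end up on the host, and the URL is just an example:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t https://www.example.com -r report.html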

Error while running Jenkins in Docker

I am trying to run Jenkins in Docker with the command below:
docker run --rm -p 2222:2222 -p 9080:9080 -p 8081:8081 -p 9418:9418 -ti jenkinsci/workflow-demo
I continuously get the errors below:
INFO: Failed mkdirs of /var/jenkins_home/caches
[7412] Connection from 127.0.0.1:57701
[7412] Extended attributes (16 bytes) exist
[7412] Request upload-pack for '/repo'
[4140] [7412] Disconnected
[7415] Connection from 127.0.0.1:39829
[7415] Extended attributes (16 bytes) exist
[7415] Request upload-pack for '/repo'
[4140] [7415] Disconnected
I am following: https://github.com/jenkinsci/workflow-aggregator-plugin/blob/master/demo/README.md
My configuration:
OS : CentOS Linux release 7.2.1511 (Core)
user : jenkins
Checked inside the container: the directory /var/jenkins_home/caches was being created as the jenkins user and contained another directory: git-f20b64796d6e86ec7654f683c3eea522
EVERYTHING IS DEFAULT
If I google that error, I find a page: https://recordnotfound.com/git-plugin-jenkinsci-31194/issues (I know, not the project you're looking at, but maybe the same or a similar issue). If you do a text search on that page for the error, you'll see the line:
fix logging "Failed mkdirs of /var/jenkins_home/caches" when the directory already exists
It indicates that this is an open issue and that it was logged 11 days ago, albeit for a different repo. If you delete the folder, does it fix the issue? Maybe monitor that bug report for a fix, or log an issue against the workflow-aggregator-plugin.
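If you want to try deleting that folder, something along these lines should work from the host (the container name is a placeholder for whatever docker ps shows for your Jenkins container):
docker exec -ti <jenkins-container> ls -ld /var/jenkins_home/caches
docker exec -ti <jenkins-container> rm -rf /var/jenkins_home/caches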

Elastic Beanstalk docker error

I'm getting a cryptic error when trying to update the configuration of a single-container Docker application. Anybody have an idea of what might cause this, or how to go about debugging it?
ERROR [3009] : Command execution failed:
[CMD-ConfigDeploy/ConfigDeployStage0/ConfigDeployPreHook/00run.sh]
command failed with error code 1:
/opt/elasticbeanstalk/hooks/configdeploy/pre/00run.sh
docker: "tag" requires 2 arguments. See 'docker tag --help'.
(ElasticBeanstalk::ActivityFatalError)
I've seen this one before, and believe it happens when the Docker container fails to build. The command that failed is the one which runs your container, and it's failing (IIRC) because it can't find the container from the previous build step. Things to try:
Does the Docker container build successfully with eb local? (https://aws.amazon.com/blogs/aws/run-docker-apps-locally-using-the-elastic-beanstalk-eb-cli/)
Try checking eb-activity.log for errors during the build process
Terminate the EC2 instance or rebuild the EB environment (sometimes smaller instances get out-of-memory errors that prevent further deployments)
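For the first suggestion, building and running the container locally with the EB CLI (assuming a recent eb version, as in the blog post linked above, and running from the project root that contains the Dockerfile) is roughly:
eb local run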
It could happen if your application fails to start successfully the first time it deploys. Just started having this problem myself.
Take a look at /var/log/eb-activity.log on your server... you may see something like:
[2015-07-23T00:19:11.015Z] INFO [2624] - [CMD-Startup/StartupStage1/AppDeployEnactHook/00run.sh] : Starting activity...
[2015-07-23T00:19:17.506Z] INFO [2624] - [CMD-Startup/StartupStage1/AppDeployEnactHook/00run.sh] : Activity execution failed, because: jq: error: Cannot iterate over null
aca80d7accfe4800ff04992e2f89a1e05689423d286deee31b53bf470ce89afb
Docker container quit unexpectedly after launch: bleBeanFactory.java:942)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:533)
... 93 more. Check snapshot logs for details. (ElasticBeanstalk::ExternalInvocationError)
caused by: jq: error: Cannot iterate over null
aca80d7accfe4800ff04992e2f89a1e05689423d286deee31b53bf470ce89afb
Docker container quit unexpectedly after launch: bleBeanFactory.java:942)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:533)
... 93 more. Check snapshot logs for details. (Executor::NonZeroExitStatus)
[2015-07-23T00:19:17.506Z] INFO [2624] - [CMD-Startup/StartupStage1/AppDeployEnactHook/00run.sh] : Activity failed.
[2015-07-23T00:19:17.507Z] INFO [2624] - [CMD-Startup/StartupStage1/AppDeployEnactHook] : Activity failed.
[2015-07-23T00:19:17.507Z] INFO [2624] - [CMD-Startup/StartupStage1] : Activity failed.
[2015-07-23T00:19:17.507Z] INFO [2624] - [CMD-Startup] : Completed activity. Result:
Command CMD-Startup(stage 1) failed.
Next, look at /var/log/eb-docker/containers/eb-current-app. If you see an unexpected-quit.log then it should contain the errors that your application logged as it tried, unsuccessfully, to start.
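In practice that means SSH-ing into the instance and checking both locations, for example (paths taken from this answer, instance access assumed):
tail -n 100 /var/log/eb-activity.log
ls /var/log/eb-docker/containers/eb-current-app/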
Unfortunately, in my case, it's failing to start because an environment variable is missing. However, AWS prevents me from updating the configuration while the beanstalk is in this state. And I can't specify the environment variables while I create the environment. So I'm not sure what I'll do to fix the problem.
I have the exact same issue as #Shannon's. My workaround is:
first, deploy a sample Dockerfile that is guaranteed to work,
then set up all the environment variables my real Docker app would need,
finally, redeploy the real Docker app.
A sample Dockerfile copy-pasted from AWS documentation:
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y nginx zip curl
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN curl -o /usr/share/nginx/www/master.zip -L https://codeload.github.com/gabrielecirulli/2048/zip/master
RUN cd /usr/share/nginx/www/ && unzip master.zip && mv 2048-master/* . && rm -rf 2048-master master.zip
EXPOSE 80
CMD ["/usr/sbin/nginx", "-c", "/etc/nginx/nginx.conf"]
You can provide your environment variables on the command line in the eb create and eb clone commands. These are set before the create or clone task so the environment will come up with them set.
See the eb cli help. For example...
$ eb create -h
...
--envvars ENVVARS a comma-separated list of environment variables as
key=value pairs
...
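So, with hypothetical variable names, a create that sets them up front could look like:
eb create my-docker-env --envvars DB_HOST=db.example.com,SECRET_KEY=changeme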
