Travis fails to stop Docker containers - docker

I'm using Travis to build my project: https://github.com/Krijger/docker-gradle
The build uses Docker and Docker Compose. During the build, I try to stop a running container, which fails with a permission denied error:
https://travis-ci.org/Krijger/docker-gradle/builds/82739195
ERROR: for dockerplugin_service_1 Cannot stop container d23b7e9fc2a7bec16bdef883177d7df5582e8de2736b8623e878be6a4943c8b0: [8] System error: permission denied
I am not alone in this issue. I'm seeing this in other Travis builds as well.

I had the same issue and found no satisfactory solution. I know this won't make an acceptable answer, but I figured I could save you some time by sharing a few links:
Related issue on the TravisCI issue tracker
The kill -9 trick
A similar issue on the docker-py project
It seems some have succeeded using the --privileged flag.
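Since the build uses Docker Compose, the --privileged flag maps to a per-service key; a minimal sketch (the service name service is taken from the error message above, the rest of the file is assumed):

```yaml
# docker-compose.yml fragment (hypothetical): run the service with
# extended privileges so the daemon is allowed to stop it.
service:
  privileged: true
  # image:/build: and the rest of the service definition as you already have it
```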
Edit: it was reported that adding the following lines to the .travis.yml config file does the trick:
install:
  # place the AppArmor docker profile in complain mode
  # to work around https://github.com/travis-ci/travis-ci/issues/4661
  - sudo apt-get -y update
  - sudo apt-get -y install apparmor-utils
  - sudo aa-complain /etc/apparmor.d/docker
As for me, I just gave up on TravisCI and moved to CircleCI, which natively offers Docker 1.5 and also makes it possible to get Docker 1.7.1 if you start your yml file with:
machine:
  pre:
    # install docker 1.7.1
    - sudo curl -L -o /usr/bin/docker 'https://s3-external-1.amazonaws.com/circle-downloads/docker-1.7.1-circleci'; sudo chmod 0755 /usr/bin/docker; true
  services:
    - docker

Unable to build docker container on synology, as synology uses 7z not unzip

I set myself a little Docker project and thought it might be fun to try to get AzerothCore running on my Synology.
I cloned the repository, but was unable to run the acore.sh script to build the Docker containers: Synology uses 7-Zip, and acore.sh threw an error because it couldn't unzip the archives.
I wondered if it was possible to find out which scripts attempt to unzip things, and change those commands to call 7z instead?
Running acore.sh throws an error because it can't find unzip; Synology uses 7-Zip.
user#DS920:/volume1/docker/wow/azerothcore-wotlk$ ./acore.sh docker build
NOTICE: file </volume1/docker/wow/azerothcore-wotlk/conf/config.sh> not found, we use default configuration only.
Deno version check: /volume1/docker/wow/azerothcore-wotlk/apps/bash_shared/deno.sh: line 18: ./deps/deno/bin/deno: No such file or directory
Installing Deno...
Error: unzip is required to install Deno (see: https://github.com/denoland/deno_install#unzip-is-required).
The error message points to /volume1/docker/wow/azerothcore-wotlk/apps/bash_shared/deno.sh and says
Error: unzip is required to install Deno
If you look into the deno.sh script you'll see the command that installs Deno:
curl -fsSL https://deno.land/x/install/install.sh | DENO_INSTALL="$AC_PATH_DEPS/deno" sh
If you download that script you'll see that it uses unzip.
I would suggest trying to install unzip, e.g. as described here: How to install IPKG on Synology NAS
You can bypass the ./acore.sh dashboard with standard docker commands.
To build:
$ docker compose --profile app build
To run:
$ docker compose --profile app up # -d for background
Using standard docker commands has the added benefit of not needing to install Deno locally, since it is already installed in the container.
Have you tried:
sudo opkg install unzip
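If installing a native unzip isn't feasible, the original idea of redirecting unzip calls to 7z can be done without editing each script: drop an unzip shim earlier on the PATH. A sketch, untested on Synology; the shim location and the flag translation are assumptions and cover only the common invocation form:

```shell
#!/bin/sh
# Hypothetical workaround: create an `unzip` command that delegates to 7z,
# so scripts such as Deno's installer find something called unzip.
SHIM_DIR="${SHIM_DIR:-$HOME/bin}"
mkdir -p "$SHIM_DIR"
cat > "$SHIM_DIR/unzip" <<'EOF'
#!/bin/sh
# Translate the common `unzip [-o] ARCHIVE -d DIR` form to `7z x`.
archive=""
dest="."
overwrite=""
while [ $# -gt 0 ]; do
  case "$1" in
    -d) dest="$2"; shift ;;
    -o) overwrite="-y" ;;     # unzip -o (overwrite) -> 7z -y
    -*) ;;                    # ignore other unzip flags
    *)  archive="$1" ;;
  esac
  shift
done
exec 7z x $overwrite "$archive" -o"$dest"
EOF
chmod +x "$SHIM_DIR/unzip"
export PATH="$SHIM_DIR:$PATH"
```

Scripts run from the same shell afterwards will pick up the shim instead of a real unzip.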

update solidity version in docker container

I installed oyente using the Docker installation as described in the link
https://github.com/enzymefinance/oyente using the following command.
docker pull luongnguyen/oyente && docker run -i -t luongnguyen/oyente
I can analyse older smart contracts, but I get a compilation error when I try it on newer contracts. I need to update the version of solc but I couldn't.
On the container the current version is
solc, the solidity compiler commandline interface
Version: 0.4.21+commit.dfe3193c.Linux.g++
I read that the best way to update it is with npm, so I executed the following command, but I'm getting errors, which I assume is because the npm version is old as well.
docker exec -i container_name bash -c "npm install -g solc"
I would appreciate any help, as I've been trying to solve this for hours. Thanks in advance,
Ferda
Docker's standard model is that an image is immutable: it contains a fixed version of your application and its dependencies, and if you need to update any of this, you need to build a new image and start a new container.
The first part of this, then, looks like any other Node package update. Install Node in the unlikely event you don't have it on your host system. Run npm update --save solc to install the newer version and update your package.json and package-lock.json files. This is the same update you'd do if Docker weren't involved.
Then you can rebuild your Docker image with docker build. This is the same command you ran to initially build the image. Once you've created the new image, you can stop, delete, and recreate your container.
# If you don't already have Node, get it
# brew install nodejs
# Update the dependency
npm update --save solc
npm run test
# Rebuild the image
docker build -t image_name .
# Recreate the container
docker stop container_name
docker rm container_name
docker run -d --name container_name image_name
npm run integration
git add package*.json
git commit -m 'update solc version to 0.8.14'
Some common Docker/Node setups try to store the node_modules library tree in an anonymous volume. This can't be easily updated, and it hides the node_modules tree built into the image. If you have this setup (maybe in a Compose volumes: block) I'd recommend deleting any volumes or mounts that hide the image contents.
Note that this path doesn't use docker exec at all. Think of this like getting a debugger inside your running process: it's very useful when you need it, but anything you do there will be lost as soon as the process or container exits, and it shouldn't be part of your normal operational toolkit.

unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /var/lib/snapd/void/Dockerfile: no such file or directory

I installed docker on Ubuntu with snap (snappy?), and then I ran this:
ln -sf /usr/bin/snap /usr/local/bin/docker
When I run docker build I get:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /var/lib/snapd/void/Dockerfile: no such file or directory
I ran into this same problem. I was setting up an Ubuntu server and elected to have Docker installed during the initial setup. It installed using snap, and as a result I couldn't run Docker in any directory outside my home directory, including trying to docker run anything in /var/. I fixed it with sudo snap remove docker and reinstalled following the official instructions in the Docker docs for Ubuntu:
https://docs.docker.com/engine/install/ubuntu/
I got the same error using Ubuntu, and I noticed that I had installed the wrong Docker package.
Instead of docker (a transitional package), install docker.io (the Linux container runtime):
apt install docker.io
I got this exact error message when I was running in /tmp/foo. When I switched the directory to /home/me/tmp/foo, the error went away.
Run the docker command with root privileges; you can do that simply with sudo.
Uninstall the snap docker version:
snap remove docker
then restart and install it again using apt / apt-get:
apt-get install docker
This will set up all the related symlinks.

snap and gitlab-CI: error: cannot communicate with server: Post http://localhost/v2/snaps/hello-world

If I try to run snap under a gitlab-CI pipeline, installing the most simple package, it fails with:
$ snap install hello-world
error: cannot communicate with server: Post http://localhost/v2/snaps/hello-world: dial unix /run/snapd.socket: connect: no such file or directory
The gitlab-ci yml configuration file is the simplest ever:
image: ubuntu:18.04
before_script:
  - apt-get update -qq
test:
  script:
    - apt-get install -y snapd
    - snap version
    - snap install hello-world
    - hello-world
What's going on?
In my case it was solved by starting the snapd service:
systemctl start snapd.service
Unfortunately, snaps use much of the same underlying security tech as docker, and they don't play very nicely together. Installing a snap also requires snapd to be running, which it is not inside a docker container (hence the error). I'm afraid you simply cannot reliably install snaps in docker containers today.
Note that there are other, non-docker-based CI systems. You can, with a little custom work, use LXD as the backend for your GitLab CI runner, which handles snaps fine. You can also use GitHub Actions, which seems to be based on an Azure VM and also handles snaps fine.
It seems GitHub Actions doesn't use Docker, so I'm using it now instead of GitLab CI to build and test snap packages.
Just note:
You need sudo to install snapd with apt-get, and also to install any snap package with the snap command.
If you want to run snapcraft (to build packages, not just test them), getting it via apt-get works but gives a somewhat old version (e.g. one that doesn't support layouts). If you want a newer version, you can install it via snap with snap install snapcraft, but you need some workarounds to make it run, such as sudo chown root:root / and passing the --destructive-mode flag (see https://forum.snapcraft.io/t/permissions-problem-using-snapcraft-in-azure-pipelines/13258/16).
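A minimal workflow sketch for the GitHub Actions route (the file and job names are assumptions; snaps work here because the runner is a full VM rather than a container):

```yaml
# .github/workflows/snap-test.yml (hypothetical name)
name: snap-test
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - run: sudo snap install hello-world
      - run: hello-world
```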

Yum install won't work on a boot2docker host?

I'm relatively new to Docker.
I have launched a boot2docker host using docker-machine create -d.
Managed to connect to it and run a few commands. All good.
However, when trying to create a basic HTTP server image based on CentOS,
"yum install" simply fails, no matter what the package is.
This is my Docker file:
FROM centos
MAINTAINER Amir
#Install Apache
RUN yum install httpd
When running:
docker build .
It's starting to build the image, and everything looks good.. but then fails with:
Your transaction was saved, rerun it with:
yum load-transaction /tmp/yum_save_tx.2015-09-18.15-10.q5ss8m.yumtx
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
The command '/bin/sh -c yum install httpd' returned a non-zero code: 1
Any idea what am I doing wrong?
Thanks in advance.
If you look a bit earlier than the last message, you have a good chance of seeing something like this:
Total download size: 24 M
Installed size: 32 M
Is this ok [y/d/N]: Exiting on user command
Your transaction was saved, rerun it with:
which means you need to answer yum's prompt non-interactively, e.g.
#Install Apache
RUN yum install -y httpd
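With the -y flag in place, the Dockerfile from the question becomes:

```dockerfile
FROM centos
MAINTAINER Amir
# Install Apache; -y answers the prompt so the non-interactive build can proceed
RUN yum install -y httpd
```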
