I'm trying to follow the instructions here to install the Google Cloud SDK on my boot2docker environment, without success.
Is boot2docker a limited Linux? What is missing?
The error I get is:
(23) Failed writing body
Is boot2docker a limited Linux?
Yes, boot2docker is based on the Tiny Core distro, and has the following limitations:
Only C:\Users or /Users is mounted
Symlinks are not supported.
If gcloud needs access to another path or uses symlinks, it will fail.
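A quick way to see the first limitation (the paths here are purely illustrative): a bind mount under /Users resolves to the shared host folder, while a path outside it resolves against the boot2docker VM's own filesystem.
# Shared with the host through VirtualBox, so the host's files show up in the container
docker run --rm -v /Users/me/project:/data busybox ls /data
# Not shared: this refers to the (usually empty) path inside the boot2docker VM, not on the host
docker run --rm -v /home/me/project:/data busybox ls /data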
In general, boot2docker is there to host a Docker daemon; programs are meant to be installed in containers rather than on the VM itself.
See for example blacklabelops/gcloud which includes the latest Google Cloud SDK along with all modules (as defined in its Dockerfile).
You would execute gcloud commands by running that container, not by installing gcloud directly on your boot2docker instance.
For instance:
docker run -it --rm \
-e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount#developer.gserviceaccount.com" \
-e "CLOUDSDK_CORE_PROJECT=example-project" \
-e "CLOUDSDK_COMPUTE_ZONE=europe-west1-b" \
-e "CLOUDSDK_COMPUTE_REGION=europe-west1" \
blacklabelops/gcloud \
gcloud compute instances list
This article states that Windows 11 natively supports running X11 and Wayland applications on WSL.
I tried to do the same through a Docker container, setting the environment variable DISPLAY="host.docker.internal:0.0" and running a GUI application (like gedit). But instead I got this error:
Unable to init server: Could not connect: Connection refused
Gtk-WARNING **: 17:05:50.416: cannot open display: host.docker.internal:0.0
I stumbled upon your question while attempting the same thing as you are, and actually got it to work with the aid of this blog post on Microsoft. I use a minimal Dockerfile that is based on Ubuntu and installs gedit:
FROM ubuntu:22.04
RUN apt update -y && apt install -y gedit
CMD ["gedit"]
Create the image the usual way, e.g. docker build . -t guitest:1.0
On the WSL command line, start it like this:
docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix \
-v /mnt/wslg:/mnt/wslg \
-e DISPLAY \
-e WAYLAND_DISPLAY \
-e XDG_RUNTIME_DIR \
-e PULSE_SERVER \
guitest:1.0
I hope this is of good use to you as well.
This answer is heavily based on what chrillof has said. Thanks for the excellent start!
The critical things here for Docker Desktop users on Windows with WSL2 are that:
The container host (i.e. the docker-desktop-data WSL2 distribution) does not have a /tmp/.X11-unix itself. This folder is actually found at /mnt/host/wslg/.X11-unix on the docker-desktop distribution, which translates to /run/desktop/mnt/host/wslg/.X11-unix when running containers.
There are no baked-in environment variables to assist you, so you need to specify the environment variables explicitly with these folders in mind.
I found this GitHub issue where someone had to manually set environment variables, which allowed me to connect the dots between what others experience directly on WSL2 and chrillof's solution.
Therefore, modifying chrillof's solution using PowerShell from the host, it looks more like:
docker run -it -v /run/desktop/mnt/host/wslg/.X11-unix:/tmp/.X11-unix `
-v /run/desktop/mnt/host/wslg:/mnt/wslg `
-e DISPLAY=:0 `
-e WAYLAND_DISPLAY=wayland-0 `
-e XDG_RUNTIME_DIR=/mnt/wslg/runtime-dir `
-e PULSE_SERVER=/mnt/wslg/PulseServer `
guitest:1.0
On my computer, it looks like this (demo of WSLg X11).
To be clear, I have not checked if audio is functional or not, but this does allow you to avoid the installation of another X11 server if you already have WSL2 installed.
I am using the Docker DataPower image for local development. I am using this image:
https://hub.docker.com/layers/ibmcom/datapower/latest/images/sha256-35b1a3fcb57d7e036d60480a25e2709e517901f69fab5407d70ccd4b985c2725?context=explore
DataPower version: IDG.10.0.1.0
System: Docker for Mac
Docker version 19.03.13
I am running the container with the following config:
docker run -it \
-v $PWD/config:/drouter/config \
-v $PWD/local:/drouter/local \
-e DATAPOWER_ACCEPT_LICENSE=true \
-e DATAPOWER_INTERACTIVE=true \
-p 9090:9090 \
-p 9022:22 \
-p 5554:5554 \
-p 8000-8010:8000-8010 \
ibmcom/datapower
When I create files in File Management or save a DataPower object configuration, I do not see the changes reflected in the directory on my machine.
I would also expect to be able to create files in my host directory and see them reflected in /drouter/config and /drouter/local in the container, as well as in the management GUI.
The volume mounts don't seem to be working correctly, or perhaps I misunderstand something about DataPower or Docker.
I have tried mounting volumes in other Docker containers under the same path and that works fine, so I don't think it's an issue with file-sharing settings in Docker.
The file system structure changed in version 10.0. There is some documentation in the IBM Knowledge Center showing the updated locations for config:, local:, etc., but the Docker Hub page has not been updated to reflect that yet.
Mounting the volumes like this fixed it for me:
-v $PWD/config:/opt/ibm/datapower/drouter/config \
-v $PWD/local:/opt/ibm/datapower/drouter/local \
It seems the container persists configuration here instead, which differs from the instructions on Docker Hub.
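For reference, the run command from the question with the corrected mount points (all other flags unchanged) would be:
docker run -it \
-v $PWD/config:/opt/ibm/datapower/drouter/config \
-v $PWD/local:/opt/ibm/datapower/drouter/local \
-e DATAPOWER_ACCEPT_LICENSE=true \
-e DATAPOWER_INTERACTIVE=true \
-p 9090:9090 \
-p 9022:22 \
-p 5554:5554 \
-p 8000-8010:8000-8010 \
ibmcom/datapower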
I'm currently following this tutorial to run a model on Docker that was built using the Google Cloud AutoML Vision:
https://cloud.google.com/vision/automl/docs/containers-gcs-tutorial
I'm having trouble running the container, specifically running this command:
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCR_PATH}
I have my environment variables set up correctly (verified with an echo $<env_var>). I do not have a /tmp/mounted_model/0001 directory on my local system. My model path is configured to be the model's location in Cloud Storage.
${YOUR_MODEL_PATH} must be a directory on the host on which you're running the container.
Your question suggests that you're using the Cloud Storage bucket path but you cannot do this.
Reviewing the tutorial, I think the instructions are confusing.
You are told to:
gsutil cp \
${YOUR_MODEL_PATH} \
${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
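That is, ${YOUR_LOCAL_MODEL_PATH} should point to a directory on the machine where you run Docker. A minimal sketch, with a purely illustrative path:
# Hypothetical local directory to hold the exported model
export YOUR_LOCAL_MODEL_PATH=${HOME}/automl_model
mkdir -p ${YOUR_LOCAL_MODEL_PATH}
# After the gsutil cp above, the file should exist locally
ls ${YOUR_LOCAL_MODEL_PATH}/saved_model.pb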
So, your command should probably be:
sudo docker run \
--rm \
--interactive --tty \
--name=${CONTAINER_NAME} \
--publish=${PORT}:8501 \
--volume=${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 \
${CPU_DOCKER_GCR_PATH}
NB: I added --interactive --tty to make debugging easier; it's optional.
NB: use ${YOUR_LOCAL_MODEL_PATH}, not ${YOUR_MODEL_PATH}.
NB: the command should not end with -t ${CPU_DOCKER_GCR_PATH}; omit the -t.
I've not run through this tutorial.
I recently found out about Podman (https://podman.io). Having a way to use Linux fork processes instead of a daemon, and not having to run as root, caught my attention.
But I'm very used to orchestrating the containers running on my machine with docker-compose (in production we use Kubernetes), and I really like it.
So I'm trying to replace Docker while keeping my docker-compose workflow, using podman as an alias for docker, since Podman uses the same syntax as Docker:
alias docker=podman
Will it work? Can you suggest any other tool? I really intend to keep my docker-compose.yml file, if possible.
Yes, that is doable now. Check podman-compose; this is one way of doing it. Another way is to convert the docker-compose YAML file to a Kubernetes deployment using Kompose. There is a blog post from Jérôme Petazzoni @jpetazzo: from docker-compose to kubernetes deployment.
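As a rough sketch of the Kompose route (the file and output directory names here are just examples):
# Convert the existing Compose file into Kubernetes manifests
kompose convert -f docker-compose.yml -o k8s/
# Apply the generated manifests to a cluster
kubectl apply -f k8s/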
Update 6 May 2022: Podman now supports Docker Compose v2.2 and higher (see the Podman 4.1.0 release notes).
Old answer:
Running docker-compose with Podman as a normal user (rootless)
Requirement: Podman version >= 3.2.1 (released in June 2021)
Install the executable docker-compose
curl -sL -o ~/docker-compose https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)
chmod 755 ~/docker-compose
Alternatively you could also run docker-compose in a container image (see below).
Run
systemctl --user start podman.socket
Set the environment variable DOCKER_HOST
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
Run
~/docker-compose up -d
Running docker-compose with Podman as root
Requirement: Podman version >= 3.0 (released in February 2021)
Follow the same procedure but remove the flag --user
systemctl start podman.socket
Running docker-compose in a container image
Use the container image docker.io/docker/compose to run
docker-compose
podman \
run \
--rm \
--detach \
--env DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock \
--security-opt label=disable \
--volume $XDG_RUNTIME_DIR/podman/podman.sock:$XDG_RUNTIME_DIR/podman/podman.sock \
--volume $(pwd):$(pwd) \
--workdir $(pwd) \
docker.io/docker/compose \
--verbose \
up -d
(the flag --verbose is optional)
The same command with short command-line options on a single line:
podman run --rm -d -e DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock --security-opt label=disable -v $XDG_RUNTIME_DIR/podman/podman.sock:$XDG_RUNTIME_DIR/podman/podman.sock -v $(pwd):$(pwd) -w $(pwd) docker.io/docker/compose --verbose up -d
Regarding SELinux: running Podman with SELinux is preferable from a security point of view, but I didn't get it to work on a Fedora 34 computer, so I disabled SELinux by adding the command-line option
--security-opt label=disable
Troubleshooting tips
Test the Docker REST API
A minimal check to see that the Docker REST API is working:
$ curl -H "Content-Type: application/json" \
--unix-socket $XDG_RUNTIME_DIR/podman/podman.sock \
http://localhost/_ping
OK$
Avoid short container image names
If any of your docker-compose.yaml or Dockerfile files contain a short container image name, for instance
$ grep image: docker-compose.yaml
image: mysql:8.0.19
$
$ grep FROM Dockerfile
FROM python:3.9
$
edit the files to use the whole container image name instead
$ grep image: docker-compose.yaml
image: docker.io/library/mysql:8.0.19
$
$ grep FROM Dockerfile
FROM docker.io/library/python:3.9
$
Most often, short names have been used to reference Docker Hub Official Images (a catalogue), so a good guess would be to prepend the container image name with docker.io/library/
There are currently many different container image registries, not just Docker Hub (docker.io). Writing the whole container image name is thus good practice. Podman might otherwise complain, depending on how it is configured.
Rootless users can't bind to ports below 1024
If for instance
$ grep -A1 ports: docker-compose.yml
ports:
- 80:80
$
edit docker-compose.yaml so that the host port number >= 1024, for instance 8080
$ grep -A1 ports: docker-compose.yml
ports:
- 8080:80
$
An alternative solution is to adjust net.ipv4.ip_unprivileged_port_start with sysctl (see Shortcomings of Rootless Podman)
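For example, to allow rootless processes to bind to port 80 and above (a temporary change that does not persist across reboots unless written to a sysctl configuration file):
sudo sysctl net.ipv4.ip_unprivileged_port_start=80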
In case Systemd is missing
Most Linux distributions use Systemd, where you would preferably start the Podman service (which provides the REST API) by starting the Podman socket:
systemctl --user start podman.socket
or
systemctl start podman.socket
but in case Systemd is missing you could also start the Podman service directly
podman system service --time 0 unix:/some/path/podman.sock
Systemd gives the extra benefit that the Podman service is started on demand with Systemd socket activation and stops after some time of inactivity.
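If you want the socket to be available automatically at every login (an optional extra step, not part of the instructions above):
systemctl --user enable --now podman.socket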
Caveat: Swarm functionality is missing
A difference from Docker is that Swarm functionality is not supported when using docker-compose with Podman.
References:
https://www.redhat.com/sysadmin/podman-docker-compose
https://github.com/containers/podman/discussions/10644#discussioncomment-857897
Ensure Podman is installed on your machine.
You can install Podman Compose in a terminal with the following command:
pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz
cd into the directory where your docker-compose file is located.
Run podman-compose up
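A minimal usage sketch (the project directory is just an example):
cd ~/myproject          # directory containing the docker-compose.yml
podman-compose up -d    # start the services defined in the Compose file
podman ps               # verify that the containers are running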
See the following link for a decent introduction.
I've been given a Docker container which is run via a bash script. The container should set up a PHP web app; it then goes on to call other scripts and containers. It seems to work fine for others, but for me it's throwing an error.
This is the code:
sudo docker run -d \
--name eluci \
-v ./config/eluci.settings:/mnt/eluci.settings \
-v ./config/elucid.log4j.settings.xml:/mnt/eluci.log4j.settings.xml \
--link eluci-database:eluci-database \
/opt/eluci/run_eluci.sh
This is the error:
docker: Error response from daemon: create ./config/eluci.settings:
"./config/eluci.settings" includes invalid characters for a local
volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to
pass a host directory, use absolute path.
I'm running Docker on a CentOS VM using VirtualBox on a Windows 7 host.
From googling, it seems to be something to do with the mount; however, I don't want to change it in case the setting breaks something else or is relied upon by another Docker container. I still have a few more bash scripts to run, which should orchestrate the rest of the build process. As a complete newbie to Docker, this has got me stumped.
The docker run -v option does not accept relative host paths; you should provide an absolute path. The command can be rewritten as:
sudo docker run -d \
--name eluci \
-v "/$(pwd)/config/eluci.settings:/mnt/eluci.settings" \
-v "/$(pwd)/config/elucid.log4j.settings.xml:/mnt/eluci.log4j.settings.xml" \
--link eluci-database:eluci-database \
/opt/eluci/run_eluci.sh