Hi, I'm using K3s to deploy https://github.com/LAION-AI/Open-Assistant.
When I use kompose to convert the docker-compose.yaml in Open-Assistant into multiple YAML files (so I can deploy them into K3s with kubectl apply -f .),
kompose tells me it needs Docker available, because some of those services have image build options.
I found that by default K3s uses cri-o as its container layer, so I wonder: if I install Docker on the K3s machine, will it conflict with cri-o and affect the running of K3s?
Or is there another good way to build images in this situation?
I can't find any reference documentation saying that kompose supports image builds with anything other than Docker.
Here is what I got:
[root@master1 Open-Assistant]# kompose convert --build=local -f docker-compose.yaml -vvv
DEBU Checking validation of provider: kubernetes
DEBU Checking validation of controller:
WARN Restart policy 'unless-stopped' in service inference-text-client is not supported, convert it to 'always'
WARN Restart policy 'unless-stopped' in service inference-server is not supported, convert it to 'always'
INFO Network open-assistant-default is detected at Source, shall be converted to equivalent NetworkPolicy at Destination
INFO Build key detected. Attempting to build image 'oasst-backend'
DEBU Build image working dir is: /root/Open-Assistant
DEBU Build image service build is: /root/Open-Assistant
DEBU Build image context is: /root/Open-Assistant
INFO Building image 'oasst-backend' from directory 'Open-Assistant'
DEBU Created temporary file /tmp/kompose-image-build-2158795954 for Docker image tarballing
DEBU Image oasst-backend build output:
FATA Unable to build Docker image for service backend: Unable to build image. For more output, use -v or --verbose when converting.: dial unix /var/run/docker.sock: connect: no such file or directory
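One possible workaround (a sketch only, under the assumption that you have a second machine with Docker plus SSH access to the K3s node; the image name oasst-backend and the host master1 are taken from the output above): build the images on a Docker host and stream them into the containerd instance bundled with K3s, so nothing has to be installed alongside the K3s runtime.

```shell
# Sketch, not a verified recipe: build on a machine that has Docker,
# then import the image into k3s's containerd on the node.
IMAGE=oasst-backend
docker build -t "$IMAGE" ./Open-Assistant
docker save "$IMAGE" | ssh root@master1 'k3s ctr images import -'
```

The deployment YAML produced by kompose would then reference the imported image with an imagePullPolicy that avoids a registry pull.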
I've run into a weird permissions issue in WSL 2 and am struggling to solve it. We have multiple containers running in Docker via docker compose. Everything was working fine until I ran make rebuild with an updated dependencies file, and now I'm receiving the following exception:
(base) jam@Dagobah-System:~/dev$ sudo make rebuild
Input arguments
REPO: local
CONTAINER TAG: 0
SSH KEY PATH: /home/jam/.ssh/id_rsa
BASE REQS: ./deploy/environment/requirements.base.txt
CUSTOM REQS: ./deploy/environment/requirements.custom.txt
LOCAL REQS: ./deploy/environment/requirements.local.txt
TIMEZONE: Etc/UTC
DOCKERFILE: Dockerfile.local
Building base image
[+] Building 0.0s (1/2)
=> ERROR [internal] load build definition from Dockerfile.local 0.0s
------
> [internal] load build definition from Dockerfile.local:
------
failed to solve with frontend dockerfile.v0: failed to read dockerfile: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system
Finished building base image
Creating network "dev_default" with the default driver
ERROR: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
makefile:24: recipe for target 'start-infra' failed
make: *** [start-infra] Error 1
If I close everything out and restart the machine, I can bring the old containers back up and they run fine. This looks to be some sort of permissions issue between WSL 2, Docker, and the Docker directory on Windows 10 Pro.
Windows 10 Pro
Version: 10.0.19041 Build 19041
It shouldn't matter, but the user in WSL has the same credentials as the user on Windows. I also configured Docker Desktop for Windows with:
Start Docker Desktop when you login
Expose daemon on tcp://localhost:2375 without TLS
Use the WSL 2 based engine
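Since the errors above all complain about a read-only file system under /var/lib/docker, a quick diagnostic sketch (the directory path is taken from the error; the helper itself is hypothetical) is to test whether the Docker data directory is actually writable from inside WSL:

```shell
# Diagnostic sketch: check whether a Docker data directory is writable.
# DIR defaults to the path from the error message above.
DIR=${1:-/var/lib/docker}
if touch "$DIR/.rw-test" 2>/dev/null; then
    rm -f "$DIR/.rw-test"
    echo "$DIR is writable"
else
    echo "$DIR is read-only or not accessible"
fi
```

If this reports read-only even after a restart, the problem is in how the distro or mount is set up rather than in Docker itself.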
I would like to deploy an application to a remote server by using docker-compose with a remote context, following this tutorial.
The Dockerfile contains
FROM ubuntu
The docker-compose.yml contains
version: "3.8"
services:
  ubuntu_test:
    build: .
The remote context remote is set as ssh://root@host
When I run docker-compose --context remote up, it crashes with the following error message
runtime/cgo: pthread_create failed: Resource temporarily unavailable
runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
PC=0x7fb21d93bfb7 m=3 sigcode=18446744073709551610
goroutine 0 [idle]:
runtime: unknown pc 0x7fb21d93bfb7
stack: frame={sp:0x7fb21aee9840, fp:0x0} stack=[0x7fb21a6ea288,0x7fb21aee9e88)
[...]
ERROR: Couldn't connect to Docker daemon at http+docker://ssh - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
What is already working
Copying the source code to the remote server, logging in and running docker-compose up
Unpacking docker-compose into the corresponding docker commands
docker --context remote build .: works
docker --context remote run ubuntu: works
Using docker-compose --context remote build on the local machine to build the images on the remote server
In summary, everything works except for docker-compose --context remote up, and I can't for the life of me figure out why. All I have is this cryptic error message (but obviously, Docker is running on the remote server, otherwise docker with the remote context would fail too).
Edit: My problem can be reduced to: What is the difference between docker --context remote run ubuntu and docker-compose --context remote up (as defined in my case)?
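One thing worth trying, suggested by the DOCKER_HOST hint in the error output above (this is an assumption, not a verified fix for this setup): docker and docker-compose resolve contexts differently in some versions, but both honor the DOCKER_HOST environment variable, so bypassing the named context entirely can isolate the problem:

```shell
# Sketch: point docker-compose at the remote daemon directly over SSH,
# bypassing the named context ("root@host" is from the question above).
export DOCKER_HOST=ssh://root@host
docker-compose up
```

If this works while --context remote does not, the issue is in compose's context handling rather than in the SSH connection itself.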
When I try to execute a build of the images of Syndesis using Minishift, it finishes with this error:
[ERROR] Failed to execute goal
io.fabric8:fabric8-maven-plugin:3.5.38:build (exec) on project s2i:
Failed to execute the build: Unable to build the image using the
OpenShift build service: Can't instantiate binary build, due to error
reading/writing stream. Can be caused if the output stream was closed
by the server. Connection reset
I checked that minishift is running with "minishift status":
$ minishift status
Minishift: Running
Profile: minishift
OpenShift: Running (openshift v3.11.0+82a43f6-231)
DiskUsage: 76% of 20G (Mounted On: /mnt/sda1)
CacheUsage: 1.679 GB (used by oc binary, ISO or cached images)
and checked that the proper project/pods are installed with "oc get pods" command.
The problem was that minishift was running out of space.
I could recover some space with the command
$ syndesis dev --cleanup
When you do this, make sure the nip.io domains work on your machine. If that's not the case, add the following entry to your /etc/hosts:
$IP $IP.nip.io syndesis.$IP.nip.io docker-registry-default.$IP.nip.io
where $IP is the IP of minishift, which you can find with:
$ minishift ip
192.168.42.58
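Putting the two steps above together, a small sketch for generating that /etc/hosts line (the default IP here is the example value from the output above; on a real machine it would come from minishift ip):

```shell
# Sketch: build the /etc/hosts entry from the minishift IP.
# Replace the default with "$(minishift ip)" on a real machine.
IP=${IP:-192.168.42.58}
echo "$IP $IP.nip.io syndesis.$IP.nip.io docker-registry-default.$IP.nip.io"
```

Appending that line to /etc/hosts (with sudo) makes the nip.io names resolve even when the upstream DNS blocks them.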
I'm trying to deploy a Docker image from Docker Hub on OpenShift.
I created an image with a simple Spring Boot REST application:
https://hub.docker.com/r/ernst1970/my-rest
After logging into OpenShift and choosing the correct project, I do
oc new-app ernst1970/my-rest
And I get
W0509 13:17:28.781435 16244 dockerimagelookup.go:220] Docker registry lookup failed: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
error: Errors occurred while determining argument types:
ernst1970/my-rest as a local directory pointing to a Git repository: GetFileAttributesEx ernst1970/my-rest: The system cannot find the path specified.
Errors occurred during resource creation:
error: no match for "ernst1970/my-rest"
The 'oc new-app' command will match arguments to the following types:
1. Images tagged into image streams in the current project or the 'openshift' project
- if you don't specify a tag, we'll add ':latest'
2. Images in the Docker Hub, on remote registries, or on the local Docker engine
3. Templates in the current project or the 'openshift' project
4. Git repository URLs or local paths that point to Git repositories
--allow-missing-images can be used to point to an image that does not exist yet.
See 'oc new-app -h' for examples.
I also tried with
oc new-app mariadb
But got the same error message.
I thought this might be a proxy problem. So I added the proxy to my .profile:
export http_proxy=http://ue73011:secret@dev-proxy.wzu.io:3128
export https_proxy=http://ue73011:secret@dev-proxy.wzu.io:3128
Unfortunately this did not change anything.
Any ideas why this is not working?
Your Docker daemon needs the proxy so it can reach Docker Hub. You can specify the proxy server by providing it as an environment variable for the Docker daemon.
Take a look at the official Docker documentation: https://docs.docker.com/config/daemon/systemd/
sudo mkdir -p /etc/systemd/system/docker.service.d
Add a file /etc/systemd/system/docker.service.d/http-proxy.conf with the following content:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/" "NO_PROXY=localhost,127.0.0.1,docker-registry.example.com,.corp"
Reload your changes and restart the Docker daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker
Verify by doing a simple "docker pull ... "
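As an alternative to the systemd drop-in (a sketch, assuming Docker Engine 23.0 or later, which added daemon-level proxy settings to daemon.json; the proxy values mirror the example above), the same configuration can live in /etc/docker/daemon.json:

```json
{
  "proxies": {
    "http-proxy": "http://proxy.example.com:80/",
    "https-proxy": "http://proxy.example.com:80/",
    "no-proxy": "localhost,127.0.0.1,docker-registry.example.com,.corp"
  }
}
```

A daemon restart is still required after editing the file.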
I am using Jenkins to make builds of a project, but now my client wants the builds to run inside a Docker image. I have installed Docker on the server and it's running on 172.0.0.1:PORT. I have installed the Docker plugin and assigned this TCP URL as the Docker URL. I have also created an image with the name jenkins-1.
In the project configuration, I use the build environment "Build with Docker Container" and provide the image name. Then, under Build, I add an "Execute Shell" step and run the build.
But it gives the Error:
Pull Docker image jenkins-1 from repository ...
$ docker pull jenkins-1
Failed to pull Docker image jenkins-1
FATAL: Failed to pull Docker image jenkins-1
java.io.IOException: Failed to pull Docker image jenkins-1
at com.cloudbees.jenkins.plugins.docker_build_env.PullDockerImageSelector.prepareDockerImage(PullDockerImageSelector.java:34)
at com.cloudbees.jenkins.plugins.docker_build_env.DockerBuildWrapper.setUp(DockerBuildWrapper.java:169)
at hudson.model.Build$BuildExecution.doRun(Build.java:156)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
at hudson.model.Run.execute(Run.java:1720)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:404)
Finished: FAILURE
I have just run into the same issue. There is a 'Verbose' check-box in the build-environment configuration, behind the 'Advanced...' link, that expands on the error details:
CloudBees plug-in Verbose option
In my case I ran out of space downloading the build Docker images. Expanding ec2 volume has resolved the issue.
But there is ongoing trouble with space, as Docker does not automatically clean up images, so I ended up adding a manual cleanup step to the build:
docker volume ls -qf dangling=true | xargs -r docker volume rm
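A broader cleanup sketch along the same lines (assuming Docker 1.13 or later, which introduced the prune subcommands; whether you want -f depends on your build environment):

```shell
# Cleanup sketch: reclaim space from dangling images, stopped containers,
# and dangling volumes. The last line is the step from the answer above;
# "xargs -r" makes it a no-op when there is nothing to remove.
docker image prune -f
docker container prune -f
docker volume ls -qf dangling=true | xargs -r docker volume rm
```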
Complete build script:
https://bitbucket.org/vk-smith/dotnetcore-api/src/master/ci-build.sh?fileviewer=file-view-default