Unable to locate Ubuntu packages during docker build - docker

... with various errors such as
Error writing to output file - write (28: No space left on device)
Error writing to file - write (28: No space left on device) [IP: 91.189.91.26 80]
It fails well before the point where it needs to copy any local files, so you can run the Dockerfile yourself and hopefully reproduce it.
Dockerfile: https://pastebin.com/BAsJ2BzF
As suggested elsewhere, I first attempted docker system prune.
Also, out of 500 GB I still have more than 126 GB free. Can this really be a local file system space issue?

You need to increase the disk image size in Docker's settings. On Docker Desktop, images and containers live inside a fixed-size disk image, so a build can run out of space there even when the host filesystem has plenty free.
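If you want to confirm where the space is actually going before resizing, Docker's built-in reporting and pruning commands are a reasonable first check (a minimal sketch; the prune commands delete cached data you may still want, so read their prompts):

# Show how much space images, containers, local volumes and build cache consume
docker system df

# Reclaim space from stopped containers, dangling images and unused networks
docker system prune

# Reclaim the build cache as well
docker builder prune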

Related

Problem understanding how to, if at all possible, run my docker file (.tar)

I received a .tar docker file from a friend who told me that it should contain all dependencies for a program that I've been struggling to get working, and that all I need to do is "run" the Docker file. The file is a .tar archive of around 3.1 GB. The program this file was set up to run is called opensimrt. The GitHub link to the program is as follows:
https://github.com/mitkof6/OpenSimRT
The google drive link to the Docker file is as follows:
https://drive.google.com/file/d/1M-5RnnBKGzaoSB4MCktzsceU4tWCCr3j/view?usp=sharing
This program has many dependencies; some big ones to note are that it runs on Ubuntu 18.04 and OpenSim 4.1.
I'm not a computer scientist by any means, so I've been struggling to even learn Docker basics like loading and running an image. However, I desperately need this program to work. If you have any steps or advice on how to run this .tar, I'd greatly appreciate it. Alternatively, if you are able to find a way to get opensimrt up and running and can post those steps, I'd be more than happy with that solution as well.
I've tried the commands "docker run" and "docker load" followed by their respective tags, file paths, args, etc. However, even when I fix various issues I always get stuck with a missing var/lib/docker/tmp/docker-import-....(random numbers) file. The numbers change every so often while trying to solve the issue, but eventually I always end up with some variation of this error: Error response from daemon: open /var/lib/docker/tmp/docker-import-3640220538/bin/json: no such file or directory.
PS: I have already extracted the .tar and there is no install guide/instructions, .exe, or installer application. As a result I'm not sure how to get the program installed and running.
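As a general aside (not part of the original thread): if the .tar was produced with docker save, the usual workflow is to load it into Docker rather than extract it, and then run the resulting image by name, roughly like this (the archive and image names below are placeholders):

# Load the saved image archive into the local Docker image store
docker load -i opensimrt.tar

# List the images that were just loaded to find the repository:tag to run
docker images

# Start a container from the loaded image (the name here is a placeholder)
docker run -it --rm some-repository:some-tag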

Change where Julia is storing /logs/manifest_usage.toml

I'm getting this error when I run Julia in Docker:
julia.core.JuliaError: Exception 'SystemError: opening file "/root/.julia/logs/manifest_usage.toml": Read-only file system' occurred while calling julia code:
I tried setting JULIA_HISTORY, but that doesn't seem to be respected.
It looks like you actually want to move your whole .julia folder, since anything under /root/ is likely to give you filesystem permission errors unless you always run as root within your Docker image.
You can control the location of the .julia directory at first install with the JULIA_DEPOT_PATH environment variable, which is described in a bit more detail in this answer: permissions for installing packages on julia in slurm cluster. Depending on how you are installing Julia, you may be able to sidestep the whole issue more easily by simply not using sudo when installing Julia.
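For illustration, a minimal sketch of the depot approach (the /opt/julia-depot location and permissions are assumptions, not something the linked answer prescribes):

# Point Julia's depot (packages, registries, and logs/manifest_usage.toml) at a writable path
export JULIA_DEPOT_PATH=/opt/julia-depot
mkdir -p /opt/julia-depot
chmod -R a+rwX /opt/julia-depot

# Subsequent Julia runs write their usage logs under /opt/julia-depot/logs
julia -e 'using Pkg; Pkg.status()'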

How can I fix the caching issue in TYPO3 and ddev?

I recently got into using ddev to develop TYPO3 pages, but I run into the same issue every once in a while. Sometimes (I don't really know what's causing it) the page just stops loading, and after a while this error message appears:
PHP Warning
Core: Error handler (BE): PHP Warning: rename(/var/www/html/var/cache/code/cache_core/5d5a7572dd900787722599.temp,/var/www/html/var/cache/code/cache_core/site-configuration.php): No such file or directory in /var/www/html/public/typo3/sysext/core/Classes/Cache/Backend/SimpleFileBackend.php line 234
I know that this error appears when TYPO3 has no permission to write its cache, but I don't know what I can do to prevent the issue. Restarting Docker fixes it for a short while, but eventually it happens again, and having to restart Docker every 10 to 20 minutes really costs a lot of time. Does anybody know what kind of configuration I need to prevent this issue?
Btw, I'm using Docker on Windows with TYPO3 9.5.8
As there is no officially accepted answer yet, I'll elaborate on what has been said:
The issue can be resolved by following Susi's example in the comments of the initial post:
Create a docker-compose.tempfs.yaml in the .ddev directory (look carefully at the indentation; the spaces are significant in YAML):
version: '3.6'
services:
  web:
    volumes:
      - type: tmpfs
        target: /var/www/html/var
        tmpfs:
          size: 268435456
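Once the file is in place, the change only takes effect after the project is recreated (a small usage note, not from the original comment):

# Recreate the ddev containers so /var/www/html/var is mounted as tmpfs
ddev restart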
Combining this with the setup for NFS described in https://ddev.readthedocs.io/en/stable/users/performance/ also increases performance.
Mind: the most feasible approach for using NFS seems to be creating your own package directory, which you include via a composer "path" repository inside the ddev directory that already gets mounted (e.g. /projectname/Packages/Vendor.MyPackage).
Mounting directories above the ddev directory is complicated and prone to error when using symlinks.
I have the same problem and tried it with the YAML file, but after I created the file and started ddev I got this error:
Uncaught RuntimeException: Could not create directory "/var/www/html/var/log/"!
Does someone have a hint? I also deleted the var folder; after deletion the page runs without issue, but after restarting ddev the error reappears.
I'm on Mac.

Can I build a Docker image to "cache" a yocto/bitbake build?

I'm building a Yocto image for a project but it's a long process. On my powerful dev machine it takes around 3 hours and can consume up to 100 GB of space.
The thing is that the final image is not "necessarily" the end goal; it's my application that runs on top of it that is important. As such, the yocto recipes don't change much, but my application does.
I would like to run continuous integration (CI) for my app and even continuous delivery (CD). But both are quite hard for now because of the size of the yocto build.
Since the build does not change much, I thought of "caching" it in some way and using it for my application's CI/CD, and Docker came to mind. That would be quite interesting, as I could maintain that image, share it with colleagues who need to work on the project, and use it in CI/CD.
Could a custom Docker image be built for that kind of use?
Would it be possible to build such an image completely offline? I don't want to have to upload the 100 GB and re-download it on the build machines...
Thanks!
1. Yes.
I've used Docker to build Yocto images for many different reasons, always with positive results.
2. Yes, with some work.
You want to take advantage of the fact that Yocto caches everything it needs for your build in what it calls the "Shared State Cache". This is normally located in your build directory under ${BUILDDIR}/sstate-cache, and it contains exactly what you are looking for in this case. There are a couple of options for how to get these files to your build machines.
Option 1 is using sstate mirrors:
This isn't completely offline, but lets you download a much smaller cache and build from that cache, rather than from source.
Here's what's in my local.conf file:
SSTATE_MIRRORS ?= "\
file://.* http://my.shared-computer.com/some-folder/PATH"
Don't forget the PATH at the end. That is required. The build system substitutes the correct path within the directory structure.
Option 2 lets you keep a local copy of your sstate-cache and build from that locally.
In your Dockerfile, create the sstate-cache directory (the location isn't important here; I like /opt for my purposes):
RUN mkdir -p /opt/yocto/sstate-cache
Then be sure to bind-mount these directories when you run your build in order to preserve the contents, like this:
docker run ... -v /place/to/save/cache:/opt/yocto/sstate-cache
Edit the local.conf in your build directory so that it points at these folders:
SSTATE_DIR ?= "/opt/yocto/sstate-cache"
In this way, you can get your cache onto your build machines in whatever way is best for you (scp, nfs, sneakernet).
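For example, seeding a fresh build machine could look roughly like this (a sketch; the host paths are placeholders and my-yocto-builder is a hypothetical image name):

# Copy an existing shared-state cache to the build machine (paths are examples)
rsync -a /place/to/save/cache/ builder:/place/to/save/cache/

# On the build machine, run the containerized build with the cache bind-mounted
# where local.conf expects it (SSTATE_DIR = /opt/yocto/sstate-cache)
docker run --rm -it -v /place/to/save/cache:/opt/yocto/sstate-cache my-yocto-builder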
Hope this helps!

Error: ./ts3server: not found

I'm trying to make a TeamSpeak image running on Alpine Linux, but honestly I'm not sure why Docker says
./ts3server: not found
This is the Github page with the Dockerfile code:
https://github.com/signofkoen/docker-teamspeak/blob/snapshot/Dockerfile
Container log:
/opt/teamspeak3-server_linux_amd64/ts3server_minimal_runscript.sh: line 8: ./ts3server: not found
Does anyone know what I'm doing wrong? I think I did something wrong with the extracting part, but I'm not sure.
The ts3server binary in your image looks like it was built against glibc, but it is unable to find the appropriate runtime loader on the filesystem.
You can see this by running ldd /opt/teamspeak/ts3server, which reports:
Error loading shared library ld-linux-x86-64.so.2:
No such file or directory (needed by ts3server)
This is the direct cause of your error.
I see that you're starting with the skardoska/alpine-glibc image, which sounds like maybe it was designed to provide a standard glibc environment to Alpine linux, but the image does not appear to have been constructed in a way that is compatible with your binaries. Looking at the description at https://hub.docker.com/r/skardoska/alpine-glibc/, it appears this may be a known problem, because the description says, "Waiting for https://github.com/andyshinn/alpine-pkg-glibc/issues/1".
You may be better off just starting with a glibc-based distribution like Fedora or Ubuntu.
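As a general diagnostic (an aside, not from the original answer), you can inspect which runtime loader a binary expects and whether it exists in the image:

# Show the ELF interpreter the binary was linked against (readelf is part of binutils)
readelf -l /opt/teamspeak3-server_linux_amd64/ts3server | grep interpreter
# glibc binaries typically request /lib64/ld-linux-x86-64.so.2

# Check whether that loader is present; stock Alpine ships musl, so it usually is not
ls -l /lib64/ld-linux-x86-64.so.2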
