Can't attach CircleCI Workspace from Windows to Linux due to "Cannot change ownership to uid 3434" - tar

I am using CircleCI's persistent workspace feature to run jobs on the same build folder across Linux and Windows executor types. Going from Linux to Windows worked, but when I went from Windows to Linux I got this error when CircleCI attempted to attach the workspace:
Applying workspace layers:
9ba3eddc-3658-43c2-858b-aea39250af3e
25c476af-8804-4125-b979-05a62a9ac204
Error applying workspace layer for job 25c476af-8804-4125-b979-05a62a9ac204: Error extracting tarball /tmp/workspace-layer-25c476af-8804-4125-b979-05a62a9ac204854634413 : tar: project/.circleci/config.yml: Cannot change ownership to uid 3434, gid 197121: Invalid argument
Looking at the error, it's clear that the UID/GID recorded in the archive doesn't exist on the system. I tried creating a user and group with the same UID/GID it was erroring on, but I still got the same unable-to-change-ownership error.
I was expecting CircleCI to extract the files and ignore the user/group ownership, since you can't guarantee the UID/GID exists on the target system.
I opened a support ticket, but I'm hoping for a faster solution to this issue.

I found a solution: use the TAR_OPTIONS environment variable to force tar to ignore the owner/group stored in the archive.
Here is what I added to the job that attaches the workspace when the previous job ran on Windows:
build-app:
  build:
    docker:
      - image: Dockerhub.com/myrepo/myimage:1.0.0
        environment:
          TAR_OPTIONS: --no-same-owner
Using the TAR_OPTIONS environment variable to inject the option --no-same-owner allowed CircleCI to extract the tarball without issue.
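For context, GNU tar prepends the contents of the TAR_OPTIONS environment variable to its command line, which is why setting it on the job's image is enough. A minimal local sketch of the same mechanism (the archive name is just an example):

# GNU tar reads extra default options from TAR_OPTIONS, so any extraction done
# in this shell ignores the owner/group recorded in the archive and leaves the
# extracted files owned by the current user.
export TAR_OPTIONS="--no-same-owner"
tar -xf workspace-layer.tar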

Related

dotnet web application runs in docker container but is not active

I've tried editing the Dockerfile; the build succeeds and the container starts, but it doesn't show up when I do "docker ps". I get this error when I check the container logs:
Could not execute because the specified command or file was not found.
Possible reasons for this include:
You misspelled a built-in dotnet command.
You intended to execute a .NET Core program, but dotnet-QuintAPI.dll does not exist.
You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
Figured it out: the executable file specified in the Dockerfile was misspelled. Once that was corrected, everything worked perfectly.
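For anyone hitting the same message, a quick check is to compare what the published output inside the image actually contains with what the Dockerfile tries to run (the image name and path below are assumptions, not from the question):

# List the app directory inside the built image to see the real assembly name,
# then make sure the Dockerfile's ENTRYPOINT/CMD spells exactly that name.
docker run --rm --entrypoint ls quintapi-image /app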

How do you pass file_system_blacklist arg to Datadog Docker Agent run command?

I want to exclude a path to avoid getting my logs spammed like so:
(disk.py:75) | Unable to get disk metrics for /host/proc/sys/fs/binfmt_misc:
[Errno 40] Too many levels of symbolic links:
'/host/proc/sys/fs/binfmt_misc'\n","stream":"stdout","time":"2020-03-12T23:01:38.424330408Z"}
I'm running Datadog as a Docker agent using the command here:
https://docs.datadoghq.com/agent/docker/?tab=standard#installation
How do I specify paths to exclude in the docker run command? Is it an environment variable?
Found the answer here: https://github.com/DataDog/datadog-agent/issues/3329
The field is mount_point_blacklist
If you want to remove the warning, you can try adding none and shm to the excluded_filesystems in disk.yaml. This file should exist or be created in the Agent's conf.d directory.
Otherwise, you'll find more options here.
If you are looking to exclude the agent's own logs within the platform, you can look at excluding the agent itself (doc).
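A rough sketch of the disk.yaml approach described above, assuming the containerized Agent v6/7 layout (the file name, mount path, and image tag may need adjusting for your setup):

# Hypothetical disk check config with the filesystems suggested above excluded.
cat > disk.yaml <<'EOF'
init_config:

instances:
  - use_mount: false
    excluded_filesystems:
      - none
      - shm
EOF

# The standard docker run from the Datadog docs, plus a read-only mount of the
# custom disk check config into the Agent's conf.d directory.
docker run -d --name datadog-agent \
  -e DD_API_KEY=<YOUR_API_KEY> \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -v "$(pwd)/disk.yaml:/etc/datadog-agent/conf.d/disk.d/conf.yaml:ro" \
  datadog/agent:7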

docker build: no permission to read from '/home/benny/.ICEauthority'

I'm trying to build a Docker image, but I get this error:
sudo docker build . -t django-demo
error checking context: 'no permission to read from '/home/benny/.ICEauthority''
Any ideas why this is happening?
--------------------------
ubuntu 18.04
Docker version 18.09.9
Create a new directory, place your Dockerfile in it, and then run your sudo docker build . -t django-demo command from that directory. This should solve your problem. I found related problems and a solution in this external thread.
Generally, to solve this kind of problem you should add a .dockerignore file listing all the files you don't want sent to the build context (i.e. the files Docker doesn't need to build your image).
In your case, simply creating a .dockerignore with the following content should solve the issue:
.ICEauthority
Note that, specifically in your case, you should not run your docker build command directly from your home directory, because the entire content of your home is sent to the build context (which is heavy, might make you run out of disk space, or cause permission issues like this one).
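Putting both answers together, a minimal sketch (paths are just examples) looks like this:

# Option 1: build from a clean directory that contains only what the image needs.
mkdir -p ~/django-demo-build
cp Dockerfile ~/django-demo-build/        # plus anything the Dockerfile COPYs
cd ~/django-demo-build
sudo docker build . -t django-demo

# Option 2: keep building from the current directory, but keep the unreadable
# file (and anything else Docker doesn't need) out of the build context.
printf '.ICEauthority\n' >> .dockerignore
sudo docker build . -t django-demo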

How to run Bazel container images on OSX?

According to the documentation at bazelbuild/rules_docker, it should be possible to work with these container images on OSX, and it also claims that it's possible to do so without docker.
These rules do not require / use Docker for pulling, building, or pushing images. This means:
They can be used to develop Docker containers on Windows / OSX without boot2docker or docker-machine installed.
They do not require root access on your workstation.
How do I do that? Here's a simple rule:
go_image(
    name = "helloworld_image",
    importpath = "github.com/nictuku/helloworld",
    library = ":go_default_library",
    visibility = ["//visibility:public"],
)
I can build the image with bazel build :helloworld_image. It produces a tarball in bazel-bin, but it won't run:
INFO: Running command line: bazel-bin/helloworld_image
Loaded image ID: sha256:08d312b529d30431c68741fd3a31468a02533f27a8c2c29eedc969dae5a39852
Tagging 08d312b529d30431c68741fd3a31468a02533f27a8c2c29eedc969dae5a39852 as bazel:helloworld_image
standard_init_linux.go:185: exec user process caused "exec format error"
ERROR: Non-zero return code '1' from command: Process exited with status 1.
It's trying to run the Linux binary, but this is OSX, which is silly.
I also tried doing a "docker load" on the .tar content but it doesn't seem to like that format.
$ docker load -i bazel-bin/helloworld_image-layer.tar
open /var/lib/docker/tmp/docker-import-330829602/app/json: no such file or directory
Help? Thanks!
You are building for your host platform by default, so you need to build for the container's platform if you want to run the image.
Since you are using a Go binary, you can cross-compile by specifying --cpu=k8 on the command line. Ideally we would be able to just say that the Docker image needs a Linux binary (so there would be no need for the --cpu command-line flag), but this is still a work in progress in Bazel.
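In practice that means something like the following sketch, using the target from the question (note that actually loading and running the resulting image still requires a Docker daemon):

# Cross-compile the Go binary and build the image for a Linux container
# instead of the macOS host.
bazel build --cpu=k8 :helloworld_image

# bazel run on an _image target loads it into the local Docker daemon,
# so this step still needs Docker to be available.
bazel run --cpu=k8 :helloworld_image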

Unable to start any container when Volumes are enabled Docker Toolbox

I am running Docker Toolbox v1.13.1a on Windows 7 Pro Service Pack 1 (x64), with VirtualBox version 5.1.14 r112924.
When I try to run any Docker image, e.g. the official postgres image from Docker Hub, with volumes disabled, it works fine!
But when I enable volumes, it fails.
I tried all the official documentation.
The VM has the shared folder as required and has full access to it:
shared folder screenshot
In the case of my postgresql example, it crashes with the following log:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... LOG: could not link file "pg_xlog/xlogtemp.27" to "pg_xlog/000000010000000000000001": Operation not permitted
FATAL: could not open file "pg_xlog/000000010000000000000001": No such file or directory
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
I know it's a problem with folder permissions, but I'm kinda stuck!
A ton of thanks in advance.
I've been busy with this problem all day, and my conclusion is that it's currently simply not possible to run postgresql inside a docker container while keeping your data persistent in a separate volume.
I even tried running the container without linking to a volume, copying the data that was originally in /var/lib/postgresql into a folder on my host OS (Windows 10 Home), and then copying that into the folder that was then linked to the container itself.
Alas, I got the next error:
FATAL: data directory "/var/lib/postgresql/data/pgadmin" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
In conclusion: something goes wrong with the ownership of the data directory, and to be able to fix it you'd need a Unix command line on Windows that can talk to Docker (something currently not possible with Bash on Ubuntu on Windows, which runs Ubuntu 16.04 binaries).
Maybe, in the future, you'll be able to run the needed commands (found here, under Arbitrary --user Notes), but these are *nix commands and PowerShell (started by Kitematic) can't run them. Bash on Ubuntu on Windows could run them, but that shell has no connection to the Docker daemon/service on Windows...
TL;DR: Lost a day of work: It is currently impossible on Windows.
I have been trying to fix this issue as well.
At first I thought it was a symlink problem (because the first error fails with "could not link ... operation not permitted").
To be sure symlinks are permitted you have to:
share a folder in VirtualBox
run VirtualBox as administrator (if your account is in the administrator group): right-click virtualbox.exe and select "Run as Administrator"
if your account is not an administrator, add the symlink privilege with secpol.msc > "Local Policies > User Rights Assignments" and add your user to "Create symbolic links"
enable symlinks for your shared folder in VirtualBox:
VBoxManage setextradata VM_NAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARED_FOLDER_NAME 1
Alternatively, you can also use the c:\Users\username folder, which is shared and symlink-enabled by default by the Docker Toolbox installation (the full sequence is sketched below).
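For reference, the whole sequence from an elevated prompt on the host might look like this (the VM name and share name are the Docker Toolbox defaults; adjust if yours differ):

# Allow symlink creation on the default Toolbox VM's "c/Users" shared folder.
VBoxManage setextradata default VBoxInternal2/SharedFoldersEnableSymlinksCreate/c/Users 1
VBoxManage getextradata default enumerate    # verify the setting was stored

# Restart the VM so the change takes effect.
docker-machine restart default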
Now I can create symlinks in the shared folder from the docker container, but I still get the same error: "could not link ... operation not permitted".
So the reason must be somewhere else, in the file permissions as you said, but I do not see why.
