I am trying to build my own Linux image using Buildroot in Docker with GitLab CI. Everything goes fine until the build starts downloading the "linux" repository; then I get an error like the one below.
>>> linux d0f5c460aac292d2942b23dd6199fe23021212ad Downloading
Doing full clone
Cloning into bare repository 'linux-d0f5c460aac292d2942b23dd6199fe23021212ad'...
Looking up git.ti.com ... done.
Connecting to git.ti.com (port 9418) ... 198.47.28.207 done.
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
--2023-01-05 11:53:37-- http://sources.buildroot.net/linux-d0f5c460aac292d2942b23dd6199fe23021212ad.tar.gz
Resolving sources.buildroot.net (sources.buildroot.net)... 104.26.1.37, 172.67.72.56, 104.26.0.37, ...
Connecting to sources.buildroot.net (sources.buildroot.net)|104.26.1.37|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2023-01-05 11:53:37 ERROR 404: Not Found.
package/pkg-generic.mk:73: recipe for target '/builds/XXX/XXX/output/build/linux-d0f5c460aac292d2942b23dd6199fe23021212ad/.stamp_downloaded' failed
make: *** [/builds/XXX/XXX/output/build/linux-d0f5c460aac292d2942b23dd6199fe23021212ad/.stamp_downloaded] Error 1
Cleaning up project directory and file based variables
00:02
ERROR: Job failed: exit code 1
When I build the image outside Docker, this repository downloads without problems, and a while ago the same Docker-based build also worked fine. Could a poorer network connection be the problem? This package is bigger than the others.
You are using a custom git repository (git.ti.com) which is not responding and which Buildroot knows nothing about.
For this reason you cannot expect a mirror copy to be available on sources.buildroot.net: Buildroot only keeps copies of the packages distributed within it.
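If the underlying problem is a flaky network inside the CI container, one workaround (a sketch, not part of the original answer) is to keep Buildroot's download cache between jobs so the large kernel fetch only has to succeed once. BR2_DL_DIR is a standard Buildroot variable; the cache path here is just an example matching the job's build directory:
export BR2_DL_DIR=/builds/XXX/XXX/dl   # a directory the GitLab runner caches between jobs
make O=output source                   # download-only step; fills BR2_DL_DIR without building
make O=output                          # the actual build then reuses the cached tarballs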
I am using IBM Cloud Code Engine. Building the container locally, pushing it to the IBM Container Registry and then creating the Code Engine app from it works. Now, I wanted to build the container image using Code Engine. The code is in a public GitHub repository. I am using the build strategy "Dockerfile" based on this Dockerfile.
When I submit a build using the console, it fails after a while and I see these lines in the output.
#13 1.368 Collecting ibm_db>=3.0.2
#13 1.374 Downloading ibm_db-3.1.2.tar.gz (1.1 MB)
#13 1.381 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 234.4 MB/s eta 0:00:00
#13 1.711 Installing build dependencies: started
#13 5.425 Installing build dependencies: finished with status 'done'
#13 5.430 Getting requirements to build wheel: started
#13 6.751 Getting requirements to build wheel: finished with status 'done'
#13 6.752 ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/tmp/tmpqm5wa7sq/output.json'
How can I investigate this further? Could the errors be from different tools creating the container image? How would I proceed?
There are several options for approaching the issue:
First and most important: obtain and check the logs for anything suspicious. The logs are available in the Code Engine browser console for the buildrun, or via the command line:
ibmcloud ce buildrun logs --name mybuildrun
In addition, you can get extra information on the buildrun by using either
ibmcloud ce buildrun get --name mybuildrun
or
ibmcloud ce buildrun events --name mybuildrun
On a new GitHub branch or, better, locally, modify the Dockerfile and add extra steps to access files of interest or print additional debugging output. The latter could be done by adding || cat /some/log/file to a failing step (see the sketch below).
Make sure that all requirements and packages being installed are correct and available.
Check the dependencies for known issues.
In the case that led to my question, I needed more debug output to find an issue with a broken software module from my requirements.txt.
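As a minimal sketch of the Dockerfile-debugging idea above (the base image, paths and file names are placeholders, not the original Dockerfile): run pip verbosely and dump anything useful before the build aborts.
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
# -v makes pip show which package it is building; if the step fails,
# list pip's temporary directories so the buildrun log shows what was left behind.
RUN pip install -v -r requirements.txt || (ls -lR /tmp; exit 1)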
My Rails app is deployed via Chef, and I'm having trouble cloning a private repo of a Yarn package into my project upon deployment. Linking to the private repo works just fine locally. The repo is hosted at my company's internal GitHub URL (github.mycompany.com). The app also uses a private repo for a gem, and that retrieval works just fine. The issue comes about when Chef tries to run rake assets:precompile on the production server:
Error executing action `run` on resource 'ruby_execute[rake assets:precompile]'
================================================================================
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of ["/opt/ruby_build/builds/system/bin/ruby", "/opt/ruby_build/builds/system/bin/bundle", "exec", "/opt/ruby_build/builds/system/bin/ruby", "/var/www/app_name/releases/latest/vendor/bundle/ruby/2.5.0/bin/rake", "assets:precompile"] ----
STDOUT: yarn install v1.21.1
[1/4] Resolving packages...
[2/4] Fetching packages...
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
STDERR: error Command failed.
Exit code: 128
Command: git
Arguments: ls-remote --tags --heads git@<repo_location>.git
Directory: /var/www/app_name/releases/latest
Output:
Load key "/home/ubuntu/.ssh/id_rsa": Permission denied
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
This is my first time using Yarn. I have read some issues on the Yarn repo about cloning private repos, but I think this situation is a bit different. This setup also works fine for all our other Rails apps that don't use any private yarn repos.
1) Why would permission be denied for my SSH key? It's created from a deploy_key in a data bag and, as I said before, works just fine for installing a private gem repo (we have a Chef GitHub user, which is added as a collaborator on both the gem and yarn repos). (See the check sketch below.)
2) Is there a way to completely package this repo inside my project, precompile assets locally, then force our production server to serve the precompiled assets from the /public folder? This doesn't feel right to me but it might be an acceptable workaround for now.
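Regarding question 1, here is a hypothetical check (the key path comes from the log above; the deploy user name is an assumption) to see whether the account that runs the precompile step can actually read the key:
ls -l /home/ubuntu/.ssh/id_rsa                    # expect mode 600 and ownership by the user running rake
sudo -u deploy ssh -T git@github.mycompany.com    # 'deploy' is a placeholder for the actual deploy user
A "Load key ... Permission denied" message from ssh usually means the key file exists but is not readable by that user, rather than a problem with the key contents.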
I have created a cluster on Digital Ocean (DC/OS 1.9) using Terraform, following these instructions here.
Everything seems to have installed correctly. To pull from a private Docker repo, I need to add a compressed .docker archive to /home/core/ and fetch it during deployment by including it in my JSON:
"fetch":[
{
"uri":"file:///home/core/docker.tar.gz"
}
]
Based on these instructions: https://docs.mesosphere.com/1.9/deploying-services/momee/docker-creds-agent/
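For reference, the archive was built roughly the way those docs describe (a reconstruction, assuming the .docker directory with the registry credentials lives in /home/core):
cd /home/core
tar -czf docker.tar.gz .docker    # pack the .docker directory (with its config.json) so the fetcher can extract it into the sandbox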
And I'm still getting errors:
Failed to launch container:
Failed to fetch all URIs for container 'abc123-xxxxx' with exit status: 256
Upon looking at the logs of one of the agents:
Starting container '123-abc-xxx' for task 'my-docker-image-service.321-dfg-xxx' (and executor 'my-docker-image-service.397d20cb-1
Begin fetcher log (stderr in sandbox) for container 123-abc-xxx from running command: /opt/mesosphere/packages/mesos--aaedd03eee0d57f5c0d49c
Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/94af100c-4dc2-416d-b6d7-eec0d947a1a6-S11","items":[{"action":"BYPASS_CACHE","uri":{"cache":false,"executable":false,"extract":true,"value":"file:\/\/\/home\/core\/docker.tar.gz"}}],"sandbox_directory":"\/var\/lib\/mesos\/slave\/slaves\/94af100c-4dc2-416d-b6d7-eec0d947a1a6-S11\/frameworks\/94af100c-4dc2-416...
Fetching URI 'file:///home/core/docker.tar.gz'
Fetching directly into the sandbox directory
Fetching URI 'file:///home/core/docker.tar.gz'
Copied resource '/home/core/docker.tar.gz' to '/var/lib/mesos/slave/slaves/94af100c-4dc2-416d-b6d7-eec0d947a1a6-S11/frameworks/94af100c-4dc2-416d-b6d7-eec0d947a1a6-0
Failed to obtain the IP address for 'digitalocean-dcos-agent-20'; the DNS service may not be able to resolve it: Name or service not known
End fetcher log for container 123-abc-xxx
Failed to run mesos-fetcher: Failed to fetch all URIs for container '123-abc-xxx' with exit status: 256
You are missing the extract instruction:
"fetch":[
{
"uri":"file:///home/core/docker.tar.gz",
"extract":true
}
]
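As a quick sanity check (not part of the original answer; the path comes from the question), you can confirm on the agent that the archive is readable and has the expected layout before Mesos tries to extract it:
tar -tzf /home/core/docker.tar.gz    # should list the .docker directory with its config.json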
When running npm install within CircleCI, we fetch some Node packages from our GitHub repositories through package.json. This happens while building a Docker image from a Dockerfile.
This had been working great until last week, when, without any changes on our side, we started to get errors while cloning these packages. For this operation we were using Basic Authentication, providing the user credentials in the URL, e.g.:
https://<username>:<password>@github.com/elektron-technogoly/<repository>.git
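For context, the corresponding package.json entry looks roughly like this (a reconstruction for illustration; the package name is a placeholder):
"dependencies": {
  "some-private-module": "git+https://<username>:<password>@github.com/elektron-technogoly/<repository>.git"
}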
Now, we get the following errors:
npm ERR! Command failed: git clone ...
npm ERR! fatal: unable to access 'https://<username>:<password>@github.com/elektron-technogoly/<repository>.git':
Could not resolve host: <username>
From the error message it seems that git treats the username as the host and therefore fails. I checked that the password is still valid and has not expired.
Has something changed recently (around last week) that could cause this error? Has Basic Authentication been disabled?
UPDATE: Playing around a bit, it seems that when you change the base Docker image (say from node:4-slim to node:4), the first build works but subsequent ones don't. Unfortunately, the logs are not giving me any lead; both look exactly the same, yet the error appears from then onwards.
I have a build server with no internet access on which I would like to be able to perform a bower install.
I tried copying c:\users\<TheAccountTheBuildServerRunsAs>\AppData\Local\bower to my build server (which I have done successfully with the npm cache), but it keeps trying to access the internet:
bower ECMDERR Failed to execute "git ls-remote --tags --heads
https://github.com/stefanpenner/ember-jj-abrams-resolver.git",
exit code of #128 fatal: unable to access 'https://github.com/stefanpenner/ember-jj-abrams-resolver.git/':
Received HTTP code 403 from proxy after CONNECT
Additional error details:
fatal: unable to access 'https://github.com/stefanpenner/ember-jj-abrams-resolver.git/': Received HTTP code 403 from proxy after CONNECT
Am I using the wrong process?
As a workaround I've had to check my bower_components into source control, but I'd really rather not.
Run bower install --offline. This forces it to use only the cache.
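A rough sequence (the cache location is the one from the question; exact paths can vary by bower version):
bower install               # run on a machine with internet access to populate the local cache
# copy C:\Users\<TheAccountTheBuildServerRunsAs>\AppData\Local\bower to the same location on the build server, then:
bower install --offline     # resolves every package from the copied cache, with no network access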