Missing dependency for hdf5: totem Error: Failed cloning git repository - hdf5

dlualkuma57985:torch-hdf5 asuma2$ luarocks make hdf5-0-0.rockspec
As suggested in this link (Missing dependency for hdf5: totem), I tried
wget https://raw.githubusercontent.com/deepmind/torch-totem/master/rocks/totem-0-0.rockspec
sudo luarocks install totem-0-0.rockspec
but for
sudo luarocks install totem-0-0.rockspec
it gives me the following error:
Using totem-0-0.rockspec... switching to 'build' mode
Cloning into 'torch-totem'...
fatal: unable to connect to github.com:
github.com[0: 192.30.253.113]: errno=Connection refused
github.com[1: 192.30.253.112]: errno=Connection refused
Error: Failed cloning git repository.

It has been resolved. There was an issue with my network, so I connected to my mobile hotspot instead and then ran the command again.
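For anyone hitting the same thing, a quick way to confirm it really is a network problem before retrying (a minimal sketch, assuming git and curl are available on the machine):

# Can we reach GitHub at all over HTTPS?
curl -I https://github.com

# Can git itself talk to the repository luarocks is trying to clone?
git ls-remote https://github.com/deepmind/torch-totem.git

# If both succeed, retry the install
sudo luarocks install totem-0-0.rockspec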

Related

Gitlab jobs failing (sudo: command not found OR Failed to fetch)

I am using Gitlab Jobs to deploy a tool. The code below returns sudo: command not found. If I remove the sudo I get the following:
W: Failed to fetch http://deb.debian.org/debian/dists/stable/InRelease Could not connect to deb.debian.org:80 (199.232.138.132), connection timed out
W: Failed to fetch http://security.debian.org/debian-security/dists/stable-security/InRelease Could not connect to security.debian.org:80 (151.101.130.132), connection timed out Could not connect to security.debian.org:80 (151.101.66.132), connection timed out Could not connect to security.debian.org:80 (151.101.2.132), connection timed out Could not connect to security.debian.org:80 (151.101.194.132), connection timed out
W: Failed to fetch http://deb.debian.org/debian/dists/stable-updates/InRelease Unable to connect to deb.debian.org:80:
W: Some index files failed to download. They have been ignored, or old ones used instead.
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package latex209-bin
E: Unable to locate package texlive-latex-base
E: Unable to locate package texlive-latex-extra
E: Unable to locate package ant
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
This is the .gitlab-ci.yml file:
stages:
  - deploy

variables:
  RA_NAME: "My_project"

default:
  before_script:
    - sudo apt-get update -qq && sudo apt-get install -y latex209-bin texlive-latex-base texlive-latex-extra ant && sudo apt-get install zip unzip

deploy_Default:
  stage: deploy
  script:
    - sh -x deploy.sh "$RA_NAME" "$(cat RA_VERSION)"
  artifacts:
    paths:
      - "${RA_NAME}_$(cat RA_VERSION).zip"
  only:
    - master
    - dev
    - tags
This has been happening for a week (most likely since the Gitlab 15.0 release).
Every job passed without any problems before this started happening. Now, without changing anything, they all fail (I even tried rerunning old ones that passed).
I tried adding
build_image:
  script:
    - docker build --network host
and a couple of similar configurations, but none of them worked.
Now my question: why does sudo no longer work even though I changed nothing in my .gitlab-ci.yml, and what can I do to solve it?
I should mention that these jobs are triggered by commits to the branches listed under only. I can also run them by starting a pipeline manually or by rerunning ones that already ran; I am not aware of any other way to trigger them. All the work with Gitlab and this Docker image is done through the Gitlab UI.
This happens when the instance/server has networking issues. If the runner isn't self-hosted, then Gitlab should fix the problem fairly soon. If the runner is self-hosted, the networking issue is on your side. Here's what I can propose:
Note: Try this without sudo
Select a Debian mirror close to you from the Debian mirror sites list
Edit the sources list via nano /etc/apt/sources.list (if nano is not available, use vi)
Change to a different mirror
For example
http://deb.debian.org/debian/dists/stable/InRelease // <-- remove this
http://ftp.us.debian.org/debian/dists/stable/InRelease // <-- add this
Save.
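As a concrete sketch (assuming ftp.us.debian.org is the mirror you picked), the relevant lines in /etc/apt/sources.list would end up looking roughly like this; the suite names follow the ones in the error output:

# /etc/apt/sources.list -- illustrative only, adjust mirror and suites to your setup
deb http://ftp.us.debian.org/debian stable main
deb http://ftp.us.debian.org/debian stable-updates main
deb http://security.debian.org/debian-security stable-security main

After saving, run apt-get update again (without sudo, per the note above) and the Failed to fetch warnings should go away if the new mirror is reachable.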
I solved the problem. Before the update, without specifying any image in .gitlab-ci.yml, it used my company's Docker image by default. After the update I noticed it used a different one.
I added the following to my .gitlab-ci.yml:
image: myCompanyImage
and now it works fine, as before.
If you encounter such a problem, compare the image the job is using now with the image used in a previously successful pipeline.
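In the .gitlab-ci.yml from the question, that means pinning the image at the top level so the job no longer depends on the runner's default (a sketch only; myCompanyImage stands in for your own registry image):

image: myCompanyImage   # pin the build image explicitly instead of relying on the runner default

stages:
  - deploy

variables:
  RA_NAME: "My_project"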

v4l2loopback on gcp cannot depmod / compile

I'm trying to make a fake webcam using v4l2loopback in a Docker container inside a GCP instance.
I'm using debian:stretch with the 4.9.0-9-amd64 kernel.
So far, these are the steps I tried in order to compile v4l2loopback:
`apt install linux-headers-$(uname -r)` to install the proper headers
`apt-get install kmod` and `apt-get install make` so I can use the `make` and `depmod` features
`apt-get install aufs-dkms aufs-tools aufs-dev` to get the `modules.builtin.bin` file
After the steps above, I cloned the v4l2loopback repo, ran `make && sudo make install`, and finally `depmod -a`. But when I run `depmod -a`, I get these warnings:
depmod: WARNING: could not open /lib/modules/4.9.0-9-amd64/modules.order: No such file or directory
depmod: WARNING: could not open /lib/modules/4.9.0-9-amd64/modules.builtin: No such file or directory
When I check manually, there are indeed no modules.order and modules.builtin files inside the /lib/modules/4.9.0-9-amd64 directory.
So when I try to load the v4l2loopback module using `modprobe v4l2loopback`, it gives me errors like this:
modprobe: ERROR: ../libkmod/libkmod.c:514 lookup_builtin_file() could not open builtin file '/lib/modules/4.9.0-9-amd64/modules.builtin.bin'
modprobe: ERROR: could not insert 'v4l2loopback': Operation not permitted
How can I fix this? Or how can I compile v4l2loopback properly in my environment?
In my case, I realized that the module can only be loaded via the insmod command instead of modprobe, and then you are ready to go. Here you can find an explanation of the differences between these methods.
Example: sudo insmod PATH/TO/THE/FILE/v4l2loopback.ko devices=2 card_label="camera1","camera2" exclusive_caps=1,1
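Once insmod succeeds, a quick sanity check could look like this (a sketch; v4l2-ctl is only there if the v4l-utils package is installed):

# Confirm the module is actually loaded
lsmod | grep v4l2loopback

# The two loopback devices requested with devices=2 should show up here
ls /dev/video*

# Optional: list the devices with their card labels (requires v4l-utils)
v4l2-ctl --list-devices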

fatal: Unable to find remote helper for 'http'

I tried to clone my repository on Solaris using git bash, but I got the error below:
Cloning into 'devops'...
warning: templates not found /usr/local/share/git-core/templates
fatal: Unable to find remote helper for 'http'
I have always seen that error message when Git was compiled without curl-devel installed.
For Solaris, that would be CSWlibcurl-dev.
Once installed, recompile Git and you are good to go.
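A rough sketch of that on Solaris with OpenCSW (the package name comes from above; the pkgutil path and source-tree layout are assumptions, adjust for your setup):

# Install the curl development package from OpenCSW
/opt/csw/bin/pkgutil -y -i CSWlibcurl-dev

# Rebuild Git from its source tree against that curl
cd git-<version>          # your unpacked Git source directory
make configure            # generates ./configure (requires autoconf)
./configure --with-curl=/opt/csw
make
make install              # run as root or with appropriate privileges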

Installing Jenkins on RHEL 6 getting error "No valid crumb was included in the request"

I've just installed Jenkins on an AWS EC2 instance. However, when I go to configure Jenkins from the browser, I get the following error immediately after I select install recommended plugins:
An error occurred during installation: No valid crumb was included in the request
When I looked up this error, it seemed to be a known issue that was resolved last year.
https://issues.jenkins-ci.org/browse/JENKINS-12875
However, I am still encountering it on a stable build version.
Installation on RHEL 6:
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install jenkins
Running Jenkins
sudo service jenkins start
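For reference, the exact package the stable repo installed and the service state can be checked with standard RHEL tooling:

rpm -q jenkins                # shows the exact Jenkins package version pulled from the stable repo
sudo service jenkins status   # confirms the service is actually running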
Does anybody know a workaround or how to fix this problem?

How to locate and copy the Bower local cache on windows

I have a build server with no internet access, on which I would like to be able to perform a bower install.
I tried copying c:\users\<TheAccountTheBuildServerRunsAs>\AppData\Local\bower to my build server (which I have done successfully with the npm cache), but it keeps trying to access the internet:
bower ECMDERR Failed to execute "git ls-remote --tags --heads
https://github.com/stefanpenner/ember-jj-abrams-resolver.git",
exit code of #128 fatal: unable to access 'https://github.com/stefanpenner/ember-jj-abrams-resolver.git/':
Received HTTP code 403 from proxy after CONNECT
Additional error details:
fatal: unable to access 'https://github.com/stefanpenner/ember-jj-abrams-resolver.git/': Received HTTP code 403 from proxy after CONNECT
Am I using the wrong process?
As a workaround I've had to check my bower_components into source control, but I'd really rather not.
Run bower install --offline. This forces it to use only the cache.
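A rough end-to-end sketch of the cache-copy approach (Windows batch; the cache path is the default from the question, and the UNC destination is only an illustration):

REM On a machine WITH internet access: populate the cache and see where it lives
bower install
bower cache list

REM Copy that cache directory to the same per-user location on the build server
xcopy /E /I "%LOCALAPPDATA%\bower" "\\buildserver\c$\Users\<TheAccountTheBuildServerRunsAs>\AppData\Local\bower"

REM On the build server: resolve packages strictly from the copied cache
bower install --offline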
