Recursive copy of grails maven repository with wget - grails

This post is part of this one
I want to copy this repository to my local computer: https://repo.grails.org/grails/core/
To achieve my goal, I ran the following wget command all night long:
wget -r -l inf --no-parent --reject "index.html" https://repo.grails.org/grails/core/
This morning the wget command had finished, but when I went into ".m2e", instead of having the subfolders "connectors" and "discovery_catalog" as expected, I have an "index.htm.tmp" file.
The same goes for all the other folders (".meta", ".nexus", ".report", and so on).
How can I mirror this Maven repository using wget? Is wget the right tool for this?
Thanks in advance.

The error was due to an Internet connection failure.
I tried the same command on a subfolder and everything worked as expected.
Sorry for the question: because of the massive amount of data in the Grails core repository, the command crashed and I thought it had finished.
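For anyone attempting the same mirror, here is a minimal sketch of a resumable variant of the command above; the -c flag and the wildcard in the --reject pattern are assumptions on my part, not part of the original run:
wget -c -r -l inf --no-parent --reject "index.html*" https://repo.grails.org/grails/core/
The -c flag lets wget resume partially downloaded files after a dropped connection, and --no-parent keeps the crawl inside the core/ tree, so the command can simply be re-run after a crash.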

Related

Is there an alternative link instead of the one in the docker book?

I am new to Docker and was working through the Docker book and just got to the part about building a Sinatra web application. But the link provided to download the source code doesn't exist. I tried using the GitHub link for the code, but that isn't working either. I need to make a binary executable as the next step and I am unable to do so.
$ cd sinatra
$ wget --cut-dirs=3 -nH -r -e robots=off --reject="index.html","Dockerfile" --no-parent http://dockerbook.com/code/5/sinatra/webapp/
This is what I am supposed to do, but if you copy and paste the link into your browser, it doesn't work. I also tried creating each folder I needed and writing the files by hand, but the next step, chmod +x webapp/bin/webapp, says that the directory does not exist even though it does.
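For anyone recreating the layout by hand, a minimal sketch of what the chmod step expects is the following (run from inside the sinatra directory; the contents of the launcher script are whatever the book provides and are not reproduced here):
$ mkdir -p webapp/bin
$ # put the book's webapp launcher script into webapp/bin/webapp here
$ chmod +x webapp/bin/webapp
$ ls -l webapp/bin/webapp
The ls call is just a sanity check that the path exists relative to the current directory, since a "directory does not exist" error from chmod usually means the command is being run from the wrong place.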

Jenkins cannot find g++

I am learning all of these new technologies. I have a home server for private development with the latest version of CentOS 7.6 (minimal installation). I am trying to keep the server as light as possible.
I have installed Jenkins (v2.164.2) and it is up and running correctly. I have created a new Freestyle project to compile a g++ project hosted on my own Gogs server. I have defined the Gogs URL and credentials and then added the following in the execute shell command:
which g++; make clean; make;
When I press the "Build Now" button, it fails with the following message:
which: no g++ in (/sbin:/usr/sbin:/bin:/usr/bin)
Cloning the repository, etc seems to be working fine.
I have NOT installed the default g++ version; instead I have installed the one that comes with devtoolset-7 (g++ v7.3.1). I have created a new file under /etc/profile.d/devtools.sh with the following text:
#!/bin/bash
source scl_source enable devtoolset-7
If I log into a bash shell on the server and then run which g++, I get the expected output.
Finally, the question: why is Jenkins not picking this up? As far as I know, adding that file under /etc/profile.d ensures that everyone will be able to access g++.
Thanks very much in advance for any help.
I managed to fix it in the end. I am leaving the question up in case someone else runs into the same problem. I only had to add the following line as the first line in the "execute shell" command field:
#!/bin/bash -l
make clean; make;
That #!/bin/bash -l did the trick (note the -l).
Found it here: What shell does Jenkins use?
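An alternative sketch, in case relying on a login shell is not desirable: enable the devtoolset-7 collection explicitly at the top of the execute shell step (this assumes devtoolset-7 is installed as described in the question):
#!/bin/bash
# put the devtoolset-7 toolchain on PATH for this build only
source scl_source enable devtoolset-7
which g++
make clean
make
With this variant the build no longer depends on /etc/profile.d being sourced, so it also works if the Jenkins agent runs the step with a non-login shell.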

Running ASP.NET Core app from Amazon Linux 2 on Docker - Globalization

I have my ASP.NET Core app running beautifully (more or less) on microsoft/aspnetcore:2.0-jessie. Now I want to try to get it to deploy to amazonlinux:2.
So far, the biggest hurdle has been libicu. I tried setting Globalization to Invariant, but this caused weird failures in, e.g., MySQL database calls.
Here's the relevant step from my Dockerfile:
RUN curl -L --http1.1 http://download.icu-project.org/files/icu4c/57.1/icu4c-57_1-RHEL6-x64.tgz --output icu.tgz \
&& tar -xf icu.tgz -C / \
&& export LD_LIBRARY_PATH=/usr/local/lib \
&& rm icu.tgz
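As a side note on this step (an observation about Docker behavior, not something diagnosed in the original post): an export inside a RUN instruction only affects that layer's shell, so if LD_LIBRARY_PATH is needed when the app actually runs, it would have to be declared with ENV instead, for example:
ENV LD_LIBRARY_PATH=/usr/local/lib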
(SourceForge was down while I was trying to work on this yesterday, which didn't improve matters.)
In any case, I still get the message of doom from .NET Core:
FailFast: Couldn't find a valid ICU package installed on the system. Set the configuration flag System.Globalization.Invariant to true if you want to run with no globalization support.
Any suggestions how to proceed?
Well, I revisited this yesterday. I don't know if it's because the base .tar of the Amazon Linux image has been updated, or because I was doing something wrong last time, but I installed the following packages using yum and all was well:
libunwind
libicu
dotnet-hosting-2.0.5
Note that for the dotnet package I first needed to set up Microsoft's package repository for yum, i.e. run
rpm --import https://packages.microsoft.com/keys/microsoft.asc
and copy the following file to /etc/yum.repos.d/dotnetdev.repo :
[packages-microsoft-com-prod]
name=packages-microsoft-com-prod
baseurl=https://packages.microsoft.com/yumrepos/microsoft-rhel7.3-prod
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
(see Microsoft's instructions for CentOS and other Linux distros)
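Putting those steps together, a minimal Dockerfile sketch might look like the following; the dotnetdev.repo file is the one shown above saved next to the Dockerfile, and the package versions are simply the ones from this answer, so they may need updating:
FROM amazonlinux:2
# register Microsoft's yum feed so dotnet-hosting-2.0.5 can be resolved
RUN rpm --import https://packages.microsoft.com/keys/microsoft.asc
COPY dotnetdev.repo /etc/yum.repos.d/dotnetdev.repo
# libicu provides the ICU package .NET Core is complaining about; libunwind is another runtime dependency
RUN yum install -y libunwind libicu dotnet-hosting-2.0.5 && yum clean all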

Docker mkimage_yum.sh for centos 7 fails

A little confused at the moment. I've got Docker on one of my servers, and as it doesn't have internet access, I'm trying to build a base image for CentOS 7.4. The nice Docker site has a mkimage_yum.sh script for this purpose, but it consistently fails when it tries running:
yum -c /tmp/mkimage_yum.sh.gnagTv/etc/yum.conf --installroot=/tmp/mkimage_yum.sh.gnagTv -y clean all
with a "No enabled repos" error. The thing is, if I enter "yum repolist" I get back 17 entries, and I have manually tried to set several repos to enabled. Yet, this command still fails, and I do not understand what could be missing.
Anybody have some idea of what I can do so this succeeds?
Jay
I figured out why this was failing: the mkimage_yum.sh script does not contain the proper code for the case where your repos are stored in /etc/yum.repos.d; it assumes that everything is in /etc/yum.conf. This is not correct, and it causes one of the later yum clean operations to fail. I fixed it, but I cannot upload the change as the server has no internet access.
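Since the actual patch was never published, here is one possible workaround as a sketch (not the fix described above): pass yum an explicit reposdir so it can see the host's repo files, by editing the yum invocations inside mkimage_yum.sh along these lines (the /tmp path is the temporary install root from the error message):
yum -c /tmp/mkimage_yum.sh.gnagTv/etc/yum.conf --setopt=reposdir=/etc/yum.repos.d --installroot=/tmp/mkimage_yum.sh.gnagTv -y clean all
The --setopt=reposdir=... part is the assumption here; it tells yum to read repo definitions from /etc/yum.repos.d even though the generated yum.conf does not reference them.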

wget command error in jenkins

I am trying to run the below command using a Jenkins shell step:
wget -r --no-parent --reject "index.html*" http://${IP}/ranjans/ -P ${WORKSPACE}/tests
The files are getting downloaded, but my Jenkins job is marked as a failure. Am I missing anything here?
You might try looking at the no-clobber (-nc) option. I wonder if you are getting a bad result from wget because you are running this multiple times, and wget is complaining because the files already exist.
To double-check that it is indeed wget that is failing, try printing out the exit value of wget. If it is 0, it's a problem elsewhere.
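A minimal sketch of that check for the execute shell step; the explicit shebang keeps Jenkins from running the script with its default sh -xe, so the echo still runs when wget fails, and -nc is the no-clobber option mentioned above:
#!/bin/bash
wget -nc -r --no-parent --reject "index.html*" "http://${IP}/ranjans/" -P "${WORKSPACE}/tests"
rc=$?
echo "wget exited with status $rc"
exit $rc
If the printed status is non-zero, the wget man page's list of exit codes (1 for generic errors, 8 for server error responses, and so on) should narrow down the cause; if it is 0, the job is failing for some other reason.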
