wget command not found in git bash - path

I've already tried pip install wget in my cmd, which reads
>pip install wget
Requirement already satisfied: wget in c:\users\user\...\python\python38-32\lib\site-packages (3.2)
however when I try the command in git bash, it keeps showing
$ wget
bash: wget: command not found
I've made sure both the Python and Git folders are in PATH.
What am I doing wrong here?

If you would like to use curl on Git Bash, here is an example:
$ curl -kLSs https://github.com/opscode/chef-repo/tarball/master -o master.tar.gz
$ ls master.tar.gz
master.tar.gz
-L  follow redirects
-o (lowercase o)  write output to a file instead of stdout
-sS  silent mode, but show errors, if any
-k  allow curl to proceed and operate even for server connections otherwise considered insecure
Reference: curl manpage.

With the command:
pip install wget
you installed this Python library https://pypi.org/project/wget/, so you can use that from inside Python:
import wget
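For illustration, here is a minimal sketch of how that library is used, run as a Python one-liner from the shell (the URL is just a placeholder):
$ python -c "import wget; print(wget.download('https://example.com/file.txt'))"
wget.download() saves the file to the current directory and returns the saved filename, which is why it is wrapped in print here.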
I imagine what you actually want is to be able to use wget from inside Git Bash. To do that, install Wget for Windows and add the executable to your PATH. Or, alternatively, use curl.

If you are just looking to have wget in Git Bash without pip or any other dependency, you can follow the nice and quick tutorial from this page:
How to add more to Git Bash on Windows
The essence of it is:
Download the wget binaries for Windows (preferably as a ZIP) from eternallybored
Extract wget.exe from the ZIP
Copy the EXE file to your Git Bash binaries folder, e.g. "c:\Program Files\Git\mingw64\bin"
Done :)
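To verify the copy worked, open a new Git Bash window and check that the executable resolves (the exact path shown may differ on your install):
$ which wget
/mingw64/bin/wget
$ wget --version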

Quick and dirty replacement for the single-argument, fetch-a-file use case:
alias wget='curl -O'
-O, --remote-name Write output to a file named as the remote file
Maybe give the alias a different name so you don't try to use wget flags in curl.
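For example, a sketch with a differently named alias (the name is just a suggestion); add it to your ~/.bashrc to make it persist across sessions:
$ alias fetchfile='curl -LO'
$ fetchfile https://example.com/archive.tar.gz
The -L is added so redirects are followed, as in the curl example above.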

Related

How do I install erlang OTP 25 on ubuntu?

I am trying to install erlang 25 (and elixir 1.13) on my ubuntu VM, but the default version installed by apt is erlang 24.
I've tried both:
sudo wget https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb && sudo dpkg -i erlang-solutions_1.0_all.deb
sudo apt update
and
sudo wget https://packages.erlang-solutions.com/erlang-solutions_2.0_all.deb && sudo dpkg -i erlang-solutions_2.0_all.deb
sudo apt update
but in both cases, running apt-cache policy esl-erlang didn't show the desired version. I have recently installed erlang 25 on an identical VM, and I don't remember struggling at all, so I'm guessing there's a simple way of doing it that I just forgot?
I hope you can help me, thank you!
From the Erlang OTP repo, you should do:
apt-get install erlang
If you decide to compile from source:
git clone https://github.com/erlang/otp.git
cd otp
git checkout maint-25 # current latest stable version
./configure
make
make install
Alternatively, you can use Kerl:
curl -O https://raw.githubusercontent.com/kerl/kerl/master/kerl
chmod a+x kerl
and place kerl in your PATH so that you can invoke it from the terminal (remember to source your .bashrc or similar if you update your PATH variable there, or open a new terminal to reload the PATH env), i.e.,
export PATH=<path-to-kerl>:$PATH
Instructions on how to use it here.
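As a rough sketch of the kerl workflow (the build name and install path below are just examples):
$ kerl update releases
$ kerl build 25.1.1 25.1.1
$ kerl install 25.1.1 ~/kerl/25.1.1
$ . ~/kerl/25.1.1/activate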
I would recommend the Erlang Version Manager (evm), which lets you compile and install any Erlang OTP version you need, regardless of which version is currently available by default for your Linux distro.
Installation of Erlang Version Manager:
$ git clone https://github.com/robisonsantos/evm /tmp/evm/
$ cd /tmp/evm/
$ /tmp/evm/install
$ echo 'source ~/.evm/scripts/evm' >> ~/.bashrc
$ bash
Installation of the specific Erlang OTP version:
$ evm install 25.1.1 -y
$ evm default 25.1.1

Docker build - use the same shell for all RUN commands

Is it possible to use the same shell in all RUN commands when building a docker image? As opposed to each RUN command running on its own shell.
Use case: at some point, I need to source some file containing environment variables that are used later on. I cannot do this, because the commands run in different shells:
RUN source something.sh
RUN ./install.sh
RUN ... more commands
Instead I have to do:
RUN source something.sh && \
    ./install.sh && \
    ... more commands
I'm trying to avoid this since it hurts readability, is error-prone, and does not allow inserting comments in between commands.
Any ideas?
Thanks!
It's not possible to have separate RUN statements run in the same shell.
If you don't like the look of concatenated commands, you could write a shell script and RUN that.
You'll have to get it into the container by using a COPY statement.
Or you can use wget or curl to fetch it and pipe it into a shell. That requires that wget or curl is present in the container, so you might have to install them first.
If you use curl and Debian, it could look like this:
RUN apt update && \
    apt install -y curl && \
    curl -sL https://github.com/link/to/my/install-script.sh | bash
If you COPY it in, it'd look like this
COPY install-script.sh .
RUN ./install-script.sh
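To connect this back to the question's use case, the copied script might look roughly like this (something.sh and install.sh are the files from the question; adapt as needed):
#!/bin/bash
# Everything below runs in a single shell, so variables sourced
# here remain visible to all the commands that follow.
set -e
source ./something.sh
./install.sh
# ... more commands, with comments wherever you like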

Forked docker image not building

I am trying to fork this docker image so that if anything changes on the original it won't affect me.
I have forked the repo corresponding to that image to my own repo.
I have cloned the repo and am trying to build it:
docker build . -t davcal/gcc-cross-x86_64-elf
I am getting this error:
+ cd /usr/local/src
+ ./build-binutils.sh 2.31.1
/bin/sh: 1: ./build-binutils.sh: not found
The command '/bin/sh -c set -x && cd /usr/local/src && ./build-binutils.sh ${BINUTILS_VERSION} && ./build-gcc.sh ${GCC_VERSION}' returned a non-zero code: 127
What makes no sense to me is that if I use the original image, it builds successfully:
FROM randomdude/gcc-cross-x86_64-elf
...
Maybe Docker Hub stores a pre-built image?
How do I fix this?
Note: I am using Windows. This shouldn't make a difference since the error originates within the container.
Edit
I tried patching the Dockerfile to chmod executable permissions to the sh files in case that was causing problems on Windows. Unfortunately, the exact same error occurs.
RUN set -x \
    && chmod +x /usr/local/src/build-binutils.sh \
    && chmod +x /usr/local/src/build-gcc.sh \
    && cd /usr/local/src \
    && ./build-binutils.sh ${BINUTILS_VERSION} \
    && ./build-gcc.sh ${GCC_VERSION}
Edit 2
Following this method, I inspected the container to see if the sh files actually exist. Here is the output.
I ran docker run --rm -it c53693f11514 bash, using the hash of the intermediate image from the previous successful step of the Dockerfile.
This is the output showing that the files do exist:
root@9b8a64ac2090:/# cd usr/local/src
root@9b8a64ac2090:/usr/local/src# ls
binutils-2.31.1 build-binutils.sh build-gcc.sh gcc-8.2.0
From the described symptoms (the file exists, is a shell script, and works on other machines), the "file not found" error is most likely from Windows linefeeds being added to the file. When the Linux kernel processes a shell script, it looks at the first line, the #!/bin/sh or similar, and then finds that interpreter to run the shell script. If that interpreter isn't found, you'll get a "file not found" error.
In this case, the file it's looking for won't be /bin/sh, but instead /bin/sh\r or /bin/sh^M depending on how you want to represent the carriage return character. You can fix that for single files with a tool like dos2unix but in general, you'll want to fix git itself since there are likely other files that have had their linefeeds corrupted. For details on adjusting the behavior of git, see this post.
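If the repository is yours to configure, one common way to prevent the corruption up front (assuming a reasonably recent git) is to force LF endings for shell scripts and renormalize, then repair any already-converted files directly:
$ echo '*.sh text eol=lf' >> .gitattributes
$ git add --renormalize .
$ dos2unix build-binutils.sh build-gcc.sh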

Change system locale inside a CentOS/RHEL without using localectl?

I'm trying to build a Docker image based on oracle/database:11.2.0.2-xe (which is based on Oracle Linux based on RHEL) and want to change the system locale in this image (using some RUN command inside a Dockerfile).
According to this guide I should use localectl set-locale <MYLOCALE>, but this command fails with a Failed to create bus connection: No such file or directory message. This is a known Docker issue for commands that require systemd to be running.
I tried to start systemd anyway (using /usr/sbin/init as the first process, as well as using -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /run thanks to this help), but then localectl set-locale failed with a Could not get properties: Connection timed out message.
So I'm now trying to avoid using localectl to change my system global locale; how could I do this?
According to this good guide on setting locale on Linux, I should use
localedef -c -i fr_FR -f ISO-8859-15 fr_FR.ISO-8859-15
But this command failed with
cannot read character map directory `/usr/share/i18n/charmaps': No such file or directory
This SO reply indicated one could use yum reinstall glibc-common -y to fix this and it worked.
So my final working Dockerfile is:
RUN yum reinstall glibc-common -y && \
    localedef -c -i fr_FR -f ISO-8859-15 fr_FR.ISO-8859-15 && \
    echo "LANG=fr_FR.ISO-8859-15" > /etc/locale.conf
ENV LANG fr_FR.ISO-8859-15
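To check the result after building, you can run locale in a throwaway container; the image tag here is just an example, and --entrypoint is used in case the image defines its own ENTRYPOINT:
$ docker build -t my-oracle-xe .
$ docker run --rm --entrypoint locale my-oracle-xe
The output should now report LANG=fr_FR.ISO-8859-15.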

Docker installation just downloads index.html file

Following these instructions:
Ubuntu installation
on Ubuntu Server 12.04. I've set my https_proxy in /etc/environment. Next I do:
sudo wget https://get.docker.com/
and the response is "cannot verify get.docker.com's certificate... to connect insecurely use '--no-check-certificate'".
So I do:
sudo wget --no-check-certificate https://get.docker.com/
I'm still getting a message complaining "cannot verify get.docker.com's certificate" and wget downloads the index.html file from get.docker.com rather than an installation package.
I am very new to Linux - please can anyone tell me what I'm doing wrong?
You are doing this:
sudo wget https://get.docker.com/
The instructions to which you linked tell you to do this:
wget -qO- https://get.docker.com/ | sh
That retrieves the shell script and pipes it to the shell for execution. For the record I am morally opposed to this sort of installation, but that's what you need to do to follow those instructions.
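If you share that reluctance, a slightly more cautious variant (same URL) is to download the script to a file, read it, and only then run it:
$ wget -qO get-docker.sh https://get.docker.com/
$ less get-docker.sh    # review before executing
$ sh get-docker.sh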
