Sawtooth network and CoopEdge - installation problem with dockerfile - docker

I'm learning about blockchain and now I'm starting with the Sawtooth network, as I have heard that it's quite popular. I came across a research paper about CoopEdge and it's very interesting (GitHub link: https://github.com/coopedge/prototype).
However, I don't clearly know how to make this work. I personally sent an email and still haven't received a response, so I have to try it with basic knowledge (somewhat blindly). There are two folders, sawtooth-core and sawtooth-poer. I went with poer because the publication was talking about it. There are two types of compose files - docker-compose.yml and docker-compose-installed.yaml. I installed the first one with docker-compose and there was no problem. However, when I tried to install the latter one I kept getting this error:
Step 9/13 : RUN export VERSION=$(./bin/get_version) && sed -i -e "0,/version.*$/ s/version.*$/version\ =\ \"${VERSION}\"/" Cargo.toml && /root/.cargo/bin/cargo deb --deb-version $VERSION
---> Running in 7d244fc29e30
/bin/sh: 1: ./bin/get_version: Permission denied
cargo-deb: Argument to option 'deb-version' missing
I tried several methods found by searching the internet, but no luck so far. I also tried installing as root (sudo -i), but it still doesn't work at all.
Another thing is that I don't know whether the second compose file is mandatory for installation, as there is no documentation or guidance provided by the author.
I appreciate any help that could solve this permission problem. Thanks so much.
P.S.: I'm using a virtual machine with Ubuntu 18.04.
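
One thing worth trying (a sketch, not a confirmed fix for this repository): the "Permission denied" suggests ./bin/get_version lost its executable bit in the checkout or build context, which also leaves $VERSION empty and triggers the later cargo-deb "Argument to option 'deb-version' missing" complaint. Restoring the bit on the host before rebuilding may be enough:

# Sketch: restore the executable bit before rebuilding the image.
# The repository path below is an assumption; adjust it to wherever you cloned the prototype.
cd prototype/sawtooth-poer
chmod +x bin/get_version
git update-index --chmod=+x bin/get_version   # optional: persist the bit in git
docker-compose -f docker-compose-installed.yaml build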

Related

Airflow on Windows 10 - Module not found errors

I'm new to data science and wanted to do a little tutorial, which requires Airflow, among other things. I installed it on Windows using Git Bash in VS Code. I tried running it, but it had a problem loading the sqlite3 import (module not found error). I figured out that if I added the directory of sqlite3.py to the path, it would run, but now it gives me a similar error: pwd module not found, from daemon.py
File "C:\ProgramData\Anaconda3\lib\site-packages\daemon\daemon.py", line 18, in <module>
import pwd
ModuleNotFoundError: No module named 'pwd'
Strange to me that it can't find pwd. Obviously pwd works natively in both Git Bash and PowerShell; it seems like a basic, universal command. I'd love to learn more about what's going on. I don't want to end up adding 100 things to the path just to get this program to run. I'd love any insights anyone can provide.
PS I'm using Anaconda.
It seems to be a side effect of spawning new Python daemons.
You can likely fix this by downgrading python-daemon:
pip install python-daemon==2.1.2
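
For context, Python's pwd module only exists on Unix, which is why a package that imports it unconditionally fails on Windows. A minimal check to confirm the downgrade took effect before retrying (assuming pip and airflow are on the PATH in your Git Bash session):

# Confirm the pinned python-daemon version is the one actually installed.
pip show python-daemon        # the Version line should read 2.1.2
airflow version               # re-run Airflow to see whether the pwd import error is gone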

Need to remove symbolic link I created for /var/lib/docker/ but can't find it. Unable to use docker and feel helpless

I'm feeling really terrible atm, so any help would be really appreciated. I kept running out of space when downloading Docker images on /var, so I decided I needed to change the location where Docker was storing images. I tried several methods but had no success. First, I tried creating daemon.json in /etc/docker and mapping data-root to a place with more storage (/data2/docker). I stopped Docker, moved everything over, made the file, but no dice. The Docker daemon wouldn't start.
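
For reference, a sketch of how that first approach normally looks (not the poster's exact steps; the paths are taken from the question):

# Sketch: relocate Docker's storage via daemon.json, then verify.
sudo systemctl stop docker
sudo mkdir -p /data2/docker
echo '{ "data-root": "/data2/docker" }' | sudo tee /etc/docker/daemon.json
sudo rsync -aP /var/lib/docker/ /data2/docker/   # copy existing images and containers
sudo systemctl start docker
docker info | grep 'Docker Root Dir'             # should now report /data2/docker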
Then, I saw this method https://stackoverflow.com/a/49743270/13034460 which involves creating a symbolic link between /var/lib/docker and the new directory (data2/docker). I followed his instructions:
Much easier way to do so:
Stop docker service: sudo systemctl stop docker
Move existing docker directory to new location sudo mv /var/lib/docker/ /path/to/new/docker/
Create symbolic link
sudo ln -s /path/to/new/docker/ /var/lib/docker
Start docker service
sudo systemctl start docker
Well, this didn't work for me. I can't find the error message b/c it's too far up in my terminal, but it was along the lines of "you don't have enough storage/we don't know where to store this image". /data2/docker should have tons of storage so that can't be the issue.
But the big problem now is that this symbolic link exists and I can't figure out how to get rid of it. I tried removing everything related to Docker on the computer, uninstalling, then reinstalling Docker (which always used to work for me if there were any issues). But when I reinstall, it won't even run docker run hello-world, because of the link (I think). I get a message:
docker: open /data2/docker/tmp/GetImageBlob289478576: no such file or directory
So... it's looking in /data2/docker because of the symbolic link (I assume), but that directory doesn't exist anymore. But neither does /var/lib/docker! All I want is to delete this link and get everything back to fresh defaults. I can worry about the storage issue another time. If I can't use Docker at all, I'm so screwed. I've tried looking in every directory to find the link using ls -l, but I can't find it. I used the exact commands from the answer above when I created the link (just with my paths instead).
I would be so grateful to anyone who could help--I'm so lost on this. Thank you!
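
A sketch of how one would normally check for and remove that link, then let Docker recreate a clean default directory (assuming a standard systemd-based install; not tested against this exact machine):

# Check whether /var/lib/docker is still a symlink and reset it.
sudo systemctl stop docker
ls -ld /var/lib/docker            # an arrow (->) in the output means it is a symlink
sudo rm /var/lib/docker           # rm on a symlink removes only the link itself
sudo mkdir -p /var/lib/docker
sudo systemctl start docker
docker run hello-world            # should pull into the fresh default location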

Composer Docker image won't run at all

I'm attempting to learn how to create a Laravel Docker image by following a tutorial on DigitalOcean using WSL. Following the instructions on the Docker Hub page, however, yields an error:
❯ docker run --rm --interactive --tty -v $(pwd):/app composer install
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 94 installs, 0 updates, 0 removals
- Installing voku/portable-ascii (1.4.10): Failed to download voku/portable-ascii from dist: Could not delete /app/vendor/voku/portable-ascii/src/voku/helper:
Now trying to download from source
- Installing voku/portable-ascii (1.4.10):
[RuntimeException]
Could not delete /app/vendor/voku/portable-ascii/src/voku/helper:
install [--prefer-source] [--prefer-dist] [--dry-run] [--dev] [--no-dev] [--no-custom-installers] [--no-autoloader] [--no-scripts] [--no-progress] [--no-suggest] [-v|vv|vvv|--verbose] [-o|--optimize-autoloader] [-a|--classmap-authoritative] [--apcu-autoloader] [--ignore-platform-reqs] [--] [<packages>]...
How can I diagnose what I'm doing wrong?
It turns out that the underlying problem had nothing to do with Docker at all. In fact, Composer was trying to tell me what the problem was all along, but I dismissed it as just a symptom of a deeper issue:
[RuntimeException]
Could not delete /app/vendor/voku/portable-ascii/src/voku/helper:
This message was the crux of it all. I noticed that the directory mentioned, [...]/helper, was empty, so I tried to remove it by hand with rmdir. Instead, I got a No such file or directory error message. I attempted many other ways to kill this directory, including removing the entire project directory with rm -rf ~/laravel-app from the home folder, etc. Nothing worked.
Some digging around on the internet suggested that it could be an NTFS corruption if I was running into this issue on Windows. Since I am, indeed, attempting this on WSL (Windows Subsystem for Linux), I gave their suggested fix a try: running chkdsk /F in CMD/PowerShell. A reboot was necessary to complete this task, but after getting everything back up and trying those first few tutorial steps again, I was able to get composer to install the Laravel dependencies without a hitch.
Bottom line: If you run into this sort of issue on WSL, please try running chkdsk /F and reboot. You might just have a similar file system corruption.
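
Roughly, the recovery sequence looks like this (the drive letter is an assumption; chkdsk runs on the Windows side, not inside WSL):

# From an elevated CMD/PowerShell on Windows (not inside WSL):
#   chkdsk /F C:
# Accept the prompt to schedule the check on the next restart, then reboot.
# Back in WSL, the previously undeletable directory should now be removable:
rm -rf ~/laravel-app/vendor
docker run --rm --interactive --tty -v $(pwd):/app composer install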
There are two possibilities for this error:
1 - You did not execute the command inside the project directory:
cd ~/laravel-app
2 - An internal proxy may be blocking the download of packages.
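
If the proxy theory applies, one way to test it is to pass the proxy through to the Composer container, since Composer honours the standard proxy environment variables (the address below is a placeholder):

# Hypothetical proxy address; substitute your own.
docker run --rm --interactive --tty \
  -e HTTP_PROXY=http://proxy.example.com:3128 \
  -e HTTPS_PROXY=http://proxy.example.com:3128 \
  -v $(pwd):/app composer install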

Docker mkimage_yum.sh for CentOS 7 fails

A little confused at the moment. I've got Docker on one of my servers, and as it doesn't have internet access, I'm trying to build a base image for CentOS 7.4. The nice Docker site has a mkimage_yum.sh script for this purpose, but it consistently fails when it tries running:
yum -c /tmp/mkimage_yum.sh.gnagTv/etc/yum.conf --installroot=/tmp/mkimage_yum.sh.gnagTv -y clean all
with a "No enabled repos" error. The thing is, if I enter "yum repolist" I get back 17 entries, and I have manually tried to set several repos to enabled. Yet, this command still fails, and I do not understand what could be missing.
Anybody have some idea of what I can do so this succeeds?
Jay
I figured out why this was failing: the mkimage_yum.sh script does not contain the proper code for the case where your repos are stored in /etc/yum.repos.d; it assumes that everything is in /etc/yum.conf. This is really not correct, and it causes one of the later yum clean operations to fail. I fixed it, but I cannot upload the change because the server has no internet access.
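
The poster's actual patch isn't shown; the sketch below only illustrates the general idea (an assumption about the fix): make the repo definitions under /etc/yum.repos.d visible to yum inside the temporary installroot, either by copying them or by pointing reposdir at them.

# Sketch: $target stands for the temporary installroot the script creates
# (e.g. /tmp/mkimage_yum.sh.XXXXXX); adjust to match your run.
mkdir -p "$target/etc/yum.repos.d"
cp /etc/yum.repos.d/*.repo "$target/etc/yum.repos.d/"
# Alternatively, add  reposdir=/etc/yum.repos.d  to the generated yum.conf.
yum -c "$target/etc/yum.conf" --installroot="$target" -y clean all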

dockerfile: vim (compiled python), vim-ipython, and ipython notebook

I would like to build a Dockerfile on Linux which
1. compiles vim with python
2. installs a python stack (such as numpy, scipy, ipython, etc.)
3. creates an SSL certificate for ipython-notebook, so the notebooks can be viewed from the host machine
It seemed straightforward enough, but I have run into problems despite a variety of approaches: linking separate containers, using Anaconda, a single unified image vs. separate layers, and creating a user vs. running everything as root.
Simply installing everything as root does not activate the pathogen bundle vim-ipython when running vim. Creating a user allows pathogen bundles to install (i.e. NERDTree works), but :IPython throws an error:
:IPython failed
^-- failed '' not found .
I've tried the above with no layers/one large Dockerfile, and with different layers for the python stack, vim, and the ipython notebook.
Dockerfile
What am I not seeing here?
What is the ^-- failed '' not found referring to?
I've tried running the ipython notebook with --no-browser & and then running vim, and running two shells on the same container... but I can't get past this error.
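
For the SSL-certificate step in goal 3 of the question, the usual recipe (similar to the one in the classic IPython notebook docs) is to generate a self-signed certificate with openssl and point the notebook server at it. A sketch; the file name is a placeholder:

# Self-signed certificate for the notebook server (key and cert in one pem file).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout mycert.pem -out mycert.pem -subj "/CN=localhost"
ipython notebook --no-browser --ip=0.0.0.0 --certfile=mycert.pem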
Here is a working Dockerfile for anyone trying to get vim-ipython working in Docker.
Issues I ran into:
a user/shared home is needed for vim, despite runtimepath in .vimrc pointing to pathogen/bundle
%connect_info is required when working with containers
I am running as root; not sure why vim required a USER to install packages, but changing to USER would throw errors with CMD
--best
