Hyperledger Sawtooth Application using docker

I am trying to run a simple wallet Sawtooth-based application using Docker, but docker-compose -f simplewallet-build-client-js.yaml up generates an error for me.
I just want to run a simple Sawtooth-based application on my machine. I am using Ubuntu 22 via VMware.

Related

Docker in Docker and AWS CLI for Windows Containers

I'm trying to migrate a legacy .NET application to AWS ECS/Fargate. I'm following this article, which explains how to create a custom Windows Docker image with MSBuild tools for use in an AWS CodePipeline/CodeBuild project. I also need to install a Docker daemon and AWS CLI v2 into that custom image so that I can execute docker and AWS CLI commands in the buildspec.yaml file in CodeBuild. So far I've been able to use this code in my custom image's Dockerfile, which installs Docker in Docker, but the Docker service never starts, even though it understands the docker --version command. I was also trying to modify this PowerShell script to install the AWS CLI but got stuck with little to no progress.
I'd appreciate any help with installing Docker in Docker and the AWS CLI.
When I had to use Docker in Docker, I instead used the host's Docker socket by mounting it into the container.
On Linux, I had to mount two files:
/usr/bin/docker (executable)
/var/run/docker.sock (service socket)
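As a minimal sketch of what that mounting looks like on Linux (the image name my-build-image and the trailing docker version check are placeholders, not part of the original answer):
docker run --rm -it \
  -v /usr/bin/docker:/usr/bin/docker \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-build-image docker version
With this, the docker CLI inside the container talks to the host daemon through the mounted socket rather than to a nested daemon.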
Update - The above works for Linux; for Windows, a double slash is required, and the named pipe below would have to be mounted instead. I couldn't test this personally as I don't have Windows.
"\\.\pipe\docker_engine:\\.\pipe\docker_engine"
I found a very good guide that explains this.
Ref: https://tomgregory.com/running-docker-in-docker-on-windows/

Azure web app for containers- additional parameters for docker run command

I have a web application developed using PHP and Laravel, and I am trying to host it in the Azure Web App for Containers service. I have integrated Stackify logging functionality for the application and the server.
I need to send additional parameters to the docker run command in Azure Web App for Containers; specifically, I need to pass the --pid and -v options:
docker run -it --pid=host -v /usr/local/stackify:/usr/local/stackify
I did not find a way to configure this. Please suggest a solution. Is there any way to configure the docker run command in the Azure Web App for Containers service?

Accessing website hosted in a linux container from windows host

I have a Linux container based on the latest ruby image hosted on my Windows laptop, with a sample website running in the container. I can confirm that the website is running fine, because running curl http://localhost:4000 within the container returns the expected HTML. However, when I try to access the URL from a browser in Windows, it fails, saying the site is not reachable. I am trying to figure out a way to access the sample app from a browser in Windows.
Though I am trying to set up a Ruby container (as I am trying to learn Ruby), I suspect this is a networking problem with LCOW, because I cannot ping the IP address of the container from the Windows command prompt; it gives the error "TTL expired in transit". Has anyone successfully run a Linux container on Windows and accessed a website hosted in the container from Windows? Could you please help me figure out what I am doing wrong, or whether I have to do something extra, like network route configuration, to make this work?
Further Details:
Docker version: 19.03.4
Host: Windows 10 Business Edition
Base Image: ruby on linux
docker run command used:
docker run -dit -p 4000:4000 --rm --name test updatedRubyImage
updatedRubyImage: This is a new image I created from the base image. I installed a few gems on top of the base and committed it as a new image.
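If it helps, one way to double-check the mapping from the Windows side is Docker's port inspection command (using the container name test from the command above):
docker port test 4000
If it prints something like 0.0.0.0:4000, the publish itself is in place, and the problem is more likely on the networking side between LCOW and the Windows host, as suspected above.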

Composer Chaincode containers lifecycle

In Hyperledger Fabric, each deployed chaincode runs in a separate Docker container.
Hyperledger Composer therefore creates a new container at each upgrade of the chaincode. From my understanding, composer-rest-server (or any other way of interacting with the Composer channel) always relies on the last version that has been deployed.
The framework itself does not stop containers running old chaincodes.
Should I do it manually? Is there a good reason to keep them running?
See Upgrading Hyperledger Fabric Business Network for the answer - you can stop them, yes. I suggest reading the link for more detail.
Once information is written on the blockchain (via Hyperledger Composer or any other means), you cannot remove it from the ledger.
Keeping the containers running old chaincodes can be considered a means of recovering your network (for example, if you made a mistake in the ACL and you can no longer access your network).
You can kill and remove old Docker containers using the following commands:
docker kill ID_OF_THE_OLD_CONTAINER
docker rm ID_OF_THE_OLD_CONTAINER
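If you are not sure which containers correspond to old chaincode versions, listing them first helps. As a rough sketch, Fabric's chaincode containers are typically named with a dev- prefix (adjust the filter if your setup names them differently):
docker ps -a --filter "name=dev-"
The kill and rm commands above can then be run against the IDs of the superseded versions.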

How can I use ubuntu container as a regular server that I can ssh into remotely?

I just started learning to use Docker. My original purpose is to build a development environment image based on Ubuntu, so that I can have a consistent development environment when frequently switching between different machines: a company Windows PC, a company Windows laptop, my MacBook at home...
Now I've built an image FROM ubuntu. But what surprises me is that the container cannot be used as a running machine. When I run docker run xxxx, the container just exits immediately because I didn't run any service in it.
I found that I can use docker run -it xxx to get into /bin/bash in the container and do something, but the container exits immediately when I type exit in bash.
How can I use an Ubuntu image as a long-running server that I can ssh into from whatever machine I'm currently using?
Check out the LXD project from Canonical, which is attempting to build a full OS capability using containers:
http://www.ubuntu.com/cloud/lxd
Docker, on the other hand, is designed primarily to package and deploy applications.
Docker containers can totally be long-running processes; Docker is designed first and foremost for running servers. The issue you are seeing is just that you didn't give the container a process to run in your Dockerfile. If you don't, then, as you saw, it will not keep running like a service.
Try doing more of the Docker lessons; you just need a CMD at the end to keep it running, as most containers have.
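As a rough sketch of what that looks like for the ssh use case, the Dockerfile below installs openssh-server and keeps sshd in the foreground as the container's main process (the base tag, root password, and published port are placeholder choices for illustration, not something the answer prescribes):
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd
# placeholder credentials for illustration only; prefer SSH keys in practice
RUN echo 'root:changeme' | chpasswd && \
    sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
# -D keeps sshd in the foreground so the container does not exit
CMD ["/usr/sbin/sshd", "-D"]
Build and run it with something like docker build -t dev-env . followed by docker run -d -p 2222:22 dev-env, then connect with ssh -p 2222 root@<host-ip>.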
