Intrigue.io with Docker

I'm attempting to set up Intrigue on Linux Mint, and to set up the development environment I'm using Docker. I was able to install it successfully:
sudo apt-get install docker.io
Currently I'm following a guide that is supposed to describe how to do all of this. Unfortunately, it doesn't seem to match up. Here are the commands the guide has me run:
git clone https://github.com/intrigueio/intrigue-core
cd intrigue-core
docker build .
docker run -i -t -p 7777:7777
Then it says that PostgreSQL, Redis, and intrigue-io should all start. It works up until the very last command. After building, I try to run and get this error:
"docker run" requires at least 1 argument(s).
It's not as if the guide is complicated to follow, so I'm just wondering if there is something I'm missing. Is the guide downright incorrect?

You need to pass at least the name of the image you want to run.
But I see they define a docker-compose file, so it might be even easier to install docker-compose and then run docker-compose up (you can add -d at the end to make it run as a daemon).
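For example, a minimal sketch of both approaches (the image tag intrigue-core here is an assumption; use whatever name you pass to docker build -t):
docker build -t intrigue-core .
docker run -i -t -p 7777:7777 intrigue-core
Or, with docker-compose installed, from the repository root:
docker-compose up -d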

Related

Error "Define and run multi-container applications with Docker" when running docker-compose cmd

When running the command docker-compose -f ./docker-compose-frontend.yml --env-file .env up, I get the error "Define and run multi-container applications with Docker", which provides no further guidance.
Installed on the system are Docker 20.10.7 and docker-compose 1.18.0.
The yml files specify version 3.3, which should be compatible with this version of Compose, so I'm not really sure what the issue is.
Any suggestions on things to try would be great, as I am now just scratching my head on this one.
Use sudo apt install docker-compose-plugin -y, then try running the command again.
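A minimal sketch, assuming the plugin provides the newer docker compose subcommand alongside the old docker-compose binary:
sudo apt install -y docker-compose-plugin
docker compose -f ./docker-compose-frontend.yml --env-file .env up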

Install package in running docker container

I've been using a Docker container to build the Chromium browser (building for Android on Debian 10). I've already created a Dockerfile that contains most of the packages I need.
Now, after building and running the container, I followed the instructions, which asked me to execute an install script (./build/install-build-deps-android.sh). In this script multiple apt install commands are executed.
My question now is: is there a way to install these packages without rebuilding the container? Downloading and building took rather long, and rebuilding the container each time a new package is required seems suboptimal. The error I get when executing the install script is:
./build/install-build-deps-android.sh: line 21: lsb_release: command not found
(I guess there will be multiple missing packages.) And using apt gives:
root@677e294147dd:/android-build/chromium/src# apt install nginx
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package nginx
(nginx is just an example install.)
I'm thankful for any hints, as I could only find guides that use the Dockerfile to install packages.
You can use docker commit:
Start your container: sudo docker run IMAGE_NAME
Access your container using bash: sudo docker exec -it CONTAINER_ID bash
Install whatever you need inside the container
Exit container's bash
Commit your changes: sudo docker commit CONTAINER_ID NEW_IMAGE_NAME
If you run now docker images, you will see NEW_IMAGE_NAME listed under your local images.
Next time, when starting the docker container, use the new docker image you just created:
sudo docker run NEW_IMAGE_NAME (this one will include your additional installations).
Answer based on the following tutorial: How to commit changes to docker image
Thanks to @adnanmuttaleb and @David Maze (unfortunately, they only replied in comments, so I cannot accept their answers).
What I did was edit the Dockerfile for any later updates (which already happened), and use the exec command to install the needed dependencies from outside the container. Also remember to run
apt update
first; otherwise apt cannot find anything.
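A minimal sketch of that exec-based approach (using the container ID from above; lsb-release mirrors the missing lsb_release command):
docker exec -it 677e294147dd sh -c "apt update && apt install -y lsb-release"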
A slight variation of the steps suggested by Arye that worked better for me:
Create a container from the image and access it in interactive mode: docker run -it IMAGE_NAME /bin/bash
Modify the container as desired
Leave the container: exit
List launched containers: docker ps -a, and copy the ID of the container just modified
Save to a new image: docker commit CONTAINER_ID NEW_IMAGE_NAME
If you haven't followed the Post-installation steps for Linux, you might have to prefix Docker commands with sudo.
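Putting the whole commit workflow together, a minimal sketch (IMAGE_NAME, CONTAINER_ID, and the package are placeholders):
docker run -it IMAGE_NAME /bin/bash
# inside the container:
apt update && apt install -y lsb-release
exit
# back on the host:
docker ps -a
docker commit CONTAINER_ID NEW_IMAGE_NAME
docker run -it NEW_IMAGE_NAME /bin/bash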

Dev environment for gcc with Docker?

I would like to create a minimalist dev environment for occasional developers who only need Docker.
The ecosystem would have:
code-server image to run Visual Studio Code
gcc image to build the code
git to push/commit the code
ubuntu with some modifications to run the code
I looked into docker-in-docker, which could be a solution:
Docker
code-server
docker run -it -v ... gcc make
docker run -it -v ... git git commit ...
docker run -it -v ... ubuntu ./program
But it seems perhaps a bit overkill. What would be the proper way to have a well-separated, full dev environment that only requires Docker to be installed on the host machine (Linux, Windows, macOS, Chromium)?
I suggest using a Dockerfile.
This file specifies a few steps used to build an image.
The first line of the file specifies a base image (in your case, I would use Ubuntu):
FROM ubuntu:latest
Then, you can e.g. copy files to the image or select commands to run:
RUN apt-get update && apt-get install -y gcc make
RUN apt-get install -y git
and so on.
At the end, you may want to specify the program that is run when you start the container:
CMD /bin/bash
Then you can build it with the command docker build -f Dockerfile -t devenv:latest . (note the trailing dot, which sets the build context to the current directory). This builds a new image named devenv:latest (latest is the tag) from the file Dockerfile.
Then you can create a container from the image using docker run devenv:latest.
If you want to work inside the container interactively, create it using docker run -it devenv:latest
If you want to, you can also use the code-server base image instead of ubuntu:latest.
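Putting those pieces together, a minimal sketch of such a Dockerfile (the exact package list is an assumption based on the tools listed above):
FROM ubuntu:latest
# Install the compiler, build tool, and version control the dev environment needs
RUN apt-get update && apt-get install -y gcc make git
# Start an interactive shell by default
CMD /bin/bash
Build it with docker build -t devenv:latest . and get a shell in it with docker run -it devenv:latest.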

How to run a Dockerfile or Docker image to install Python dependencies

Sorry for the basic question; I am new to Docker and want to install the dependencies using the Dockerfile, so please guide me on how to run this file on Ubuntu.
The author has written the dependencies in the Dockerfile for building OpenSfM.
GitHub Repository Link
FROM ubuntu:18.04
# Install apt-getable dependencies
RUN export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get install -y \
build-essential \
Can anyone guide me on how to run the file and install the dependencies on Ubuntu?
You really should follow mchawre's advice and read the Docker get-started guide. However, I can try to point you in the right direction.
I want to install the dependencies by using the docker file
You have to understand that a Dockerfile compiles to a Docker image, which can then be run as a Docker container. You can think of a Docker container as a lightweight virtual machine. With this in mind, your statement does not make sense, since you cannot install dependencies for your host system (the system in which you might want to start the Docker container) with the help of a Docker image. That is not how Docker containers are supposed to work.
Instead, the Dockerfile allows you to create a virtualized (isolated) environment into which you can "ssh" (the Docker way: docker exec -it <container_name> bash) and then build the respective application.
If you do not want to mess with Docker at all and your system runs something close to ubuntu:18.04, you can also manually execute the instructions from the Dockerfile on your normal system in order to build your desired application there.
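Concretely, a minimal sketch of building the image from the repository's Dockerfile and getting a shell inside it (the tag opensfm is an assumption):
docker build -t opensfm .
docker run -it opensfm bash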

Error "The input device is not a TTY"

I am running the following command from my Jenkinsfile. However, I get the error "The input device is not a TTY".
docker run -v $PWD:/foobar -it cloudfoundry/cflinuxfs2 /foobar/script.sh
Is there a way to run the script from the Jenkinsfile without doing interactive mode?
I basically have a file called script.sh that I would like to run inside the Docker container.
Remove the -it from your CLI call to make it non-interactive and remove the TTY. If you don't need either, e.g. running your command inside of a Jenkins or cron script, you should do this.
Or you can change it to -i if you have input piped into the docker command that doesn't come from a TTY. If you have something like xyz | docker ... or docker ... <input in your command line, do this.
Or you can change it to -t if you want TTY support but don't have it available on the input device. Do this for apps that check for a TTY to enable color formatting of the output in your logs, or for when you later attach to the container with a proper terminal.
Or if you need an interactive terminal and aren't running in a terminal on Linux or MacOS, use a different command line interface. PowerShell is reported to include this support on Windows.
What is a TTY? It's a terminal interface that supports escape sequences, moving the cursor around, etc, that comes from the old days of dumb terminals attached to mainframes. Today it is provided by the Linux command terminals and ssh interfaces. See the wikipedia article for more details.
To see the difference of running a container with and without a TTY, run a container without one: docker run --rm -i ubuntu bash. From inside that container, install vim with apt-get update; apt-get install vim. Note the lack of a prompt. When running vim against a file, try to move the cursor around within the file.
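Applied to the command from the question, the non-interactive form suitable for Jenkins is simply:
docker run -v $PWD:/foobar cloudfoundry/cflinuxfs2 /foobar/script.sh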
For docker run, DON'T USE the -it flag (as BMitch said).
And it's not exactly what you are asking, but this would also be useful for others:
For docker-compose exec, use the -T flag!
The -T flag helps people who are using docker-compose exec (it disables pseudo-TTY allocation).
For example:
docker-compose -f /srv/backend_bigdata/local.yml exec -T postgres backup
or
docker-compose exec -T mysql mysql -uuser_name -ppassword database_name < dir/to/db_backup.sql
For those who struggle with this error and Git Bash on Windows, just use PowerShell, where -it works perfectly.
If you are using Git Bash on Windows, you just need to put winpty before your docker command:
winpty docker exec -it some_container bash
In order for Docker to allocate a TTY (the -t option), you already need to be in a TTY when docker run is called. Jenkins does not execute its jobs in a TTY.
That said, you may also want to run the same script locally; in that case it can be really convenient to have a TTY allocated so you can send signals like Ctrl+C.
To fix this, make your script use the -t option only when a TTY is available, like so:
test -t 1 && USE_TTY="-t"
docker run ${USE_TTY} ...
When using Git Bash:
1) I execute the command:
docker exec -it 726fe4999627 /bin/bash
and get the error:
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
2) Then I execute the command:
winpty docker exec -it 726fe4999627 /bin/bash
and get another error:
OCI runtime exec failed: exec failed: container_linux.go:344: starting container process caused "exec: \"D:/Git/usr/bin/bash.exe\": stat D:/Git/usr/bin/bash.exe: no such file or directory": unknown
3) Third, I execute:
winpty docker exec -it 726fe4999627 bash
This worked.
When I use PowerShell, everything works well.
Using docker-compose exec -T fixed the problem for me in Jenkins:
docker-compose exec -T containerName php script.php
Same case here: I was running the command through a .sh (bash) script and a Python .py script, and got the same error "The input device is not a TTY".
In my case, I was trying to take a dump from a running container in my "production" environment, with authentication and some arguments, and capture the .bak output of my MSSQL database container.
Remove -it from the command. If you want to keep it interactive, then keep -i.
You can check my .sh file and the long command taking the dump.
If using Windows, try cmd; for me it works. Check that Docker is started.
My Jenkins pipeline step shown below failed with the same error.
steps {
echo 'Building ...'
sh 'sh ./Tools/build.sh'
}
In my "build.sh" script file "docker run" command output this error when it was executed by Jenkins job. However it was working OK when the script ran in the shell terminal.The error happened because of -t option passed to docker run command that as I know tries to allocate terminal and fails if there is no terminal to allocate.
In my case I have changed the script to pass -t option only if a terminal could be detected. Here is the code after changes :
DOCKER_RUN_OPTIONS="-i --rm"
# Only allocate tty if we detect one
if [ -t 0 ] && [ -t 1 ]; then
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -t"
fi
docker run $DOCKER_RUN_OPTIONS --name my-container-name my-image-tag
I know this is not directly answering the question at hand, but it may help anyone who comes upon this question using WSL, running Docker for Windows, and cmder or ConEmu.
The trick is not to use the Docker client installed on Windows at /mnt/c/Program Files/Docker/Docker/resources/bin/docker.exe, but rather to install the Ubuntu/Linux Docker client. It's worth pointing out that you can't run the Docker daemon itself from within WSL, but you can connect to Docker for Windows from the Linux Docker client.
Install Docker on Linux:
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
Connect to Docker for Windows on port 2375, which needs to be enabled in the settings of Docker for Windows:
docker -H localhost:2375 run -it -v /mnt/c/code:/var/app -w "/var/app" centos:7
Or set the DOCKER_HOST environment variable, which will allow you to omit the -H switch:
export DOCKER_HOST=tcp://localhost:2375
You should now be able to connect interactively with a tty terminal session.
In Jenkins, I'm using docker-compose exec -T, e.g.:
docker-compose exec -T app php artisan migrate
winpty works as long as you don't specify volumes to be mounted, such as .:/mountpoint or ${pwd}:/mountpoint.
The best workaround I have found is to use the Git Bash plugin inside Visual Studio Code and use its terminal to start and stop containers or docker-compose.
For those using Pyinvoke, see this documentation, which I'll syndicate here in case the link dies:
99% of the time, adding pty=True to your run call will make things work as you were expecting. Read on for why this is (and why pty=True is not the default).
Command-line programs often change behavior depending on whether a controlling terminal is present; a common example is the use or disuse of colored output. When the recipient of your output is a human at a terminal, you may want to use color, tailor line length to match terminal width, etc.
Conversely, when your output is being sent to another program (shell pipe, CI server, file, etc) color escape codes and other terminal-specific behaviors can result in unwanted garbage.
Invoke’s use cases span both of the above - sometimes you only want data displayed directly, sometimes you only want to capture it as a string; often you want both. Because of this, there is no “correct” default behavior re: use of a pseudo-terminal - some large chunk of use cases will be inconvenienced either way.
For use cases which don’t care, direct invocation without a pseudo-terminal is faster & cleaner, so it is the default.
Instead of using -it, use --tty.
So your docker run should look like this:
docker run -v $PWD:/foobar --tty cloudfoundry/cflinuxfs2 /foobar/script.sh
Use only the -i flag rather than the -it flag; this still lets you see what is going on inside the container:
docker exec -i $USER bash <<EOF
apt install nano -y
EOF
You might see a warning, but it still shows you the output from inside the container on your terminal.
