MISP instance through Docker on Raspberry Pi running Ubuntu 20.04 server - docker

Thanks so much in advance for taking the time to read/provide any advice here.
So, I am trying to get an instance of MISP running through docker. The hardware I have it running on is a raspberry pi 4 running Ubuntu 20.04 (server edition).
I thought I installed all software dependencies, but being new to using docker, perhaps I haven't. I'm using this repository for the docker image: https://github.com/MISP/misp-docker
After copying the .env file to the root directory and running sudo docker-compose up, I get the following error (full text included for easy copy/pasting):
ERROR: Service 'web' failed to build: The command '/bin/sh -c bash INSTALL_NODB.sh -A -u' returned a non-zero code: 1
Once again, thank you all for any and all help! Please let me know if I can provide any more information!

Looks like this may be an issue that was closed in May 2021: https://github.com/MISP/MISP/issues/7375. That Docker image has an INSTALL_NODB.sh that was initially committed in March 2021 (https://github.com/MISP/misp-docker/commit/1e2f18f2c1211e382bd8df5371b1d3d718dad061). Since it was added before that fix, the container may not include the Raspberry Pi fix that was added in the main repo. To verify, check whether the output of uname -m appears in the support map used by the script in the Docker image: https://github.com/MISP/misp-docker/blob/master/web/INSTALL_NODB.sh#L3070. If it doesn't, you would need that fix implemented in the Docker image.
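For example, on a Pi 4 running 64-bit Ubuntu the check would look something like this (the output shown is what you would typically expect, not taken from the original post):
uname -m
aarch64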

aarch64 isn't a supported architecture. There's a pull request on the repository that adds it, so you can add that change to your local repository like this (from a command line in the misp-docker repository):
git remote add fukusuket https://github.com/fukusuket/misp-docker.git
git fetch fukusuket
git merge fukusuket/hotfix/build-error-on-m1-mac -m "add aarch64 support"
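After merging, you will likely need to rebuild the image before bringing the stack up again, for example (assuming you build from the repository root as before):
sudo docker-compose build --no-cache
sudo docker-compose up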
Hopefully the pull request will be accepted soon and then you can go back to using the unaltered MISP git repository.

Related

How do you resolve the GitLab error "Error response from daemon: invalid condition: 'not-running'"?

I set up a Windows GitLab runner that's supposed to download a Docker image from our Container Registry and then run a build script in the pipeline. Unfortunately the Docker container never launches due to the following error:
Running with gitlab-runner 15.1.0 (76984217)
on WindowsDockerRunner wZMWQZYi
Resolving secrets
Preparing the "docker-windows" executor
Using Docker executor with image mcr.microsoft.com/windows/servercore:ltsc2019 ...
Pulling docker image mcr.microsoft.com/windows/servercore:ltsc2019 ...
Using docker image sha256:e6b07227af5ca9303c2112b574f6f27f38135bbf9df29d829142410221967401 for mcr.microsoft.com/windows/servercore:ltsc2019 with digest mcr.microsoft.com/windows/servercore@sha256:26c6c296a4737ba478fe3c3e531b098f89b5562c40b416ba6fb8177ac462d1af ...
Preparing environment
Running on RUNNER-WZMWQZYI via runner2...
ERROR: Job failed (system failure): prepare environment: Error response from daemon: invalid condition: "not-running". Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
The error message doesn't clearly state what the cause of the problem is and the documentation that it references doesn't mention anything about "condition". Based on the link pointing to shell profiles I suspect it might have something to do with the shell that's being run, but when I run the Docker container locally it boots into PowerShell just fine.
Does anyone know how to solve this?
I came across this issue after installing Docker Engine using the Windows Server install script, which fetches docker.exe and dockerd.exe from https://master.dockerproject.org. Those builds were last updated in March 2022. I found that gitlab-runner 14.9 and earlier (released prior to March 2022) work fine with this version, but 14.10 (released 2022-04-19) and all newer versions do not.
Installing Docker Desktop resolves this, as it provides the latest version. However, using Docker Desktop introduces licensing issues. An alternative is to manually install Docker Engine, or to update the version downloaded by the Microsoft script.
Docker Engine builds from the Moby GitHub project are available at https://download.docker.com/win/static/stable/x86_64/. Downloading the latest version from there and replacing the docker executables in C:\Windows\System32 fixes the problem and works with the latest gitlab-runner.
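For reference, the manual replacement can look roughly like this from an elevated PowerShell prompt (the version number and paths here are assumptions; adjust them to whatever you downloaded):
Stop-Service docker
Expand-Archive .\docker-20.10.17.zip -DestinationPath C:\temp
Copy-Item C:\temp\docker\*.exe C:\Windows\System32 -Force
Start-Service docker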
An alternative is to use the docker-engine chocolatey package (which incidentally I maintain) which provides installation scripting for the above stable builds:
choco install docker-engine
There is also an open issue with the Windows-Containers team about moving off the (out of date) nightlies: https://github.com/microsoft/Windows-Containers/issues/256, which would provide a stable docker build through the Microsoft-recommended installation method.
Was finally able to solve this issue. We had the Docker Engine installed on our GitLab Runner, but that doesn't seem to be sufficient for GitLab CI/CD. After installing Docker Desktop on the runner the issue disappeared and we were able to run the pipeline.
After some trial and error I got it up and running.
I have another server running the gitlab-runner and docker without any issues (no docker desktop installed, which is not allowed because of licensing stuff).
The server I'm trying to setup right now is a 'redundancy' build server.
So, to find out what my problem was, I started switching things from one build server to the other. Currently, it appears that simply downgrading to gitlab-runner v13.4.0 was enough.
I did re-register the runner, since GitLab stated that the v15.x.x version was using executor "unknown".
Not sure what is going on there, but at least I can continue building now.
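If you want to try the same downgrade, the sequence on Windows is roughly the following (the exact steps are an assumption; grab the v13.4.0 gitlab-runner.exe from GitLab's releases page first):
.\gitlab-runner.exe stop
(replace gitlab-runner.exe with the v13.4.0 binary)
.\gitlab-runner.exe register
.\gitlab-runner.exe start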

Using docker context to a mac

I have been trying to create a context to deploy a few containers from my main Mac to another one, but I keep getting a weird error.
So, I have two Macs, one iMac (Late 2013) (here will be called Enterprise) and one Macbook Pro (Mid 2015) (here will be called Defiant). Defiant is my main computer and I want to deploy my container to Enterprise in order to not overload Defiant memory. I have been working with docker context to achieve that. Currently, I have Enterprise running Docker v20.10.16 and Defiant running v20.10.16.
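For reference, the context was created with something along these lines (the exact command is my assumption; the user and host names come from the error below):
docker context create enterprise --docker "host=ssh://rafaelguerra@Enterprise.local"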
I have created the context on Defiant; after I run docker context use enterprise and then docker container ls, I get the following error:
error during connect: Get "http://docker.example.com/v1.24/containers/json": command [ssh -l rafaelguerra -- Enterprise.local docker system dial-stdio] has exited with exit status 127, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=zsh:1: command not found: docker
Does anyone know how to make it work?
Thanks
UPDATE:
Weird thing I just found out: when logged into Enterprise, echo $PATH returns /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin, however when I run ssh rafaelguerra@enterprise.local 'echo $PATH' I get /usr/bin:/bin:/usr/sbin:/sbin
I have no clue what the reason for this is.
I found the reason for the problem: all of the commands docker runs on the remote host are executed over ssh as one-off commands, not inside a full interactive session, so zsh does not load the correct PATH. Therefore, the only thing I needed to do was set the correct PATH inside the ~/.zshenv file, and everything is working now.
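For anyone hitting the same thing, the fix amounts to adding the missing directory to the PATH in ~/.zshenv on the remote Mac, along these lines (the directory follows from the $PATH output above):
export PATH="/usr/local/bin:$PATH"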

Can I roll back to a previous version of Docker Desktop?

On Mac, I'm running Lando inside Docker. I'm on Lando v3.0.1 and was running Docker Desktop v2.2.0.5 successfully.
Docker released a stable update, v2.3.0.3, and I installed it. After that I attempted to run Lando, but got a warning message stating the Docker Desktop version is not supported.
So, I'm wondering if it is possible to roll back to my previous Docker Desktop version without uninstalling Docker.
Download your desired version from the Release Notes.
Open the download, drag "Docker" to "Applications"
Choose to "Replace" the existing installation
Run Docker desktop
All your previous containers should still be there.
If you're using Docker Desktop, I found deselecting the option Use Docker Compose V2 fixed my problems. Spent a long time working on reinstalling things. Definitely worth a try before doing anything big.
[Answer 2022]
As @patricknelson said:
Sadly, this no longer works. Now it only says "Existing installation is up to date".
Here is a workaround to downgrade Docker Desktop while retaining your data:
Get a list of containers:
docker container ls
Commit the container to save the data:
docker commit -p 64bf7c9f7122 new-image
where 64bf7c9f7122 is the id of my container and new-image is the new image name
Save the committed image with your changes to an archive:
docker save -o c:\backup.tar new-image
Uninstall the current Docker Desktop
Install the desired Docker Desktop version
Load the image back into Docker:
docker load -i c:\backup.tar
Run the container:
docker run --name sample-container new-image
Congrats, all data saved and Docker downgraded 😃
So, I ran the installer of the previous Docker Desktop version (2.2.0.5) and got a warning message stating that a newer Docker already exists, asking whether I wanted to replace it, stop, or keep both. I selected 'Replace'.
The installation was successful.
But when I opened Docker, all my running containers were gone.
I ran lando to recreate my Drupal 7 site.
I got the "Boomshakala" from Lando confirming that the app had started up correctly, along with its corresponding vitals, including the APPSERVER URLS.
But when I accessed the URL, I got an error message:
"Error: the website encounter an unexpected error. Please try again later."
The uncaught exception thrown in shutdown function:
"PDOException: SQLSTATE[]: Base table or view not found:1146 Table 'drupal7.semaphore' doesn't exist...."
To solve this, I imported an old copy of the Drupal site's database:
lando db-import .sql
Then I navigated to the docroot folder and ran a database update:
lando drush updatedb
All good now; thanks @halfer for your comments!
The quick hack here, for Lando specifically, is just to reinstall Lando from the installer for the version you want. We've bundled the supported version of Docker Desktop with Lando itself, which means you can always install the supported version when installing Lando. This may wipe out your containers and volumes, so be careful!

Unable to deploy using icp-inception:2.1.0.2-ee

I run the following command to deploy a new cluster:
docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.2-ee install
and I get this response:
Unable to find image 'ibmcom/icp-inception:2.1.0.2-ee' locally
docker: Error response from daemon: manifest for ibmcom/icp-inception:2.1.0.2-ee not found.
See 'docker run --help'.
This also happened about a couple of weeks ago, and when I did nothing other than wait a few days and try again, the command worked. It is as if the tag 2.1.0.2-ee is not accessible. Indeed, when I go to Docker Hub, I don't see that tag listed. But then I'm a newbie with Docker Hub, so I'm not sure if I'm interpreting this correctly.
Is it me, or is ee not available to all? I could do a docker pull with ce, no problem.
Thanks
It isn't you! ibmcom/icp-inception:2.1.0.2-ee points to the paid-for version of IBM Cloud Private and is not distributed through DockerHub. We distribute our community edition, ibmcom/icp-inception:2.1.0.2-ce, (which is free to use) via DockerHub, which is why you are not having any issues with ce.
The only difference you will find between Community Edition and Enterprise Edition is that multi-master deployments and production use are not enabled in Community Edition.
Let me know if you have any further questions.
Thanks everyone on Slack and Stack Overflow.
My mistake was that the snapshot I took just prior to deploying the cluster successfully was not what I thought.
I think I must have checkpointed prior to loading ee into Docker rather than just before running icp-inception.
It makes sense that ICP ee is not publicly downloadable.

Using SSH in a Docker container (Windows)

Starting from the microsoft/aspnetcore docker image, I was able to install chocolatey and then use chocolatey to install some other software:
open-ssh
git
Now I want to clone a repo from our Bitbucket server:
I added the Bitbucket server to the known_hosts file (and even ssh'd into the server from the container to double-check)
I added my Bitbucket ssh key, which I've been using successfully on my machine and used successfully from an Ubuntu container
I added a config file in my user's .ssh directory to tell git to use my ssh key
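For context, that config file looked roughly like this (the host alias and key path are placeholders, not the actual values):
Host <host>
  User git
  IdentityFile C:\Users\ContainerAdministrator\.ssh\id_rsa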
I expect to be able to use git clone ssh://git@<host>/<path to repo>, but this command always fails with the following error:
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository
exists.
I got this to work in an Ubuntu container with the following command:
ssh-agent sh -c 'ssh-add /home/bamboo/.ssh/id_rsa; git clone ssh://git@<host>/<path to repo>', but this command seemingly does nothing in the Windows container. I never get any feedback from ssh-agent, so I am unsure whether Open-SSH is even working, or if there are known issues with Open-SSH in Windows containers. I do get feedback from ssh-add saying that my key was successfully added, but I am still unable to clone my git repo.
Has anybody been able to successfully do this in Windows containers? It works on my Windows machine but I'm not using Open-SSH, I'm using the Git Bash tools, which don't work in the Windows container. This is all very confusing because all the information on this topic pertains to Ubuntu containers and the resolutions all involve Unix commands that I don't have available in the Windows container.
Another strange thing I notice is that cloning using HTTP doesn't work either, instead I get the following error:
error: failed to execute prompt script (exit code 66) fatal: could not
read Password for 'http://(user)@(host)': No error
I got a little help from the Git for Windows people who suggested I use the verbose flag with the ssh command, i.e. ssh -vvvvv <host>. This showed that the config file I had in my user's .ssh directory had some extra permissions, indicated by an error message:
debug3: Bad permissions. Try removing permissions for user: S-1-5-11 on file C:\Users\ContainerAdministrator/.ssh/config
Using the icacls utility I was able to remove those permissions, which allowed the config file to be used.
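The command looked roughly like this (S-1-5-11 is the SID for Authenticated Users; the exact invocation may differ slightly, so treat this as a sketch):
icacls C:\Users\ContainerAdministrator\.ssh\config /remove *S-1-5-11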
