Name and Version
bitnami/minio:2022.8.22-debian-11-r1
The Docker startup command is as follows; the initial cluster has 4 nodes and is running well.
docker run -d --restart=always --name minio --network host \
--ulimit nofile=65536:65536 \
-v "/etc/localtime":/etc/localtime:ro \
-v "/data/minio/data":/data \
-v "/data/minio/hosts":/etc/hosts \
-v "/data/logs/minio":/opt/logs \
-e LANG=C.UTF-8 \
-e MINIO_ROOT_USER=xxxxxxxxxx \
-e MINIO_ROOT_PASSWORD=xxxxxxxxxxxxxx \
-e MINIO_DISTRIBUTED_MODE_ENABLED=yes \
-e MINIO_DISTRIBUTED_NODES=minio-1,minio-2,minio-3,minio-4 \
-e MINIO_SKIP_CLIENT=yes \
-e MINIO_HTTP_TRACE=/opt/bitnami/minio/log/minio-http.log \
-e MINIO_PROMETHEUS_AUTH_TYPE="public" \
bitnami/minio:2022.8.22-debian-11-r1
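(For reference, the health of the running 4-node cluster can be confirmed with the MinIO client; the alias name myminio and the placeholder credentials below are illustrative:)
# register the deployment under a hypothetical alias, then query it
mc alias set myminio http://minio-1:9000 ACCESS_KEY SECRET_KEY
mc admin info myminio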
I want to expand to 8 nodes, but the cluster fails to start with the following configuration:
docker run -d --restart=always --name minio --network host \
--ulimit nofile=65536:65536 \
-v "/etc/localtime":/etc/localtime:ro \
-v "/data/minio/data":/data \
-v "/data/minio/hosts":/etc/hosts \
-v "/data/logs/minio":/opt/logs \
-e LANG=C.UTF-8 \
-e MINIO_ROOT_USER=xxxxxxxxxx \
-e MINIO_ROOT_PASSWORD=xxxxxxxxxxxxxx \
-e MINIO_DISTRIBUTED_MODE_ENABLED=yes \
-e MINIO_DISTRIBUTED_NODES=minio-1,minio-2,minio-3,minio-4,minio-5,minio-6,minio-7,minio-8 \
-e MINIO_SKIP_CLIENT=yes \
-e MINIO_HTTP_TRACE=/opt/bitnami/minio/log/minio-http.log \
-e MINIO_PROMETHEUS_AUTH_TYPE="public" \
bitnami/minio:2022.8.22-debian-11-r1
I got the following errors in the log:
API: SYSTEM()
Time: 17:44:21 UTC 09/23/2022
Error: Marking minio-1:9000 offline temporarily; caused by Post "http://minio-1:9000/minio/storage/data/v47/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 10.13.1.89:9000: connect: connection refused (*fmt.wrapError)
9: internal/logger/logger.go:259:logger.LogIf()
8: internal/logger/logonce.go:104:logger.(*logOnceType).logOnceIf()
7: internal/logger/logonce.go:135:logger.LogOnceIf()
6: internal/rest/client.go:243:rest.(*Client).Call()
5: cmd/storage-rest-client.go:152:cmd.(*storageRESTClient).call()
4: cmd/storage-rest-client.go:526:cmd.(*storageRESTClient).ReadAll()
3: cmd/format-erasure.go:396:cmd.loadFormatErasure()
2: cmd/format-erasure.go:332:cmd.loadFormatErasureAll.func1()
1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()
[The same error and identical stack trace repeat at the same timestamp for minio-2, minio-4, minio-5, minio-6, minio-7, and minio-8, each reported at its own 10.13.1.x address.]
ERROR Unable to initialize backend: http://minio-2:9000/data drive is already being used in another erasure deployment. (Number of drives specified: 8 but the number of drives found in the 2nd drive's format.json: 4)
I know there is a problem with my configuration, but I don't know how to change it to expand the cluster. I would appreciate guidance from anyone who has solved a similar problem.
It's not your configuration; you just can't expand MinIO in this manner. Once the drives are enrolled in the cluster and erasure coding is configured, nodes and drives cannot be added to the same MinIO Server deployment.
Instead, you would add another Server Pool that includes the new drives to your existing cluster.
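With the upstream minio binary, adding a pool looks roughly like this (a sketch only; it assumes all eight hostnames resolve and share the same credentials, and whether the Bitnami image's MINIO_DISTRIBUTED_NODES variable can express separate pools depends on the image version):
# original pool of 4 nodes, plus a new pool of 4, each in ellipsis notation
minio server http://minio-{1...4}:9000/data \
             http://minio-{5...8}:9000/data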
Alternatively, you could back up your data or replicate to S3 or another MinIO instance temporarily, then delete your 4-node configuration, replace it with a new 8-node configuration and bring MinIO back up.
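For the backup route, the MinIO client can mirror a bucket to a temporary target; a minimal sketch, where the aliases old and backup and the bucket name mybucket are hypothetical:
# copy the bucket contents to a temporary deployment before rebuilding
mc alias set old http://minio-1:9000 ACCESS_KEY SECRET_KEY
mc alias set backup http://backup-host:9000 ACCESS_KEY SECRET_KEY
mc mirror old/mybucket backup/mybucket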
Take a look at our multi-tenant deployment guide: https://docs.minio.io/docs/multi-tenant-minio-deployment-guide
Related
I have spent the entire day trying to figure out why my Scala app running on Windows is unable to make a successful connection to HBase running in a Docker container.
I can shell into the container, run the HBase shell, create tables, etc.
Also, I can port-forward to localhost:16010 and see the HBase UI. Some additional details of the setup follow.
Env:
Scala app: Windows (host)
HBase: Docker container
Docker container details
FROM openjdk:8
ENV HBASE_VERSION=2.4.12
RUN apt-get update
RUN apt-get install -y netcat
RUN mkdir -p /var/hbase && \
cd /opt && \
wget -q https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz && \
tar xzf hbase-${HBASE_VERSION}-bin.tar.gz
WORKDIR /opt/hbase-${HBASE_VERSION}
COPY hbase-site.xml conf
CMD ./bin/start-hbase.sh && tail -F logs/hbase*.log
In hbase-site.xml, hbase.cluster.distributed and hbase.unsafe.stream.capability.enforce are both set to false.
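A sketch of what that file would contain, reconstructed from the description above:
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>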
The HBase container is up, running, and accessible. I also confirmed that ZooKeeper is reachable from within the container as well as from the host using echo ruok | nc localhost 2181; echo
Running the container as follows:
docker run -it -p 2181:2181 -p 2888:2888 -p 3888:3888 -p 16010:16010 -p 16000:16000 -p 16020:16020 -p 16030:16030 -p 8080:8080 -h hbb hbase-1
Scala app
val conf : Configuration = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "hbb")
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set("hbase.master", "hbb")
conf.set("hbase.cluster.distributed","false")
// conf.set("hbase.client.pause", "1000")
// conf.set("hbase.client.retries.number", "2")
// conf.set("zookeeper.recovery.retry", "1")
val connection = ConnectionFactory.createConnection(conf)
The relevant part of the stack trace:
1043 [ReadOnlyZKClient-hbb:2181#0x30b6ffe0] DEBUG org.apache.zookeeper.ClientCnxn - zookeeper.disableAutoWatchReset is false
3947 [ReadOnlyZKClient-hbb:2181#0x30b6ffe0-SendThread(hbb:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server hbb:2181. Will not attempt to authenticate using SASL (unknown error)
3949 [ReadOnlyZKClient-hbb:2181#0x30b6ffe0-SendThread(hbb:2181)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for server hbb:2181, unexpected error, closing socket connection and attempting reconnect
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:100)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:620)
at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1021)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1064)
3953 [ReadOnlyZKClient-hbb:2181#0x30b6ffe0-SendThread(hbb:2181)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Ignoring exception during shutdown input
java.net.SocketException: Socket is not connected
at sun.nio.ch.Net.translateToSocketException(Net.java:122)
at sun.nio.ch.Net.translateException(Net.java:156)
at sun.nio.ch.Net.translateException(Net.java:162)
at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:401)
at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:200)
at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1250)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1174)
I have tried changing the hbase.zookeeper.quorum and hbase.master properties on the client side to localhost / 127.0.0.1, as well as editing the /etc/hosts file with the container ID. No luck yet.
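From what I understand, java.nio.channels.UnresolvedAddressException means the JVM on the Windows host cannot resolve the hostname hbb at all; one thing I may still try (just a sketch, with the path as on a default Windows install) is mapping it to loopback in the Windows hosts file:
# C:\Windows\System32\drivers\etc\hosts on the Windows host
127.0.0.1    hbb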
Would greatly appreciate some guidance on this :)
We were trying to reinstall the ejabberd XMPP server using the commands below, but we are getting the error attached.
The commands we are executing on our server are:
Step 1: sudo apt-get update
Step 2: sudo apt-get -y install ejabberd
Step 3: sudo ejabberdctl register admin localhost ejabberdpass123
Step 4: sudo service ejabberd restart
We are also copying our yml file for backup purposes.
We have also verified that our ports 5280 and 5222 are open.
I have also reviewed the article below, but still found no solution.
The error I am getting is as follows: /usr/sbin/ejabberdctl: line 428: 4052 Segmentation fault $EXEC_CMD "$CMD"
I'm using supervisord to run multiple services in a container. I want an LDAP service for my web application, so I installed and started OpenDJ with the following setup.
Dockerfile
RUN dpkg -i $APP_HOME/packages/opendj_3.0.0-1_all.deb && \
/opt/opendj/setup \
--cli \
--backendType je \
--baseDN dc=test,dc=net \
--ldapPort 389 \
--adminConnectorPort 4444 \
--rootUserDN cn=Directory\ Manager \
--rootUserPassword 123456 \
--no-prompt \
--noPropertiesFile \
--acceptLicense \
--doNotStart
supervisord.conf
[program:ldap]
command=/opt/opendj/bin/start-ds
priority=1
When running my customized image, I got the following exit message for ldap:
2020-05-25 06:46:03,486 INFO exited: ldap (exit status 0; expected)
I logged into the container to get the process status info with supervisorctl status all and ps -aux respectively:
$supervisorctl status all
ldap EXITED May 25 06:46 AM
$ps -aux
root 97 3.4 5.9 3489048 240248 pts/0 Sl 06:15 0:08 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -server -Dorg.opends.server.scriptName=start-ds org.opends.server.core.DirectoryServer --configClass org.opends.server.extensions.ConfigFileHandler
I found that the ldap program starts via the start-ds shell script; that is, the start-ds shell process exits, while the LDAP server itself, which is not controlled by supervisord, keeps running.
When supervisord stops its subprocesses, the LDAP server can't be stopped gracefully.
So my question is: how do I configure supervisord to control the LDAP server process that is started by start-ds?
There is a --nodetach option that you should use in such cases: https://github.com/ForgeRock/opendj-community-edition/blob/master/resource/bin/start-ds#L60
Reference Doc says:
Options
The start-ds command takes the following options:
-N | --nodetach
Do not detach from the terminal and continue running in the foreground. This option cannot be used with the -t, --timeout option.
Default: false
The exec statement in the start-ds script is:
exec "${OPENDJ_JAVA_BIN}" ${OPENDJ_JAVA_ARGS} ${SCRIPT_NAME_ARG} \
org.opends.server.core.DirectoryServer \
--configClass org.opends.server.extensions.ConfigFileHandler \
--configFile "${CONFIG_FILE}" "${@}"
The start-ds script will pass this option through when you run /opt/opendj/bin/start-ds -N, resulting in:
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -server -Dorg.opends.server.scriptName=start-ds org.opends.server.core.DirectoryServer --configClass org.opends.server.extensions.ConfigFileHandler --configFile /opt/opendj/config/config.ldif -N
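Applied to the supervisord.conf above, the program entry would become (the same config as before, with the flag added):
[program:ldap]
command=/opt/opendj/bin/start-ds --nodetach
priority=1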
I need some help. My task is to dockerize my current Redmine.
I have been working on this task for almost two weeks.
I copied the public folder from the host Redmine to the Docker container's redmine/public.
I copied all the plugins and installed them successfully.
But the problem is that images are not displayed: my theme background image and wiki images do not show.
I think my Docker Redmine cannot find the path to the images.
Does anyone know how to get the images to display?
I will explain the steps I have taken so far.
docker network create --driver bridge redmine_network
docker volume create postgres-data
docker volume create redmine-data
docker container run -d \
--name postgres \
--network redmine_network \
-v postgres-data:/var/lib/postgresql/data \
--restart always \
-e POSTGRES_PASSWORD='password' \
-e POSTGRES_DB='redmine' \
postgres:latest
docker container run -d \
--name redmine \
--network redmine_network \
-p 80:3000 \
--restart always \
-v redmine-data:/usr/src/redmine/files \
-e REDMINE_DB_POSTGRES='postgres' \
-e REDMINE_DB_DATABASE='redmine' \
-e REDMINE_DB_PASSWORD='password' \
redmine:latest
Inside the container I installed some packages and gems for my plugins:
docker exec -it redmine bash
apt update
apt install build-essential libpq-dev pkg-config libmagickwand-dev ruby ruby-dev
bundle install --no-deployment
gem install will_paginate
gem install jenkins_api_client
gem install activesupport -v 4.2.8
gem install haml-rails -v 1.0
gem install deface -v 1.0.2
gem install brakeman -v 4.8.0
bundle update
The Docker Redmine log file:
Resolving dependencies...
The Gemfile's dependencies are satisfied
[2020-03-17 16:31:41] INFO WEBrick 1.3.1
[2020-03-17 16:31:41] INFO ruby 2.4.4 (2018-03-28) [x86_64-linux]
[2020-03-17 16:31:41] INFO WEBrick::HTTPServer#start: pid=1 port=3000
[2020-03-17 16:32:03] ERROR Errno::ECONNRESET: Connection reset by peer # io_fillbuf - fd:16
/usr/local/lib/ruby/2.4.0/webrick/httpserver.rb:82:in `eof?'
/usr/local/lib/ruby/2.4.0/webrick/httpserver.rb:82:in `run'
/usr/local/lib/ruby/2.4.0/webrick/server.rb:308:in `block in start_thread'
[The same Errno::ECONNRESET error and backtrace repeat several more times for other file descriptors.]
Could anyone please explain how to clear this error?
I want to make my Docker Redmine the same as my host Redmine.
Here is what I did so far:
create docker containers for redmine 3.4.4 and postgresql:latest with network and volume for each container
https://medium.com/@gurayy/setting-up-redmine-with-docker-be110387ba1c
install package and gem in redmine container
copy /public folder to docker redmine:/usr/src/redmine
copy all plugins to docker redmine:/usr/src/redmine/plugins
install plugins
bundle exec rake redmine:plugins NAME=readme_at_repositories RAILS_ENV=production
docker container restart redmine
check everything work fine
make a pg_dump from my host machine
/usr/bin/pg_dump -U postgres -h localhost -W -Fc --file=/tmp/redminedata.sqlc projectportal
enter to my postgresql container and restore my database
pg_restore -c -U postgres -h postgres -d projectportal /tmp/redminedata.sqlc
restart the redmine container and check Redmine again in the browser
when I check my Redmine, the .png and .jpg files are not working (not showing)
all the data is migrated, but the images are missing, as is my theme background image
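To double-check the copies, the assets can be listed inside the container (paths taken from the steps above; the theme directory name is whatever was copied into public/themes):
docker exec -it redmine ls /usr/src/redmine/public/themes
docker exec -it redmine ls /usr/src/redmine/files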
I am trying to complete the instructions here: https://docs.docker.com/compose/aspnet-mssql-compose/. I am at the last stage:
$ docker-compose up
I see this error:
DotNetCore$ sudo docker-compose up
Starting dotnetcore_db_1 ... done
Starting dotnetcore_web_1 ... done
Attaching to dotnetcore_db_1, dotnetcore_web_1
web_1 | ./entrypoint.sh: line 2: $'\r': command not found
: invalid optionpoint.sh: line 3: set: -
web_1 | set: usage: set [-abefhkmnptuvxBCHP] [-o option-name] [--] [arg ...]
web_1 | ./entrypoint.sh: line 5: $'\r': command not found
web_1 | ./entrypoint.sh: line 15: syntax error: unexpected end of file
dotnetcore_web_1 exited with code 2
I have spent all day trying to fix this simple error. Here is the entrypoint.sh:
#!/bin/bash
set -e
run_cmd="dotnet run --server.urls http://*:80"
until dotnet ef database update; do
>&2 echo "SQL Server is starting up"
sleep 1
done
>&2 echo "SQL Server is up - executing command"
exec $run_cmd
So far I have tried:
1) Opening the file in Notepad++ and selecting Edit/EOL Conversion; Unix is greyed out. This method is described here: https://askubuntu.com/questions/966488/how-do-i-fix-r-command-not-found-errors-running-bash-scripts-in-wsl
2) sudo dos2unix {filename}. This method is described here: https://askubuntu.com/questions/966488/how-do-i-fix-r-command-not-found-errors-running-bash-scripts-in-wsl
How can I resolve this?
Your entrypoint script has Windows line endings in it; they aren't valid on a Linux OS, and the stray carriage returns are being parsed as parts of the commands to run. Correct that with your editor in its save menu, or use a utility like dos2unix to convert the files.
Once you have removed the Windows line endings, you'll need to rebuild your image and then recreate the container.
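For example (the web service name is taken from the compose log above; the exact name may differ in your compose file):
# convert the script, rebuild the image, and recreate the container
dos2unix entrypoint.sh
docker-compose build web
docker-compose up --force-recreate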
You could also set:
git config --global core.autocrlf input
from: https://github.com/docker/toolbox/issues/126
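A per-repository alternative (not from the linked issue, just another common option) is a .gitattributes rule that forces LF endings for shell scripts:
# .gitattributes: force LF so scripts stay runnable in Linux containers
*.sh text eol=lf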