I am running on a Mac M1 with Docker and gulp.
My first error was command ld not found, but I fixed it with this question:
how to solve running gcc failed exist status 1 in mac m1?
After that, it led me to this error.
This is the full error:
[17:09:04] 'restart-supervisor' errored after 1.04 s
[17:14:45] '<anonymous>' errored after 220 ms
[17:14:45] Error in plugin "gulp-shell"
Message:
Command `supervisorctl restart projectname` failed with exit code 7
[17:14:45] 'restart-supervisor' errored after 838 ms
I've done a lot of research.
I've tried doing this, but the command isn't found:
https://github.com/Supervisor/supervisor/issues/121
This as well:
https://github.com/Supervisor/supervisor/issues/1223
I even changed my image to arm64v8/golang:1.17-alpine3.14.
This is my gulpfile.js:
var gulp = require("gulp");
var shell = require('gulp-shell');

gulp.task("build-binary", shell.task(
  'go build'
));

gulp.task("restart-supervisor", gulp.series("build-binary", shell.task(
  'supervisorctl restart projectname'
)));

gulp.task('watch', function() {
  gulp.watch([
    "*.go",
    "*.mod",
    "*.sum",
    "**/*.go",
    "**/*.mod",
    "**/*.sum"
  ],
  {interval: 1000, usePolling: true},
  gulp.series('build-binary', 'restart-supervisor'));
});

gulp.task('default', gulp.series('watch'));
This is my current Dockerfile:
FROM arm64v8/golang:1.17-alpine3.14
RUN apk update && apk add gcc make git libc-dev binutils-gold
# Install dependencies
RUN apk add --update tzdata \
--no-cache ca-certificates git wget \
nodejs npm \
g++ \
supervisor \
&& update-ca-certificates \
&& npm install -g gulp gulp-shell
COPY ops/api/local/supervisor /etc
ENV PATH $PATH:/go/bin
WORKDIR /go/src/github.com/projectname/src/api
In my docker-compose.yaml I have this:
entrypoint:
[
"sh",
"-c",
"npm install gulp gulp-shell && supervisord -c /etc/supervisord.conf && gulp"
]
My /etc/supervisord.conf:
#!/bin/sh
[unix_http_server]
file=/tmp/supervisor.sock
username=admin
password=revproxy
[supervisord]
nodaemon=false
user=root
logfile=/dev/null
logfile_maxbytes=0
logfile_backups=0
loglevel=info
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
username=admin
password=revproxy
[program:projectname_api]
directory=/go/src/github.com/projectname/src/api
command=/go/src/github.com/projectname/src/api/api
autostart=true
autorestart=true
stderr_logfile=/go/src/github.com/projectname/src/api/api_err.log
stderr_logfile_maxbytes=0
stdout_logfile=/go/src/github.com/projectname/src/api/api_debug.log
stdout_logfile_maxbytes=0
startsecs=0
But seriously, what is wrong with this Mac M1?
I have tried it with Rosetta 2 and without.
If the title of my question is wrong, please correct me; I am also not sure of my error.
I fixed the problem by adding #!/bin/sh and startsecs=0, and no errors are shown anymore, but the next problem is that the API is not running.
My app has feature flags that I would like to dynamically set to true for my npm build.
Essentially I'd like to do something like
COMPILE_ASSETS=true npm build or NEW_EMAILS=true npm build, only dynamically from CI.
I have a CI pipeline that will grab the flag, but I am having trouble setting it to true and running npm in the Dockerfile.
My Dockerfile:
FROM ubuntu:bionic
ARG FEATURE_FLAG
RUN if [ "x$FEATURE_FLAG" = "x" ] ; \
then npm run build ; \
else $FEATURE_FLAG=true npm run build; \
fi
This gets run with:
docker build --no-cache --rm -t testing --build-arg FEATURE_FLAG=my_feature_flag . (I would like to keep this the way it is)
In CI I get:
/bin/sh: 1: my_feature_flag=true: not found
I've tried various forms of the else statement:
else export $FEATURE_FLAG=true npm run build; (this actually looks like it works on my Mac but fails in CI with export: : bad variable name)
else ${FEATURE_FLAG:+$FEATURE_FLAG=true} npm build;
else eval(`$FEATURE_FLAG=true npm build`);
else env $FEATURE_FLAG=true bash -c 'npm build';
These all fail :(
I've tried reworking the Dockerfile completely and setting the flag to true as an ENV:
ARG FEATURE_FLAG
ENV FF_SET_TRUE=${FEATURE_FLAG:+$FEATURE_FLAG=true}
ENV FF_SET_TRUE=${FF_SET_TRUE:-null}
RUN if [ "$FF_SET_TRUE" = "null" ] ; \
then npm build; \
else $FF_SET_TRUE npm build; \
fi
Nothing works! Is this simply a bash limitation? Is expanding a variable before running a command not possible?
Or is this not possible with Docker?
Did you mean:
FROM ubuntu:bionic
ARG FEATURE_FLAG
RUN set -eux; \
    if [ "x$FEATURE_FLAG" = "x" ] ; then \
        npm run build ; \
    else \
        eval "$FEATURE_FLAG=true npm run build"; \
    fi
You need to wrap your command in eval so that the shell parses the assignment prefix after the variable has been expanded, based on the ARG being passed. (Note the single = inside [ ]: /bin/sh on Ubuntu is dash, which rejects ==.)
This worked!
ARG FEATURE_FLAG
RUN if [ -z "$FEATURE_FLAG" ] ; \
then npm run build ; \
else \
echo setting $FEATURE_FLAG to true; \
env "$FEATURE_FLAG"=true sh -c 'npm run build'; \
fi
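For what it's worth, the reason the env form works while the bare expansion fails: the shell decides what counts as an assignment prefix before expanding variables, so $FEATURE_FLAG=true npm build expands to my_feature_flag=true npm build and the shell then looks for an executable literally named my_feature_flag=true. env, by contrast, receives NAME=value as an ordinary argument and puts it into the child's environment. A minimal sketch (the flag name MY_FLAG is made up for illustration):

```shell
#!/bin/sh
FEATURE_FLAG=MY_FLAG    # stands in for the value passed via --build-arg

# Fails: after expansion the shell parses "MY_FLAG=true" as a command name,
# because assignment prefixes must be literal at parse time:
#   "$FEATURE_FLAG"=true npm run build   # -> MY_FLAG=true: not found

# Works: env sets the variable in the environment, then execs the command.
env "$FEATURE_FLAG"=true sh -c 'echo "MY_FLAG is $MY_FLAG"'
# prints: MY_FLAG is true
```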
I am trying to test Ansible in Docker containers, where I have assigned one container as the "ansible-controller" and the other two as target nodes. I am using an SSH-enabled Ubuntu Docker image to spawn the containers. Below is the Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:Passw0rd' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
I have also installed the "sshpass" and "ansible" packages in the controller container, but when I test the Ansible "ping" module against the target I get the following failure:
target1 | FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "",
"module_stdout": "Traceback (most recent call last):\r\n  File \"/root/.ansible/tmp/ansible-tmp-1586534281.47-159197676145489/ping\", line 44, in <module>\r\n    import exceptions\r\nImportError: No module named 'exceptions'\r\n",
"msg": "MODULE FAILURE",
"parsed": false
}
Below is my inventory file:
target1 ansible_host=172.17.0.3 ansible_ssh_pass=Passw0rd ansible_python_interpreter=/usr/bin/python3.5
I am unable to solve this issue, so I'm asking for your assistance. Thanks in advance.
My bad: the target machine did not have Python installed.
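Given that diagnosis, the fix can be baked into the image itself; a minimal sketch of the target Dockerfile (package name assumed for Ubuntu 16.04, whose python3 package is 3.5, matching the ansible_python_interpreter in the inventory):

```dockerfile
FROM ubuntu:16.04
# openssh-server as before, plus a Python interpreter for Ansible modules
RUN apt-get update && apt-get install -y openssh-server python3
```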
I'm using a fastlane container stored in Google Container Registry to upload an APK to the Google Play Store using Google Cloud Build.
The APK is created successfully. However, the last step (fastlane) fails with these errors:
Step #2: 487ea6dabc0c: Pull complete
Step #2: a7ae4fee33c9: Pull complete
Step #2: Digest: sha256:2e31d5ae64984a598856f1138c6be0577c83c247226c473bb5ad302f86267545
Step #2: Status: Downloaded newer image for gcr.io/myapp789-app/fastlane:latest
Step #2: gcr.io/myapp789-app/fastlane:latest
Step #2: docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"supply\": executable file not found in $PATH": unknown.
Step #2: time="2019-08-29T23:22:55Z" level=error msg="error waiting for container: context canceled"
Finished Step #2
ERROR
ERROR: build step 2 "gcr.io/myapp789-app/fastlane" failed: exit status 127
Note:
1) The Docker source file was taken from https://hub.docker.com/r/fastlanetools/fastlane and then I built my own image.
2) The Docker image was built on a Google Cloud VM running Debian GNU/Linux 9 (stretch).
Docker source file for fastlane:
# Final image #
###############
FROM circleci/ruby:latest
MAINTAINER milch
ENV PATH $PATH:/usr/local/itms/bin
# Java versions to be installed
ENV JAVA_VERSION 8u131
ENV JAVA_DEBIAN_VERSION 8u131-b11-1~bpo8+1
ENV CA_CERTIFICATES_JAVA_VERSION 20161107~bpo8+1
# Needed for fastlane to work
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
# Required for iTMSTransporter to find Java
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/jre
USER root
# iTMSTransporter needs java installed
# We also have to install make to install xar
# And finally shellcheck
RUN echo 'deb http://archive.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/jessie-backports.list \
&& apt-get -o Acquire::Check-Valid-Until=false update \
&& apt-get install --yes \
make \
shellcheck \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
USER circleci
COPY --from=xar_builder /tmp/xar /tmp/xar
RUN cd /tmp/xar \
&& sudo make install \
&& sudo rm -rf /tmp/*
CloudBuild.yaml:
- name: 'gcr.io/$PROJECT_ID/fastlane'
args: ['supply', '--package_name','${_ANDROID_PACKAGE_NAME}', '--track', '${_ANDROID_RELEASE_CHANNEL}', '--json_key_data', '${_GOOGLE_PLAY_UPLOAD_KEY_JSON}', '--apk', '/workspace/${_REPO_NAME}/build/app/outputs/bundle/release/app.aab']
timeout: 1200s
Any idea how to solve this?
I solved this by building the Docker image from Google Cloud's official Docker source instead of the fastlane one on hub.docker.com (which had not been updated in five months).
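For anyone who wants to keep the original image instead: the exec: "supply": executable file not found error means Cloud Build ran supply as the container's command. If the image does have a fastlane binary on PATH (an assumption I have not verified for the Docker Hub image), an alternative is to set the step's entrypoint explicitly so the args become fastlane subcommands:

```yaml
# cloudbuild.yaml step sketch; assumes `fastlane` is on PATH inside the image
- name: 'gcr.io/$PROJECT_ID/fastlane'
  entrypoint: 'fastlane'
  args: ['supply', '--package_name', '${_ANDROID_PACKAGE_NAME}', '--track', '${_ANDROID_RELEASE_CHANNEL}', '--json_key_data', '${_GOOGLE_PLAY_UPLOAD_KEY_JSON}', '--apk', '/workspace/${_REPO_NAME}/build/app/outputs/bundle/release/app.aab']
  timeout: 1200s
```

(Unrelated, but worth noting: the path above points at an .aab bundle while the flag is --apk; fastlane's supply action also accepts an aab option.)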
I'm trying to install nvm within a Dockerfile. It seems like it installs OK, but the nvm command is not working.
Dockerfile:
# Install nvm
RUN git clone http://github.com/creationix/nvm.git /root/.nvm;
RUN chmod -R 777 /root/.nvm/;
RUN sh /root/.nvm/install.sh;
RUN export NVM_DIR="$HOME/.nvm";
RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
RUN nvm ls-remote;
Build output:
Step 23/39 : RUN git clone http://github.com/creationix/nvm.git /root/.nvm;
---> Running in ca485a68b9aa
Cloning into '/root/.nvm'...
---> a6f61d486443
Removing intermediate container ca485a68b9aa
Step 24/39 : RUN chmod -R 777 /root/.nvm/
---> Running in 6d4432926745
---> 30e7efc5bd41
Removing intermediate container 6d4432926745
Step 25/39 : RUN sh /root/.nvm/install.sh;
---> Running in 79b517430285
=> Downloading nvm from git to '$HOME/.nvm'
=> Cloning into '$HOME/.nvm'...
* (HEAD detached at v0.33.0)
master
=> Compressing and cleaning up git repository
=> Appending nvm source string to /root/.profile
=> bash_completion source string already in /root/.profile
npm info it worked if it ends with ok
npm info using npm@3.10.10
npm info using node@v6.9.5
npm info ok
=> Installing Node.js version 6.9.5
Downloading and installing node v6.9.5...
Downloading https://nodejs.org/dist/v6.9.5/node-v6.9.5-linux-x64.tar.xz...
######################################################################## 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v6.9.5 (npm v3.10.10)
Creating default alias: default -> 6.9.5 (-> v6.9.5 *)
/root/.nvm/install.sh: 136: [: v6.9.5: unexpected operator
Failed to install Node.js 6.9.5
=> Close and reopen your terminal to start using nvm or run the following to use it now:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
---> 9f6f3e74cd19
Removing intermediate container 79b517430285
Step 26/39 : RUN export NVM_DIR="$HOME/.nvm";
---> Running in 1d768138e3d5
---> 8039dfb4311c
Removing intermediate container 1d768138e3d5
Step 27/39 : RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
---> Running in d91126b7de62
---> 52313e09866e
Removing intermediate container d91126b7de62
Step 28/39 : RUN nvm ls-remote;
---> Running in f13c1ed42b3a
/bin/sh: 1: nvm: not found
The command '/bin/sh -c nvm ls-remote;' returned a non-zero code: 127
The error:
Step 28/39 : RUN nvm ls-remote;
---> Running in f13c1ed42b3a
/bin/sh: 1: nvm: not found
The command '/bin/sh -c nvm ls-remote;' returned a non-zero code: 127
The end of my /root/.bashrc file looks like:
[[ -s /root/.nvm/nvm.sh ]] && . /root/.nvm/nvm.sh
Everything else in the Dockerfile works. Adding the nvm stuff is what broke it. Here is the full file.
I made the following changes to your Dockerfile to make it work:
First, replace...
RUN sh /root/.nvm/install.sh;
...with:
RUN bash /root/.nvm/install.sh;
Why? On Red Hat-based systems, /bin/sh is a symlink to /bin/bash, but on Ubuntu, /bin/sh is a symlink to /bin/dash. And this is what happens with dash:
root@52d54205a137:/# bash -c '[ 1 == 1 ] && echo yes!'
yes!
root@52d54205a137:/# dash -c '[ 1 == 1 ] && echo yes!'
dash: 1: [: 1: unexpected operator
Second, replace...
RUN nvm ls-remote;
...with:
RUN bash -i -c 'nvm ls-remote';
Why? Because the default .bashrc for a user in Ubuntu contains (almost at the top):
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
And the source-ing of nvm's scripts takes place at the bottom. So we need to make sure that bash is invoked interactively by passing the argument -i.
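The effect of that guard is easy to reproduce; a small sketch (the rc file name is made up) showing that everything below the guard is skipped whenever PS1 is unset, which is exactly the non-interactive case:

```shell
#!/bin/sh
# fakerc mimics the guard at the top of Ubuntu's default .bashrc
printf '[ -z "$PS1" ] && return\necho "rc body reached"\n' > /tmp/fakerc

bash -c '. /tmp/fakerc'            # non-interactive: PS1 unset, body skipped
bash -c 'PS1="$ "; . /tmp/fakerc'  # PS1 set, as in interactive shells: body runs
```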
Third, you could skip the following lines in your Dockerfile:
RUN export NVM_DIR="$HOME/.nvm";
RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
Why? Because bash /root/.nvm/install.sh; will automatically do it for you:
[fedora@myhost ~]$ sudo docker run --rm -it 2a283d6e2173 tail -2 /root/.bashrc
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
Installation of nvm on Ubuntu in a Dockerfile
In the case of Ubuntu 20.04 you can use just these commands and everything will be alright:
FROM ubuntu:20.04
RUN apt update -y && apt upgrade -y && apt install wget bash -y
RUN wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
RUN bash -i -c 'nvm ls-remote'
Hopefully it will work.
I've got the following kickstart file (ks.cfg) for a raw CentOS installation. I'm trying to implement a %post process that will allow the installation to be modified, using yum functions (install, groupremove, etc.). The whole ks file is at the end of the question.
I'm not sure why, but the kickstart below is not running yum install mysql or yum install mysql-server in the %post process.
After the install, entering "service mysql start" results in an error message saying mysql is not found. I am, however, able to run the yum install commands after installation, and mysql gets installed.
I know I'm missing something subtle, but I'm not sure what it is.
%post
yum install mysql -y           # <-- NOT WORKING
yum install mysql-server -y    # <-- NOT WORKING
%end
Thanks
ks.cfg
[root@localhost ~]# cat /root/anaconda-ks.cfg
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot yes --device eth0 --bootproto dhcp
rootpw --iscrypted $1$JCZKA/by$sVSHffsPr3ZDUp6m7c5gt1
# Reboot after installation
reboot
firewall --service=ssh
authconfig --useshadow --enablemd5
selinux --enforcing
timezone --utc America/Los_Angeles
bootloader --location=mbr --driveorder=sda --append=" rhgb crashkernel=auto quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
#clearpart --all --initlabel
#part /boot --fstype=ext4 --size=200
#part / --fstype=ext4 --grow --size=3000
#part swap --grow --maxsize=4064 --size=2032
repo --name="CentOS" --baseurl=cdrom:sr1 --cost=100
%packages
#Base
#Core
#Desktop
#Fonts
#General Purpose Desktop
#Internet Browser
#X Window System
binutils
gcc
kernel-devel
make
patch
python
%end
%post
cp /boot/grub/menu.lst /boot/grub/grub.conf.bak
sed -i 's/ rhgb//' /boot/grub/grub.conf
cp /etc/rc.d/rc.local /etc/rc.local.backup
cat >>/etc/rc.d/rc.local <<EOF
echo
echo "Installing VMware Tools, please wait..."
if [ -x /usr/sbin/getenforce ]; then oldenforce=\$(/usr/sbin/getenforce); /usr/sbin/setenforce permissive || true; fi
mkdir -p /tmp/vmware-toolsmnt0
for i in hda sr0 scd0; do mount -t iso9660 /dev/\$i /tmp/vmware-toolsmnt0 && break; done
cp -a /tmp/vmware-toolsmnt0 /opt/vmware-tools-installer
chmod 755 /opt/vmware-tools-installer
cd /opt/vmware-tools-installer
mv upgra32 vmware-tools-upgrader-32
mv upgra64 vmware-tools-upgrader-64
mv upgrade.sh run_upgrader.sh
chmod +x /opt/vmware-tools-installer/*upgr*
umount /tmp/vmware-toolsmnt0
rmdir /tmp/vmware-toolsmnt0
if [ -x /usr/bin/rhgb-client ]; then /usr/bin/rhgb-client --quit; fi
cd /opt/vmware-tools-installer
./run_upgrader.sh
mv /etc/rc.local.backup /etc/rc.d/rc.local
rm -rf /opt/vmware-tools-installer
sed -i 's/3:initdefault/5:initdefault/' /etc/inittab
mv /boot/grub/grub.conf.bak /boot/grub/grub.conf
if [ -x /usr/sbin/getenforce ]; then /usr/sbin/setenforce \$oldenforce || true; fi
if [ -x /bin/systemd ]; then systemctl restart prefdm.service; else telinit 5; fi
EOF
/usr/sbin/adduser test
/usr/sbin/usermod -p '$1$QcRcMih7$VG3upQam.lF4BFzVtaYU5.' test
/usr/sbin/adduser test1
/usr/sbin/usermod -p '$1$LMyHixbC$4.aATdKUb2eH8cCXtgFNM0' test1
/usr/bin/chfn -f 'ruser' root
%end
%post
yum install mysql -y           # <-- NOT WORKING
yum install mysql-server -y    # <-- NOT WORKING
%end
It was caused by the line endings when I faced the same problem as you. Check the line endings of ks.cfg: they should be LF, not CR+LF or CR.
It may also help if you:
Try the system-config-kickstart tool.
Look at the generated /root/anaconda-ks.cfg (though there may be no %post section).
Cheers.
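Checking for the stray CR bytes can be scripted; a quick sketch (the file path is made up):

```shell
#!/bin/sh
# Create a kickstart snippet with Windows (CRLF) line endings to demonstrate
printf 'yum install mysql -y\r\n' > /tmp/ks-test.cfg

CR=$(printf '\r')
if grep -q "$CR" /tmp/ks-test.cfg; then
    echo "CRLF detected, converting to LF"
    # tr strips every CR; write to a temp file, then replace the original
    tr -d '\r' < /tmp/ks-test.cfg > /tmp/ks-test.cfg.lf && mv /tmp/ks-test.cfg.lf /tmp/ks-test.cfg
fi

grep -q "$CR" /tmp/ks-test.cfg || echo "file is LF-only now"
```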
You should just put mysql and mysql-server into the %packages section; there is no need to do this in %post.
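In ks.cfg terms, that means extending the existing %packages section rather than touching %post; a sketch based on the package list above:

```
%packages
binutils
gcc
kernel-devel
make
patch
python
mysql
mysql-server
%end
```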