SonarQube in Docker fails to resolve local host

I'm attempting to set up SonarQube in Docker using an Alpine Linux Docker image. However, when running the image, SonarQube seems to have a problem resolving the local host. Has anyone experienced this issue before?
Any help with this issue would be greatly appreciated!
Dockerfile
FROM gliderlabs/alpine:3.2
ENV SONAR_VERSION=5.6.1 \
SONARQUBE_HOME=/opt/sonarqube \
SONARQUBE_FORCE_AUTHENTICATION=true \
# Database configuration
# Defaults to using H2
SONARQUBE_JDBC_USERNAME=sonar \
SONARQUBE_JDBC_PASSWORD=sonar \
SONARQUBE_JDBC_URL=
# Http port
EXPOSE 9000
RUN apk -Uu add gnupg curl \
&& rm -rf /var/cache/apk/*
# pub 2048R/D26468DE 2015-05-25
# Key fingerprint = F118 2E81 C792 9289 21DB CAB4 CFCA 4A29 D264 68DE
# uid sonarsource_deployer (Sonarsource Deployer) <infra@sonarsource.com>
# sub 2048R/06855C1D 2015-05-25
RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys F1182E81C792928921DBCAB4CFCA4A29D26468DE
RUN set -x \
&& mkdir /opt \
&& cd /opt \
&& curl -o sonarqube.zip -fSL https://sonarsource.bintray.com/Distribution/sonarqube/sonarqube-$SONAR_VERSION.zip \
&& curl -o sonarqube.zip.asc -fSL https://sonarsource.bintray.com/Distribution/sonarqube/sonarqube-$SONAR_VERSION.zip.asc \
&& gpg --batch --verify sonarqube.zip.asc sonarqube.zip \
&& unzip sonarqube.zip \
&& mv sonarqube-$SONAR_VERSION sonarqube \
&& rm sonarqube.zip* \
&& rm -rf $SONARQUBE_HOME/bin/*
VOLUME ["$SONARQUBE_HOME/data", "$SONARQUBE_HOME/extensions"]
WORKDIR $SONARQUBE_HOME
COPY run.sh $SONARQUBE_HOME/bin/
ENTRYPOINT ["./bin/run.sh"]
./bin/run.sh
#!/bin/sh
set -e
if [ "${1:0:1}" != '-' ]; then
exec "$@"
fi
exec java -jar lib/sonar-application-$SONAR_VERSION.jar \
-Dsonar.log.console=true \
-Dsonar.jdbc.username="$SONARQUBE_JDBC_USERNAME" \
-Dsonar.jdbc.password="$SONARQUBE_JDBC_PASSWORD" \
-Dsonar.jdbc.url="$SONARQUBE_JDBC_URL" \
-Dsonar.web.javaAdditionalOpts="$SONARQUBE_WEB_JVM_OPTS -Djava.security.egd=file:/dev/./urandom" \
"$@"
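A side note on run.sh: `${1:0:1}` is a bash/ksh substring expansion, and the script's `#!/bin/sh` on Alpine is BusyBox ash, which only supports that syntax when built with bash compatibility. A portable sketch of the same first-argument test uses `case`:

```shell
#!/bin/sh
# Portable equivalent of: if [ "${1:0:1}" != '-' ]; then exec "$@"; fi
case "$1" in
  -*) ;;              # first argument starts with "-": fall through to java
  *)  exec "$@" ;;    # otherwise treat the arguments as a command to run
esac
echo "would exec java here with options: $*"
```

With no arguments at all, `exec "$@"` expands to a bare `exec`, which is a no-op, so control falls through to the java line just as in the original script.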
Docker log
2016.08.31 07:56:45 INFO es[o.s.p.ProcessEntryPoint] Starting es
2016.08.31 07:56:45 INFO es[o.s.s.EsSettings] Elasticsearch listening on 127.0.0.1:9001
2016.08.31 07:56:45 INFO es[o.elasticsearch.node] [sonar-1472630204100] version[1.7.5], pid[18], build[00f95f4/2016-02-02T09:55:30Z]
2016.08.31 07:56:45 INFO es[o.elasticsearch.node] [sonar-1472630204100] initializing ...
2016.08.31 07:56:45 INFO es[o.e.plugins] [sonar-1472630204100] loaded [], sites []
2016.08.31 07:56:45 INFO es[o.elasticsearch.env] [sonar-1472630204100] using [1] data paths, mounts [[/opt/sonarqube/data (/dev/vda2)]], net usable_space [55gb], net total_space [59gb], types [ext4]
2016.08.31 07:56:46 WARN es[o.e.bootstrap] JNA not found. native methods will be disabled.
2016.08.31 07:56:47 INFO es[o.elasticsearch.node] [sonar-1472630204100] initialized
2016.08.31 07:56:47 INFO es[o.elasticsearch.node] [sonar-1472630204100] starting ...
2016.08.31 07:56:47 WARN es[o.e.common.network] failed to resolve local host, fallback to loopback
java.net.UnknownHostException: 05ae620efc22: 05ae620efc22: unknown error
at java.net.InetAddress.getLocalHost(InetAddress.java:1505) ~[na:1.8.0_72]
at org.elasticsearch.common.network.NetworkUtils.<clinit>(NetworkUtils.java:55) ~[elasticsearch-1.7.5.jar:na]
at org.elasticsearch.transport.netty.NettyTransport.createClientBootstrap(NettyTransport.java:350) [elasticsearch-1.7.5.jar:na]
at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:242) [elasticsearch-1.7.5.jar:na]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85) [elasticsearch-1.7.5.jar:na]
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:153) [elasticsearch-1.7.5.jar:na]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85) [elasticsearch-1.7.5.jar:na]
at org.elasticsearch.node.internal.InternalNode.start(InternalNode.java:257) [elasticsearch-1.7.5.jar:na]
at org.sonar.search.SearchServer.start(SearchServer.java:46) [sonar-search-5.6.1.jar:na]
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:102) [sonar-process-5.6.1.jar:na]
at org.sonar.search.SearchServer.main(SearchServer.java:81) [sonar-search-5.6.1.jar:na]
Caused by: java.net.UnknownHostException: 05ae620efc22: unknown error
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_72]
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) ~[na:1.8.0_72]
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) ~[na:1.8.0_72]
at java.net.InetAddress.getLocalHost(InetAddress.java:1500) ~[na:1.8.0_72]
... 10 common frames omitted

Maybe there is no Name Service Switch file in Alpine Linux, and Java needs one for java.net.InetAddress.getLocalHost, for example.
Add this line to the Dockerfile:
RUN echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' > /etc/nsswitch.conf

Since InetAddress#getLocalHost will "retrieve the name of the host from the system", make sure you launch your container with a hostname:
docker run --add-host xxx:ip --hostname yyy
See the docker run Network Settings documentation for the --add-host and --hostname options.
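Taken together, the two fixes above look like this when building and running the image (a sketch; `my-sonarqube` is a placeholder image tag and `sonarqube` an arbitrary hostname):

```shell
# Build the image after adding the nsswitch.conf RUN line to the Dockerfile,
# then run it with an explicit hostname so InetAddress.getLocalHost() can
# resolve it against the container's /etc/hosts.
docker build -t my-sonarqube .
docker run -d -p 9000:9000 --hostname sonarqube my-sonarqube
```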

I ran into a similar issue on k8s and resolved it by providing an environment variable that forces the H2 DB to bind to 127.0.0.1:
kubectl run sonarqube --image=sonarqube --port=9092 --env="SONARQUBE_WEB_JVM_OPTS=-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dh2.bindAddress=127.0.0.1"
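With plain docker instead of kubectl, the same workaround is passed through the SONARQUBE_WEB_JVM_OPTS environment variable, which the run.sh above forwards into sonar.web.javaAdditionalOpts (a sketch, assuming the stock sonarqube image):

```shell
# Force the embedded H2 database to bind to loopback inside the container.
docker run -d -p 9000:9000 \
  -e "SONARQUBE_WEB_JVM_OPTS=-Xmx512m -Xms128m -Dh2.bindAddress=127.0.0.1" \
  sonarqube
```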

Related

Jenkins Startup fails on AWS ECS due to secrets-manager-credentials-provider-plugin

I'm using the AWS Secrets Manager Credentials Provider plugin, and it seems to be causing Jenkins to fail on startup.
I've followed the troubleshooting steps here with no luck; the last thing I did was split out the IAM perms.
I am running the jenkins/jenkins:lts Docker image on AWS ECS, describing my stack using AWS CDK. I installed the plugin using /usr/local/bin/install-plugins.sh from the Docker image.
When I run the same Docker image on an EC2 server, startup is successful, but via ECS I get this error:
java.lang.NullPointerException
at io.jenkins.plugins.credentials.secretsmanager.AwsSecretSource.reveal(AwsSecretSource.java:35)
at io.jenkins.plugins.casc.SecretSourceResolver$ConfigurationContextStringLookup.lambda$lookup$ad236547$1(SecretSourceResolver.java:141)
at io.vavr.CheckedFunction0.lambda$unchecked$52349c75$1(CheckedFunction0.java:247)
at io.jenkins.plugins.casc.SecretSourceResolver$ConfigurationContextStringLookup.lambda$lookup$0(SecretSourceResolver.java:141)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1632)
at java.base/java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:127)
at java.base/java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:502)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:488)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:150)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.findFirst(ReferencePipeline.java:543)
at io.jenkins.plugins.casc.SecretSourceResolver$ConfigurationContextStringLookup.lookup(SecretSourceResolver.java:143)
at org.apache.commons.text.lookup.InterpolatorStringLookup.lookup(InterpolatorStringLookup.java:144)
at org.apache.commons.text.StringSubstitutor.resolveVariable(StringSubstitutor.java:1067)
at org.apache.commons.text.StringSubstitutor.substitute(StringSubstitutor.java:1433)
at org.apache.commons.text.StringSubstitutor.substitute(StringSubstitutor.java:1308)
at org.apache.commons.text.StringSubstitutor.replaceIn(StringSubstitutor.java:1019)
at io.jenkins.plugins.casc.SecretSourceResolver.resolve(SecretSourceResolver.java:109)
at io.jenkins.plugins.casc.impl.configurators.PrimitiveConfigurator.configure(PrimitiveConfigurator.java:44)
at io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator.tryConstructor(DataBoundConfigurator.java:159)
at io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator.instance(DataBoundConfigurator.java:76)
at io.jenkins.plugins.casc.BaseConfigurator.configure(BaseConfigurator.java:267)
at io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator.configure(DataBoundConfigurator.java:82)
at io.jenkins.plugins.casc.impl.configurators.HeteroDescribableConfigurator.lambda$doConfigure$16668e2$1(HeteroDescribableConfigurator.java:277)
at io.vavr.CheckedFunction0.lambda$unchecked$52349c75$1(CheckedFunction0.java:247)
at io.jenkins.plugins.casc.impl.configurators.HeteroDescribableConfigurator.doConfigure(HeteroDescribableConfigurator.java:277)
at io.jenkins.plugins.casc.impl.configurators.HeteroDescribableConfigurator.lambda$configure$2(HeteroDescribableConfigurator.java:86)
at io.vavr.control.Option.map(Option.java:392)
at io.jenkins.plugins.casc.impl.configurators.HeteroDescribableConfigurator.lambda$configure$3(HeteroDescribableConfigurator.java:86)
at io.vavr.Tuple2.apply(Tuple2.java:238)
at io.jenkins.plugins.casc.impl.configurators.HeteroDescribableConfigurator.configure(HeteroDescribableConfigurator.java:83)
at io.jenkins.plugins.casc.impl.configurators.HeteroDescribableConfigurator.configure(HeteroDescribableConfigurator.java:55)
at io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator.tryConstructor(DataBoundConfigurator.java:151)
at io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator.instance(DataBoundConfigurator.java:76)
at io.jenkins.plugins.casc.BaseConfigurator.configure(BaseConfigurator.java:267)
at io.jenkins.plugins.casc.impl.configurators.DataBoundConfigurator.check(DataBoundConfigurator.java:100)
at io.jenkins.plugins.casc.BaseConfigurator.configure(BaseConfigurator.java:344)
at io.jenkins.plugins.casc.BaseConfigurator.check(BaseConfigurator.java:287)
at io.jenkins.plugins.casc.BaseConfigurator.configure(BaseConfigurator.java:351)
at io.jenkins.plugins.casc.BaseConfigurator.check(BaseConfigurator.java:287)
at io.jenkins.plugins.casc.ConfigurationAsCode.lambda$checkWith$8(ConfigurationAsCode.java:777)
at io.jenkins.plugins.casc.ConfigurationAsCode.invokeWith(ConfigurationAsCode.java:713)
at io.jenkins.plugins.casc.ConfigurationAsCode.checkWith(ConfigurationAsCode.java:777)
at io.jenkins.plugins.casc.ConfigurationAsCode.configureWith(ConfigurationAsCode.java:762)
at io.jenkins.plugins.casc.ConfigurationAsCode.configureWith(ConfigurationAsCode.java:638)
at io.jenkins.plugins.casc.ConfigurationAsCode.configure(ConfigurationAsCode.java:307)
at io.jenkins.plugins.casc.ConfigurationAsCode.init(ConfigurationAsCode.java:299)
Caused: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at hudson.init.TaskMethodFinder.invoke(TaskMethodFinder.java:104)
Caused: java.lang.Error
at hudson.init.TaskMethodFinder.invoke(TaskMethodFinder.java:110)
at hudson.init.TaskMethodFinder$TaskImpl.run(TaskMethodFinder.java:175)
at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:296)
at jenkins.model.Jenkins$5.runTask(Jenkins.java:1129)
at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:214)
at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:117)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused: org.jvnet.hudson.reactor.ReactorException
at org.jvnet.hudson.reactor.Reactor.execute(Reactor.java:282)
at jenkins.InitReactorRunner.run(InitReactorRunner.java:49)
at jenkins.model.Jenkins.executeReactor(Jenkins.java:1162)
at jenkins.model.Jenkins.<init>(Jenkins.java:960)
at hudson.model.Hudson.<init>(Hudson.java:86)
at hudson.model.Hudson.<init>(Hudson.java:82)
at hudson.WebAppMain$3.run(WebAppMain.java:295)
Caused: hudson.util.HudsonFailedToLoad
at hudson.WebAppMain$3.run(WebAppMain.java:312)
I have also submitted an issue.
https://github.com/jenkinsci/aws-secrets-manager-credentials-provider-plugin/issues/117#issue-938932773
EDIT: In the link from your question there is a simpler solution: use AWS_REGION. I was using AWS_DEFAULT_REGION, which doesn't work.
Here's my simplified solution:
# install aws cli
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_REGION
ENV AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
AWS_REGION=${AWS_REGION}
RUN wget "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -O "awscliv2.zip" \
&& unzip -q awscliv2.zip \
&& ./aws/install \
&& rm awscliv2.zip
For completeness, here's my make build command:
build:
echo -e $(shell getent group docker | cut -d: -f3)
docker build -t $(PROJ):$(VERSION) --build-arg DOCKER_GID=$(shell getent group docker | cut -d: -f3) \
--build-arg AWS_ACCESS_KEY_ID=$(shell aws configure get aws_access_key_id --profile=default) \
--build-arg AWS_SECRET_ACCESS_KEY=$(shell aws configure get aws_secret_access_key --profile=default) \
--build-arg AWS_REGION=$(shell aws configure get region --profile=default) \
-f Dockerfile .
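The `$(shell getent group docker | cut -d: -f3)` bit extracts the numeric GID from the colon-separated group database entry (name:password:GID:members); for example:

```shell
# Field 3 of a getent group entry is the numeric group ID.
getent group root | cut -d: -f3    # prints 0 on typical Linux systems
```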
OLD ANSWER:
Looks like Hudson uses credentials and config from ~/.aws.
The problem with Jenkins and Docker is that $JENKINS_HOME is a volume (as I learned here), not a folder.
You can't create folders in there from the Dockerfile.
Luckily, you can change the location of those configs.
Here's the part of my Dockerfile that solved the issue:
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_DEFAULT_REGION
ENV AWS_CONFIG_FOLDER=/opt/.aws
ENV AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION} \
AWS_CONFIG_FILE=$AWS_CONFIG_FOLDER/config \
AWS_SHARED_CREDENTIALS_FILE=$AWS_CONFIG_FOLDER/credentials
RUN wget "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -O "awscliv2.zip" \
&& unzip -q awscliv2.zip \
&& ./aws/install \
&& rm awscliv2.zip \
&& mkdir $AWS_CONFIG_FOLDER \
&& aws --profile default configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" \
&& aws --profile default configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY" \
&& aws --profile default configure set region "$AWS_DEFAULT_REGION" \
&& chown -R jenkins:jenkins $AWS_CONFIG_FOLDER

UniTime keeps creating an error when I run it with my Dockerfile

I need to use UniTime on Docker, but it doesn't work properly; I keep getting an error when I run it.
The error I get:
ERROR TaskExecutorService -> Failed to check for tasks: The session factory has not been initialized (or an error occured during initialization)
java.lang.RuntimeException: The session factory has not been initialized (or an error occured during initialization)
at org.unitime.timetable.model.base._BaseRootDAO.getSessionFactory(_BaseRootDAO.java:111)
at org.unitime.timetable.model.base._BaseRootDAO.getSession(_BaseRootDAO.java:151)
at org.unitime.timetable.model.base._BaseRootDAO.createNewSession(_BaseRootDAO.java:141)
at org.unitime.timetable.server.script.TaskExecutorService.checkForQueuedTasks(TaskExecutorService.java:67)
at org.unitime.timetable.server.script.TaskExecutorService$TaskExecutor.run(TaskExecutorService.java:162)
Dockerfile:
FROM tomcat:8
EXPOSE 8080
RUN apt-get update && \
apt-get install -y apt-utils && \
apt-get install -y default-mysql-server
ENV JAVA_OPTS="-Djava.awt.headless=true -Xmx2g -XX:+UseConcMarkSweepGC"
ENV TOMCAT8_GROUP=tomcat8
ENV TOMCAT8_USER=tomcat8
RUN wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar && \
cp mysql-connector-java-5.1.38.jar /usr/local/tomcat/lib/ && \
wget https://github.com/UniTime/unitime/releases/download/v4.4.140/unitime-4.4_bld140.zip && \
unzip unitime-4.4_bld140.zip -d unitime && \
/etc/init.d/mysql start && \
mysql -uroot -f <unitime/doc/mysql/schema.sql && \
mysql -uroot -f <unitime/doc/mysql/schema.sql && \
mysql -utimetable -punitime <unitime/doc/mysql/blank-data.sql && \
mkdir /usr/local/tomcat/data && \
useradd tomcat && \
chown tomcat /usr/local/tomcat/data && \
cp unitime/web/UniTime.war /usr/local/tomcat/webapps
If you are still looking for help on this, here are some of the changes I had to make in the UniTime files to get it to work with a Docker deploy. Some of the lib updates are specific to Java 8.
Un-comment the MySQL connectivity section in the UniTime pom.xml:
https://github.com/UniTime/unitime/blob/master/pom.xml#L275-L282
Update the pom's mysql.version to the version you are using.
In the UniTime .project file, add the com.gwtplugins.gwt.eclipse references:
<buildCommand>
<name>com.gwtplugins.gdt.eclipse.core.webAppProjectValidator</name>
<arguments>
</arguments>
</buildCommand>
<buildCommand>
<name>com.gwtplugins.gwt.eclipse.core.gwtProjectValidator</name>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
...
<nature>com.gwtplugins.gwt.eclipse.core.gwtNature</nature>
</natures>
...
In UniTime Documentation/Database/MySQL/schema.sql, find the Docker equivalent of localhost (127..?) or expand the timetable user's scope (for testing):
...
drop user timetable@localhost;
drop user IF EXISTS timetable@localhost;
create user timetable@localhost identified by 'unitime';
/* for Docker */
create user timetable@'%' identified by 'unitime';
grant all on timetable.* to timetable@localhost;
/* for Docker */
grant all on timetable.* to timetable@'%';
...
In UniTime JavaSource/hibernate.cfg.xml, add additional Hibernate params:
...
<property name="connection.url">jdbc:mysql://database:3306/timetable?allowPublicKeyRetrieval=TRUE</property>
...
<!-- for Java 8-11 TLSv1.2 SSL 3 -->
<property name="hibernate.connection.verifyServerCertificate">false</property>
<property name="hibernate.connection.requireSSL">false</property>
<property name="hibernate.connection.useSSL">false</property>
<!-- End of MySQL Configuration -->
...
I used a docker-compose.yml file to set up the Tomcat and Docker config.
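The hibernate connection.url above points at a host named database, so the MySQL container has to be reachable under that name. Without compose, that could be wired up roughly like this (a sketch; my-unitime is a placeholder for the image built from the Dockerfile above, and the credentials are examples):

```shell
# Put both containers on one user-defined network; Docker's embedded DNS
# then resolves the container name "database" from inside the UniTime container.
docker network create unitime-net
docker run -d --name database --network unitime-net \
  -e MYSQL_ROOT_PASSWORD=unitime mysql:5.7
docker run -d --name unitime --network unitime-net -p 8080:8080 my-unitime
```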

HyperLedger Fabric v1.4.4: Instantiating smart contract on mychannel fails with an error

I am following the HyperLedger Fabric v1.4.4 "Writing Your First Application" tutorial [1], but I am having a problem running ./startFabric.sh javascript:
+ echo 'Instantiating smart contract on mychannel'
Instantiating smart contract on mychannel
+ docker exec -e CORE_PEER_LOCALMSPID=Org1MSP -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp cli peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n fabcar -l node -v 1.0 -c '{"Args":[]}' -P 'AND('\''Org1MSP.member'\'','\''Org2MSP.member'\'')' --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
2019-11-25 16:14:38.470 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
2019-11-25 16:14:38.470 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
Error: could not assemble transaction, err proposal response was not successful, error code 500, msg error starting container: error starting container: Failed to generate platform-specific docker build: Error returned from build: 1 "npm ERR! code EAI_AGAIN
npm ERR! errno EAI_AGAIN
npm ERR! request to https://registry.npmjs.org/fabric-shim failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org:443
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2019-11-25T16_17_00_000Z-debug.log
"
I think that the error is related to the Docker image, because it occurs when the following code is executed:
echo "Instantiating smart contract on mychannel"
docker exec \
-e CORE_PEER_LOCALMSPID=Org1MSP \
-e CORE_PEER_MSPCONFIGPATH=${ORG1_MSPCONFIGPATH} \
cli \
peer chaincode instantiate \
-o orderer.example.com:7050 \
-C mychannel \
-n fabcar \
-l "$CC_RUNTIME_LANGUAGE" \
-v 1.0 \
-c '{"Args":[]}' \
-P "AND('Org1MSP.member','Org2MSP.member')" \
--tls \
--cafile ${ORDERER_TLS_ROOTCERT_FILE} \
--peerAddresses peer0.org1.example.com:7051 \
--tlsRootCertFiles ${ORG1_TLS_ROOTCERT_FILE}
I don't know much about Docker, but I'm learning. In the meantime, can anyone help me with this problem?
[1] https://hyperledger-fabric.readthedocs.io/en/release-1.4/write_first_app.html
Update 1
The same error occurs when I run ./byfn.sh up -l node, but there is no error for ./byfn.sh up. I think the error is connected to fabric-shim. I am still looking for an answer to this error.
The command to instantiate the smart contract will be trying to start a new chaincode container, and this is failing because the new container cannot successfully run npm install.
The problem could be a Docker DNS issue, or an npm registry connection problem due to the country or corporate network you are connecting from.
The following two previous answers should help you:
Network calls fail during image build on corporate network
Error while running fabcar sample in javascript
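If it turns out to be the Docker DNS case, getaddrinfo EAI_AGAIN means the chaincode build container could not resolve registry.npmjs.org. One common workaround (a sketch, assuming Linux, the default config path, and no existing daemon.json; 8.8.8.8/1.1.1.1 are example resolvers, use your network's DNS) is to give the Docker daemon explicit DNS servers:

```shell
# Write explicit DNS servers into the daemon config, then restart Docker.
echo '{ "dns": ["8.8.8.8", "1.1.1.1"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
# Sanity check: a fresh container should now resolve the npm registry.
docker run --rm alpine nslookup registry.npmjs.org
```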

How to flash a Pixhawk from a Docker container?

I am taking my first steps in developing on the PX4 using Docker.
To that end, I extend the px4io/px4-dev-nuttx image into px4dev with some extra installations.
Dockerfile
FROM px4io/px4-dev-nuttx
RUN apt-get update && \
apt-get install -y \
python-serial \
openocd \
flex \
bison \
libncurses5-dev \
autoconf \
texinfo \
libftdi-dev \
libtool \
zlib1g-dev
RUN useradd -ms /bin/bash user
ADD ./Firmware /src/firmware/
RUN chown -R user:user /src/firmware/
Then I run the image/container:
docker run -it --privileged \
--env=LOCAL_USER_ID="$(id -u)" \
-v /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00:/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00:rw \
px4dev \
bash
I also tried:
--device=/dev/ttyACM0 \
--device=/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00 \
Then I switched to /src/firmware/ and built the code. But the upload always results in this error:
make px4fmu-v2_default upload
ninja: Entering directory `/src/firmware/build/nuttx_px4fmu-v2_default'
[0/1] uploading px4
Loaded firmware for board id: 9,0 size: 1028997 bytes (99.69%), waiting for the bootloader...
I use a Pixhawk 2.4.8; my host is Ubuntu 18.04 64-bit. Doing the same on the host works.
What is going wrong here? Could a reboot of the PX4 during flashing be causing the problem?
If it is generally not possible, what is the output file of the build, and is it possible to upload it using QGroundControl?
Kind regards,
Alex
run script:
#!/bin/bash
docker run -it --rm --privileged \
--env=LOCAL_USER_ID="$(id -u)" \
--device=/dev/ttyACM0 \
--device=/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00 \
--name=dev01 \
px4dev \
bash
For some reason, the upload sometimes ends differently:
user@7d6bd90821f9:/src/firmware$ make px4fmu-v2_default upload
...
[153/153] Linking CXX executable nuttx_px4io-v2_default.elf
[601/602] uploading /src/firmware/build/px4fmu-v2_default/px4fmu-v2_default.px4
Loaded firmware for 9,0, size: 1026517 bytes, waiting for the bootloader...
If the board does not respond within 1-2 seconds, unplug and re-plug the USB connector.
But even if I do so, it stays stuck there.
Regarding the default device, I grepped through the build folder:
user@7d6bd90821f9:/src/firmware$ grep -r "/dev/serial" ./build/
./build/px4fmu-v2_default/build.ninja: COMMAND = cd /src/firmware/build/px4fmu-v2_default && /usr/bin/python /src/firmware/Tools/px_uploader.py --port "/dev/serial/by-id/*_PX4_*,/dev/serial/by-id/usb-3D_Robotics*,/dev/serial/by-id/usb-The_Autopilot*,/dev/serial/by-id/usb-Bitcraze*,/dev/serial/by-id/pci-3D_Robotics*,/dev/serial/by-id/pci-Bitcraze*,/dev/serial/by-id/usb-Gumstix*" /src/firmware/build/px4fmu-v2_default/px4fmu-v2_default.px4
There is px_uploader.py --port "...,/dev/serial/by-id/usb-3D_Robotics*,...", so I would say it does look for /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00.
Looking with ls /dev/ inside the container for the available devices, neither /dev/ttyACM0 nor /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00 is listed. This may be the problem: something is wrong with --device=...
But ls shows that /dev/usb/ is available. So I checked with lsusb, and the PX4 is listed next to the others:
user@3077c8b483f8:/$ lsusb
Bus 003 Device 018: ID 26ac:0011
Maybe the correct driver for this USB device is missing inside the container?
On my host I get the major:minor numbers 166:0:
user:~$ ll /dev/
crw-rw---- 1 root dialout 166, 0 Jan 2 00:40 ttyACM0
The folder /sys/dev/char/166:0 is identical on the host and in the container as far as I can see. And in the container there seems to be a link to something with */tty/ttyACM0, like on the host:
user@3077c8b483f8:/$ ls -l /sys/dev/char/166\:0
lrwxrwxrwx 1 root root 0 Jan 1 23:44 /sys/dev/char/166:0 -> ../../devices/pci0000:00/0000:00:14.0/usb3/3-1/3-1.3/3-1.3.1/3-1.3.1.3/3-1.3.1.3:1.0/tty/ttyACM0
On the host I get this information about the devices, but it is missing inside the container:
user:~$ ls -l /dev/ttyACM0
crw-rw---- 1 root dialout 166, 0 Jan 2 00:40 ttyACM0
user:~$ ls -l /dev/serial/by-id/
total 0
lrwxrwxrwx 1 root root 13 Jan 2 00:40 usb-3D_Robotics_PX4_FMU_v2.x_0-if00 -> ../../ttyACM0
Following this post, I changed my run script to (without the privileged flag):
#!/bin/bash
DEV1='/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00'
docker run \
-it \
--rm \
--env=LOCAL_USER_ID=0 \
--device=/dev/ttyACM0 \
--device=$DEV1 \
-v ${PWD}/Firmware:/opt/Firmware \
px4dev_nuttx \
bash
Then I see the devices. But they are not accessible:
root@586fa4570d1c:/# setserial /dev/ttyACM0
/dev/ttyACM0, UART: unknown, Port: 0x0000, IRQ: 0
root@586fa4570d1c:/# setserial /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00
/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00, UART: unknown, Port: 0x0000, IRQ: 0
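One more thing worth checking: /dev/serial/by-id/... is a udev-created symlink, and udev does not run inside the container, so even when the real device node is passed through, the symlink tree that px_uploader.py scans for may be missing. A possible workaround (a sketch using the paths from the question; untested) is to resolve the symlink on the host and recreate it inside the container:

```shell
# On the host: resolve the by-id symlink to the real device node.
LINK=/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00
DEV=$(readlink -f "$LINK")          # typically /dev/ttyACM0
docker run -it --rm --device="$DEV" px4dev bash
# Inside the container: recreate the symlink the uploader looks for.
#   mkdir -p /dev/serial/by-id
#   ln -s /dev/ttyACM0 /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00
```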

CentOS 6 Docker build using livemedia-creator is failing

I am trying to build a Docker base image using livemedia-creator on CentOS 7.5 with the latest patches installed, but it is failing. Below is the error I am getting:
# livemedia-creator --make-tar --no-virt --iso=CentOS-6.10-x86_64-netinstall.iso --ks=centos-6.ks --image-name=centos-root.tar.xz
Starting package installation process
The installation was stopped due to incomplete spokes detected while running in non-interactive cmdline mode. Since there cannot be any questions in cmdline mode, edit your kickstart file and retry installation.
The exact error message is:
CmdlineError: Missing package: firewalld.
The installer will now terminate.
The kickstart file which I am using is as follows:
url --url="http://mirrors.kernel.org/centos/6.9/os/x86_64/"
install
keyboard us
lang en_US.UTF-8
rootpw --lock --iscrypted locked
authconfig --enableshadow --passalgo=sha512
timezone --isUtc Etc/UTC
selinux --enforcing
#firewall --disabled
firewall --disable
network --bootproto=dhcp --device=eth0 --activate --onboot=on
reboot
bootloader --location=none
# Repositories to use
repo --name="CentOS" --baseurl=http://mirror.centos.org/centos/6.9/os/x86_64/ --cost=100
repo --name="Updates" --baseurl=http://mirror.centos.org/centos/6.9/updates/x86_64/ --cost=100
# Disk setup
zerombr
clearpart --all
part / --size 3000 --fstype ext4
%packages --excludedocs --nobase --nocore
vim-minimal
yum
bash
bind-utils
centos-release
shadow-utils
findutils
iputils
iproute
grub
-*-firmware
passwd
rootfiles
util-linux-ng
yum-plugin-ovl
%end
%post --log=/tmp/anaconda-post.log
# Post configure tasks for Docker
# remove stuff we don't need that anaconda insists on
# kernel needs to be removed by rpm, because of grubby
rpm -e kernel
yum -y remove dhclient dhcp-libs dracut grubby kmod grub2 centos-logos \
hwdata os-prober gettext* bind-license freetype kmod-libs dracut
yum -y remove dbus-glib dbus-python ebtables \
gobject-introspection libselinux-python pygobject3-base \
python-decorator python-slip python-slip-dbus kpartx kernel-firmware \
device-mapper* e2fsprogs-libs sysvinit-tools kbd-misc libss upstart
#clean up unused directories
rm -rf /boot
rm -rf /etc/firewalld
# Randomize root's password and lock
dd if=/dev/urandom count=50 | md5sum | passwd --stdin root
passwd -l root
#LANG="en_US"
#echo "%_install_lang $LANG" > /etc/rpm/macros.image-language-conf
awk '(NF==0&&!done){print "override_install_langs='$LANG'\ntsflags=nodocs";done=1}{print}' \
< /etc/yum.conf > /etc/yum.conf.new
mv /etc/yum.conf.new /etc/yum.conf
echo 'container' > /etc/yum/vars/infra
rm -f /usr/lib/locale/locale-archive
#Setup locale properly
localedef -v -c -i en_US -f UTF-8 en_US.UTF-8
#disable services
for serv in `/sbin/chkconfig|cut -f1`; do /sbin/chkconfig "$serv" off; done;
mv /etc/rc1.d/S26udev-post /etc/rc1.d/K26udev-post
rm -rf /var/cache/yum/*
rm -f /tmp/ks-script*
rm -rf /etc/sysconfig/network-scripts/ifcfg-*
#Generate installtime file record
/bin/date +%Y%m%d_%H%M > /etc/BUILDTIME
%end
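For reference, the awk one-liner in %post rewrites yum.conf by inserting the two override lines at the first blank line (note it interpolates $LANG from the build environment). A standalone sketch with a hypothetical input file:

```shell
# Build a small yum.conf-style sample, then insert two override lines
# at its first blank line, as the kickstart %post does.
printf '[main]\nkeepcache=0\n\ngpgcheck=1\n' > /tmp/yum.conf.sample
awk '(NF==0 && !done){print "override_install_langs=en_US.UTF-8\ntsflags=nodocs"; done=1}{print}' \
  /tmp/yum.conf.sample
```

The overrides land just before the first blank line, which in a real yum.conf is the end of the [main] section.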
I am not able to figure out where firewalld is being pulled in from. Any thoughts on how to fix this issue?
