I built a Docker image with Suricata in it, but when I try to run Suricata, I get the errors below:
3/9/2018 -- 02:58:12 - - This is Suricata version 4.0.5 RELEASE
3/9/2018 -- 02:58:12 - - [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to set feature via ioctl for 'ens33': Operation not permitted (1)
3/9/2018 -- 02:58:12 - - [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to set feature via ioctl for 'ens33': Operation not permitted (1)
3/9/2018 -- 02:58:12 - - [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to set feature via ioctl for 'ens33': Operation not permitted (1)
3/9/2018 -- 02:58:12 - - [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to set feature via ioctl for 'ens33': Operation not permitted (1)
3/9/2018 -- 02:58:12 - - all 2 packet processing threads, 4 management threads initialized, engine started.
Docker image: ttbuge/suricata:4.5.2
Run command: docker run -it --net=host -v $PWD/logs:/var/log/suricata ttbuge/suricata:4.5.2 suricata -i ens33
Any tips? Thanks!
Try running it with the --privileged option.
For example:
docker run --privileged -it --net=host -v $PWD/logs:/var/log/suricata ttbuge/suricata:4.5.2 suricata -i ens33
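If you'd rather not give the container full privileges, granting only the network-related capabilities is usually enough for Suricata to set interface features via ioctl. This is a sketch assuming the image runs Suricata as root:
docker run -it --net=host --cap-add=NET_ADMIN --cap-add=NET_RAW -v $PWD/logs:/var/log/suricata ttbuge/suricata:4.5.2 suricata -i ens33
If Suricata still reports permission errors (for example when setting CPU affinity), fall back to --privileged.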
I am new to Scylla and I am following the instructions to try it in a container as per this page: https://hub.docker.com/r/scylladb/scylla/.
The following command ran fine.
docker run --name some-scylla --hostname some-scylla -d scylladb/scylla
I see the container is running.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e6c4e19ff1bd scylladb/scylla "/docker-entrypoint.…" 14 seconds ago Up 13 seconds 22/tcp, 7000-7001/tcp, 9042/tcp, 9160/tcp, 9180/tcp, 10000/tcp some-scylla
However, I'm unable to use nodetool or cqlsh. I get the following output.
$ docker exec -it some-scylla nodetool status
Using /etc/scylla/scylla.yaml as the config file
nodetool: Unable to connect to Scylla API server: java.net.ConnectException: Connection refused (Connection refused)
See 'nodetool help' or 'nodetool help <command>'.
and
$ docker exec -it some-scylla cqlsh
Connection error: ('Unable to connect to any servers', {'172.17.0.2': error(111, "Tried connecting to [('172.17.0.2', 9042)]. Last error: Connection refused")})
Any ideas?
Update
Looking at docker logs some-scylla, I see some errors in the logs; the last one is as follows.
2021-10-03 07:51:04,771 INFO spawned: 'scylla' with pid 167
Scylla version 4.4.4-0.20210801.69daa9fd0 with build-id eb11cddd30e88ef39c32c847e70181b5cf786355 starting ...
command used: "/usr/bin/scylla --log-to-syslog 0 --log-to-stdout 1 --default-log-level info --network-stack posix --developer-mode=1 --overprovisioned --listen-address 172.17.0.2 --rpc-address 172.17.0.2 --seed-provider-parameters seeds=172.17.0.2 --blocked-reactor-notify-ms 999999999"
parsed command line options: [log-to-syslog: 0, log-to-stdout: 1, default-log-level: info, network-stack: posix, developer-mode: 1, overprovisioned, listen-address: 172.17.0.2, rpc-address: 172.17.0.2, seed-provider-parameters: seeds=172.17.0.2, blocked-reactor-notify-ms: 999999999]
ERROR 2021-10-03 07:51:05,203 [shard 6] seastar - Could not setup Async I/O: Resource temporarily unavailable. The most common cause is not enough request capacity in /proc/sys/fs/aio-max-nr. Try increasing that number or reducing the amount of logical CPUs available for your application
2021-10-03 07:51:05,316 INFO exited: scylla (exit status 1; not expected)
2021-10-03 07:51:06,318 INFO gave up: scylla entered FATAL state, too many start retries too quickly
Update 2
The reason for the error is described on the Docker Hub page linked above. I had to start the container specifying the number of CPUs with --smp 1, as follows.
docker run --name some-scylla --hostname some-scylla -d scylladb/scylla --smp 1
According to the above page:
This command will start a Scylla single-node cluster in developer mode
(see --developer-mode 1) limited by a single CPU core (see --smp).
Production grade configuration requires tuning a few kernel parameters
such that limiting number of available cores (with --smp 1) is the
simplest way to go.
Multiple cores requires setting a proper value to the
/proc/sys/fs/aio-max-nr. On many non production systems it will be
equal to 65K. ...
As you have found out, in order to use additional CPU cores you'll need to increase the fs.aio-max-nr kernel parameter.
You may run as root:
# sysctl -w fs.aio-max-nr=65535
This should be enough for most systems. Should you still see an error preventing Scylla from using all of your CPU cores, increase the value further.
Note that the above setting is not persistent. Edit /etc/sysctl.conf to make it persist across reboots.
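For example, a sysctl drop-in file keeps the setting across reboots; the file name below is just a convention, any .conf file under /etc/sysctl.d/ works:
echo "fs.aio-max-nr = 65535" | sudo tee /etc/sysctl.d/99-scylla-aio.conf
sudo sysctl --system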
I want to run this Docker Hub image locally: https://hub.docker.com/r/jhipster/jhipster-sample-app (which normally runs with npm start and gradlew) on Windows 10 Home using Docker Toolbox (which works fine).
I followed the instructions at: https://www.jhipster.tech/docker-compose/
and tried to run $ docker-compose -f jhipster-sample-app/prod.yml up, but it gives me this error (although the image is there):
usuario#DESKTOP-GTCQCAR MINGW64 /c/Program Files/Docker Toolbox
$ docker-compose -f jhipster-sample-app/prod.yml up
ERROR: .FileNotFoundError: [Errno 2] No such file or directory: '.\\jhipster-sample-app/prod.yml'
NOTE: I also tried changing the tag, but with the same result. Why is it not finding the image that is definitely there?
I also tried the quick launch (run a simple JHipster application directly with Docker, in the development profile): $ docker container run -d -p 8080:8080 -e SPRING_PROFILES_ACTIVE=dev jhipster/jhipster-sample-app
But I could not access the application at http://localhost:8080 (though the container is created and running).
I even tried to run it with $ docker run jhipster/jhipster-sample-app, getting this error:
2019-01-31 09:33:05.215 INFO 1 --- [ main] i.g.j.s.JhipsterSampleApplicationApp : Starting JhipsterSampleApplicationApp on 596e926cb096 with PID 1 (/app.war started by root in /)
2019-01-31 09:33:05.252 INFO 1 --- [ main] i.g.j.s.JhipsterSampleApplicationApp : The following profiles are active: prod
2019-01-31 09:33:37.773 ERROR 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : Hikari - Exception during pool initialization.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
But I can run other images like $ docker run hello-world
So I feel kind of lost here and I do not know what I'm doing wrong. Thanks all! I'm new to Docker.
To run https://hub.docker.com/r/jhipster/jhipster-sample-app, you need to start the other containers such as the database. These are not packaged in the app container.
git clone https://github.com/jhipster/jhipster-sample-app.git
cd jhipster-sample-app
docker-compose -f src/main/docker/app.yml up -d
This will load the config from app.yml and start both the app and database containers.
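Once the containers are up, you can watch the startup logs and check the published ports. Note that with Docker Toolbox the application is reachable on the Docker machine's IP, not on localhost; the commands below assume the default machine name "default" and the usual 8080 mapping in app.yml:
docker-compose -f src/main/docker/app.yml ps
docker-compose -f src/main/docker/app.yml logs -f
docker-machine ip default
Then open http://<that-ip>:8080 in your browser.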
I'm new to SaltStack.
I need to install the NVIDIA driver on a minion server running CentOS 7 using SaltStack only.
In the gpu/init.sls file:
install_nvidia:
  cmd.script:
    - source: salt://gpu/files/NVIDIA-Linux-x86_64-375.20.run
    - user: root
    - group: root
    - shell: /bin/bash
    - args: -a
I run:
sudo salt minion_name state.apply gpu
I get the error:
...
stderr:
Error opening terminal: unknown.
...
...
Summary for minion_name
------------
Succeeded: 0 (changed=1)
Failed: 1
How can I get more verbose information about the reason it failed?
I believe it is waiting for user input, but I don't know what.
Also, how can I install the NVIDIA driver on CentOS 7 in a non-interactive way?
Thanks.
You can get more verbose information about why a Salt state has failed by running it locally using salt-call -l debug.
salt-call -l debug state.apply gpu
In your case, be aware that installing the NVIDIA driver on Linux requires running the installer without a graphical session present. The simplest way to do this is to check whether you're currently in a graphical session (with systemd) and drop to multi-user.target if so:
enter-multiuser:
  cmd.run:
    - name: systemctl isolate multi-user.target
    - onlyif: systemctl status graphical.target
Then, you can install the NVIDIA driver silently using something like
gpu-prerequisites:
  pkg.installed:
    - pkgs:
      - kernel-devel

download-installer:
  file.managed:
    - name: /tmp/NVIDIA-Linux-x86_64-375.20.run
    - source: salt://gpu/files/NVIDIA-Linux-x86_64-375.20.run
    # the installer is executed directly below, so it must be executable
    - mode: "0755"

install-driver:
  cmd.run:
    - name: /tmp/NVIDIA-Linux-x86_64-375.20.run -a -s -Z -X
    - require:
      - file: download-installer
      - pkg: gpu-prerequisites

start-graphical:
  cmd.run:
    - name: systemctl start graphical.target
    - unless: systemctl status graphical.target
    - watch:
      - cmd: install-driver
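Once the state applies cleanly, a quick way to confirm the driver is in place is to run nvidia-smi on the minion from the master; this assumes the install step succeeded, since nvidia-smi is shipped by the driver installer:
sudo salt minion_name cmd.run 'nvidia-smi'
sudo salt minion_name cmd.run 'lsmod | grep nvidia'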
I am unable to deploy the chaincode example in my local Hyperledger Fabric.
System config: macOS, Docker Toolbox for Mac.
One validating peer is up and running using this docker-compose.yaml:
membersrvc:
  image: hyperledger/fabric-membersrvc
  command: membersrvc
vp0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=false
    - CORE_VM_ENDPOINT=http://172.17.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_ID=vp0
    - CORE_SECURITY_ENROLLID=test_vp0
    - CORE_SECURITY_ENROLLSECRET=MwYpmSRjupbT
  links:
    - membersrvc
  command: sh -c "sleep 5; peer node start --peer-chaincodedev"
While deploying chaincode by running
CORE_CHAINCODE_ID_NAME=mycc CORE_PEER_ADDRESS=0.0.0.0:30303 ./test
It shows the error: [shim] ERRO : Error trying to connect to local peer: grpc: timed out trying to connect
I tried replacing CORE_PEER_ADDRESS as suggested in another answer about the grpc timeout, but the error did not change.
First Validating peer output
Chaincode deployment error window
You need to use the correct port number that the peer process is listening on.
Instead of using the following command:
CORE_CHAINCODE_ID_NAME=mycc CORE_PEER_ADDRESS=0.0.0.0:30303 ./test
try this instead:
CORE_CHAINCODE_ID_NAME=mycc CORE_PEER_ADDRESS=0.0.0.0:7052 ./test
If that doesn't work, run the following command to check which port is actually listening and use that instead:
netstat -atp tcp | grep -i "listen"
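If the peer runs inside a Docker container, you can also inspect the container's port mappings directly; the container name below is a placeholder, take the real one from docker ps:
docker ps
docker port <peer-container-name>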
My demo project is already running OK, but there is only one peer in the network. I want to add more peers to the network.
I followed this guide ==> https://github.com/hyperledger-archives/fabric/blob/540c4db5f64dba4bd1b18e896c96a8d17d7ec552/docs/dev-setup/devnet-setup.md.
Please help me check the log below.
Was the directory wrong, or what is the right way to start up the peer?
vagrant@hyperledger-devenv:v-:/opt/gopath/src/github.com/hyperledger/fabric$ docker run --rm -it -e CORE_VM_ENDPOINT=http://172.17.0.1:2375 -e CORE_PEER_ID=vp0 -e CORE_PEER_ADDRESSAUTODETECT=true hyperledger-peer peer node start
Unable to find image 'hyperledger-peer:latest' locally
Pulling repository docker.io/library/hyperledger-peer
docker: Error: image library/hyperledger-peer not found.
See 'docker run --help'.
Is it possible that the first node in the network was started outside a Docker container? (For example, it could have been started as a process using peer node start.)
We can verify which Docker images are available in the Vagrant machine. Just run the docker images command:
vagrant@hyperledger-devenv:v0.0.9-b4acc4b:$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hyperledger/fabric-baseimage latest c1d6f4800a55 27 hours ago 1.297 GB
hyperledger/fabric-baseimage x86_64-0.0.9 70328eed56aa 2 weeks ago 990.1 MB
busybox latest 47bcc53f74dc 9 weeks ago 1.113 MB
With such a configuration, when the "hyperledger-peer" image is not available, the validating peer will not start because of an "Unable to find image" error:
vagrant@hyperledger-devenv:v0.0.9-b4acc4b:/opt/gopath/src/github.com/hyperledger/fabric/peer$ docker run --rm -it -e CORE_VM_ENDPOINT=http://172.17.0.1:2375 -e CORE_PEER_ID=vp0 -e CORE_PEER_ADDRESSAUTODETECT=true hyperledger-peer peer node start
Unable to find image 'hyperledger-peer:latest' locally
Pulling repository docker.io/library/hyperledger-peer
docker: Error: image library/hyperledger-peer not found.
"hyperledger-peer:latest" image can be created using:
cd $GOPATH/src/github.com/hyperledger/fabric/core/container
go test -run BuildImage_Peer
Now docker images should show one more available image:
REPOSITORY TAG IMAGE ID CREATED SIZE
hyperledger-peer latest 438b65f18f21 8 seconds ago 1.418 GB
At this point, the validating peer should start successfully:
vagrant@hyperledger-devenv:v0.0.9-b4acc4b:~$ docker run --rm -it -e CORE_VM_ENDPOINT=http://172.17.0.1:2375 -e CORE_PEER_ID=vp0 -e CORE_PEER_ADDRESSAUTODETECT=true hyperledger-peer peer node start
21:55:51.969 [crypto] main -> INFO 001 Log level recognized 'info', set to INFO
21:55:51.970 [peer] func1 -> INFO 002 Auto detected peer address: 172.17.0.2:30303
21:55:51.971 [peer] func1 -> INFO 003 Auto detected peer address: 172.17.0.2:30303
21:55:51.972 [peer] func1 -> INFO 004 Auto detected peer address: 172.17.0.2:30303
21:55:51.974 [main] serve -> INFO 005 Security enabled status: false
21:55:51.974 [main] serve -> INFO 006 Privacy enabled status: false
…