Setup:
Connect an stm32f407vet6 board to a LAN8720 PHY, then connect to Ethernet
Code is here: https://github.com/RIOT-OS/RIOT/tree/master/examples/emcute_mqttsn
cd RIOT/examples/emcute_mqttsn
make BOARD=stm32f4discovery all flash term
With ifconfig, no Ethernet interface info is printed. Does RIOT support the Ethernet physical layer? Please see the log:
> help
2019-05-30 17:23:32,578 - INFO # help
2019-05-30 17:23:32,581 - INFO # Command Description
2019-05-30 17:23:32,584 - INFO # ---------------------------------------
2019-05-30 17:23:32,587 - INFO # con connect to MQTT broker
2019-05-30 17:23:32,592 - INFO # discon disconnect from the current broker
2019-05-30 17:23:32,595 - INFO # pub publish something
2019-05-30 17:23:32,598 - INFO # sub subscribe topic
2019-05-30 17:23:32,603 - INFO # unsub unsubscribe from topic
2019-05-30 17:23:32,606 - INFO # will register a last will
2019-05-30 17:23:32,609 - INFO # reboot Reboot the node
2019-05-30 17:23:32,615 - INFO # ps Prints information about running threads.
2019-05-30 17:23:32,617 - INFO # ping6 Ping via ICMPv6
2019-05-30 17:23:32,623 - INFO # random_init initializes the PRNG
2019-05-30 17:23:32,628 - INFO # random_get returns 32 bit of pseudo randomness
2019-05-30 17:23:32,632 - INFO # nib Configure neighbor information base
2019-05-30 17:23:32,637 - INFO # ifconfig Configure network interfaces
> ifconfig
2019-05-30 17:23:36,554 - INFO # ifconfig
>
STM32 Ethernet has been supported in RIOT for about a month now, but the board you specified (STM32F4Discovery) does not have an Ethernet interface, and thus the module for it is not enabled.
If your setup is similar to the discovery board, consider creating a board file for the stm32f407vet6 based on the discovery board and on the nucleo-f767zi board, which has support for STM32 Ethernet; a sketch of the steps follows. It's not a great deal of work if you know your board, and the project would certainly appreciate a pull request with whatever you come up with.
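A minimal sketch of those steps, assuming the new board directory is named stm32f407vet6 (the name, and exactly which files need porting, are assumptions to check against your hardware):
cd RIOT/boards
cp -r stm32f4discovery stm32f407vet6
# port the Ethernet pieces over from the nucleo-f767zi board, e.g. its
# eth_conf entry in include/periph_conf.h and the stm32_eth module in Makefile.dep
grep -rn eth nucleo-f767zi/
# then build and flash the example against the new board
cd ../examples/emcute_mqttsn
make BOARD=stm32f407vet6 all flash term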
I am trying to learn clustering rabbitmq nodes and I am following this tutorial as well as the official documentation.
I have 2 physical machines with RabbitMQ deployed on them through Docker. machine1 (192.168.1.2) is to be the cluster, and machine2 (192.168.1.3) is to join it.
When I attempt to run rabbitmqctl join_cluster rabbit@192.168.1.2 from machine2, this fails with the following message.
Clustering node rabbit@node2.rabbit with rabbit@192.168.1.2
Error: unable to perform an operation on node 'rabbit@192.168.1.2'. Please see diagnostics information and suggestions below.
Most common reasons for this are:
* Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
* CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
* Target node is not running
In addition to the diagnostics info below:
* See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
* Consult server logs on node rabbit@192.168.1.2
* If target node is configured to use long node names, don't forget to use --longnames with CLI tools
DIAGNOSTICS
===========
attempted to contact: ['rabbit@192.168.1.2']
rabbit@192.168.1.3:
* connected to epmd (port 4369) on 192.168.1.2
* epmd reports node 'rabbit' uses port 25672 for inter-node and CLI tool traffic
* TCP connection succeeded but Erlang distribution failed
* suggestion: check if the Erlang cookie identical for all server nodes and CLI tools
* suggestion: check if all server nodes and CLI tools use consistent hostnames when addressing each other
* suggestion: check if inter-node connections may be configured to use TLS. If so, all nodes and CLI tools must do that
* suggestion: see the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
Current node details:
* node name: 'rabbitmqcli-1352-rabbit@node2.rabbit'
* effective user's home directory: /var/lib/rabbitmq
* Erlang cookie hash: XXXXXXXXXXXXX
The error logs on machine1 show nothing related to such a connection attempt. I have verified the md5sum of the cookies on both docker containers and they are exactly the same. So are the permissions.
I assumed perhaps the port 4369 isn't reachable, but it is.
I am unsure what I am doing wrong. Can someone help here?
Additional information:
I am using the rabbitmq:3.8.5-management image. It uses Erlang/OTP 23 [erts-11.0.3].
I have been checking the troubleshooting guide, but I am unsure what seems wrong here. Please let me know if I can provide more information.
So thanks to @NeoAnderson and @José M, I was able to understand what happened.
The containers running RMQ need to be reachable, across the network, via the hostnames that Erlang uses within the service. Since the containers' hostnames were not resolvable from a container on the other machine, clustering failed.
A simple fix is to edit the /etc/hosts file in the containers so that the "leader" node's hostname resolves to its IP, as sketched below.
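For example, a minimal sketch (the node1/node2 hostnames and the .rabbit domain are assumptions based on the node names in the logs):
# on machine2's container: make the leader's hostname resolve to machine1
echo "192.168.1.2 node1.rabbit" >> /etc/hosts
# then join by node name instead of raw IP
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@node1.rabbit
rabbitmqctl start_app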
I was just doing this to avoid installing RMQ, not because I thought it was the best approach. Alternatively, Docker Swarm or k8s would have provided the right networking for me.
But the root cause was definitely the nodename problem.
I'm having trouble getting Control Center to work. I set up a 3-node Kafka cluster using the confluentinc/cp-enterprise-kafka Docker image. On a separate machine I downloaded Confluent Platform v5.0.1 and configured (or tried to) Control Center to monitor the Docker cluster.
The Kafka broker I'm using for the Control Center configuration is the one that ships with the downloaded Confluent Platform v5.0.1. (I start the whole stack via bin/confluent start.)
But I keep getting the rocket launching page when clicking Monitoring > System health.
My setup:
3 node kafka cluster using docker images.
docker image used = confluentinc/cp-enterprise-kafka
kafka running on these hostnames for the 3-node cluster:
os0 / running on tcp/29092
os1 / running on tcp/39092
os2 / running on tcp/49092
Control-center is running on a separate machine whose hostname = sb1
Furthermore, the brokers have the following directives defined:
metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
confluent.metrics.reporter.bootstrap.servers=sb1:9092
For Control Center I added the 3-node cluster config:
confluent.controlcenter.kafka.osd.bootstrap.servers=os0:29092,os1:39092,os2:49092
I'm expecting the Kafka brokers to write to the _confluent-metrics topic on the broker at sb1 (used by Control Center).
What I've tried/checked/debugged so far:
dumped the topic _confluent-metrics, and messages are being written there
I don't know if the logs from Control Center (at /tmp/confluent.QJ2C4BmE/control-center/control-center.stdout) show anything useful (at least not that I can interpret)
I can see HTTP/200 for the cluster I'm trying to monitor written in the log.
In the logs from the Kafka brokers I also see the following, which makes me think the messages were written to the topic:
[2018-12-15 07:57:59,893] ERROR Failed to submit metrics to Kafka topic __confluent.support.metrics (due to exception): java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for __confluent.support.metrics-0: 30083 ms has passed since batch creation plus linger time (io.confluent.support.metrics.submitters.KafkaSubmitter)
[2018-12-15 07:58:01,088] INFO Successfully submitted metrics to Confluent via secure endpoint (io.confluent.support.metrics.submitters.ConfluentSubmitter)
I've run out of ways to debug this; any help would be appreciated.
Thanks in advance.
I was accessing Control Center via an SSH tunnel. (This was a testing environment I was using to set up CC, i.e. Control Center.)
When I accessed the ip:port of CC directly, everything ran smoothly.
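If you do need a tunnel, a minimal sketch that forwards Control Center's default web port, 9021 (the user and host names are placeholders):
# forward CC's web UI and browse http://localhost:9021 on your own machine
ssh -L 9021:localhost:9021 user@sb1
# or, without a tunnel, hit it directly: http://sb1:9021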
I had to shut down wso2server to test another API management tool on the same machine. The other tool provided a quick setup running on Docker, so I had to install Docker.
Now, when I shut down all the Docker services and start wso2server again, it looks like some services detect the Docker virtual interface IP (172.17.0.1) instead of using the real IP (10.22.106.101):
[2016-11-04 16:33:21,452] INFO - CarbonUIServiceComponent Mgt Console URL : https://172.17.0.1:9443/carbon/
[2016-11-04 16:33:21,452] INFO - CarbonUIServiceComponent API Publisher Default Context : https://172.17.0.1:9443/publisher
[2016-11-04 16:33:21,452] INFO - CarbonUIServiceComponent API Store Default Context : https://172.17.0.1:9443/store
Log from a previous day with expected IP:
[2016-09-15 15:38:24,534] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - Mgt Console URL : https://10.22.106.101:9443/carbon/ {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
[2016-09-15 15:38:24,534] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - API Publisher Default Context : https://10.22.106.101:9443/publisher {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
[2016-09-15 15:38:24,534] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - API Store Default Context : https://10.22.106.101:9443/store {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
This doesn't prevent WSO2 from starting, but it does prevent Swagger-UI from working, because it tries to reach services on 172.17.0.1 and times out, since that IP doesn't correspond to anything.
What can I do to have the real IP detected again?
You can set the required IP in carbon.xml:
<!--
Host name or IP address of the machine hosting this server
e.g. www.wso2.org, 192.168.1.10
This is will become part of the End Point Reference of the
services deployed on this server instance.
-->
<!--HostName>www.wso2.org</HostName-->
<!--
Host name to be used for the Carbon management console
-->
<!--MgtHostName>mgt.wso2.org</MgtHostName-->
And you may have to replace ${carbon.local.ip} in api-manager.xml with the required IP too.
In addition to the above, we also need to edit the /wso2am-2.0.0/repository/conf/api-manager.xml file and change the URL value of <GatewayEndpoint>, replacing ${carbon.local.ip}. A sketch of both changes follows.
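A minimal sketch of both changes, using the real IP from the question (paths assume the wso2am-2.0.0 layout):
# repository/conf/carbon.xml: uncomment and set the host
#   <HostName>10.22.106.101</HostName>
#   <MgtHostName>10.22.106.101</MgtHostName>
# repository/conf/api-manager.xml: swap the placeholder for the IP, e.g.:
sed -i 's|\${carbon\.local\.ip}|10.22.106.101|g' repository/conf/api-manager.xml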
Hi, I am new to BIOS and UEFI firmware. I am using PXE to download boot images for UEFI and BIOS.
I found that when I do a network boot using BIOS, it broadcasts UDP packets and my PXE server can process them.
But with the same configuration, if I do a network boot using UEFI, the target system does not broadcast UDP packets.
I have created a target system (bare metal system) on VMware ESX 5.5.
I am using Wireshark to debug. I can see that in the case of an EFI-based boot the target does not get an IP address. Even though my DHCP server broadcasts DHCPOFFER packets, the target continuously sends DHCPDISCOVER packets; at some point the target should send a DHCPREQUEST packet. The same thing works fine if I boot through BIOS.
The above problem gets solved if I add the boot file name and next-server address in dhcpd.conf (the DHCP server is on Linux). But per my requirements I cannot hard-code the next-server address and boot file name; they will be added on the fly by the PXE server.
Edit 2: So in my case I am adding all the required parameters on the PXE side, like the next-server address, boot file name, etc.
But if I do that, I do not get a reply (DHCPREQUEST) back from the client (UEFI-based client). If I configure the same parameters at the DHCP server, it works well.
In the case of BIOS in the same environment, I have configured all parameters in the PXE server and I do get a reply (DHCPREQUEST) back from the client.
Just a pointer: do we need to enable something on the UEFI client so it listens for the PXE parameters (options)? In my case I have made "EFI NETWORK" the primary boot option.
Please help me on this. Thanks.
In both cases, when the target starts a network boot, it will initially broadcast DHCPDISCOVER packets.
If you do not see them when net-booting UEFI-based targets, then you are probably not really net-booting, or you have a firewall issue.
Edit 1.
You have a DHCP server and a PXE server both providing booting info?
That's not good. You can either have:
a DHCP server offering the PXE parameters, or
a regular DHCP server plus a proxyDHCP that only offers the PXE parameters.
Read what a proxy server does here.
If EFI fails to get the IP, it is because it is not receiving an IP "plus" the PXE parameters; a sketch of the first option follows.
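If you go with the first option, a minimal sketch of an ISC dhcpd.conf fragment that picks the boot file based on DHCP option 93 (client system architecture); the file names and next-server address are placeholders:
option arch code 93 = unsigned integer 16;
next-server 192.168.1.10;
if option arch = 00:07 or option arch = 00:09 {
    filename "bootx64.efi";      # x86-64 UEFI clients
} else {
    filename "pxelinux.0";       # legacy BIOS clients
}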
What I have: putty.exe to access a remote Ubuntu machine xxx.xx.xxx.xx.
What I want: when I run my test cases on this remote machine using Jenkins, it should launch a browser that I can see popping up on my Windows machine.
What I have tried: I have tried Firefox with Xvfb (both located on the remote machine), but that is headless and I can't see errors, and I can't get much help from the reports. I want to see what's happening in the UI.
So I wanted to use RemoteWebDriver. Using putty.exe, I tried to register a node on the remote machine:
sudo java -jar selenium-server-standalone-2.35.0.jar -role node -hub http://localhost/xxx.xx.xxx.xx:4444/grid/register
but that gave this error:
Sep 27, 2013 9:24:24 AM org.openqa.grid.selenium.GridLauncher main
INFO: Launching a selenium grid node
Sep 27, 2013 9:24:24 AM org.openqa.grid.internal.utils.SelfRegisteringRemote startRemoteServer
WARNING: error getting the parameters from the hub. The node may end up with wrong timeouts.The target server failed to respond
09:24:24.961 INFO - Java: Oracle Corporation 23.7-b01
09:24:24.962 INFO - OS: Linux 3.5.0-21-generic amd64
09:24:24.971 INFO - v2.35.0, with Core v2.35.0. Built from revision c916b9d
09:24:25.111 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:5555/wd/hub
09:24:25.113 INFO - Version Jetty/5.1.x
09:24:25.114 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
09:24:25.115 INFO - Started HttpContext[/selenium-server,/selenium-server]
09:24:25.116 INFO - Started HttpContext[/,/]
09:24:36.415 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler#49af0a45
09:24:36.416 INFO - Started HttpContext[/wd,/wd]
09:24:36.423 INFO - Started SocketListener on 0.0.0.0:5555
09:24:36.426 INFO - Started org.openqa.jetty.jetty.Server#1f4a8824
09:24:36.428 INFO - using the json request : {"class":"org.openqa.grid.common.RegistrationRequest","capabilities":[{"platform":"LINUX","seleniumProtocol":"Selenium","browserName":"*firefox","maxInstances":5},{"platform":"LINUX","seleniumProtocol":"Selenium","browserName":"*googlechrome","maxInstances":5},{"platform":"LINUX","seleniumProtocol":"Selenium","browserName":"*iexplore","maxInstances":1},{"platform":"LINUX","seleniumProtocol":"WebDriver","browserName":"firefox","maxInstances":5},{"platform":"LINUX","seleniumProtocol":"WebDriver","browserName":"chrome","maxInstances":5},{"platform":"LINUX","seleniumProtocol":"WebDriver","browserName":"internet explorer","maxInstances":1}],"configuration":{"port":5555,"register":true,"host":"10.158.96.150","proxy":"org.openqa.grid.selenium.proxy.DefaultRemoteProxy","maxSession":5,"role":"node","hubHost":"localhost","registerCycle":5000,"hub":"http://localhost/184.73.224.98:4444/grid/register","hubPort":-1,"url":"http://10.158.96.150:5555","remoteHost":"http://10.158.96.150:5555"}}
09:24:36.430 INFO - Starting auto register thread. Will try to register every 5000 ms.
09:24:36.430 INFO - Registering the node to hub :http://localhost:-1/grid/register
09:24:36.446 INFO - couldn't register this node : Error sending the registration request.
09:24:41.479 INFO - couldn't register this node : Hub is down or not responding: Hub is down or not responding.
I have already tried:
http://rationaleemotions.wordpress.com/2012/01/23/setting-up-grid2-and-working-with-it/
https://code.google.com/p/selenium/wiki/Grid2
Understanding Selenium Grid2 implementation running on EC2
but these failed at the initial step, where I have to register a node on the remote Ubuntu machine.
It's not entirely clear what your error is. This might help: you can VNC onto a headless Xvfb box using SSH tunnels.
After installing any missing packages on your remote server:
Xvfb -screen 0 800x600x16 -ac &
export DISPLAY=:0
xterm &
java -jar selenium-server-standalone-2.35.0.jar -role node -hub http://mywebsiteip:4444/grid/register
Then tunnel to your server from your own machine:
ssh -l kaltpost -L 5900:localhost:5900 mywebsiteip 'x11vnc -localhost -display :0'
which will wait, so in another terminal:
vncviewer localhost&
This is all taken from http://gpio.kaltpost.de/?page_id=84
++edit
I just saw you started your Selenium node with the IP of the host mashed into the hub URL; that URL needs a single host (localhost if the hub runs on the same machine), and you then use the machine's public IP when you connect your tests to it. See the sketch below.
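A minimal sketch of the corrected commands (same 2.35.0 jar as in the question; the hub must be running before the node can register):
# on the remote machine: start the hub first
java -jar selenium-server-standalone-2.35.0.jar -role hub
# then register the node, with a single host in the hub URL
java -jar selenium-server-standalone-2.35.0.jar -role node -hub http://localhost:4444/grid/register
# tests on your Windows machine then connect to http://xxx.xx.xxx.xx:4444/wd/hub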