I need to enable the Wi-Fi n mode (bgn). I know the Wi-Fi configuration is specified in hostapd, but I couldn't find anywhere how to enable 'only n mode' in hostapd. I am using the wl1273l chip, which is capable of b/g/n/a. Are there any lines that I need to specify in the hostapd config? I tried the following:
ieee80211=1
wmm_enabled=1
nl80211=1
but it failed. I read the hostapd documentation, but there is no information regarding n mode, only regarding b/g/a. If you know anything, please answer.
The config lines pasted below are the relevant config params for n-mode.
You can check out this example conf to look up any param you want to use for hostapd:
http://w1.fi/gitweb/gitweb.cgi?p=hostap.git;a=blob_plain;f=hostapd/hostapd.conf
An interesting extra to my post:
Just configuring these didn't enable n-mode on my *nix router. It turns out hostapd checks for neighboring SSIDs: if there are a lot of those, it won't activate the secondary n channel.
A quick inSSIDer scan showed me that my neighbours were in fact using the secondary channel. Apparently many routers don't have this check (as described in the spec) built in.
I used the patch described in the URL below to disable the neighboring-network check and force hostapd to run in n-mode.
http://www.brunsware.de/blog/gentoo/hostapd-40mhz-disable-neighbor-check.html
These are the n-mode config lines:
# ieee80211n: Whether IEEE 802.11n (HT) is enabled
# 0 = disabled (default)
# 1 = enabled
# Note: You will also need to enable WMM for full HT functionality.
ieee80211n=1
#
# Default WMM parameters (IEEE 802.11 draft; 11-03-0504-03-000e):
# for 802.11a or 802.11g networks
# These parameters are sent to WMM clients when they associate.
# The parameters will be used by WMM clients for frames transmitted to the
# access point.
#
# note - txop_limit is in units of 32 microseconds
# note - acm is admission control mandatory flag. 0 = admission control not
# required, 1 = mandatory
# note - here cwMin and cwMax are in exponent form. the actual cw value used
# will be (2^n)-1 where n is the value given here
#
wmm_enabled=1
wmm_ac_bk_cwmin=4
wmm_ac_bk_cwmax=10
wmm_ac_bk_aifs=7
wmm_ac_bk_txop_limit=0
wmm_ac_bk_acm=0
wmm_ac_be_aifs=3
wmm_ac_be_cwmin=4
wmm_ac_be_cwmax=10
wmm_ac_be_txop_limit=0
wmm_ac_be_acm=0
wmm_ac_vi_aifs=2
wmm_ac_vi_cwmin=3
wmm_ac_vi_cwmax=4
wmm_ac_vi_txop_limit=94
wmm_ac_vi_acm=0
wmm_ac_vo_aifs=2
wmm_ac_vo_cwmin=2
wmm_ac_vo_cwmax=3
wmm_ac_vo_txop_limit=47
wmm_ac_vo_acm=0
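For completeness, n-mode also depends on the basic radio settings, which aren't shown above. As a rough sketch of the extra lines for a 2.4 GHz bgn setup (the channel and ht_capab flags here are examples; check what your driver actually advertises, e.g. with iw list, since ht_capab must match the hardware):
hw_mode=g
channel=6
ht_capab=[HT40+][SHORT-GI-20][SHORT-GI-40]
And if you want to reject b/g-only clients, newer hostapd versions also support require_ht=1, which is the closest thing to an 'n only' mode.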
We are setting up a federated scenario with Server and Client on different physical machines.
On the server, we have used a docker container to kickstart:
The above has been borrowed from the Kubernetes tutorial. We believe this creates a 'local executor' [Ref 1], which helps create a gRPC server [Ref 2].
Ref 1:
Ref 2:
Next, on client 1, we are calling tff.framework.RemoteExecutor, which connects to the gRPC server.
Our understanding based on the above is that the Remote Executor runs on the client which connects to the gRPC server.
Assuming the above is correct, how can we send a tff.tf_computation from the server to the client and print the output on the client side, to ensure the whole setup works?
Your understanding is definitely correct.
If you construct an ExecutorFactory directly, as seems to be the case in the code above, passing it to tff.framework.set_default_context will install your remote stack as the default mechanism for executing computations in the TFF runtime. You should additionally be able to pass the appropriate channels to tff.backends.native.set_remote_execution_context to handle the remote executor construction and context installation if desired, but the way you are doing it certainly works, and allows for greater customization.
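As a minimal sketch of that second option (the worker address and port are assumptions; point the channel at wherever the gRPC executor service from your docker container is listening, and note the exact helper name can vary across TFF versions):
import grpc
import tensorflow_federated as tff

# Assumed address of the remote executor service running on the server.
channel = grpc.insecure_channel('10.2.0.1:8000')

# Build a remote execution context over this worker and install it as the
# default mechanism for running TFF computations from this client process.
tff.backends.native.set_remote_execution_context([channel])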
Once you have set this up, running an example end-to-end should be fairly simple. We will set up a computation which takes a set of federated integers, prints on the clients, and sums the integers up. Let:
import tensorflow as tf
import tensorflow_federated as tff

@tff.tf_computation(tf.int32)
def print_and_return(x):
  # We must use tf.print here, as this logic will be
  # serialized and run on the clients as TensorFlow.
  tf.print('hello world')
  return x
@tff.federated_computation(tff.FederatedType(tf.int32, tff.CLIENTS))
def print_and_sum(federated_arg):
  same_ints = tff.federated_map(print_and_return, federated_arg)
  return tff.federated_sum(same_ints)
Suppose we have N clients; we simply instantiate the set of federated integers, and invoke our computation.
federated_ints = [1] * N
total = print_and_sum(federated_ints)
assert total == N
This should cause the tf.prints defined above to run on the remote machine; as long as tf.print is directed to an output stream which you can monitor, you should be able to see it.
PS: you may note that the federated sum above is unnecessary; it certainly is. The same effect can be had by simply mapping the identity function with the serialized print.
I'm trying to get telegraf working with influxdb and I've just hit a wall. I added the following block in my telegraf config file:
[[inputs.win_perf_counters.object]]
# Process metrics, in this case for IIS only
ObjectName = "Process"
Instances = ["W3SVC"]
Counters = ["% Processor Time","Handle Count","Private Bytes","Thread Count","Virtual Bytes","Working Set"]
Measurement = "win_proc"
However, when I search my db, I never see that measurement. I know that process is running, so it should be outputting something. The problem is that even though I have logging turned on, there's no logfile. There's also nothing in the event viewer. Short of downloading the source code and running the program in a local debugger, I have no idea how to proceed. Does anyone have any ideas?
[agent]
## Default data collection interval for all inputs
interval = "10s"
## Log at debug level.
debug = true
## Log only error level messages.
quiet = false
## Log target controls the destination for logs and can be one of "file",
## "stderr" or, on Windows, "eventlog". When set to "file", the output file
## is determined by the "logfile" setting.
# logtarget = "file"
## Name of the file to be logged to when using the "file" logtarget. If set to
## the empty string then logs are written to stderr.
# logfile = ""
You can specify debug = true in the agent config to print debug logs. If you don't specify any log file, the logs will be printed to the terminal (stderr).
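If you would rather have them in a file, the relevant agent settings are the ones shown commented out above; for example (the path here is just an example):
[agent]
  interval = "10s"
  debug = true
  quiet = false
  logtarget = "file"
  logfile = 'C:\Program Files\telegraf\telegraf.log'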
You have probably solved it by now, but for future reference: you could add a file output.
[[outputs.file]]
files = ["stdout"]
to your telegraf.conf and then watch the console (stdout) for output.
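Another quick check that doesn't depend on the log setup at all is to run telegraf once in test mode, which gathers the inputs a single time and prints the metrics to stdout instead of writing them to InfluxDB (the config path here is just an example):
telegraf.exe --config "C:\Program Files\telegraf\telegraf.conf" --test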
I keep forgetting that the Logging chapter of the Kernel User's Guide already has an answer.
Paraphrasing section 2.9 (Example: Add a handler to log info events to file) of the Logging chapter in the Kernel User's Guide:
1. Set log level (default: notice)
Globally: logger:set_primary_config/2
For certain modules only: logger:set_module_level/2
Accepted log levels (from least severe to most):
debug, info, notice, warning, error, critical, alert, emergency
Note:
The default log level in the Erlang shell is notice, so if you leave it as is but set a lower level (such as debug or info) when adding a log handler in the next step, messages at those levels will never get through to the handler.
Example:
logger:set_primary_config(level, debug).
2. Configure and add log handler
Specify the handler configuration map, for example:
Config = #{config => #{file => "./sample.log"}, level => debug}.
And add the handler:
logger:add_handler(to_file_handler, logger_std_h, Config).
logger_std_h is the standard handler for Logger.
3. Filter out logs below a certain level in the Erlang shell
Following the examples above, all levels of logs will be printed in the shell. To restore the notice default there (but still save every level of log to the file), use logger:set_handler_config/3 on the default handler:
logger:set_handler_config(default, level, notice).
Work in progress: Log each process's events into its own logfile
This section documents my (partially successful) attempts; I will revisit and expand it when time permits. My use case was that the FreeSWITCH phone server would spawn an Erlang process to handle each call, so logging each of them into its own file made sense at the time.
The ui.R file is working perfectly; however, I suspect server.R is causing the issue here. The intended behavior is that data frames display above the embedded HTML charts on each of my pages, but the data frames are not generated. The goal is to use the googlesheets package to read a Google Sheet and then turn it into a data frame displayed in the Shiny app.
I have tried placing the data frame function and definition above and below, within both ui.R and server.R, but I am not getting any output from it.
This is for a Shiny-Server hosted on Ubuntu 16.04 Server.
#
# This is the server logic of a Shiny web application. You can run the
# application by clicking 'Run App' above.
#
# Find out more about building applications with Shiny here:
#
# http://shiny.rstudio.com/
#
library(shiny)
library(shinydashboard)
library(googlesheets)
library(googleCharts)
library(googleAuthR)
library(stats)
library(searchConsoleR)
library(googleAnalyticsR)
library(httr)
library(dplyr)
library(plyr)
library(mosaic)
library(DT)
library(httpuv)
library(htmltools)
# Google Sheets for Synced Keys with Data Master
# ===============================================
handover <- gs_key("1Wu8gJ#$%%#$%%###$##$#%###$%##%-VVHcB8c")
for_gs_sheet <- gs_read(handover)
str(for_gs_sheet)
# Define server logic required to draw a histogram
shinyServer(function(input, output) {
google_app <- oauth_app(
"google",
key = "3901########################m",
secret = "b#########################z"
)
#oauth2.0_token(google_app)
## ---------- Google Authentication ---------- ##
gs_auth(token = NULL ,new_user = FALSE,
key = getOption("################.com"),
secret = getOption("##############Ka5mz"),
cache = getOption("googlesheets.httr_oauth_cache"), verbose = TRUE)
for_gs_sheet <- gs_read(handover)
str(for_gs_sheet)
output$mytable = DT::renderDataTable({
df <- gs_read(handover)
})
})
The actual results should show the output of a DT data table. However, the data table is not being processed and/or not made visible when called in the server output.
This stems from a service token issue.
The best way is to create a service token and a session that maintains an open connection and refreshes the token.
I fixed this issue by baking the token directly into the app via JSON, and having the app call the JSON file from the directory the Shiny app is stored in under /srv/. You can download a copy of the service account information and store it in the working directory of the app:
root@miradashboard1:/srv/shiny-server/Apps/CSM# ls
miradashboard-f89f243d0221.json server.R ui.R
Then make sure you call the service token within the server.R and ui.R.
service_token <- gar_auth_service(json_file="/srv/shiny-server/Apps/CSM/miradashboard-f89f243d0221.json")
I am quite new to C++ socket programming. Since I am in an FRC team, I need to communicate between my application and the Compact RIO via an interface known as "Network Tables". I need to communicate from my C++ vision application to our robot code in Java. How do I implement NetworkTables in regular C++?
Here is what I did in Python, but the concept is the same. The goal would be to move motors based on values (sensor data) you receive from your driver station. So, how do I accomplish this? Data transfers will be done through NetworkTables.
First, initialize:
from networktables import NetworkTables
# As a client to connect to a robot
NetworkTables.initialize(server='roborio-XXX-frc.local')
Creating the instance gives you access to NetworkTables connections, settings, and listeners, and lets you create table objects, which are what is actually used to send data.
Next:
sd = NetworkTables.getTable('SmartDashboard')
sd.putNumber('someNumber', 1234)
otherNumber = sd.getNumber('otherNumber')
Here, we're interacting with the SmartDashboard and calling two methods to send and receive values.
Another example, from the API docs:
#!/usr/bin/env python3
#
# This is a NetworkTables server (eg, the robot or simulator side).
#
# On a real robot, you probably would create an instance of the
# wpilib.SmartDashboard object and use that instead -- but it's really
# just a passthru to the underlying NetworkTable object.
#
# When running, this will continue incrementing the value 'robotTime',
# and the value should be visible to networktables clients such as
# SmartDashboard. To view using the SmartDashboard, you can launch it
# like so:
#
# SmartDashboard.jar ip 127.0.0.1
#
import time
from networktables import NetworkTables
# To see messages from networktables, you must setup logging
import logging
logging.basicConfig(level=logging.DEBUG)
NetworkTables.initialize()
sd = NetworkTables.getTable("SmartDashboard")
i = 0
while True:
    print("dsTime:", sd.getNumber("dsTime", -1))
    sd.putNumber("robotTime", i)
    time.sleep(1)
    i += 1