invalid declarator before std::variant - c++17

I'm trying to implement an ad-hoc lightweight state machine using std::variant. However, it seems that the variant fsm isn't declared correctly, as it fails with the following errors:
<source>: In function 'int main()':
<source>:235:40: error: invalid declarator before 'fsm'
235 | std::variant<states::A, states::B> fsm{std::in_place_index<0>};
| ^~~
<source>:238:5: error: 'fsm' was not declared in this scope
238 | fsm = std::visit([&](auto&& state){ state.on_event(eWifi::connected); }, fsm);
| ^~~
ASM generation compiler returned: 1
I can't really figure out what is wrong. Here's the code:
#include <iostream>
#include <variant>

enum class eWifi {
    connected,
    disconnected,
};

enum class eMQTT {
    connected,
    disconnected,
};

int main()
{
    // int a;
    struct states {
        struct A {
            auto on_event(eWifi evt) {
                std::cout << "on_event A" << std::endl;
                return B{};
            }
            auto on_event(eMQTT evt) {
                std::cout << "on_event A" << std::endl;
                return B{};
            }
        };
        struct B {
            auto on_event(eWifi evt) {
                std::cout << "on_event B" << std::endl;
                return A{};
            }
            auto on_event(eMQTT evt) {
                std::cout << "on_event B" << std::endl;
                return A{};
            }
        };
    }

    std::variant<states::A, states::B> fsm{std::in_place_index<0>};

    // while(1) {
    fsm = std::visit([&](auto&& state){ state.on_event(eWifi::connected); }, fsm);
    // }
}
What is the invalid declarator before 'fsm' all about?

You missed the semicolon at the end of the definition of struct states:
struct states {
    struct A {
    };
    struct B {
    };
};
You also forgot to return the result from your lambda function:
fsm = std::visit([&](auto&& state) -> std::variant<states::A, states::B> {
    return state.on_event(eWifi::connected);
}, fsm);
See demo.
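Putting both fixes together, a complete version along these lines should build as C++17 with GCC or Clang (a sketch of the corrected program, kept as close to the original as possible):

#include <iostream>
#include <variant>

enum class eWifi { connected, disconnected };
enum class eMQTT { connected, disconnected };

int main()
{
    struct states {
        struct A {
            auto on_event(eWifi) { std::cout << "on_event A\n"; return B{}; }
            auto on_event(eMQTT) { std::cout << "on_event A\n"; return B{}; }
        };
        struct B {
            auto on_event(eWifi) { std::cout << "on_event B\n"; return A{}; }
            auto on_event(eMQTT) { std::cout << "on_event B\n"; return A{}; }
        };
    }; // <- the semicolon that was missing

    std::variant<states::A, states::B> fsm{std::in_place_index<0>};

    // The visitor must hand back the next state so the assignment has a value to store.
    fsm = std::visit([&](auto&& state) -> std::variant<states::A, states::B> {
        return state.on_event(eWifi::connected);
    }, fsm);
}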

Related

OpenMP tasking issues with Intel compilers (and clang)

The code below shows problems with OpenMP tasking in ICL 2021.6.0 and in ICX 2022.1.0 (Clang based).
Firstly, I am wondering if I am doing something fundamentally wrong in my OpenMP code and it is just showing up differently when compiled by different compilers.
Assuming the code is valid OpenMP...
When the function fails_intel_icl() runs under ICL, the task execution is just wrong. Some tasks are run twice, some not at all. Compiled by ICX/Clang it executes as I expect.
When crash_icx_2022() is compiled under ICX it just crashes at runtime. I am testing using Visual Studio 2022/Debug/x64 and the latest OneAPI Base and HPC installation.
Examples of incorrect runtime behaviour of the function fails_intel_icl() when compiled with ICL are as follows:
Thread:12 launching task for 0,1 <--- you will note the task for pair 0,1 never runs.
Thread:12 launching task for 0,2
Thread:9 Executing task with pair 0,2
....
#include <iostream>
#include <vector>
#include <omp.h>
std::vector<std::pair<int, std::vector<int>>> data;
void setup()
{
std::vector<int> tmp({ 1,2,3,4,5 });
for (int i = 0; i < 5; i++)
{
data.push_back({ i,tmp });
}
}
void DoTask(int a, int b)
{
{
#pragma omp critical
std::cout << "Thread:" << omp_get_thread_num() << " Executing task with pair " << a << ',' << b << std::endl;
}
}
// runs correctly under icl, but crashes at runtime with icx and clang
void crash_icx_2022()
{
# pragma omp parallel
{
# pragma omp single
{
for (auto iter = data.begin(); iter != data.end(); ++iter)
{
const auto& a = iter->first;
const auto& b = iter->second;
for (const auto& aa : b)
{
if (aa != a)
{
{
#pragma omp critical
std::cout << "Thread:" << omp_get_thread_num() << " launching task for " << ' ' << a << ',' << aa << std::endl;
}
# pragma omp task
{
DoTask(a, aa);
}
}
}
}
}
}
}
// this compiles and runs incorrectly under icl but runs correctly with icx or clang
void fails_intel_icl()
{
# pragma omp parallel
{
# pragma omp single
{
for (auto iter = data.begin(); iter != data.end(); ++iter)
{
const auto a = iter->first;
const auto b = iter->second;
for (const auto aa : b)
{
if (aa != a)
{
{
#pragma omp critical
std::cout << "Thread:" << omp_get_thread_num() << " launching task for " << ' ' << a << ',' << aa << std::endl;
}
# pragma omp task
{
DoTask(a, aa);
}
}
}
}
}
}
}
void testTaskingBug()
{
setup();
std::cout << "\nStarting test using copies\n" << std::endl;
fails_intel_icl();
std::cout << "\nStarting test using references" << std::endl;
crash_icx_2022();
}
int main()
{
testTaskingBug();
return 0;
}
The following C++17 code will not compile under clang. Not sure if the error is real.
void clang_wont_compile()
{
# pragma omp parallel
{
# pragma omp single
{
for (const auto& [a, b] : data)
{
for (const auto& aa : b)
{
if (aa != a)
{
# pragma omp task
DoTask(a, aa);
}
}
}
}
}
}
Thanks for pointing this out. It does look like it should be valid OpenMP code. Maybe something in the backend with the task + critical combination is throwing off the compiler, or perhaps it was not allowed per the spec, but that doesn't seem to be the case.
I'm double-checking with some OpenMP folks to see whether we have a bug here (or a better explanation of the behaviour).
So after more investigation I seem to have answers:
- The OpenMP code is valid and all variations of the functions should run correctly.
- icl (Intel Classic) and icx (Clang-based) have some bugs as of the versions I have tested with.
- A newer Clang compiler I was able to test with (14.0.6) has resolved the issues and executes correctly.
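Independently of the compiler bugs, one way to keep the task data environment as simple as possible for every compiler (and to sidestep the structured-binding limitation the older Clang complained about) is to copy the values into plain locals and list them explicitly. A sketch of that workaround, reusing the global data and DoTask() defined above; it is not a fix for the bugs themselves:

void tasks_with_explicit_firstprivate()
{
    #pragma omp parallel
    #pragma omp single
    {
        for (const auto& entry : data)
        {
            int a = entry.first;            // plain copy, no reference involved
            for (int aa : entry.second)     // plain copy, no structured binding
            {
                if (aa == a)
                    continue;
                // Explicit firstprivate, so nothing is left to the compiler's
                // defaults for captured references or loop variables.
                #pragma omp task firstprivate(a, aa)
                DoTask(a, aa);
            }
        }
    }
}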

Libqmi - glib callback function not getting called

I am new to libqmi and wanted to start by just opening a new device. But the callback function never gets called and therefore no device object is returned.
I am running the code on Ubuntu 64-bit.
On this website: https://developer.gnome.org/gio/stable/GAsyncResult.html
I found how this should be handled and programmed it that way, but it still doesn't work.
#include <iostream>
#include <libqmi-glib/libqmi-glib.h>
#include <gio/gio.h>
using namespace std;
void device_create_start(const char* device_file);
void device_create_stop(GObject* obj, GAsyncResult* res, gpointer data);
int something = 0;
int main()
{
cout << "Start\n";
device_create_start("/dev/cdc-wdm0");
cout << "DEBUG: Something: " << something << "\n";
cout << "Stop\n";
return 0;
}
void device_create_start(const char* device_file)
{
GFile* file = g_file_new_for_path(device_file);
if(file)
{
GCancellable* cancellable = g_cancellable_new();
GAsyncReadyCallback callback = device_create_stop;
gpointer user_data = NULL;
cout << "INFO: qmi_device_new starting!\n";
qmi_device_new(file, cancellable, callback, user_data);
cout << "INFO: qmi_device_new started!\n";
cout << "INFO: Waiting!\n";
usleep(10000);
cout << "INFO: Is cancelled?: " << g_cancellable_is_cancelled(cancellable) << "\n";
cout << "INFO: canceling!\n";
g_cancellable_cancel(cancellable);
cout << "INFO: Waiting again!\n";
usleep(100000);
cout << "INFO: Is cancelled?: " << g_cancellable_is_cancelled(cancellable) << "\n";
something = 1;
}
else
{
cout << "ERROR: Could not create device file!\n";
}
}
void device_create_stop(GObject* obj, GAsyncResult* res, gpointer data)
{
cout << "INFO: device_create_stop\n";
something = 2;
cout << "INFO: qmi_device_new_finish starting\n";
GError *error;
QmiDevice* device = qmi_device_new_finish(res, &error);
cout << "INFO: qmi_device_new_finish started\n";
if(device == NULL)
{
cout << "ERROR: Could not create device!\n";
}
else
{
cout << "INFO: Device created!\n";
//device_open(device);
}
}
When I run this code the output is:
Start
INFO: qmi_device_new starting!
INFO: qmi_device_new started!
INFO: Waiting!
INFO: Is cancelled?: 0
INFO: canceling!
INFO: Waiting again!
INFO: Is cancelled?: 1
DEBUG: Something: 1
Stop
The code in the callback function is never called.
Update 1
I simplified the code and changed some things that I had overlooked on the GNOME reference site, like making the callback function static. But this doesn't work either:
#include <iostream>
#include <libqmi-glib/libqmi-glib.h>
#include <gio/gio.h>
#include <glib/gprintf.h>
using namespace std;
void device_create_start(const char* device_file);
static void device_create_stop(GObject* obj, GAsyncResult* res, gpointer data);
int something = 0;
int main()
{
g_printf ("Start\n");
device_create_start("/dev/cdc-wdm0");
cout << "DEBUG: Something: " << something << "\n";
while(true)
{
;
}
cout << "Stop\n";
return 0;
}
void device_create_start(const char* device_file)
{
GFile* file = g_file_new_for_path(device_file);
if(file)
{
cout << "INFO: qmi_device_new starting!\n";
qmi_device_new(file, NULL, device_create_stop, NULL);
cout << "INFO: qmi_device_new started!\n";
something = 1;
}
else
{
cout << "ERROR: Could not create device!\n";
}
}
static void device_create_stop(GObject* obj, GAsyncResult* res, gpointer data)
{
g_printf ("Hurray!\n");
something = 2;
}
The new output:
Start
INFO: qmi_device_new starting!
INFO: qmi_device_new started!
DEBUG: Something: 1
Does anyone have a clue why this is not working?
As Philip said (hey Philip!), you're missing the main loop. qmi_device_new() is a method that finishes asynchronously, and once finished, the result of the operation is delivered to the callback function you provide. In order for the asynchronous function to even do something, you need to have a GMainLoop running for as long as your program logic runs.
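For illustration, here is a stripped-down version of the Update 1 code with a GMainLoop driving the dispatch (a sketch: same device path as in the question, error handling kept minimal):

#include <iostream>
#include <libqmi-glib/libqmi-glib.h>
#include <gio/gio.h>

static GMainLoop *loop = NULL;

static void device_create_stop(GObject *obj, GAsyncResult *res, gpointer data)
{
    GError *error = NULL;
    QmiDevice *device = qmi_device_new_finish(res, &error);
    if (device == NULL) {
        std::cout << "ERROR: Could not create device: "
                  << (error ? error->message : "unknown error") << "\n";
        g_clear_error(&error);
    } else {
        std::cout << "INFO: Device created!\n";
        g_object_unref(device);
    }
    g_main_loop_quit(loop); // done, let g_main_loop_run() return
}

int main()
{
    loop = g_main_loop_new(NULL, FALSE);

    GFile *file = g_file_new_for_path("/dev/cdc-wdm0");
    qmi_device_new(file, NULL, device_create_stop, NULL);

    // The async result is dispatched from the GLib main context, so the
    // callback can only ever run while a main loop is running.
    g_main_loop_run(loop);

    g_main_loop_unref(loop);
    return 0;
}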

Google Protocol Buffer C++ on ubuntu

I want to use Google Protocol Buffers in C++ on Ubuntu. As a first step I created a .proto file:
package business;
message Employee
{
required string first_name = 1;
required string last_name = 2;
required string email = 3;
}
message Company
{
required string name = 1;
optional string url = 2;
repeated Employee employee = 3;
}
I can easily translate it to the C++ data access classes by calling:
protoc -I=. --cpp_out=. business.proto
After this step protoc creates two files:
business.pb.h
business.pb.cc
When I compile the following code I see an error:
#include <iostream>
#include <fstream>
#include "business.pb.h"
using namespace std;
/// Saves a demo company object to 'company.bin'.
void save()
{
business::Company company;
company.set_name("Example Ltd.");
company.set_url("http://www.example.com");
// 1st employee
{
business::Employee *employee = company.add_employee();
employee->set_first_name("John");
employee->set_last_name("Doe");
employee->set_email("john.doe#example.com");
}
// 2nd employee
{
business::Employee *employee = company.add_employee();
employee->set_first_name("Jane");
employee->set_last_name("Roe");
employee->set_email("jane.roe#example.com");
}
fstream output("company.bin", ios::out | ios::trunc | ios::binary);
company.SerializeToOstream(&output);
}
/// Loads a demo company object from 'company.bin' and dumps its data.
void load()
{
business::Company company;
fstream input("company.bin", ios::in | ios::binary);
company.ParseFromIstream(&input);
cout << "Company: " << company.name() << "\n";
cout << "URL: " << (company.has_url() ? company.url() : "N/A") << "\n";
cout << "\nEmployees: \n\n";
for(int i = 0, n = company.employee_size(); i < n; ++i)
{
const business::Employee &employee = company.employee(i);
cout << "First name: " << employee.first_name() << "\n";
cout << "Last name: " << employee.last_name() << "\n";
cout << "Email: " << employee.email() << "\n";
cout << "\n";
}
}
int main()
{
save();
load();
return 0;
}
To compile I use this command:
g++ p1.cpp business.pb.cc `pkg-config --cflags --libs protobuf`
but I see this error
https://i.stack.imgur.com/soQ3Z.png
I solved the problem:
1) Uninstall the old version of Google Protocol Buffers.
2) Install the new version of Google Protocol Buffers.
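It usually also helps to regenerate business.pb.h/business.pb.cc with the protoc that matches the newly installed library, and the protobuf runtime provides a macro that catches header/library version mismatches at startup. A sketch of how it would slot into the existing main():

#include "business.pb.h" // regenerated with the protoc matching the installed libprotobuf

int main()
{
    // Aborts with a descriptive message if the headers this file was compiled
    // against do not match the libprotobuf version linked at runtime.
    GOOGLE_PROTOBUF_VERIFY_VERSION;

    // ... save(); load(); as in the question ...

    google::protobuf::ShutdownProtobufLibrary(); // optional cleanup at exit
    return 0;
}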

Boost Asio SSL Certification on iOS

I am trying to use Boost Asio on iOS and have figured everything out except how to check the certificate of the server I am connecting to.
How do you check the connecting server's certificate in iOS with Boost Asio?
In another answer of mine you can see a simple SSL client.
In this code you'll quickly note verify_certificate which you can use to (additionally) verify the server certificate.
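If you also want the certificate checked against the host name you intended to reach (not just the chain), Asio ships a ready-made RFC 2818 callback that plugs into the same hook. A sketch; the host name is a placeholder for whatever you actually connect to:

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

// Verify the peer and match its certificate against the target host name.
void enable_host_verification(
    boost::asio::ssl::stream<boost::asio::ip::tcp::socket>& socket)
{
    socket.set_verify_mode(boost::asio::ssl::verify_peer);
    socket.set_verify_callback(
        boost::asio::ssl::rfc2818_verification("www.example.com"));
}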
Sidenote
Note that I don't know which libraries underlie the Asio SSL implementation on iOS, but keep in mind that verifying (or even pinning) the server certificate could be rather useless. It would only verify the authenticity of the certificate presented. In the light of yesterday's security debacle I don't think this helps much, because unless properly patched the server could have presented a valid certificate but still used unrelated encryption keys - this still allows a MITM scenario.
Just noting this in case your question is somehow related to that situation.
From A: HTTPS POST request with boost asio
#define DEMO_USING_SSL
#define BOOST_ASIO_ENABLE_HANDLER_TRACKING
#include <iostream>
#include <iomanip>
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
class client
{
public:
client(boost::asio::io_service& io_service,
boost::asio::ssl::context& context,
boost::asio::ip::tcp::resolver::iterator endpoint_iterator)
: socket_(io_service
#ifdef DEMO_USING_SSL
, context)
{
socket_.set_verify_mode(boost::asio::ssl::verify_peer);
socket_.set_verify_callback(
boost::bind(&client::verify_certificate, this, _1, _2));
#else
)
{
(void) context;
#endif
boost::asio::async_connect(socket_.lowest_layer(), endpoint_iterator,
boost::bind(&client::handle_connect, this,
boost::asio::placeholders::error));
}
bool verify_certificate(bool preverified,
boost::asio::ssl::verify_context& ctx)
{
// The verify callback can be used to check whether the certificate that is
// being presented is valid for the peer. For example, RFC 2818 describes
// the steps involved in doing this for HTTPS. Consult the OpenSSL
// documentation for more details. Note that the callback is called once
// for each certificate in the certificate chain, starting from the root
// certificate authority.
// In this example we will simply print the certificate's subject name.
char subject_name[256];
X509* cert = X509_STORE_CTX_get_current_cert(ctx.native_handle());
X509_NAME_oneline(X509_get_subject_name(cert), subject_name, 256);
std::cout << "Verifying " << subject_name << "\n";
return preverified;
}
void handle_connect(const boost::system::error_code& error)
{
#ifdef DEMO_USING_SSL
if (!error)
{
socket_.async_handshake(boost::asio::ssl::stream_base::client,
boost::bind(&client::handle_handshake, this,
boost::asio::placeholders::error));
}
else
{
std::cout << "Connect failed: " << error.message() << "\n";
}
#else
handle_handshake(error);
#endif
}
void handle_handshake(const boost::system::error_code& error)
{
if (!error)
{
std::cout << "Enter message: ";
static char const raw[] = "POST / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n";
static_assert(sizeof(raw)<=sizeof(request_), "too large");
size_t request_length = strlen(raw);
std::copy(raw, raw+request_length, request_);
{
// used this for debugging:
std::ostream hexos(std::cout.rdbuf());
for(auto it = raw; it != raw+request_length; ++it)
hexos << std::hex << std::setw(2) << std::setfill('0') << std::showbase << ((short unsigned) *it) << " ";
std::cout << "\n";
}
boost::asio::async_write(socket_,
boost::asio::buffer(request_, request_length),
boost::bind(&client::handle_write, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
else
{
std::cout << "Handshake failed: " << error.message() << "\n";
}
}
void handle_write(const boost::system::error_code& error,
size_t /*bytes_transferred*/)
{
if (!error)
{
std::cout << "starting read loop\n";
boost::asio::async_read_until(socket_,
//boost::asio::buffer(reply_, sizeof(reply_)),
reply_, '\n',
boost::bind(&client::handle_read, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
else
{
std::cout << "Write failed: " << error.message() << "\n";
}
}
void handle_read(const boost::system::error_code& error, size_t /*bytes_transferred*/)
{
if (!error)
{
std::cout << "Reply: " << &reply_ << "\n";
}
else
{
std::cout << "Read failed: " << error.message() << "\n";
}
}
private:
#ifdef DEMO_USING_SSL
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> socket_;
#else
boost::asio::ip::tcp::socket socket_;
#endif
char request_[1024];
boost::asio::streambuf reply_;
};
int main(int argc, char* argv[])
{
try
{
if (argc != 3)
{
std::cerr << "Usage: client <host> <port>\n";
return 1;
}
boost::asio::io_service io_service;
boost::asio::ip::tcp::resolver resolver(io_service);
boost::asio::ip::tcp::resolver::query query(argv[1], argv[2]);
boost::asio::ip::tcp::resolver::iterator iterator = resolver.resolve(query);
boost::asio::ssl::context ctx(boost::asio::ssl::context::sslv23);
ctx.set_default_verify_paths();
client c(io_service, ctx, iterator);
io_service.run();
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
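One iOS-specific caveat about the ctx.set_default_verify_paths() call above: it relies on OpenSSL's default CA locations, which may well be absent inside an app sandbox, so you would typically bundle a CA file with the app and load it explicitly. A small sketch under that assumption; "ca-bundle.pem" is a placeholder path:

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

// Point the context at a CA bundle shipped with the app instead of relying on
// the system default verification paths.
void use_bundled_ca(boost::asio::ssl::context& ctx)
{
    ctx.set_verify_mode(boost::asio::ssl::verify_peer);
    ctx.load_verify_file("ca-bundle.pem");
}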

How does a Lex & Yacc parser output values?

So for a project that I'm working on, I am using Lex and Yacc to parse an FTP configuration file. The configuration files look something like this:
global {
num_daemons = 10
etc = /etc/ftpd
};
host "ftp-1.foobar.com" {
ftproot = /var/ftp/server1
max_out_bandwidth = 20.7
};
host "ftp-2.foobar.com" {
ftproot = /var/ftp/server2
exclude = /var/ftp/server2/private
};
host "ftp-3.foobar.com" {
ftproot = /var/ftp/server3
};
Now, my question is, how do I obtain this information in a usable way? Let's say I wanted to put things like the address after the host token into a struct. How would I do that? Also, how would I simply print out the values that I've parsed to the command line? And to run it, do I just cat the config file and pipe it into the compiled C program? Thanks in advance for any help!
Here is my code:
%{
// tokens.l
#include <stdio.h>
#include <stdlib.h>
#include "y.tab.h"
int yyparse();
%}
%option noyywrap
%x OPTION
%x OPTID
%%
<INITIAL>global { return GLOBAL; }
<INITIAL>host { return HOST; }
<INITIAL>"[a-zA-z1-9./-]+" { return NAME; }
<INITIAL>\{ { return CURLY_OPEN; BEGIN OPTION; }
<INITIAL>\n { return EOLN; }
<INITIAL><<EOF>> { return EOFTOK; }
<OPTION>[a-zA-z1-9./-_]+ { return ID_NAME; BEGIN OPTID; }
<OPTION>[\t] {}
<OPTION>[\};] { return OPTION_CLOSE; BEGIN INITIAL;}
<OPTID>[a-zA-z1-9./-]+ { return ID_STRING; BEGIN OPTION; }
<OPTID>[0-9.]+ { return ID_NUM; BEGIN OPTION; }
<OPTID>[\n] { return EOLN; }
%%
int main(int argc, char **argv) {
// Where I am confused..
}
and my yacc file:
%{
// parse.y
#include <stdio.h>
#include <stdlib.h>
int yyerror(char *);
int yylex(void);
%}
%token ERROR EOLN EOFTOK
%token OPTION_CLOSE GLOBAL HOST NAME ID_NAME ID_STRING ID_NUM CURLY_OPEN
%%
input
: lines EOFTOK { YYACCEPT; }
;
lines
:
| lines line
;
line
: option
| opident
| OPTION_CLOSE
;
option
: GLOBAL CURLY_OPEN
| HOST NAME CURLY_OPEN
;
opident
: ID_NAME '=' ID_STRING
| ID_NAME '=' ID_NUM
;
%%
int yyerror(char *msg) {}
You would generally have variables which were accessible and set up before calling the parser, like a linked list of key/value pairs:
typedef struct sNode {
char *key;
char *val;
struct sNode *next;
} tNode;
tNode *lookupHead = NULL;
Then, in your Yacc code, something like:
opident
: ID_NAME '=' ID_STRING { addLookupStr ($1, $3); }
| ID_NAME '=' ID_NUM { /* other function call here */ }
;
This would basically execute that code as the rules are found (replacing the $ variables with the item in the rule, $1 is the value for the ID_NAME token, $2 is the =, and so on).
The function would be something like:
void addLookupStr (char *newkey, char *newval) {
// Check for duplicate keys, then attempt to add. All premature returns
// should also be logging errors and setting error flags as needed.
tNode *curr = lookupHead;
while (curr != NULL) {
if (strcmp (curr->key, newkey) == 0)
return;
curr = curr->next;
}
if ((curr = malloc (sizeof (tNode))) == NULL)
return;
if ((curr->key = strdup (newkey)) == NULL) {
free (curr);
return;
}
if ((curr->val = strdup (newval)) == NULL) {
free (curr->key);
free (curr);
return;
}
// All possibly-failing ops complete, insert at head of list.
curr->next = lookupHead;
lookupHead = curr;
}
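Two practical notes the snippets above gloss over: for $1 and $3 to carry the matched text at all, the lexer has to copy yytext into yylval (typically via a %union of char* and strdup()), and yyparse() reads from stdin by default, so you can feed it the config file with input redirection rather than cat. A minimal driver along those lines (a sketch; it assumes the tNode/lookupHead definitions above are visible to main, and the binary and file names are just examples):

/* Run as:  ./ftpconf < ftpd.conf
 * yylex()/yyparse() read from yyin, which defaults to stdin, so redirecting
 * the file is all that is needed. */
#include <stdio.h>

extern int yyparse(void);

int main(void)
{
    if (yyparse() != 0) {
        fprintf(stderr, "parse failed\n");
        return 1;
    }

    /* Walk the list the grammar actions filled in and print every setting. */
    for (tNode *curr = lookupHead; curr != NULL; curr = curr->next)
        printf("%s = %s\n", curr->key, curr->val);

    return 0;
}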
