F# LiteDB [<BsonIgnore>] is Ignored?

Context: running F# in a containerized environment on an Ubuntu 18.04 desktop machine.
$ dotnet --version
2.2.203
$ uname -a
Linux SAFE 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 GNU/Linux
With this simple code the [<BsonIgnore>] attribute seems to be ignored: the value of Ignore is stored in LiteDB anyway. Am I doing something wrong?
I've checked by inspecting the database with the console command $ dotnet LiteDB.Shell.dll micompany.db:
...
> open micompany.db
> db.computers.find
[1]: {"_id":11,"Ignore":"ignore","Manufacturer":"Computers Inc.","Disks":[{"SizeGb":100},{"SizeGb":250},{"SizeGb":500}]}
This is the code:
open System
open LiteDB
open LiteDB.FSharp

[<StructuredFormatDisplay("{SizeGb}GB")>]
[<CLIMutable>]
type Disk =
    { SizeGb : int }

[<StructuredFormatDisplay("Computer #{Id}: {Manufacturer}/{Disks}")>]
[<CLIMutable>]
type Computer =
    { Id: int
      [<BsonIgnore>] Ignore : string
      Manufacturer: string
      Disks: Disk list }

[<EntryPoint>]
let main argv =
    let myPc =
        { Id = 0
          Ignore = "ignore"
          Manufacturer = "Computers Inc."
          Disks =
            [ { SizeGb = 100 }
              { SizeGb = 250 }
              { SizeGb = 500 } ] }

    let mapper = FSharpBsonMapper()
    use db = new LiteDatabase("micompany.db", mapper)
    let computers = db.GetCollection<Computer>("computers")

    // Insert & Print
    computers.Insert(myPc) |> ignore
    printfn "%A" myPc

    0 // return an integer exit code
On the other hand, the [<StructuredFormatDisplay>] attribute of the Disk record is not rendered correctly (only a few dots appear in the console) when the Computer record is printed with printfn "%A" myPc. The output is:
Computer #12: Computers Inc./[...GB; ...GB; ...GB]
Is there anything else I need to say?
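For what it's worth, a minimal diagnostic sketch (not a confirmed fix) is to serialize the same record with both FSharpBsonMapper and LiteDB's default BsonMapper and compare which fields end up in the BSON document; if only the custom mapper keeps the Ignore field, the attribute itself works and it is FSharpBsonMapper that skips it:
// Diagnostic sketch only: compare the fields produced by each mapper, without touching the database
let viaFSharpMapper = FSharpBsonMapper().ToDocument(myPc)
let viaDefaultMapper = LiteDB.BsonMapper.Global.ToDocument(myPc)
printfn "FSharpBsonMapper fields: %A" (Seq.toList viaFSharpMapper.Keys)
printfn "Default mapper fields:   %A" (Seq.toList viaDefaultMapper.Keys)
// Note: the default mapper may not cope with the F# 'Disk list' field at all;
// if it throws, repeat the comparison with a simplified record without the list field.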

Related

Getting error with Telegraf "Could not find module named IF-MIB"

I am trying to configure Telegraf to collect SNMP data, but I am getting the error below:
2022-03-12T11:17:58Z W! [inputs.snmp] module miblist.txt could not be loaded
2022-03-12T11:17:58Z E! [telegraf] Error running agent: could not initialize input inputs.snmp: initializing table interface: translating: Could not find module named IF-MIB
"snmpwalk" is working fine here.
Sample output snmpwalk :
root#netopslab:/etc/telegraf/telegraf.d# snmpwalk -v 2c -c public 172.30.1.100
SNMPv2-MIB::sysDescr.0 = STRING: Cisco IOS XR Software (Cisco IOS XRv Series), Version 6.1.3[Default]
Copyright (c) 2017 by Cisco Systems, Inc.
SNMPv2-MIB::sysObjectID.0 = OID: SNMPv2-SMI::enterprises.9.1.613
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (164173) 0:27:21.73
SNMPv2-MIB::sysContact.0 = STRING:
SNMPv2-MIB::sysName.0 = STRING: IOSXR-TEST.ios-xr.local
SNMPv2-MIB::sysLocation.0 = STRING:
SNMPv2-MIB::sysServices.0 = INTEGER: 78
IF-MIB::ifNumber.0 = INTEGER: 5
IF-MIB::ifIndex.2 = INTEGER: 2
IF-MIB::ifIndex.3 = INTEGER: 3
IF-MIB::ifIndex.4 = INTEGER: 4
IF-MIB::ifIndex.5 = INTEGER: 5
IF-MIB::ifIndex.6 = INTEGER: 6
IF-MIB::ifDescr.2 = STRING: Null0
IF-MIB::ifDescr.3 = STRING: GigabitEthernet0/0/0/0
IF-MIB::ifDescr.4 = STRING: GigabitEthernet0/0/0/1

How can I have an actor running on one process send a message to another actor running on a separate process?

I want to have actors running on various processes (or nodes) send messages to other actors running on different processes (or nodes), all while maintaining fault tolerance and load balancing. I am currently attempting to use Akka.Cluster's Sharding feature to accomplish this.
However, I am not sure how to accomplish this...
I have the following code that reflects my seed node:
let configurePort port =
    let config = Configuration.parse ("""
        akka {
            actor {
                provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
                serializers {
                    hyperion = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
                }
                serialization-bindings {
                    "System.Object" = hyperion
                }
            }
            remote {
                helios.tcp {
                    public-hostname = "localhost"
                    hostname = "localhost"
                    port = """ + port.ToString() + """
                }
            }
            cluster {
                auto-down-unreachable-after = 5s
                seed-nodes = [ "akka.tcp://cluster-system@localhost:2551/" ]
            }
            persistence {
                journal.plugin = "akka.persistence.journal.inmem"
                snapshot-store.plugin = "akka.persistence.snapshot-store.local"
            }
        }
        """)
    config.WithFallback(ClusterSingletonManager.DefaultConfig())
let consumer (actor:Actor<_>) msg =
    printfn "\n%A received %s" (actor.Self.Path.ToStringWithAddress()) msg |> ignored

// spawn three separate systems with shard regions on each of them
let system1 = System.create "cluster-system" (configurePort 2551)
let shardRegion1 = spawnSharded id system1 "shardRegion1" <| props (actorOf2 consumer)
System.Threading.Thread.Sleep(1000)

let system2 = System.create "cluster-system" (configurePort 2552)
let shardRegion2 = spawnSharded id system2 "shardRegion2" <| props (actorOf2 consumer)
System.Threading.Thread.Sleep(1000)

let system3 = System.create "cluster-system" (configurePort 2553)
let shardRegion3 = spawnSharded id system3 "shardRegion3" <| props (actorOf2 consumer)
System.Threading.Thread.Sleep(3000)

// NOTE: Even though we send all messages through a single shard region,
// some of them will be handled by the second and third one thanks to shard balancing
System.Threading.Thread.Sleep(3000)
shardRegion1 <! ("shard-1", "entity-1", "hello world 1")
shardRegion1 <! ("shard-1", "entity-2", "hello world 2")
shardRegion1 <! ("shard-2", "entity-3", "hello world 3")
shardRegion1 <! ("shard-2", "entity-4", "hello world 4")
System.Threading.Thread.Sleep(1000)

let printShards shardRegion =
    async {
        let! (reply: AskResult<ShardRegionStats>) = (retype shardRegion) <? GetShardRegionStats.Instance
        let (stats: ShardRegionStats) = reply.Value
        for kv in stats.Stats do
            printfn "\tShard '%s' has %d entities on it" kv.Key kv.Value
    } |> Async.RunSynchronously

let printNodes() =
    printfn "\nShards active on node 'localhost:2551':"
    printShards shardRegion1
    printfn "\nShards active on node 'localhost:2552':"
    printShards shardRegion2
    printfn "\nShards active on node 'localhost:2553':"
    printShards shardRegion3

printNodes()
The output looks something like this:
Shards active on node 'localhost:2551':
Shard 'shard-1' has 2 entities on it
Shard 'shard-2' has 2 entities on it
Shards active on node 'localhost:2552':
I then have a separate process that executes the following code:
let configurePort port =
    let config = Configuration.parse ("""
        akka {
            actor {
                provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
                serializers {
                    hyperion = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
                }
                serialization-bindings {
                    "System.Object" = hyperion
                }
            }
            remote {
                helios.tcp {
                    public-hostname = "localhost"
                    hostname = "localhost"
                    port = "0"
                }
            }
            cluster {
                auto-down-unreachable-after = 5s
                seed-nodes = [ "akka.tcp://cluster-system@localhost:2551/" ]
            }
            persistence {
                journal.plugin = "akka.persistence.journal.inmem"
                snapshot-store.plugin = "akka.persistence.snapshot-store.local"
            }
        }
        """)
    config.WithFallback(ClusterSingletonManager.DefaultConfig())
let consumer (actor:Actor<_>) msg =
    printfn "\n%A received %s" (actor.Self.Path.ToStringWithAddress()) msg |> ignored

// spawn three separate systems with shard regions on each of them
let system1 = System.create "cluster-system" (configurePort 2554)
let shardRegion1 = spawnSharded id system1 "printer" <| props (actorOf2 consumer)
System.Threading.Thread.Sleep(1000)

let system2 = System.create "cluster-system" (configurePort 2555)
let shardRegion2 = spawnSharded id system2 "printer" <| props (actorOf2 consumer)
System.Threading.Thread.Sleep(1000)

let system3 = System.create "cluster-system" (configurePort 2556)
let shardRegion3 = spawnSharded id system3 "printer" <| props (actorOf2 consumer)
My cluster system (running on a separate process) recognizes new nodes that are joining:
[INFO][3/15/2017 9:12:13 PM][Thread 0054][[akka://cluster-system/system/cluster/core/daemon#2086121649]] Node [akka.tcp://cluster-system@localhost:52953] is JOINING, roles []
[INFO][3/15/2017 9:12:14 PM][Thread 0006][[akka://cluster-system/system/cluster/core/daemon#2086121649]] Node [akka.tcp://cluster-system@localhost:52956] is JOINING, roles []
[INFO][3/15/2017 9:12:15 PM][Thread 0054][[akka://cluster-system/system/cluster/core/daemon#2086121649]] Node [akka.tcp://cluster-system@localhost:52961] is JOINING, roles []
[INFO][3/15/2017 9:12:18 PM][Thread 0055][[akka://cluster-system/system/cluster/core/daemon#2086121649]] Leader is moving node [akka.tcp://cluster-system@localhost:52953] to [Up]
[INFO][3/15/2017 9:12:18 PM][Thread 0055][[akka://cluster-system/system/cluster/core/daemon#2086121649]] Leader is moving node [akka.tcp://cluster-system@localhost:52956] to [Up]
[INFO][3/15/2017 9:12:18 PM][Thread 0055][[akka://cluster-system/system/cluster/core/daemon#2086121649]] Leader is moving node [akka.tcp://cluster-system@localhost:52961] to [Up]
Conclusion:
In conclusion, I want actors running on various processes (or nodes) to be able to send messages to actors on other processes (or nodes) while maintaining fault tolerance and load balancing, and I am currently attempting to use Akka.Cluster's Sharding feature to accomplish this.
Appendix:
open System
open System.IO
#if INTERACTIVE
let cd = Path.Combine(__SOURCE_DIRECTORY__, "../src/Akkling.Cluster.Sharding/bin/Debug")
System.IO.Directory.SetCurrentDirectory(cd)
#endif
#r "../src/Akkling.Cluster.Sharding/bin/Debug/System.Collections.Immutable.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Akka.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Hyperion.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Newtonsoft.Json.dll"
#r #"C:\Users\Snimrod\Documents\Visual Studio 2015\Projects\Temp\packages\Akka.FSharp.1.1.3\lib\net45\Akka.FSharp.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/FSharp.PowerPack.Linq.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Helios.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/FsPickler.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Google.ProtocolBuffers.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Google.ProtocolBuffers.Serialization.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Akka.Remote.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Google.ProtocolBuffers.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Akka.Persistence.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Akka.Cluster.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Akka.Cluster.Tools.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Akka.Cluster.Sharding.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Akka.Serialization.Hyperion.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Akkling.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Akkling.Persistence.dll"
#r "../src/Akkling.Cluster.Sharding/bin/Debug/Akkling.Cluster.Sharding.dll"
open Akka.Actor
open Akka.Configuration
open Akka.Cluster
open Akka.Cluster.Tools.Singleton
open Akka.Cluster.Sharding
open Akka.Persistence
open Akkling
open Akkling.Persistence
open Akkling.Cluster
open Akkling.Cluster.Sharding
open Hyperion
In order to maintain a consistent view of the shards and their locations, the Akka.Cluster.Sharding persistence backend must point to a database that is visible to all of the processes. In your configuration you're using akka.persistence.journal.inmem, which is an in-memory data store (used only for tests and development); it won't be visible from other processes.
You'll need to configure a persistent backend in order for shards to be visible between nodes living on different machines/processes. You can do that e.g. by using Akka.Persistence.SqlServer or any other persistence plugin. This is the most basic configuration for the persistence backend used only by sharding:
akka.persistence {
    journal {
        plugin = "akka.persistence.journal.sql-server"
        sql-server {
            connection-string = "<connection-string>"
            auto-initialize = on
        }
    }
    snapshot-store {
        plugin = "akka.persistence.snapshot-store.sql-server"
        sql-server {
            connection-string = "<connection-string>"
            auto-initialize = on
        }
    }
}
For something more practical, please refer to this article.
Also keep in mind that both the Akka.Cluster.Sharding and Akka.Persistence plugins are available only as prerelease packages (so you need to run install-package with the -pre flag).
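If it helps, here is a minimal sketch of wiring that into the configuration from the question, assuming the Akka.Persistence.SqlServer package is installed and "<connection-string>" is replaced with a real connection string; the SQL Server section is passed as the primary config so it overrides the in-memory journal and snapshot-store set up by configurePort:
// Sketch only: database-backed persistence config shared by every process that hosts shard regions
let sqlPersistence =
    Configuration.parse """
    akka.persistence {
        journal {
            plugin = "akka.persistence.journal.sql-server"
            sql-server {
                connection-string = "<connection-string>"
                auto-initialize = on
            }
        }
        snapshot-store {
            plugin = "akka.persistence.snapshot-store.sql-server"
            sql-server {
                connection-string = "<connection-string>"
                auto-initialize = on
            }
        }
    }
    """

// Put the SQL Server section in front so its settings win over the inmem ones from configurePort
let system1 = System.create "cluster-system" (sqlPersistence.WithFallback(configurePort 2551))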

F# Yaml type provider

I have tried using a YAML collection of maps in my config file:
Companies:
  - code: 11
    name: A
    country: FR
    functionalCurrency: EUR
  - code: 12
    name: B
    country: GB
    functionalCurrency: GBP
However, when trying to read it with the type provider, it only finds the first entry of the list.
With:
open FSharp.Configuration
type CompaniesConfig = YamlConfig<"Config.yaml">
let config = CompaniesConfig()
the output is:
val config : CompaniesConfig =
Companies:
- code: 11
name: A
country: FR
functionalCurrency: EUR
Parsing the YAML online worked, hence I wonder whether this is a library limitation or...?
Thanks for your help
You need to actually load the file, not only get the schema, if you want to work with it directly: config.Load(yamlFile). This should probably be more explicit in the documentation. I used the sample file in the link.
#if INTERACTIVE
#r #"..\packages\FSharp.Configuration.0.6.1\lib\net40\FSharp.Configuration.dll"
#endif
open FSharp.Configuration
open System.IO
/// https://github.com/fsprojects/FSharp.Configuration/blob/master/tests/FSharp.Configuration.Tests/Lists.yaml
[<Literal>]
let yamlFile = __SOURCE_DIRECTORY__ + @"\..\Lists.yaml"
File.Exists yamlFile
type TestConfig = YamlConfig<yamlFile>
let config = TestConfig()
config.Load(yamlFile)
config.items.Count
config.items
And I get both items:
>
val it : int = 2
>
val it : System.Collections.Generic.IList<TestConfig.items_Item_Type> =
seq
[FSharp.Configuration.TestConfig+items_Item_Type
{descrip = "Water Bucket (Filled)";
part_no = "A4786";
price = 147;
quantity = 4;};
FSharp.Configuration.TestConfig+items_Item_Type
{descrip = "High Heeled "Ruby" Slippers";
part_no = "E1628";
price = 10027;
quantity = 1;}]
>
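Applied to the Companies file from the question, the same load-before-use pattern might look like the sketch below; note that the member names (Companies, code, name, country, functionalCurrency) are assumptions based on the provider generating members after the YAML keys:
open FSharp.Configuration

type CompaniesConfig = YamlConfig<"Config.yaml">

let config = CompaniesConfig()
config.Load("Config.yaml")   // without this call only the compile-time defaults are visible

printfn "Found %d companies" config.Companies.Count
for company in config.Companies do
    // Member names assumed to mirror the YAML keys
    printfn "%d: %s (%s, %s)" company.code company.name company.country company.functionalCurrency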

Rewrite variable in Erlang

I am playing with records and lists. I want to know how to use one variable twice. When I assign a value to the variable _list and afterwards try to rewrite it, this error is raised:
** exception error: no match of right hand side value
-module(hello).
-author("anx00040").
-record(car, {evc, type, color}).
-record(person, {name, phone, addresa, rc}).
-record(driver, {rc, evc}).
-record(list, {cars = [], persons = [], drivers = []} ).
%% API
-export([helloIF/1, helloCase/1, helloResult/1, helloList/0, map/2, filter/2, helloListCaA/0, createCar/3, createPerson/4, createDriver/2, helloRecords/0, empty_list/0, any_data/0, del_Person/1, get_persons/1, do_it_hard/0, add_person/2]).
createCar(P_evc, P_type, P_color) ->
    _car = #car{evc = P_evc, type = P_type, color = P_color},
    _car.

createPerson(P_name, P_phone, P_addres, P_rc) ->
    _person = #person{name = P_name, phone = P_phone, addresa = P_addres, rc = P_rc},
    _person.

createDriver(P_evc, P_rc) ->
    _driver = #driver{rc = P_rc, evc = P_evc},
    _driver.

empty_list() ->
    #list{}.
any_data() ->
    _car1 = hello:createCar("BL 4", "Skoda octavia", "White"),
    _person1 = hello:createPerson("Eduard B.", "+421 917 111 711", "Kr, 81107 Bratislava1", "8811235"),
    _driver1 = hello:createDriver(_car1#car.evc, _person1#person.rc),
    _car2 = hello:createCar("BL 111 HK", "BMW M1", "Red"),
    _person2 = hello:createPerson("Lenka M", "+421 917 111 111", "Krizn0, 81107 Bratislava1", "8811167695"),
    _driver2 = hello:createDriver(_car2#car.evc, _person2#person.rc),
    _car3 = hello:createCar("BL 123 AB", "Audi A1 S", "Black"),
    _person3 = hello:createPerson("Stela Ba.", "+421 918 111 711", "Azna 20, 81107 Bratislava1", "8811167695"),
    _driver3 = hello:createDriver(_car3#car.evc, _person3#person.rc),
    _list = #list{
        cars = [_car1, _car2, _car3],
        persons = [_person1, _person2, _person3],
        drivers = [_driver1, _driver2, _driver3]},
    _list.
add_person(List, Person) ->
    List#list{persons = lists:append([Person], List#list.persons)}.
get_persons(#list{persons = P}) -> P.
do_it_hard() ->
    empty_list(),
    _list = add_person(any_data(), #person{name = "Test", phone = "+421Test", addresa = "Testova 20 81101 Testovo", rc = 88113545}),
    io:fwrite("\n"),
    get_persons(add_person(_list, #person{name = "Test2", phone = "+421Test2", addresa = "Testova 20 81101 Testovo2", rc = 991135455})).
But it raises an error when I use the variable _list twice:
do_it_hard() ->
    empty_list(),
    _list = add_person(any_data(), #person{name = "Test", phone = "+421Test", addresa = "Testova 20 81101 Testovo", rc = 88113545}),
    _list = add_person(_list, #person{name = "Test2", phone = "+421Test2", addresa = "Testova 20 81101 Testovo2", rc = 991135455}),
    get_persons(_list).
In the REPL, it can be convenient to experiment with things while re-using variable names. There, you can do f(A). to have Erlang "forget" the current assignment of A.
1> Result = connect("goooogle.com").
{error, "server not found"}
2> % oops! I misspelled the server name
2> f(Result).
ok
3> Result = connect("google.com").
{ok, <<"contents of the page">>}
Note that this is only a REPL convenience feature. You can't do this in actual code.
In actual code, variables can only be assigned once. In a procedural language (C, Java, Python, etc), the typical use-case for reassignment is loops:
for (int i = 0; i < max; i++) {
conn = connect(servers[i]);
reply = send_data(conn);
print(reply);
}
In the above, the variables i, conn, and reply are reassigned in each iteration of the loop.
Functional languages use recursion to perform their loops:
send_all(Max, Servers) ->
    send_loop(1, Max, Servers).

send_loop(Current, Max, _Servers) when Current > Max ->
    ok;
send_loop(Current, Max, Servers) ->
    Conn = connect(lists:nth(Current, Servers)),
    Reply = send_data(Conn),
    print(Reply),
    send_loop(Current + 1, Max, Servers).
This isn't very idiomatic Erlang; I'm trying to make it mirror the procedural code above.
As you can see, I'm getting the same effect, but my assignments within a function are fixed.
As a side note, you are using a lot of variable names beginning with underscore. In Erlang this is a way of hinting that you will not be using the value of these variables. (Like in the above example, when I've reached the end of my list, I don't care about the list of servers.) Using a leading underscore as in your code turns off some useful compiler warnings and will confuse any other developers who look at your code.
In some situations it is convenient to use SeqBind:
SeqBind is a parse transformation that auto-numbers all occurrences of bindings carrying the # suffix (creating L#0, L#1, Req#0, Req#1, and so on).
Simple example:
...
-compile({parse_transform,seqbind}).
...
List# = lists:seq(0, 100),
List# = lists:filter(fun (X) -> X rem 2 == 0 end, List#)
...
I used google...
Erlang is a single-assignment language. That is, once a variable has been given a value, it cannot be given a different value. In this sense it is like algebra rather than like most conventional programming languages.
http://www.cis.upenn.edu/~matuszek/General/ConciseGuides/concise-erlang.html

How to parse text in Groovy

I need to parse some text (the output of an svn command) in order to retrieve a number (the svn revision).
This is my code. Note that I need to keep the whole output stream as text to do other operations on it.
def proc = cmdLine.execute()   // Call *execute* on the string
proc.waitFor()                 // Wait for the command to finish
def output = proc.in.text
// other stuff happening here
output.eachLine { line ->
    def revisionPrefix = "Last Changed Rev: "
    if (line.startsWith(revisionPrefix)) res = new Integer(line.substring(revisionPrefix.length()).trim())
}
This code is working fine, but since I'm still a novice in Groovy, I'm wondering whether there is a more idiomatic way to avoid the ugly if...
Example of svn output (but of course the problem is more general)
Path: .
Working Copy Root Path: /svn
URL: svn+ssh://svn.company.com/opt/svnserve/repos/project/trunk
Repository Root: svn+ssh://svn.company.com/opt/svnserve/repos
Repository UUID: 516c549e-805d-4d3d-bafa-98aea39579ae
Revision: 25447
Node Kind: directory
Schedule: normal
Last Changed Author: ubi
Last Changed Rev: 25362
Last Changed Date: 2012-11-22 10:27:00 +0000 (Thu, 22 Nov 2012)
I got inspiration from the answer below and solved it using find(). My solution is:
def revisionPrefix = "Last Changed Rev: "
def line = output.readLines().find { line -> line.startsWith(revisionPrefix) }
def res = new Integer(line?.substring(revisionPrefix.length())?.trim()?:"0")
3 lines, no if, very clean
One possible alternative is:
def output = cmdLine.execute().text
Integer res = output.readLines().findResult { line ->
    (line =~ /^Last Changed Rev: (\d+)$/).with { m ->
        if( m.matches() ) {
            m[ 0 ][ 1 ] as Integer
        }
    }
}
Not sure whether it's better or not. I'm sure others will have different alternatives.
Edit:
Also, beware of using proc.text: if your process outputs a lot of stuff, you could end up blocking when the input stream buffer fills up...
Here is a heavily commented alternative, using consumeProcessOutput:
// Run the command
String output = cmdLine.execute().with { proc ->
    // Then, with a StringWriter
    new StringWriter().with { sw ->
        // Consume the output of the process
        proc.consumeProcessOutput( sw, System.err )
        // Make sure we worked
        assert proc.waitFor() == 0
        // Return the output (goes into `output` var)
        sw.toString()
    }
}
// Extract the version by looking through all the lines
Integer version = output.readLines().findResult { line ->
    // Pass the line through a regular expression
    (line =~ /Last Changed Rev: (\d+)/).with { m ->
        // And if it matches
        if( m.matches() ) {
            // Return the \d+ part as an Integer
            m[ 0 ][ 1 ] as Integer
        }
    }
}
