Why does the Uniswap interface use multiple Web3Providers?

I was learning how to develop a front end for dApps by going through the code for the Uniswap interface.
I found that two Web3Providers are used in the app, like below.
<Web3ReactProvider getLibrary={getLibrary}> {/* 1st */}
  <Web3ProviderNetwork getLibrary={getLibrary}> {/* 2nd */}
    <Blocklist>
      {/* some other child nodes */}
    </Blocklist>
  </Web3ProviderNetwork>
</Web3ReactProvider>
As for Web3ReactProvider, it is the component provided by the web3-react package, while Web3ProviderNetwork is created with the key "NETWORK" in the same file where those providers are used.
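Web3ProviderNetwork is presumably built with web3-react's createWeb3ReactRoot; a minimal sketch of that creation, assuming the key mentioned above:
import { createWeb3ReactRoot } from '@web3-react/core'

// 'NETWORK' is the key referenced above; the constant name is an assumption
const NetworkContextName = 'NETWORK'
const Web3ProviderNetwork = createWeb3ReactRoot(NetworkContextName)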
Uniswap has a hook, useActiveWeb3React, which returns Web3 context values depending on the active flag.
In other words, the default Web3 context is returned if active is true; otherwise, the "NETWORK" Web3 context is returned (code below).
export default function useActiveWeb3React() {
  const interfaceContext = useWeb3React<Web3Provider>()
  const interfaceNetworkContext = useWeb3React<Web3Provider>(
    process.env.REACT_APP_IS_WIDGET ? undefined : NetworkContextName
  )

  if (interfaceContext.active) {
    return interfaceContext
  }

  return interfaceNetworkContext
}
Does anyone know why they switch providers depending on the active variable?
Does it make any difference?

Related

CDK Aspects and Tokens which resolve to undefined

I bumped into a problem where we want to make sure some conventions are being followed in the naming of CloudFormation resources. The idea is that we use CDK Aspects to process resources. A simple example:
export class BucketConvention implements cdk.IAspect {
  private readonly ctx: Context;

  constructor(ctx: Context) {
    this.ctx = ctx;
  }

  public visit(node: cdk.IConstruct): void {
    if (cdk.CfnResource.isCfnResource(node) && node.cfnResourceType === 'AWS::S3::Bucket') {
      const resource = node as s3.CfnBucket;
      const resourceId = resource.bucketName ? resource.bucketName : cdk.Stack.of(node).getLogicalId(node);
      resource.addPropertyOverride('BucketName', `${this.ctx.project}-${this.ctx.environment}-${resourceId}`);
    }
  }
}
The Context interface simply holds some variables used to build names. The problem with this snippet is that we want to interpolate the bucket name if it has been set, and otherwise fall back to the logical ID. The method for obtaining the logical ID works; however, resource.bucketName returns a token whose resolved value could be undefined (i.e. the user didn't pass a bucket name when constructing the bucket, which happens a lot with high-level constructs). So the logical-ID fallback never triggers, since a token is always defined. If you log the interpolation output, you can get something like
myproject-myenvironment-${Token[TOKEN.104]}
My question: how can we make this work so that the interpolation uses the bucket name if it has been supplied, and otherwise the logical ID? Is there a way to peek at synthesis time whether the token will resolve to undefined?
And I found the answer to my problem... similar to the logical ID, you can use
cdk.Stack.of(resource).resolve(resource.bucketName)
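A sketch of how that could slot into the visit method above, assuming resolve() yields undefined (or an empty value) when no bucket name was supplied:
// Resolve the token first, then fall back to the logical ID
const resolved = cdk.Stack.of(node).resolve(resource.bucketName);
const resourceId = resolved ? resolved : cdk.Stack.of(node).getLogicalId(node);
resource.addPropertyOverride('BucketName', `${this.ctx.project}-${this.ctx.environment}-${resourceId}`);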

What is nodeInterface, nodeField and nodeDefinitions in Relay?

I am currently doing the Facebook Relay tutorial, and I need help understanding this part of it. It states:
Next, let's define a node interface and type. We need only provide a way for Relay to map from an object to the GraphQL type associated with that object, and from a global ID to the object it points to
const { nodeInterface, nodeField } = nodeDefinitions(
  (globalId) => {
    const { type, id } = fromGlobalId(globalId);
    if (type === 'Game') {
      return getGame(id);
    } else if (type === 'HidingSpot') {
      return getHidingSpot(id);
    } else {
      return null;
    }
  },
  (obj) => {
    if (obj instanceof Game) {
      return gameType;
    } else if (obj instanceof HidingSpot) {
      return hidingSpotType;
    } else {
      return null;
    }
  }
);
In the first argument of nodeDefinitions, where does it get its globalId? Are Game and HidingSpot names in the GraphQL schema? What does const {type, id} = fromGlobalId(globalId); do? And what is the second argument? I need help understanding nodeDefinitions; somehow I can't find it in the official documentation. Thank you.
If you were writing a GraphQL server without Relay, you'd define a number of entry points on the Query type, e.g.:
type Query {
  picture(id: Int!): Picture
  user(id: Int!): User
  # ...etc
}
So when you want to get a User, you can easily get it because user is available as an entry point into the graph. When you build a query for your page/screen, it'll typically be several levels deep, you might go user -> followers -> pictures.
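For example, such a page query might look like this (the field names are illustrative, not from any particular schema):
query {
  user(id: 1) {
    followers {
      pictures {
        url
      }
    }
  }
}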
Sometimes you want to be able to refetch only part of your query, perhaps you're paginating over a connection, or you've run a mutation. What Relay's Node interface does is give you a standard way to fetch any type that implements it via a globally unique ID. Relay is capable of recognising such nodes in its queries, and will use them if possible to make refetching and paginating more efficient. We add the node type to the root Query type:
type Query {
  picture(id: Int!): Picture
  user(id: Int!): User
  # ...etc
  node(id: ID!): Node
}
Now for nodeDefinitions. Essentially, this function lets us define two things:
1. How to return an object given its globalId.
2. How to return a type given an object.
The first is used to take the ID argument of the node field and resolve it to an object. The second allows your GraphQL server to work out which type of object was returned; this is necessary for us to be able to define fragments on specific types when querying node, so that we can actually get the data we want. Without this, we wouldn't be able to successfully execute a query such as this:
query Test {
  node(id: "something") {
    ... on Picture {
      url
    }
    ... on User {
      username
    }
  }
}
Relay uses global object identification. In my understanding, this means that whenever your application needs to look up an object (in your example, a Game or a HidingSpot), Relay fetches it through the standard node interface, i.e. it finds the Game by {id: 123} or the HidingSpot by {id: abc}. If your schema types (Game, HidingSpot) don't set up the node interface, Relay will not be able to fetch those objects.
Therefore, if your application needs to look up a Game, you need to define the node interface in the schema, as sketched below.
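A minimal sketch of what that opt-in looks like with graphql-relay (the field list here is assumed, not taken from the tutorial):
const gameType = new GraphQLObjectType({
  name: 'Game',
  interfaces: [nodeInterface],  // opt this type into the Node interface
  fields: () => ({
    id: globalIdField('Game'),  // exposes the globally unique ID
    // ...other Game fields
  }),
});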
Using the graphql-relay helper, you call the nodeDefinitions function only once in your application; it basically maps globally defined IDs to actual data objects and their GraphQL types.
The first function receives the globalId, and we map it to its corresponding data object. The globalId also encodes the type of the object, which can be read with the fromGlobalId function.
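For example, the round trip with graphql-relay's helpers (the values here are illustrative):
const globalId = toGlobalId('Game', '1');     // base64 encoding of "Game:1"
const { type, id } = fromGlobalId(globalId);  // { type: 'Game', id: '1' }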
The second function receives the result object and Relay uses that to map an object to its GraphQL data type. So if the object is an instance of Game, it will return gameType, etc.
Hope it will help you understand. I am on my way learning, too.

IBM Integration Bus: How to read user defined node (Java) complex (table) property in Java extension code

I created a Java user-defined node in IntegrationToolkit (9.0.0.1) and gave it several properties. Two of the node properties are simple (of String type) and one is complex (a table property with the predefined type User-defined) that consists of another two simple properties.
By following the documentation I was able to read two simple properties in my Java extension class (that extends MbNode and implements MbNodeInterface) by making getters and setters that match the names of the two simple properties. Documentation also states that getters and setters should return and set String values whatever the real simple type of a property may be. Obviously, this would not work for my complex node property.
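For reference, the getter/setter pattern the documentation describes looks roughly like this (the property name here is hypothetical):
// Hypothetical simple property "serverName" declared on the node
private String serverName;

public String getServerName() {
    return serverName;
}

public void setServerName(String serverName) {
    this.serverName = serverName;
}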
I was also able to read User Defined Properties that are defined at the message-flow level by using CMP (Integration Bus API) classes, which was otherwise impossible to do from a user-defined node without CMP. At one point I began to think that my complex property would be among the User Defined Properties (although UDPs are defined at the flow level and my property is defined at the custom-node level), based on some other random documentation and another forum discussion.
I finally deduced that the complex property should map to the MbTable type (as stated in that type's description), but I was not able to use it.
Does anyone know how to access user defined node's complex(table) property value from Java?
I recently started working with WebSphere Message Broker v 8.0.0.5 for one of my projects, and I was going to ask the same question until SO suggested yours, which answered it. It might be a little late for this question, but it may help others with similar questions.
After many frustrating hours consulting IBM documentation this is what I found following your thread:
You're correct about the properties being available as user-defined properties (UDP) but only at the node level.
According to the JavaDoc for MbTable class (emphasis added to call out the relevant parts):
MbTable is a complex data type which contains one or more rows of simple data types. Its structure is very similar to a standard Java record set. It cannot be constructed in a node but instead is returned by getUserDefinedAttribute() on the MbNode class. Its primary use is in allowing complex attributes to be defined on nodes instead of the normal static simple types. It can only be used in the runtime if a version of the toolkit that supports complex properties is being used.
You have to call com.ibm.broker.plugin.MbNode.getUserDefinedAttribute, which will return an instance of com.ibm.broker.plugin.MbTable. However, the broker runtime doesn't call any setter methods for the complex attributes during node initialization like it does for simple properties. Also, you cannot access the complex attributes in either the constructor or the setter methods of other simple properties in the node class; they are available only in the run or evaluate method.
The following is the decompiled method definition of com.ibm.broker.plugin.MbNode.getUserDefinedAttribute.
public Object getUserDefinedAttribute(String string) {
    Object object;
    String string2 = "addDynamicTerminals";
    if (Trace.isOn) {
        Trace.logNamedEntry((Object)this, (String)string2);
    }
    if ((object = this.getUDA(string)) != null && object.getClass() == MbElement.class) {
        try {
            MbTable mbTable;
            MbElement mbElement = (MbElement)object;
            object = mbTable = new MbTable(mbElement);
        }
        catch (MbException var4_5) {
            if (Trace.isOn) {
                Trace.logStackTrace((Object)this, (String)string2, (Throwable)var4_5);
            }
            object = null;
        }
    }
    if (Trace.isOn) {
        Trace.logNamedExit((Object)this, (String)string2);
    }
    return object;
}
As you can see it always returns an instance of MbTable if the attribute is found.
I was able to access the complex attributes with the following code in my node definition:
@Override
public void evaluate(MbMessageAssembly inAssembly, MbInputTerminal inTerminal) throws MbException {
    checkUserDefinedProperties();
}

/**
 * @throws MbException
 */
private void checkUserDefinedProperties() throws MbException {
    Object obj = getUserDefinedAttribute("geoLocations");
    if (obj instanceof MbTable) {
        MbTable table = (MbTable) obj;
        int size = table.size();
        int i = 0;
        table.moveToRow(i);
        for (; i < size; i++, table.next()) {
            String latitude = (String) table.getValue("latitude");
            String longitude = (String) table.getValue("longitude");
        }
    }
}
The documentation for declaring attributes for user-defined extensions in Java is surprisingly silent on this little bit of detail.
Please note that all the references and code are for WebSphere Message Broker v 8.0.0 and should be relevant for IBM Integration Bus 9.0.0.1 too.

Grails integration test - domain object equality

Setting up some integration tests, I'm having issues with domain class equality. The equality works as expected during normal execution, but when testing the Service methods through an integration test, the test for equality is coming back false.
One service (called in the setUp() of the Test Case) puts a Domain object into the session
class SomeService {
    def setSessionVehicle(String name) {
        Vehicle vehicle = Vehicle.findByName(name)
        session.setAttribute("SessionVehicle", vehicle)
    }

    def getSessionVehicle() {
        return session.getAttribute("SessionVehicle")
    }
}
Elsewhere in another service, I load an object and make sure the associated attribute object matches the session value:
class OtherService {
    def getEngine(long id) {
        Vehicle current = session.getAttribute("SessionVehicle")
        Engine engine = Engine.get(id)
        if (!engine.vehicle.equals(current)) throw new Exception("Blah blah")
        return engine
    }
}
This works as expected during normal operation, preventing the wrong engine from being loaded (OK, I sanitized the class names; pretend it makes sense). But in the integration test, that .equals() fails when it should succeed:
Vehicle testVehicle

void setUp() {
    Vehicle v = new Vehicle(name: "Default")
    v.save()
    someService.setSessionVehicle("Default")
    testVehicle = someService.getSessionVehicle()
}

void testGetEngine() {
    List<Engine> engines = Engine.findAllByVehicle(testVehicle)
    // some assertions and checks
    Engine e = otherService.getEngine(engines.get(0).id)
}
The findAllByVehicle() call correctly returns the list of all Engines associated with the vehicle in the session, but when I try to look up an individual Engine by ID, the equality check of the session Vehicle against the Vehicle on the found Engine fails. Only a single vehicle has been created at this point, and the exception message shows that the session Vehicle and the Engine's Vehicle both exist and have the same value.
If I attempt this equality check in the test case itself, it fails too, but if I change it to check if (vehicle.id == sessionVehicle.id) it succeeds. However, I'm not keen on changing my production code just to satisfy an integration test.
Am I doing something wrong when setting up these domain objects in my test case that I should be doing differently?
First of all, the equality check you are doing is just comparing references. You should not use the default equals method for your check; instead, override the equals method in the domain class.
There are two ways you can override the equals method:
1) You can use your IDE to auto-generate code for the equals method (a lot of null checking, etc.).
2) Preferred way: you can use the EqualsBuilder and HashCodeBuilder classes from the Apache Commons project. The library should already be available to your application; otherwise, download the JAR file and place it in lib. Here is sample code using EqualsBuilder:
boolean equals(o) {
    if (!(o instanceof Vehicle)) {
        return false
    }
    def eb = new EqualsBuilder()
    eb.append(id, o.id)
    eb.append(name, o.name)
    eb.append(otherProperties, o.otherProperties)
    // ...append any remaining properties
    return eb.isEquals()
}
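Since equals and hashCode should be overridden together, here is a matching sketch using HashCodeBuilder (append the same properties used in equals):
int hashCode() {
    def hcb = new HashCodeBuilder()
    hcb.append(id)
    hcb.append(name)
    hcb.append(otherProperties)
    return hcb.toHashCode()
}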
Another point: how are you getting the session in a service? From RequestContextHolder? It's good practice not to access the session directly from a service; rather, pass the value as a method parameter, as sketched below.
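Applied to the getEngine example above, that suggestion might look like this (a sketch; the caller fetches the session value and passes it in):
class OtherService {
    def getEngine(long id, Vehicle current) {
        Engine engine = Engine.get(id)
        if (!engine.vehicle.equals(current)) throw new Exception("Blah blah")
        return engine
    }
}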

Linux module: being notified about task creation and destruction

For Mach kernel API emulation on Linux, I need my kernel module to be called when a task has just been created or is being terminated.
In my kernel module, this could most nicely be done via Linux Security Modules, but a couple of years ago the needed symbols were unexported, preventing external modules from acting as an LSM.
The only other way I could find was to make my module act like a rootkit: find the syscall table and hook into it there.
Patching the kernel is out of the question. I need my app to be installed easily. Is there any other way?
You can use Kprobes, which let you dynamically hook into code in the kernel. You will need to find the right function among the ones involved in creating and destroying processes that gives you the information you need. For instance, for task creation, do_fork() in fork.c would be a good place to start; for task destruction, do_exit(). You would want to write a retprobe, a kind of kprobe that additionally gives you control at the end of the function's execution, just before it returns. The reason you want control before the function returns is to check whether it succeeded in creating the process, by checking the return value. If there was an error, the function will return a negative value, or in some cases possibly 0.
You would do this by creating a kretprobe struct:
static struct kretprobe do_fork_probe = {
    .entry_handler = (kprobe_opcode_t *) my_do_fork_entry,
    .handler = (kprobe_opcode_t *) my_do_fork_ret,
    .maxactive = 20,
    .data_size = sizeof(struct do_fork_ctx)
};
my_do_fork_entry gets executed when control enters the hooked function, and my_do_fork_ret gets executed just before it returns. You would hook it in as follows:
do_fork_probe.kp.addr =
    (kprobe_opcode_t *) kallsyms_lookup_name("do_fork");
if ((ret = register_kretprobe(&do_fork_probe)) < 0) {
    // handle error
}
In the implementation of your hooks, it's a bit unwieldy to get the arguments and return value. You get these via the saved registers pt_regs data structure. Let's look at the return hook, where on x86 you get the return value via regs->ax.
static int my_do_fork_ret(struct kretprobe_instance *ri, struct pt_regs *regs)
{
    struct do_fork_ctx *ctx = (struct do_fork_ctx *) ri->data;
    int ret = regs->ax; // This is on x86
    if (ret > 0) {
        // It's not an error, probably a valid process
    }
    return 0;
}
In the entry point, you can access the arguments via the registers: e.g. on x86, regs->di is the first argument, regs->si is the second, etc. You can google the full list. Note that you shouldn't rely on these registers for the arguments in the return hook, as they may have been overwritten by other computations. A minimal sketch of the entry handler follows.
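For completeness, a sketch of the matching entry handler on x86-64, assuming do_fork's first argument is clone_flags (the ctx usage is illustrative):
static int my_do_fork_entry(struct kretprobe_instance *ri, struct pt_regs *regs)
{
    struct do_fork_ctx *ctx = (struct do_fork_ctx *) ri->data;
    unsigned long clone_flags = regs->di; // first argument on x86-64
    // stash whatever the return hook will need in ctx here
    return 0;
}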
You will surely have to jump through many hoops to get this working, but hopefully this note sets you off in the right direction.
