In one of my Controllers, I have multiple URLs that will ultimately render in the same way. For example, this method scans the network on which the server resides, caches a String representation of each connected device and each device listening on a specific port, and then sends that information to another method to render:
public static void networkScan(String networkTarget, String port)
{
//These two lists will never have more than 256 total entries
List<InetSocketAddress> listeningDevices;
Map<String, String> allDevices;
// ... logic for discovering network devices ...
//Store the results in a cache, for history preservation in the browser
Cache.set(session.getId() + "listeningDevices", listeningDevices);
Cache.set(session.getId() + "allDevices", allDevices);
showScan(listeningDevices, allDevices);
}
public static void showScan(List<InetSocketAddress> listeningDevices, Map<String, String> allDevices)
{
render(listeningDevices, allDevices);
}
public static void getCachedScan()
{
List<InetSocketAddress> listeningDevices = (List<InetSocketAddress>)Cache.get(session.getId() + "listeningDevices");
Map<String, String> allDevices = (Map<String, String>)Cache.get(session.getId() + "allDevices");
if(listeningDevices == null)
listeningDevices = new ArrayList<InetSocketAddress>();
if(allDevices == null)
allDevices = new TreeMap<String, String>();
showScan(listeningDevices, allDevices);
}
Doing it this way results in Play doing some weird array copying that ends up consuming an apparently unbounded amount of memory. If I change the call to showScan() to simply render() and create a view named networkScan.html, it all works just fine, with no memory issues.
I have several other methods that also use showScan, based on different caching settings. I don't want lots of views that are all essentially copies of each other, so I'm trying to go through just one method with one corresponding view.
This won't work:
showScan(listeningDevices, allDevices);
}
public static void showScan(List<InetSocketAddress> listeningDevices, Map<String, String> allDevices)
{
as Play will serialize listeningDevices and allDevices to Strings and try to build a URL out of them.
Either render the results directly in networkScan(), or store the contents in the cache under a specific key, as you already do, and then do something like this:
public static void networkScan(String networkTarget, String port)
{
//These two lists will never have more than 256 total entries
List<InetSocketAddress> listeningDevices;
Map<String, String> allDevices;
// ... logic for discovering network devices ...
//Store the results in a cache, for history preservation in the browser
Cache.set(session.getId() + "listeningDevices", listeningDevices);
Cache.set(session.getId() + "allDevices", allDevices);
showScan(session.getId());
}
public static void showScan(String sessionId)
{
List<InetSocketAddress> listeningDevices = (List<InetSocketAddress>) Cache.get(sessionId + "listeningDevices");
Map<String, String> allDevices = (Map<String, String>) Cache.get(sessionId + "allDevices");
render(listeningDevices, allDevices);
}
It turns out that calling an action method triggers a redirect, which resulted in all sorts of copying of objects into URLs. I still don't understand how that mushroomed into using over a gigabyte of memory for a collection of Strings that rarely numbered above 100, and never above 256, but I found a way of avoiding the redirect.
As I was directed to do in an answer on Google Groups, I made use of the @Util annotation on the showScan method:
@Util
public static void showScan(List<InetSocketAddress> listeningDevices, Map<String, String> allDevices)
{
renderTemplate("Admin/showScan.html", listeningDevices, allDevices);
}
Marking a method with @Util unfortunately makes it use the template of the calling method, but the call to renderTemplate() allows me to use a single template that I specify.
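With that in place, the other actions can call showScan() directly as a plain method, with no redirect and with the template coming from the renderTemplate() call. A sketch, reusing the getCachedScan() code from above:
public static void getCachedScan()
{
    // Unchecked casts: Play's Cache.get(String) returns Object
    List<InetSocketAddress> listeningDevices = (List<InetSocketAddress>) Cache.get(session.getId() + "listeningDevices");
    Map<String, String> allDevices = (Map<String, String>) Cache.get(session.getId() + "allDevices");
    if (listeningDevices == null)
        listeningDevices = new ArrayList<InetSocketAddress>();
    if (allDevices == null)
        allDevices = new TreeMap<String, String>();
    // Plain static call into the @Util method -- no redirect, one shared template
    showScan(listeningDevices, allDevices);
}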
Related
private MapState<String, EventsHistory> eventsMap = null;
public void processElement2(Event event,
Context context,
Collector<JoinedEvent> collector) throws Exception {
String name = event.getExperimentName();
if (eventsMap.get(name) == null) {
eventsMap.put(name, new EventsHistory());
}
eventsMap.get(name).put(event.getEventTime(), event);
}
class EventsHistory {
private final Map<Long, Event> events = new HashMap<>();
public Map<Long, Event> getEvents() {
return events;
}
public void put(final Long eventTime, final Event event) {
events.put(eventTime, event);
}
}
I have the above code and would like to use Flink's MapState to maintain a map of maps.
When I test this locally, I can see the state update fine. But when I run it in a cluster, the eventsMap is always empty.
Is it valid to use a map of maps in MapState? Is there a better way to achieve this?
As an alternative, I tried the version below, where I do the grouping myself. Strangely enough, this works.
private MapState<EventKey, Event> assignmentEventsMap = null;
public final class EventKey {
private String name;
private long eventTime;
}
public void processElement2(Event event,
Context context,
Collector<JoinedEvent> collector) throws Exception {
String name = event.getExperimentName();
assignmentEventsMap
.put(new EventKey(name, event.getEventTime()),
event);
}
The code you have shared is difficult to understand, but perhaps you have misunderstood what MapState is. ValueState provides a sharded key/value store, distributed across the cluster. MapState gives you a sharded key/value store, where the values themselves are nested Maps.
In other words, MapState is always map of maps. You ended up trying to create a map of maps of maps -- which is one level too far.
I'm assuming you are trying to build this structure, where you effectively have a map from experiment names to nested maps of timestamps to events:
name -> (time -> event)
Assuming that your stream of events has already been keyed by the experiment name, then rather than using MapState<String, EventsHistory> eventsMap, what you really want is MapState<Long, Event> eventsMap, and rather than
eventsMap.get(name).put(event.getEventTime(), event);
you should be doing
eventsMap.put(event.getEventTime(), event);
See the tutorial about ValueState and an example using MapState in the Flink docs for more background on how to work with these mechanisms.
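To make that concrete, here is a minimal sketch of the keyed approach. It assumes both streams were keyed by experiment name before being connected; the function name JoinEvents and the first-stream type ControlEvent are placeholders, not taken from the question:
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
import org.apache.flink.util.Collector;
public class JoinEvents extends KeyedCoProcessFunction<String, ControlEvent, Event, JoinedEvent> {
    private transient MapState<Long, Event> eventsMap;
    @Override
    public void open(Configuration parameters) {
        // One (time -> event) map is kept per key, i.e. per experiment name
        eventsMap = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("events", Long.class, Event.class));
    }
    @Override
    public void processElement1(ControlEvent control, Context ctx, Collector<JoinedEvent> out) {
        // join logic for the other stream goes here
    }
    @Override
    public void processElement2(Event event, Context ctx, Collector<JoinedEvent> out) throws Exception {
        // The current key already identifies the experiment, so no outer map is needed
        eventsMap.put(event.getEventTime(), event);
    }
}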
So, here is the code setup.
There is a driver application, which starts the HTTP server (an ASP.NET Core Web API project).
The method the driver application calls to start the HTTP server is this:
public class Http_Server
{
public static ConcurrentQueue<Object> cq = new ConcurrentQueue<Object>();
public static void InitHttpServer(ConcurrentQueue<Object> queue)
{
cq = queue;
var host = new WebHostBuilder()
.UseKestrel()
.UseContentRoot(Directory.GetCurrentDirectory())
.UseIISIntegration()
.UseStartup<Startup>()
.UseApplicationInsights()
.Build();
host.Run();
}
}
Controller action:
[HttpPost]
[Route("XYZ")]
public virtual IActionResult AddXYZ([FromBody]List<Resourcemembers> resourcemembers)
{
//add something to ds
Http_Server.cq.Enqueue(new object());
//respond back
return new ObjectResult(example);
}
The data structure being passed (a concurrent queue) needs to be visible at the controller level (like a global variable accessible across all controllers).
Is it fine to make the data structure a static variable and access it across controllers?
Or is there a way to pass this data structure across the different layers?
Here is my attempt at a better solution, because what you are trying to do doesn't seem like the best way to approach this.
First, enable caching in the application by calling AddMemoryCache in the application's Startup.ConfigureServices method.
public void ConfigureServices(IServiceCollection services)
{
// Add framework services.
services.AddMemoryCache();
...
}
Then, you want to use the cache. Something like this should get you going in the right direction.
public class XYZController : Controller {
private IMemoryCache _memoryCache;
private const string xyzCacheKey = "XYZ";
public XYZController(IMemoryCache memoryCache)
{
_memoryCache = memoryCache;
}
[HttpPost("XYZ")]
public IActionResult AddXYZ([FromBody]ResourceMember[] resourceMembers)
{
try
{
if (!_memoryCache.TryGetValue(xyzCacheKey, out ConcurrentQueue<Object> xyz))
{
xyz = new ConcurrentQueue<Object>();
_memoryCache.Set(xyzCacheKey, xyz, new MemoryCacheEntryOptions()
{
SlidingExpiration = new TimeSpan(24, 0, 0)
});
}
xyz.Enqueue(resourceMembers);
return Ok();
}
catch (Exception ex)
{
return BadRequest(ex);
}
}
}
public class ResourceMember { }
This lets you use a memory cache to hold your object(s), including whatever you enqueue in the ConcurrentQueue, should you stay with that as your main object within the cache. You can cache any object type in the MemoryCache and pull the value out when you need it, based on the key you gave it when you added it to the cache. In the case above, I created a const named xyzCacheKey with a string value of XYZ to use as the key.
That static global variable thing you are trying is just not... good.
If this doesn't help, let me know in a comment, I will delete the answer.
Good luck!
I have a serialization problem with session (as I described here), so I used a static Dictionary instead of session in ASP.NET MVC:
public static Dictionary<string, object> FlightDict = new Dictionary<string, object>();
FlightDict.Add("I_ShoppingClient", client);
In this case, will users override each other's values? Are there any problems with that? I ask because they say that with a static variable, users' data can be overwritten.
Yes, you can change static variables in the site, but you need to use a property like this to change the data, and that is not enough: you also need to lock the data until you are done.
public static Dictionary<string, object> CacheItems
{
get{ return cacheItems; }
set{ cacheItems= value; }
}
How to Lock?
The approach you need to use to lock all add or remove actions until you are done is:
private static Dictionary<string, object> cacheItems = new Dictionary<string, object>();
private static object locker = new object();
public Dictionary<string, object> CacheItems
{
get{ return cacheItems; }
set{ cacheItems = value;}
}
public void YourFunction()
{
lock(locker)
{
CacheItems["VariableName"] = SomeObject;
}
}
For manipulating data in Application state, you need to use its global lock, Application.Lock() and Application.UnLock(), i.e.:
Application.Lock();
Application["PageRequestCount"] = ((int)Application["PageRequestCount"])+1;
Application.UnLock();
Last: avoid Application state and use a static variable to manage data across the application, for faster performance.
Note: you can add only one lock at a time, so remove it before you try to change the data again.
Keep in mind: static variables are shared between requests. Moreover, they are initialized when the application starts, so if the AppDomain, and thus the application, gets restarted, their values will be reinitialized.
I'm working with multiple row selection to give the user the ability to delete the selected records. According to the PDF documentation and the ShowCase Labs, I must use code translated to Java like this:
final DataTable dataTable = new DataTable();
...
// (1)
dataTable.setSelectionMode("multiple");
// (2)
dataTable.setValueExpression("selection", createValueExpression(DbeBean.class, "selection", Object[].class));
// (3)
dataTable.setValueExpression("rowKey", createValueExpression("#{" + VARIABLE + ".indexKey}", Object.class));
...
final ClientBehaviorHolder dataTableAsHolder = dataTable;
...
// (4)
dataTableAsHolder.addClientBehavior("rowSelect", createAjaxBehavior(createMethodExpression(metaData.controllerBeanType, "onRowSelect", void.class, new Class<?>[] {SelectEvent.class})));
multiple - This line enables multiple selection; it works fine visually at the front end.
selection - The #{dbeBean.selection} expression really is bound, and public void setSelection(T[] selection) is invoked.
rowKey - Works fine: getIndexKey() is invoked and returns the expected result.
rowSelect - This event handler, DbeBean.onRowSelect(SelectEvent e), is invoked too.
I also use a lazy data model (I don't really believe it's the reason, but who knows?; by the way, it returns List<T> while setSelection() requires T[] -- why is it like that?):
public abstract class AbstractLazyDataSource<T extends IIndexable<K>, K> extends LazyDataModel<T> {
...
@Override
public final List<T> load(int first, int pageSize, String sortField, SortOrder sortOrder, Map<String, String> filters) {
...
final IResultContainer<T> resultContainer = getData(querySpecifier);
final List<T> data = resultContainer.getData();
setRowCount(resultContainer.getTotalEntitiesCount());
return getPage(data, first, pageSize);
}
...
@Override
public final K getRowKey(T object) {
return object.getIndexKey(); // T instanceof IIndexable<K>, have to return a unique ID
}
...
However, the handlers do not work as expected. Please help me understand why (2) DbeBean.setSelection(T[] selection) and (4) DbeBean.onRowSelect(SelectEvent e) receive only null values: T[] selection = null and SelectEvent: e.getObject() == null, respectively. What am I doing wrong?
Thanks in advance.
PrimeFaces 3.2
Mojarra 2.1.7
I've got it to work: I simply removed the rowKey property during the dynamic p:dataTable creation (DataTable) and overrode getRowData in the lazy data model. Now it works.
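For reference, here is a minimal sketch of what that override can look like inside the lazy data model. It assumes the model keeps a reference to the page last returned by load() behind a hypothetical getCurrentPage() accessor; PrimeFaces passes the client-side row key back as a String:
@Override
public T getRowData(String rowKey) {
    // Look the row up in the page that load() last produced; the key sent by
    // the client is compared against each entity's unique index key
    for (T entity : getCurrentPage()) {
        if (String.valueOf(entity.getIndexKey()).equals(rowKey)) {
            return entity;
        }
    }
    return null;
}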
This extremely cool article written in the winter of 2007 shows me this code:
public static class TempDataExtensions
{
public static void PopulateFrom(this TempDataDictionary tempData, object o)
{
foreach (PropertyValue property in o.GetProperties())
{
tempData[property.Name] = property.Value;
}
}
public static void PopulateFrom(this TempDataDictionary tempData
, NameValueCollection nameValueCollection)
{
foreach (string key in nameValueCollection.Keys)
tempData[key] = nameValueCollection[key];
}
public static void PopulateFrom(this TempDataDictionary tempData
, IDictionary<string, object> dictionary)
{
foreach (string key in dictionary.Keys)
tempData[key] = dictionary[key];
}
public static string SafeGet(this TempDataDictionary tempData, string key)
{
object value;
if (!tempData.TryGetValue(key, out value))
return string.Empty;
return value.ToString();
}
}
I'm not seeing any code like this in the MVCContrib source or in the MVC 2 source. This makes me think I can still use this pattern now without fear that the equivalent functionality already lives in the current MVC 2 release (it might be in MVC 3 Preview 1?).
I did not see any update edits to the article. Does this MVC code from 2007 stand the test of time? Is it still relevant now?
Yes, this will work and this functionality is not replaced.
One caveat: in MVC 1, TempData stayed around for one request only. With MVC 2, TempData now stays around until you read it or manually clear it. This could complicate things if your redirect fails or the TempData is never read.
The new C# 4.0 dynamic keyword also provides similar functionality and may clean things up a little.