I have a method that takes a list of items, makes a web request, and returns the items that failed processing (some items may have been processed correctly and some may have failed; only the failed ones are returned, or an empty list):
// Returns the list of items that failed processing, or an empty list.
private List<String> sendsItems(List<String> items);
I want to use Mono and retry sending only the failed items, up to N times. Using blocking code, it would look like this:
public void sendWithRetries() {
    List<String> items = List.of("one", "two", "three");
    int retryNo = 0;
    while (!items.isEmpty() && retryNo < 3) {
        items = sendsItems(items);
        retryNo++;
    }
}
I'm having a really hard time translating that to Reactor. I've been thinking about the Mono.repeat and Mono.retry families of operators, but I don't see any nice way of passing the failed items back into sendsItems, short of doing something ugly like:
var items = new AtomicReference<>(List.of("one", "two", "three"));
Mono.fromCallable(() -> {
        List<String> failedItems = sendsItems(items.get());
        items.set(failedItems);
        if (!failedItems.isEmpty()) {
            throw new RuntimeException("retry me");
        }
        return Collections.emptyList();
    })
    .retry(3);
Is there a better way of doing that?
Note: the real use case I need this for is the Kinesis PutRecords API call. It returns info about which records failed to be processed, and I want to retry sending only those.
You can use the expand operator to maintain state:
Mono.fromCallable(() -> sendsItems(new Attempt(List.of("one", "two", "three"), 0)))
    .expand(attempt -> {
        if (attempt.getItems().isEmpty() || attempt.getRetry() > 3) {
            return Mono.empty();
        } else {
            return Mono.fromCallable(() -> sendsItems(attempt));
        }
    });
private Attempt sendsItems(Attempt previousAttempt) {
    System.out.println(previousAttempt);
    // implement the actual sending logic here; below is just a dummy
    List<String> failedItems = previousAttempt.getItems().subList(0, previousAttempt.getItems().size() - 1);
    return new Attempt(failedItems, previousAttempt.getRetry() + 1);
}
@Value
static class Attempt {
    List<String> items;
    int retry;
}
public Mono<Response> createSomething(PostRequest request, Option option, SiteId siteId) {
    Predicate<DeliveryPromiseResponse> isSplitShipment = pudoDPEResponse -> pudoDPEResponse.getShipments().size() > 1;
    return Mono.just(request)
        .doFirst(() -> log.debug("Processing request for siteId: {}", request.getSiteId()))
        .flatMap(shipRequest -> converter.apply(shipRequest, siteId))
        .filter(dpeResponse -> {
            if (isSplitShipment.test(dpeResponse)) {
                log.info("Received Shipment response from DPE for siteId: {}", request.getSiteId());
                return false;
            }
            return true;
        });
}
Although there might be other ways to achieve this behavior, I would personally refactor it in two steps:
1. Extract the filtering into a separate step:
.filter(dpeResponse -> !isSplitShipment.test(dpeResponse))
To improve readability, I would personally extract it to a method called something like isNotSplitShipment.
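One way to get such a method essentially for free is Predicate.negate(). A small stdlib sketch (DpeResponse here is a hypothetical stand-in for DeliveryPromiseResponse, reduced to a shipment count):

```java
import java.util.function.Predicate;

public class SplitShipmentCheck {
    // Hypothetical stand-in for DeliveryPromiseResponse: just a shipment count.
    public record DpeResponse(int shipments) {}

    public static final Predicate<DpeResponse> IS_SPLIT_SHIPMENT = r -> r.shipments() > 1;
    // What filter(...) should keep: single-shipment responses.
    public static final Predicate<DpeResponse> IS_NOT_SPLIT_SHIPMENT = IS_SPLIT_SHIPMENT.negate();
}
```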
2. Extract the side effect (logging) into doOnSuccess:
.doOnSuccess(dpeResponse -> {
    if (dpeResponse == null)
        log.info("Received Shipment response from DPE for siteId: {}", request.getSiteId());
})
Of course, this could be extracted to a separate method as well.
All together, it will look as follows:
return Mono.just(request)
    .doFirst(() -> log.debug("Processing request for siteId: {}", request.getSiteId()))
    .flatMap(shipRequest -> converter.apply(shipRequest, siteId))
    .filter(dpeResponse -> !isSplitShipment.test(dpeResponse))
    .doOnSuccess(dpeResponse -> {
        if (dpeResponse == null)
            log.info("Received Shipment response from DPE for siteId: {}", request.getSiteId());
    });
This way, we avoid mixing filtering with logging. Moreover, the explicit return false/true is no longer required.
Nevertheless, it's worth mentioning that doOnSuccess should be used cautiously: it should not invoke other Monos/Fluxes, as their errors will not propagate to the main sequence.
I have implemented a dummy reactive repository but I am struggling with the update method:
@Override
public Mono<User> updateUser(int id, Mono<User> updateMono) {
    return // todo with getUser
}

@Override
public Mono<User> getUser(int id) {
    return Mono.justOrEmpty(this.users.get(id));
}
On the one hand I have the incoming publisher Mono<User> updateMono; on the other hand I have another publisher from Mono.justOrEmpty(this.users.get(id)).
How do I combine them, apply the update, and return just one publisher?
The only thing that comes to my mind is:
@Override
public Mono<User> updateUser(int id, Mono<User> updateMono) {
    return getUser(id).doOnNext(user -> {
        updateMono.subscribe(update -> {
            users.put(id, new User(id, update.getName(), update.getAge()));
            System.out.format("Updated user with id %d to %s%n", id, update);
        });
    });
}
Is it correct?
See the reference guide on finding the right operator.
Notably, for Mono you have and, when, and then (note this last one will become flatMap in 3.1.0, and flatMap will become flatMapMany).
doOnNext is meant for side operations like logging or stats gathering. Subscribing inside subscribe is another anti-pattern; generally you want flatMap or a similar operator instead.
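The shape this advice points at, combining the two sources in one pipeline instead of subscribing inside a callback, can be illustrated with the JDK's own CompletableFuture (an analogy only, not Reactor; User is a hypothetical stand-in for the question's entity):

```java
import java.util.concurrent.CompletableFuture;

public class CombineSketch {
    // Hypothetical stand-in for the question's User entity.
    public record User(int id, String name, int age) {}

    // Analogous to getUser(id).zipWith(updateMono).map(...) in Reactor:
    // both sources are combined in one pipeline, with no nested subscribe.
    public static CompletableFuture<User> updateUser(CompletableFuture<User> existing,
                                                     CompletableFuture<User> update) {
        return existing.thenCombine(update,
                (user, upd) -> new User(user.id(), upd.name(), upd.age()));
    }
}
```

The same composition in Reactor is exactly what the next answer's Mono.zip solution does.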
I have been playing with Spring 5's Reactive Streams features these days and have written some sample code (not public via blog or Twitter yet; I still need more practice with Reactor).
I encountered the same problem and finally used Mono.zip to update an existing item in MongoDB.
https://github.com/hantsy/spring-reactive-sample/blob/master/boot-routes/src/main/java/com/example/demo/DemoApplication.java
public Mono<ServerResponse> update(ServerRequest req) {
    return Mono
        .zip(
            (data) -> {
                Post p = (Post) data[0];
                Post p2 = (Post) data[1];
                p.setTitle(p2.getTitle());
                p.setContent(p2.getContent());
                return p;
            },
            this.posts.findById(req.pathVariable("id")),
            req.bodyToMono(Post.class)
        )
        .cast(Post.class)
        .flatMap(post -> this.posts.save(post))
        .flatMap(post -> ServerResponse.noContent().build());
}
Update: Another working version written in Kotlin.
fun update(req: ServerRequest): Mono<ServerResponse> {
    return this.posts.findById(req.pathVariable("id"))
        .and(req.bodyToMono(Post::class.java))
        .map { it.t1.copy(title = it.t2.title, content = it.t2.content) }
        .flatMap { this.posts.save(it) }
        .flatMap { noContent().build() }
}
Vaadin 8.1 introduced the TreeGrid component. It no longer has the collapseItemsRecursively and expandItemsRecursively methods (as available in the now-legacy Tree component). Am I missing something, or do you need to develop your own implementation? If so, what is a recommended way of doing this?
As I'm sure you've noticed, TreeGrid is a rather new component, currently under development and available starting with v8.1.alphaX (the current stable version is v8.0.6). As such, it probably has only some basic functionality for the time being, with the rest to follow sometime in the future, although there are no guarantees. For example, this similar feature request for the older TreeTable component has been open since 2011.
Either way, even if they're probably not optimal solutions, there are a couple of workarounds you can use to achieve this behavior. I'm shamelessly using as a base a slightly modified version of the code currently available in the vaadin-sampler for TreeGrid.
public class RecursiveExpansionTreeGrid extends VerticalLayout {
    private Random random = new Random();

    public RecursiveExpansionTreeGrid() {
        // common setup with some dummy data
        TreeGrid<Project> treeGrid = new TreeGrid<>();
        treeGrid.setItems(generateProjectsForYears(2010, 2016), Project::getSubProjects);
        treeGrid.addColumn(Project::getName).setCaption("Project Name").setId("name-column");
        treeGrid.addColumn(Project::getHoursDone).setCaption("Hours Done");
        treeGrid.addColumn(Project::getLastModified).setCaption("Last Modified");
        addComponent(treeGrid);
    }

    // generate some dummy data to display in the tree grid
    private List<Project> generateProjectsForYears(int startYear, int endYear) {
        List<Project> projects = new ArrayList<>();
        for (int year = startYear; year <= endYear; year++) {
            Project yearProject = new Project("Year " + year);
            for (int i = 1; i < 2 + random.nextInt(5); i++) {
                Project customerProject = new Project("Customer Project " + i);
                customerProject.setSubProjects(Arrays.asList(
                        new LeafProject("Implementation", random.nextInt(100), year),
                        new LeafProject("Planning", random.nextInt(10), year),
                        new LeafProject("Prototyping", random.nextInt(20), year)));
                yearProject.addSubProject(customerProject);
            }
            projects.add(yearProject);
        }
        return projects;
    }

    // POJO for easy binding
    public class Project {
        private List<Project> subProjects = new ArrayList<>();
        private String name;

        public Project(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }

        public List<Project> getSubProjects() {
            return subProjects;
        }

        public void setSubProjects(List<Project> subProjects) {
            this.subProjects = subProjects;
        }

        public void addSubProject(Project subProject) {
            subProjects.add(subProject);
        }

        public int getHoursDone() {
            return getSubProjects().stream().map(project -> project.getHoursDone()).reduce(0, Integer::sum);
        }

        public Date getLastModified() {
            return getSubProjects().stream().map(project -> project.getLastModified()).max(Date::compareTo).orElse(null);
        }
    }

    // Second POJO for easy binding
    public class LeafProject extends Project {
        private int hoursDone;
        private Date lastModified;

        public LeafProject(String name, int hoursDone, int year) {
            super(name);
            this.hoursDone = hoursDone;
            lastModified = new Date(year - 1900, random.nextInt(12), random.nextInt(10));
        }

        @Override
        public int getHoursDone() {
            return hoursDone;
        }

        @Override
        public Date getLastModified() {
            return lastModified;
        }
    }
}
Next, recursively expanding or collapsing the nodes depends a bit on your scenario, but it basically breaks down to the same thing: making sure each node from the root to the deepest leaf is expanded/collapsed. The simplest way of doing it is to flatten your hierarchy into a list of nodes and call the appropriate method, expand(List<T> items) or expand(T... items) (the second delegates to the first and is probably a convenience method, e.g. expand(myItem)).
For simplicity, I've added a flatten method to our Project implementation. If you can't do that for some reason, create a recursive method that builds a list starting with the selected node and including all the children, of the children, of the children... well, you get the idea.
public Stream<Project> flatten() {
    return Stream.concat(Stream.of(this), getSubProjects().stream().flatMap(Project::flatten));
}
Possible scenarios:
Automatically expand the entire hierarchy when expanding the root: add listeners, and expand/collapse the whole flattened hierarchy:
treeGrid.addCollapseListener(event -> {
    if (event.isUserOriginated()) {
        // the event is triggered by all collapse calls, so only react the first time,
        // when the user clicks in the UI, and ignore the programmatic calls
        treeGrid.collapse(event.getCollapsedItem().flatten().collect(Collectors.toList()));
    }
});

treeGrid.addExpandListener(event -> {
    if (event.isUserOriginated()) {
        // the event is triggered by all expand calls, so only react the first time,
        // when the user clicks in the UI, and ignore the programmatic calls
        treeGrid.expand(event.getExpandedItem().flatten().collect(Collectors.toList()));
    }
});
Expanding the hierarchy or part of it with a custom action, such as a context menu:
GridContextMenu<Project> contextMenu = new GridContextMenu<>(treeGrid);
contextMenu.addGridBodyContextMenuListener(contextEvent -> {
    contextMenu.removeItems();
    if (contextEvent.getItem() != null) {
        Project project = (Project) contextEvent.getItem();
        // update selection
        treeGrid.select(project);
        // show option for expanding
        contextMenu.addItem("Expand all", VaadinIcons.PLUS,
                event -> treeGrid.expand(project.flatten().collect(Collectors.toList())));
        // show option for collapsing
        contextMenu.addItem("Collapse all", VaadinIcons.MINUS,
                event -> treeGrid.collapse(project.flatten().collect(Collectors.toList())));
    }
});
In the end, expanding or collapsing a node should apply to its entire subtree.
From the TreeGrid docs: you can use the collapse and expand methods, passing a list or array of the TreeGrid's data items to expand or collapse:
treeGrid.expand(someTreeGridItem1, someTreeGridItem2);
treeGrid.collapse(someTreeGridItem1);
Also worth noting is a section showing how to prevent certain items from ever being collapsed.
In Swashbuckle there is a setting called OrderActionGroupsBy which is supposed to change the ordering within the API, but nothing I do is working, and I can't determine whether this is a Swashbuckle problem or due to my IComparer. Any idea what I'm doing wrong?
This sets up the configuration:
config.EnableSwagger(c =>
{
    ...
    c.OrderActionGroupsBy(new CustomStringComparer());
    c.GroupActionsBy(apiDesc => GroupBy(apiDesc));
    ...
});
This groups the actions by entity type instead of controller name.
private static string GroupBy(ApiDescription apiDesc)
{
    var controllerName = apiDesc.ActionDescriptor.ControllerDescriptor.ControllerName;
    var path = apiDesc.RelativePath;
    if (controllerName.Contains("Original"))
    {
        controllerName = controllerName.Replace("Original", "");
    }

    // Check if the path refers to one of the entities; if so, group by that.
    // Otherwise group by controller.
    var entities = new List<string>() { "Users", "Apps", "Groups" };
    var e = entities.Where(x => path.ToLower().Contains(x.ToLower())).FirstOrDefault();
    if (e != null)
    {
        return e;
    }
    return controllerName;
}
This is my attempt at an IComparer. I want Users first, and alphabetical order after that:
class CustomStringComparer : IComparer<string>
{
    public int Compare(string x, string y)
    {
        if (x.CompareTo(y) == 0)
            return 0;
        if (x.CompareTo("Users") == 0)
            return -1;
        if (y.CompareTo("Users") == 0)
            return 1;
        return x.CompareTo(y);
    }
}
This isn't working; it always defaults to alphabetical order no matter what I do.
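As a sanity check, the comparer's ordering rule does work when tested in isolation. Here is the same comparison logic transcribed to a Java Comparator (an illustrative sketch only, not the Swashbuckle API):

```java
import java.util.Comparator;

// Same rule as the C# CustomStringComparer: "Users" always sorts first,
// everything else alphabetically.
public class UsersFirstComparator implements Comparator<String> {
    @Override
    public int compare(String x, String y) {
        if (x.compareTo(y) == 0)
            return 0;
        if (x.compareTo("Users") == 0)
            return -1;
        if (y.compareTo("Users") == 0)
            return 1;
        return x.compareTo(y);
    }
}
```

Sorting {"Apps", "Users", "Groups"} with this comparator yields Users, Apps, Groups, so the problem must lie elsewhere.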
It looks like this is a bug in Swashbuckle/swagger-ui.
Using OrderActionGroupsBy correctly sorts the JSON file, but swagger-ui then automatically re-sorts it into alphabetical order.
I have filed bugs with both Swashbuckle and swagger-ui, since this seems to go against what swagger-ui's docs say about apisSorter:
Apply a sort to the API/tags list. It can be 'alpha' (sort by name) or
a function (see Array.prototype.sort() to know how sort function
works). Default is the order returned by the server unchanged.
Swashbuckle issue
swagger-ui issue
swagger-ui specific stackoverflow question
Imagine you've got n pages, each of which approximately shares the same sort of model but that model has to be in a particular state before you can access certain pages.
So if the user types in a URL to take them to page m, but this page is not accessible at the moment, the controller adds an error message to a collection of errors in TempData and then redirects to page m-1.
The problem arises when page m-1 is also not accessible. If we add a message to the same collection (under the same key) in TempData, we don't see it on page m-2, because it gets removed from TempData before the request for page m-2 gets underway.
I can imagine a solution where we have multiple error keys and, each time we want to add an error or read errors back, we try each key in turn, but has anyone got any better ideas? (I know that in theory I could work out the correct page to redirect to straight off, but that would take a lot of rework and I don't have much time!)
EDIT:
This is the sort of thing I was thinking about:
protected void AddError(string error)
{
    int keyCounter;
    var errors = GetErrors(out keyCounter);
    errors.Add(error);
    TempData.Remove(GetKey(keyCounter + 1));
    TempData[GetKey(keyCounter + 1)] = errors;
}

protected List<string> GetErrors()
{
    int jnk;
    return GetErrors(out jnk);
}

private string GetKey(int i)
{
    return string.Format("ErrorKey:{0}", i);
}

private List<string> GetErrors(out int keyCounter)
{
    keyCounter = 0;
    List<string> errors = null;
    for (int ii = 0; ii < MaxErrorKeyCounter; ii++)
    {
        string tryKey = GetKey(ii);
        if (TempData.ContainsKey(tryKey))
        {
            keyCounter = ii;
            errors = (List<string>)TempData[tryKey];
        }
    }
    if (errors == null)
        errors = new List<string>();
    return errors;
}
Why not just use the Session?