## A recipe for dynamic programming in R: Solving a “quadruel” case from PuzzlOR

As described in Cormen et al. (2009, p. 65), the divide-and-conquer paradigm in algorithm design takes a recursive approach in which:

• The main problem is divided into smaller sub-problems (divide),
• the sub-problems are solved (conquer),
• and the solutions to the sub-problems are combined to solve the original and “bigger” problem (combine).

Instead of constructing an indefinite number of nested loops, which destroys both the readability of the code and the performance of execution, the “recursive” approach uses a single block of code that calls itself (hence the term “recursive”) on the smaller problem. The key point is to define a “stop” rule, so that the function does not sink into an infinite recursion depth. While nested loops modify the same object (or the same address space, in the low-level sense), recursion moves the “stack pointer”, so each recursion depth uses a different part of the stack (a copy of the objects is created at each depth). This illustrates a well-known trade-off in algorithm design, memory versus performance: recursion buys generality and readability at the expense of using more memory.
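As a minimal illustration of the divide-conquer-combine pattern and the stop rule, here is a naive recursive Fibonacci function in R (the Fibonacci example is mine, not from the post):

```r
# Naive recursion: the function calls itself on smaller sub-problems
# until the stop rule (n <= 1) is reached.
fib <- function(n) {
  if (n <= 1) return(n)        # stop rule: F(0) = 0, F(1) = 1
  fib(n - 1) + fib(n - 2)      # divide into two sub-problems, combine by "+"
}

fib(10)  # 55
```

Each call spawns two further calls, so every recursion depth adds a new frame to the stack until the stop rule is hit.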

Despite the benefits of the recursive divide-and-conquer paradigm, however, the performance of naive recursion may still not be as good as we want, since there may be too many smaller problems to navigate through recursion. The same or a similar sub-problem may be visited multiple times, and each sub-problem paves the way to its own sub-problems down to the stop rule – so a naive recursive approach solves the same sub-problems many times over. The recipe for better performance is to “memoize” solved sub-problems: simply keep the solutions in a look-up table, and check the table first to see whether the value was already memoized before commencing with the “divide-and-conquer”. The divide-and-conquer paradigm on steroids with memoization is known as “dynamic programming”, or “dp” for short (Cormen et al. 2009, p. 359).
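Continuing the Fibonacci sketch above (my example, not the post's), memoization in R can be done with an environment serving as the look-up table; each sub-problem is then solved at most once:

```r
# Look-up table for solved sub-problems; an environment behaves like a
# mutable hash map in R.
memo <- new.env()

fib_dp <- function(n) {
  key <- as.character(n)
  if (!is.null(memo[[key]])) return(memo[[key]])  # table hit: reuse old solution
  res <- if (n <= 1) n else fib_dp(n - 1) + fib_dp(n - 2)
  memo[[key]] <- res                              # memoize before returning
  res
}

fib_dp(40)  # 102334155, computed with ~n calls instead of ~2^n
```

The naive version makes an exponential number of calls, while the memoized version makes a linear number, since every sub-problem below `n` is solved exactly once and then read from the table.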

Now we will illustrate an application of the dynamic programming approach using a question from the PuzzlOR site – a site which publishes bi-monthly decision support puzzles for applied mathematicians. Since the answers to expired questions are disclosed by the website itself, there is no problem in describing a solution in detail. The question that appeared in October 2014 is called “Fighters”. It involves a quadruel: a kind of duel fought among four participants instead of the classical two.


## How brute is your force?: Three ways to get radicals in batch using R

The brute force approach to solving a problem usually means running through all possible combinations and repeating a certain calculation for each. Until a more elegant solution is devised – for example, finding a closed-form formula – brute forcing is the only tool at hand. Provided that the calculation itself is not infeasibly inefficient and the universe of combinations/possibilities to search through is not infeasibly large, brute forcing may be sufficient. However, in many instances, a brute force algorithm can be enhanced so that it bears more “elegance” and “intelligence” to some extent.

Ways to enhance a brute force algorithm include restricting the set of values to be searched, rewriting the code to use more efficient constructs that execute fewer instructions at the hardware level, or incorporating multicore or GPU parallelism.

In this post, I will illustrate three “brute force” methods to solve a problem, each an enhancement over the previous one, so that the third method yields a performance gain of more than 100x compared to the first. Knowledge of the low-level mechanics of the R interpreter is important in selecting the optimal method. The problem I chose is Problem 124 of Project Euler. Since Project Euler asks participants not to disclose solutions publicly, I will share only portions of the algorithm, in order to emphasize the efficiency gains from choosing the right tools and methods:
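Problem 124 revolves around the radical of n, rad(n): the product of the distinct prime factors of n. As a hedged sketch of the kind of enhancement the post describes – not the post's own code, and the function name and limit are mine – radicals can be computed in batch with a sieve, instead of factoring each number separately:

```r
# Sieve-style batch computation of rad(n) for n = 1..limit.
# For each prime p, multiply the radical of every multiple of p by p,
# so each number accumulates the product of its distinct prime factors.
rad_sieve <- function(limit) {
  rad <- rep(1L, limit)
  for (p in 2:limit) {
    if (rad[p] == 1L) {              # p untouched so far => p is prime
      idx <- seq(p, limit, by = p)   # all multiples of p, vectorized
      rad[idx] <- rad[idx] * p
    }
  }
  rad
}

rad_sieve(10)  # 1 2 3 2 5 6 7 2 3 10
```

A naive loop would factorize each of the `limit` numbers from scratch; the sieve visits each number only once per distinct prime factor and leans on R's vectorized indexing, which is exactly the sort of interpreter-aware rewrite the post is about. (For large limits, integer overflow would need attention; this sketch ignores it.)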