The nice thing about having a “thoughts” section is that unlike a technical blog, I don’t feel pressured to make my posts particularly technically impressive. I can just write about random stuff that popped up in my life that’s kind of short and silly.
I’ve always had a tenuous working relationship with monads; I know just enough to use them for basic things and get through the monad homework for a little type theory survey course that I took, but I wouldn’t say that I have any great understanding. So it’s always pleasant when I find that I can sprinkle a little bit of monadic thinking into my code. I am under the impression that they make one look terribly clever.
Anyway, wanting some relaxation after a somewhat taxing final exam, I decided to write a tiny bit of Haskell to solve a problem.1 The problem is the $d$-sum problem:

> Fix an integer $d$. Given a list of $n$ integers, are there $d$ (not necessarily distinct) elements that sum to $0$?
Assuming the integers are bounded by $\mathrm{poly}(n)$ in magnitude, there’s a very obvious $\tilde{O}(n^d)$ strategy (the $\log$ factors may arise from arithmetic operations), which is to just try every possible combination of $d$ elements.2 Here’s a short snippet which does the slightly more general task of seeing if any $d$ elements sum to a specified target value:
```haskell
{- computes all combinations of k elements of a list, with replacement -}
combinations :: (Integral t) => [a] -> t -> [[a]]
combinations list 0 = []
combinations list 1 = map return list
combinations list k =
  combinations list (k - 1) >>= \elt -> map (:elt) list

{- solves the d-sum problem inefficiently -}
dSum :: (Integral t) => [t] -> t -> t -> Bool
dSum list d target =
  any (== target) $ map sum $ combinations list d
```
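For example: two elements of [1,2,3] can reach a target of 4, but no three elements of [1,5,9] can (every such sum is 3 mod 4):

```
λ> dSum [1,2,3] 2 4
True
λ> dSum [1,5,9] 3 4
False
```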
I think the use of the `return` and `>>=` operators on the list monad makes the code a little prettier, and at least on toy problems, Haskell is always a joy to read. (Though of course, `(:[])` is a little shorter than `return` for lists, but I like the longer word.) But then I saw this and thought: hey, wouldn’t it be nice to make it even more monadic? One of the best things about Haskell is that it’s flexible enough to allow you to write very elegant code. One of the worst things about Haskell is that it’s also flexible enough to allow you to write nearly unreadable code. Here’s a more generalized variant of the same `combinations` function:
```haskell
import Control.Monad
import Control.Applicative
import Data.Functor

combinations' ::
  (Integral t,
   Monad m,
   Alternative m,
   Monoid (m (m a))) =>
  m a -> t -> m (m a)
combinations' list 0 = mempty
combinations' list 1 = fmap return list
combinations' list k =
  combinations' list (k - 1) >>= (list <&>) . (flip $ (<|>) . return)
```
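If the point-free tail is hard to parse, it unfolds to the version below; `combinations''` is just a throwaway name for the same function with the lambdas written out:

```haskell
combinations'' ::
  (Integral t, Monad m, Alternative m, Monoid (m (m a))) =>
  m a -> t -> m (m a)
combinations'' list 0 = mempty
combinations'' list 1 = fmap return list
combinations'' list k =
  combinations'' list (k - 1)
    >>= \elt -> list <&> \x -> return x <|> elt  -- for lists, return x <|> elt is x : elt
```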
These behave identically, as one would expect, when you give them a real list:
```
λ> combinations' [1,2,3] 3 == combinations [1,2,3] 3
True
λ> (length $ combinations (take 5 [1..]) 4) == 5^4
True
```
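Nothing in the type mentions lists, though, so other structures should work too. For instance, `Seq` from the containers package is a `Monad`, an `Alternative`, and a `Monoid`, so converting its output back to lists ought to give the same answers:

```
λ> import qualified Data.Sequence as Seq
λ> import Data.Foldable (toList)
λ> map toList (toList (combinations' (Seq.fromList [1,2,3]) 2)) == combinations [1,2,3] 2
True
```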
I spent way too much time looking up the right `<&>`, `<|>`, `<*>`, etc. operators to do what I wanted.
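For reference, their types (all from base):

```haskell
(<&>) :: Functor f     => f a -> (a -> b) -> f b     -- flipped fmap, from Data.Functor
(<|>) :: Alternative f => f a -> f a -> f a          -- choice; (++) for lists, from Control.Applicative
(<*>) :: Applicative f => f (a -> b) -> f a -> f b   -- application, from Control.Applicative
```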
1. In fact, coming up with (though not implementing) an efficient solution was part of a homework problem that I had. I have a bad history with implementing solutions to homework problems; once, Evan and I were working in Gates at 1:00 am, all because I decided, for fun, to try to actually implement a certain dynamic programming algorithm for 15-251, and then I couldn’t figure out a bug in my code. I eventually realized what the bug was several months later, when working on a similar problem for another class.↩︎
2. In fact, there is a much better solution. One part of the aforementioned homework problem was to find such a solution, and the second was to prove that, assuming the exponential time hypothesis for 3-SAT, there is an $n^{\Omega(d)}$ lower bound for any $d$. Unfortunately, since this is a homework problem, I won’t be able to share the more efficient solution, but you can try it out on your own if you’d like.↩︎