Monday, March 23, 2009

Learning Haskell - Laziness makes you think differently

Haskell is pervasively lazy: lazy by default, unless otherwise forced to be strict. This is a big change from where I am coming. Java, well, has nothing lazy to write home about: a few constructs (short-circuit && and ||, for instance) are lazy by default, but the language does not let you define lazy operations of your own. Scala offers call-by-name semantics along with nice syntactic sugar that allows user-defined control structures to look like built-in ones. Scala also allows immutable values to be annotated with the "lazy" keyword, so that they are evaluated on demand and memoized for subsequent use.

Haskell is about purity, and once the designers decided to be lazy, purity had to be enforced. Of course this has some great consequences ..

Infinite data structures ..

Prelude> take 5 [1..]
[1,2,3,4,5]

or the beautiful fibs to generate all the Fibonacci numbers ..

Prelude> let fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
Prelude> take 10 fibs
[0,1,1,2,3,5,8,13,21,34]


And infinite data structures enable us to write code that is almost as descriptive as its specification in English ..

primes :: [Int] -> [Int]
primes (n:ns) = n : primes (filter (\v -> v `mod` n /= 0) ns)
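
Assuming the definition above is in scope, handing it the infinite list of candidates yields primes on demand (an illustrative GHCi run):

Prelude> take 10 (primes [2..])
[2,3,5,7,11,13,17,19,23,29]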


Or the beautiful call-by-need semantics to process recursive data structures (aka tying the knot) efficiently, in a single pass, for operations that would otherwise need multiple passes in a purely functional language. I talked about the same trick in solving the Josephus Flavius game in an earlier post.
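
The canonical small example of the technique is repMin (my sketch, not from that post): replace every leaf of a tree with the tree's minimum value, in a single pass, by lazily feeding the final answer back into the very traversal that computes it ..

data Tree = Leaf Int | Node Tree Tree deriving Show

-- The rebuilt tree refers to m, the overall minimum, before m is fully
-- computed; laziness ties the knot so that one traversal suffices.
repMin :: Tree -> Tree
repMin t = t'
  where
    (m, t') = go t
    go (Leaf x)   = (x, Leaf m)
    go (Node l r) = let (ml, l') = go l
                        (mr, r') = go r
                    in  (min ml mr, Node l' r')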

In summary, purity of functions and lazy evaluation offer better composition capabilities, including compiler optimizations such as deforestation that get rid of intermediate data structures.
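
For instance, a pipeline like the one below looks as though it builds an intermediate list, but with optimizations turned on GHC's fusion rules can often compile it down to a single loop that allocates no list at all (an illustrative sketch, not from the post):

sumOfSquares :: Int -> Int
sumOfSquares n = sum (map (^2) [1..n])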

But Haskell's purity and lazy evaluation also have their share of warts, particularly for a newcomer ..

In languages like Scala or OCaml that offer optional laziness, whatever I want to be lazy is made explicit. And since purity is not enforced by the compiler, I, the programmer, need to ensure that side effects don't ruin my attempts at deforestation. Nothing happens as an undercurrent, only to rear its ugly head at runtime in the form of non-linear stack growth. Otherwise execution is strict, and stack space stays deterministic, as long as you stick to best practices like keeping your functions tail recursive.

It is not that Haskell's lazy-by-default model is worse, but for a newcomer making his foray into the world of Haskell, it requires a different way of thinking. There is a price of purity that you have to pay: the programming model may not always map to the performance characteristics of execution. Modularity of code may not translate into modularity of the execution model. That may not be bad either, but it takes quite a bit of experience in Haskell to appreciate this orthogonality. I remember Dan Piponi touching upon this subject in one of the related discussions .. can't find it now ;-(.

Now look at this seemingly efficient tail-recursive factorial ..

fact :: Integer -> Integer -> Integer
fact n acc | n < 1     = acc
           | otherwise = fact (n-1) (acc*n)   -- (acc*n) is passed along as an unevaluated thunk
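
Trace a few steps of the expansion (an illustration) and the growing chain of unevaluated thunks in the accumulator becomes obvious:

fact 4 1
= fact 3 (1*4)
= fact 2 ((1*4)*3)
= fact 1 (((1*4)*3)*2)
= fact 0 ((((1*4)*3)*2)*1)
= (((1*4)*3)*2)*1   -- the whole chain is forced only here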


Lazy evaluation builds up thunks that do not get evaluated until they are actually used - which makes tail recursion an ineffective vehicle for space optimization here. You need foldl', the strict version of the fold combinator, as ..

import Data.List (foldl')

fact n = foldl' (*) 1 [1..n]
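
Alternatively (my sketch, not from the post), you can force the accumulator yourself with seq, so that each multiplication is evaluated before the recursive call is made:

fact' :: Integer -> Integer
fact' n = go n 1
  where
    go k acc
      | k < 1     = acc
      | otherwise = let acc' = acc * k
                    in  acc' `seq` go (k - 1) acc'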

Have a look at this exercise by Don Stewart, where he fixes the space leak in the apparently intuitive function for calculating the mean of a list of doubles.

mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)


has to be transformed to

mean :: [Double] -> Double
mean = go 0 0
    where
        go :: Double -> Int -> [Double] -> Double
        go s l []     = s / fromIntegral l
        go s l (x:xs) = go (s+x) (l+1) xs


to ensure the program runs in constant space over lazy lists. The exercise is not trivial to the eye of a beginner, who has to sidestep intuition in favor of a more complex alternative.
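
One more wrinkle worth noting (my addition, not from Don's post): the rewritten go relies on GHC's strictness analysis under optimization; compiled without -O, the accumulators s and l can themselves pile up thunks. Bang patterns make the intended strictness explicit:

{-# LANGUAGE BangPatterns #-}

mean' :: [Double] -> Double
mean' = go 0 0
    where
        go :: Double -> Int -> [Double] -> Double
        go !s !l []     = s / fromIntegral l
        go !s !l (x:xs) = go (s+x) (l+1) xs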

Many experienced Haskellers also struggle with space optimization. The thing to keep in mind is that the compiler is your friend. Unless you are careful enough to check the "core" that GHC produces in its optimization-by-transformation phases, these thunks can grow out of bounds. Space that would have been reclaimed much earlier under strict evaluation stays allocated in the form of thunks under lazy evaluation.
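
If you want to look at that core yourself, GHC will dump the simplifier's output for you (Mean.hs below is just a placeholder file name):

ghc -O2 -ddump-simpl Mean.hs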

6 comments:

Don Stewart said...

I'm not sure the example he cites of mine is the best for illustrating laziness, actually.

After all, the "double-traversal" scenario is problematic under either strict or lazy evaluation. In the strict case, we allocate up front and traverse twice. In the lazy case, we allocate incrementally and traverse twice.

After the refactoring, in the lazy case we can process all the data in constant space, with one traversal, and process data greater than fits in memory (which was the goal).

Cedric said...

It seems to me that laziness makes for great one-liners in blog posts but absolute nightmares to debug.

Do you have experience in this area?

Unknown said...

Hi Don -

Double traversal is a source of inefficiency in both cases, strict or lazy. Admitted. But, as you also mentioned in your post, the problem with lazy evaluation is that the garbage collector cannot deallocate the list after the 'sum' is computed, since 'length' still holds a reference to its head. It takes a different way of thinking to get past this and arrive at the classical recursive solution that processes the list incrementally, computing the sum and the length in a single pass, as you demonstrated in your post.

Thanks.

Unknown said...

Hi Cedric -

I am only learning Haskell; I am no experienced dude on this ;-). Hence I wanted to share my concerns and feelings regarding Haskell's lazy evaluation model. There are pros and cons, and I guess the compiler is the greatest debugger in Haskell. You need to check the *core* generated by ghc to see what tricks laziness is playing on you. It takes a different way of thinking to get accustomed to the lazy evaluation semantics. But there is great and elegant stuff as well, some of which I mentioned in the post. Lots of algorithmic speedups are also possible in some cases through lazy composition, which I plan to cover in a separate post.

Thanks.

John Wiegley said...

On a very related note: http://www.newartisans.com/2009/03/functional-yet-lazy.html

Mark Bradley said...

thinking about the double traversal problem, i got to thinking that there should be a general way of maintaining multiple accumulators while traversing a list.

so i wrote this:
multiFoldl :: [a -> b -> b] -> [b] -> [a] -> [b]
multiFoldl xs orig []       = orig
multiFoldl xs orig (a:args) = apply xs a (multiFoldl xs orig args)
  where
    apply [] a _            = []
    apply _ a []            = []
    apply (fn:fns) a (b:bs) = fn a b : apply fns a bs
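
for example, maintaining a running sum and a running length in one pass (an illustrative ghci run, assuming the definition above is loaded):

Prelude> multiFoldl [\x s -> s + x, \_ l -> l + 1] [0,0] [1,2,3,4]
[10,4]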

there's probably a nicer way to write this using the control libraries but it seems to do the job

i'll probably write a blog post about it sometime (takes me a while sometimes).