Monday, July 30, 2012

Does category theory make you a better programmer?

How much category theory knowledge should a working programmer have? I guess this depends on what kind of language the programmer uses in his daily life. Given the proliferation of functional languages today, specifically typed functional languages (Haskell, Scala etc.) that embed the typed lambda calculus in some form or other, the question looks relevant to me. And apparently to a few others as well. In one of his courses on Category Theory, Graham Hutton mentioned the following points when talking about the usefulness of the theory:

  • Building bridges - exploring relationships between various mathematical objects, e.g., Products and Functions
  • Unifying ideas - abstracting away from unnecessary details to give general definitions and results, e.g., Functors
  • High level language - focusing on how things behave rather than on the details of how they are implemented, e.g., specification vs implementation
  • Type safety - using types to ensure that things are combined only in sensible ways, e.g., (f: A → B, g: B → C) => (g o f: A → C)
  • Equational proofs - performing proofs in a purely equational style of reasoning
Many of the above points relate to what we experience while programming in a functional language today. We use Product and Sum types, we use Functors to abstract our computation, we marry types together to encode domain logic within the structures that we build, and many of us use equational reasoning to optimize algorithms and data structures.

But how much do we need to care about how category theory models these structures, and how that model maps to the ones we use in our programming model?

Let's start with the classical definition of a Category. [Pierce] defines a Category as comprising:

  1. a collection of objects
  2. a collection of arrows (often called morphisms)
  3. operations assigning to each arrow f an object dom f, its domain, and an object cod f, its codomain (for f: A → B, dom f = A and cod f = B)
  4. a composition operator assigning to each pair of arrows f and g, with cod f = dom g, a composite arrow g o f: dom f → cod g, satisfying the following associative law: for any arrows f: A → B, g: B → C, and h: C → D, h o (g o f) = (h o g) o f
  5. for each object A, an identity arrow idA: A → A satisfying the following identity law: for any arrow f: A → B, idB o f = f and f o idA = f

Translating to Scala


Ok, let's see how this definition maps to your daily programming chores. If we consider Haskell, there's a category of Haskell types called Hask, whose collection of objects is the Haskell types. For this post I will use Scala, and for all practical purposes assume that we use Scala's pure functional capabilities. In our model the Scala types form the objects of our category.

Define any function in Scala from type A to type B (A => B) and you have an example of a morphism. For every function we have a domain and a codomain. In our example, val foo: A => B = //.., we have the type A as the domain and the type B as the codomain.

Of course we can define composition of arrows or functions in Scala, as can be demonstrated with the following REPL session ..

scala> val f: Int => String = _.toString
f: Int => String = <function1>

scala> val g: String => Int = _.length
g: String => Int = <function1>

scala> f compose g
res23: String => String = <function1>

and it's very easy to verify that the composition satisfies the associative law.
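
For instance, reusing f and g from above and adding a third function h, we can spot-check associativity pointwise on a few sample values (a quick check, of course, not a proof) ..

scala> val h: Int => Boolean = _ % 2 == 0
h: Int => Boolean = <function1>

scala> List(0, 1, 42, 99).forall(x => ((h compose g) compose f)(x) == (h compose (g compose f))(x))
res1: Boolean = true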

And now the identity law, which is, of course, a specialized version of composition. Let's define some functions and play around with the identity in the REPL ..

scala> val foo: Int => String = _.toString
foo: Int => String = <function1>

scala> val idInt: Int => Int = identity(_: Int)
idInt: Int => Int = <function1>

scala> val idString: String => String = identity(_: String)
idString: String => String = <function1>

scala> idString compose foo
res24: Int => String = <function1>

scala> foo compose idInt
res25: Int => String = <function1>

Ok .. so we have the identity law of the Category verified above.
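
In fact the whole definition fits within a small Scala abstraction. Here's a minimal sketch (the trait and its names are mine, not a standard library API) that encodes a Category as a typeclass, with ordinary Scala functions as one instance. Note that the associativity and identity laws are not enforced by the types; instances are expected to honor them, exactly as the definition demands ..

trait Category[Arr[_, _]] {
  def id[A]: Arr[A, A]                                           // identity arrow for every object (type) A
  def compose[A, B, C](g: Arr[B, C], f: Arr[A, B]): Arr[A, C]    // composition of arrows
}

// Scala types and ordinary functions form a category - the analogue of Hask
object Function1Category extends Category[Function1] {
  def id[A]: A => A = identity
  def compose[A, B, C](g: B => C, f: A => B): A => C = g compose f
}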

Category theory & programming languages


Now that we understand the most basic correspondence between category theory and programming language theory, it's time to dig a bit deeper into some of the implicit correspondences. We will definitely come back to the more explicit ones very soon when we talk about products, co-products, functors and natural transformations.

Do you really think that understanding category theory helps you understand programming language theory better? It all depends on how much of the *theory* you really care about. If you are doing enterprise software development and/or don't really care to learn a language outside your comfort zone, then possibly you will come back with a resounding *no* as the answer. But category theory is a subject that provides a uniform model of set theory, algebra, logic and computation, and many of its concepts map quite nicely to structures in programming (particularly in a language that offers a decent type system and preferably has some underpinnings of the typed lambda calculus).

Categorical reasoning helps you reason about your programs, if they are written in a typed functional language like Haskell or Scala. Some of the basic structures that you encounter in everyday programming (like Product types or Sum types) have their correspondences in category theory. Analyzing them from a category-theoretic point of view often highlights properties that we tend to overlook (or take for granted) while programming. And this is not coincidental. It has been shown that there's indeed a strong correspondence between the typed lambda calculus and cartesian closed categories. And Haskell is essentially an encoding of the typed lambda calculus.

Here's an example of how we can explain the properties of a data type in terms of its categorical model. Consider Products, and for simplicity let's take the example of the cartesian product from the category of Sets. A cartesian product of two sets A and B is defined by:

A X B = {(a, b) | a ∈ A and b ∈ B}

So we have the tuples as the objects in the category. What could be the relevant morphisms? In the case of products, the applicable arrows (or morphisms) are the projection functions π1: A X B → A and π2: A X B → B. Now if we draw a category diagram where C is an object with two functions f: C → A and g: C → B, then the unique arrow into the product is represented by <f, g>: C → A X B and is defined as <f, g>(x) = (f(x), g(x)). Here's the diagram corresponding to the above category ..


and according to the category theory definition of a Product, the above diagram commutes. Note, by commuting we mean that for every pair of vertices X and Y, all paths in the diagram from X to Y are equal, in the sense that each path forms an arrow and these arrows are equal in the category. So here commutativity of the diagram gives
π1 o <f, g> = f and
π2 o <f, g> = g.

Let's now define each of the functions above in Scala and see how the commutativity of the above diagram maps to the programming domain. As programmers we use the projection functions (_1 and _2 in Scala's Tuple2, or fst and snd on Haskell pairs) on a regular basis. The above category diagram, as we will see, gives some additional insights into the abstraction and helps us understand some of the mathematical properties of how a cartesian product of Sets translates to the composition of functions in the programming model.

scala> val ip = (10, "debasish")
ip: (Int, java.lang.String) = (10,debasish)

scala> val pi1: ((Int, String)) => Int = (p => p._1)
pi1: ((Int, String)) => Int = <function1>

scala> val pi2: ((Int, String)) => String = (p => p._2)
pi2: ((Int, String)) => String = <function1>

scala> val f: Int => Int = (_ * 2)
f: Int => Int = <function1>

scala> val g: Int => String = _.toString
g: Int => String = <function1>

scala> val `<f, g>`: Int => (Int, String) = (x => (f(x), g(x)))
<f, g>: Int => (Int, String) = <function1>

scala> pi1 compose `<f, g>`
res26: Int => Int = <function1>

scala> pi2 compose `<f, g>`
res27: Int => String = <function1>

So, as we claim from the commutativity of the diagram, we see that pi1 compose `<f, g>` is typewise equal to f, and pi2 compose `<f, g>` is typewise equal to g. Now the definition of a Product in Category Theory says that the morphism from C to A X B is unique and that A X B is defined up to isomorphism. The uniqueness is indicated by the symbol ! in the diagram. I am going to skip the proof, since it's quite trivial and follows from the definition of what a Product of 2 objects means. This makes sense intuitively in the programming model as well: we can have one unique type consisting of the Pair of A and B.
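
We can also spot-check the claimed equalities on sample values in the REPL. Again, this verifies them pointwise on a few inputs, not as a general proof ..

scala> (pi1 compose `<f, g>`)(42) == f(42)
res28: Boolean = true

scala> (pi2 compose `<f, g>`)(42) == g(42)
res29: Boolean = true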

Now for some differences in semantics between the categorical model and the programming model. If you consider an eager (or eager-by-default) language like Scala, the Product type fails miserably in the presence of the Bottom data type (_|_), represented by Nothing. Even in Haskell, the non-strict language, it fails when we consider that a Product type needs to satisfy the equation (fst(p), snd(p)) == p and we apply the Bottom (_|_) for p. So the programming model remains true only when we eliminate the Bottom type from the equation. Have a look at this comment from Dan Doel in James Iry's blog post on sum and product types.

This is an instance where a programmer can benefit from knowledge of category theory. It's actually a bidirectional win-win: knowledge of category theory helps in understanding the data types of real life programming, and vice versa.

Interface driven modeling


One other aspect where category theory maps very closely to the programming model is its focus on the arrows rather than the objects. This corresponds to the notion of an interface in programming. Category theory typically "abstracts away from elements, treating objects as black boxes with unexamined internal structure and focusing attention on the properties of arrows between objects" [Pierce]. In programming, too, we encourage interface driven modeling, where the implementation is typically abstracted away from the client. When we talk about objects up to isomorphism, we focus solely on the arrows rather than on what the objects are made of. Learning programming and category theory in an iterative manner serves to enrich your knowledge of both. If you know what a Functor means in category theory, then when you are designing something that looks like a Functor, you can immediately make it generic enough so that it composes seamlessly with all the other functors out there in the world.
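
As a small illustration, here's a minimal sketch of a Functor typeclass in Scala (the names are mine, not a specific library's API). The interface talks only about behavior - the ability to map - and says nothing about the representation of F ..

trait Functor[F[_]] {
  // the only thing a client can know about F: it lifts A => B to F[A] => F[B]
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

// one instance among many; code written against Functor works uniformly for all of them
val listFunctor = new Functor[List] {
  def map[A, B](fa: List[A])(f: A => B): List[B] = fa map f
}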

Thinking generically


Category theory talks about objects and morphisms and how arrows compose. A special kind of morphism is the identity morphism, which maps to the identity function in programming. It plays the same role that 0 plays in addition or 1 plays in multiplication, and so on. Category theory generalizes this concept by using the same vocabulary (morphism) to denote both the arrows that do some real work and those that don't. And it sets this up nicely by saying that for every object X, there exists a morphism idX: X → X called the identity morphism on X, such that for every morphism f: A → B we have idB o f = f = f o idA. This (the concept of a generic zero) has been a great lesson, at least for me, in identifying structures like monoids in my programming today.
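
The monoid connection is easy to see in code. Here's a hypothetical Monoid trait (a sketch, not any particular library's definition) where zero plays exactly the role of the generic identity ..

trait Monoid[A] {
  def zero: A                    // identity: append(zero, a) == a == append(a, zero)
  def append(a1: A, a2: A): A    // an associative binary operation
}

val intAddition = new Monoid[Int] {
  val zero = 0                   // 0 is the identity for addition
  def append(a1: Int, a2: Int) = a1 + a2
}

val intMultiplication = new Monoid[Int] {
  val zero = 1                   // 1 is the identity for multiplication
  def append(a1: Int, a2: Int) = a1 * a2
}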

Duality


In the programming model, many dualities are not explicit. Category theory has an explicit way of teaching you the dualities, in the form of category diagrams. Consider the example of the Sum type (also known as the Coproduct) and the Product type. We have an abundance of these in languages like Scala and Haskell, but programmers, particularly people coming from the imperative programming world, are often not aware of this duality. But have a look at the category diagram of the sum type A + B for objects A and B ..


It's the same diagram as for the Product, only with the arrows reversed. Indeed a Sum type A + B is the categorical dual of the Product type A X B. In Scala we model it as a union type like Either, where the value of the sum type comes either from the left or the right. Studying the category diagram and deriving the properties that come out of its commutativity helps us understand a lot of the theory behind the design of the data type.
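
The duality shows up directly in code. The mediating arrow of the Product was <f, g>: C → A X B; its dual for the Sum is the copairing [f, g]: A + B → C, which Scala gives us as Either's fold. Here's a quick REPL sketch, again spot-checked on sample values ..

scala> val f: Int => String = _.toString
f: Int => String = <function1>

scala> val g: Boolean => String = b => if (b) "yes" else "no"
g: Boolean => String = <function1>

scala> val copair: Either[Int, Boolean] => String = _.fold(f, g)
copair: Either[Int,Boolean] => String = <function1>

scala> copair(Left(42)) == f(42)
res30: Boolean = true

scala> copair(Right(true)) == g(true)
res31: Boolean = true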

In the next part of this discussion I will explore some other structures, like Functors and Natural Transformations, and how they map to important concepts in programming that we use on a daily basis. So far, my feeling has been that if you use a typed functional language, a basic knowledge of category theory helps a lot in designing generic abstractions and making them compose with related ones out there in the world.

Monday, July 23, 2012

Property based testing for domain models


One of the challenges that we face building a non trivial domain model is writing proper tests that verify the domain rules that the model implements. The domain rules can be quite complex and may have a number of edge cases that the developer may fail to take care of. When you use an implementation language that supports a decent type system, many of the rules and invariants can be encoded statically within the type system itself, making it impossible for the programmer to write any code that violates those constraints. But you can only encode some of the domain rules as static type based constraints - you need to complement them with a testing procedure that validates the semantic behavior of the model.

If you are doing testing manually, you are doing it wrong. And if you use xUnit based testing procedures, there's enough scope for you to move up to better frameworks that offer true automation, not only with respect to executing your tests, but with respect to generating the data as well.

Frameworks like QuickCheck in Haskell or ScalaCheck in Scala allow you to write property specifications that the model should satisfy and then generate data to drive the evaluation of test executions for correctness as well as completeness. Here's what Bryan O'Sullivan et al. mention when discussing property based testing in their book Real World Haskell ..


Property-based testing encourages a high level approach to testing in the form of abstract invariants functions should satisfy universally, with the actual test data generated for the programmer by the testing library. In this way code can be hammered with thousands of tests that would be infeasible to write by hand, often uncovering subtle corner cases that wouldn't be found otherwise.


I have been doing lots of domain modeling on a securities trading system. The model is quite a complex one and, like most of us, I started with xUnit based testing to verify some of the invariants and constraints that the system needs to honor. But very quickly I found that properties offer a more succinct way to abstract the constraints and invariants of my model, and a tool like ScalaCheck makes verifying them much simpler once I give it a proper data generator as input. Consider the simple property in the following example ..

A trade needs to be enriched in its lifecycle with tax/fee values and other attributes. We can then compute its net value, which should be a positive numeric quantity.


property("Enrichment of a trade should result in netvalue > 0") {
  forAll((a: Trade) =>
    enrichTrade(a).netAmount.get should be > (BigDecimal(0)))
}
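
For context, properties like this one assume an enclosing test harness along the lines of the following sketch - a ScalaTest suite mixed with ScalaCheck backed property checks (the suite name here is hypothetical) ..

import org.scalatest.PropSpec
import org.scalatest.prop.GeneratorDrivenPropertyChecks
import org.scalatest.matchers.ShouldMatchers

// hypothetical enclosing suite for the properties shown in this post
class TradeSpec extends PropSpec with GeneratorDrivenPropertyChecks with ShouldMatchers {
  // property(...) { forAll(...) } definitions go here
}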



Here I don't need to construct concrete data by hand (in fact this is what makes xUnit based frameworks sophisticated manual testing systems - they automate only the execution part of your testing). Instead what I supply is a trade generator, which gives ScalaCheck enough information to generate loads of trades. A typical, simplified version of the trade generator is the following ..


import org.scalacheck.{Arbitrary, Gen}
import org.scalacheck.Arbitrary.arbitrary

implicit lazy val arbTrade: Arbitrary[Trade] =
  Arbitrary {
    for {
      a <- Gen.oneOf("acc-01", "acc-02", "acc-03", "acc-04")             // account
      i <- Gen.oneOf("ins-01", "ins-02", "ins-03", "ins-04")             // instrument
      r <- Gen.oneOf("r-001", "r-002", "r-003")                          // reference
      m <- arbitrary[Market]                                             // needs an Arbitrary[Market] in scope
      u <- Gen.oneOf(BigDecimal(1.5), BigDecimal(2), BigDecimal(10))     // unit price
      q <- Gen.oneOf(BigDecimal(100), BigDecimal(200), BigDecimal(300))  // quantity
    } yield Trade(a, i, r, m, u, q)
  }



I can write more sophisticated properties and ask the system to check whether the model satisfies them or not ..


property("Enrichment should mean netValue equals principal + taxes") {
  forAll((a: Trade) => {
    val et = enrichTrade(a)
    et.netAmount should equal (et.taxFees.map(_.foldLeft(principal(et))((a, b) => a + b._2)))
  })
}



This is a direct encoding of the business rule that the domain model needs to satisfy. And using properties we can encode the verification in a more declarative fashion, without having to bother about how to generate concrete data for all the test cases.

Crosscutting Property Verification

When we talk about properties of the model, we can go all the way and write specifications at all levels of granularity. A property can be local, limited to one module, or it can be a system wide invariant that holds across multiple components. What I want to say is that property based testing can be very effective in system testing (or integration testing) as well.

In our system, Client Orders come in and get executed in the stock exchanges, resulting in broker side executions, which then get allocated to client accounts as client trades. A succinct implementation of this entire flow can be the following ..


def tradeGeneration(market: Market, broker: Account, clientAccounts: List[Account]) =
  // client orders           executed at market by broker        & allocated to client accounts
  kleisli(clientOrders) >=> kleisli(execute(market)(broker)) >=> kleisli(allocate(clientAccounts))
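
For readers unfamiliar with the kleisli combinator: it lifts an effectful function A => M[B] so that >=> can compose such steps end to end. Here's a minimal sketch of the underlying idea, using Option as the monad purely for illustration ..

// conceptually, kleisli composition is flatMap-driven chaining of effectful functions
def kleisliCompose[A, B, C](f: A => Option[B], g: B => Option[C]): A => Option[C] =
  a => f(a) flatMap g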


We can use property based specification to test this entire lifecycle using ScalaCheck. Let's check for one of the invariants in this entire process -

Invariant: "The total quantity of the orders will be equal to the total quantity traded"

and here's the specification of the property in ScalaCheck ..


property("Client trade allocation in the trade pipeline should maintain quantity invariant") {
  forAll { (clientOrders: List[ClientOrder], args: (Market, Account, List[Account])) =>
    whenever (clientOrders.size > 0 && args._2.size > 0 && args._3.size > 0) {
      val trades = tradeGeneration(args._1, args._2, args._3)(clientOrders)
      trades.size should be > 0
      (trades.sequence[({type λ[a]=Validation[NonEmptyList[String],a]})#λ, Trade]) match {
        case Success(l) => {
          val tradeQuantity = l.map(_.quantity).sum
          val orderQuantity = fromClientOrders(clientOrders).map(_.items).flatten.map(_.qty).sum
          tradeQuantity should equal(orderQuantity)
        }
        case _ => fail("should get a list of size > 0")
      }
    }
  }
}

The properties are the domain rules, and these can be prepared by the domain experts. I am a firm believer in the collaborative involvement of domain experts in building the model. I have advocated the use of domain specific languages as a thin linguistic abstraction on top of a domain model that should be built iteratively with the domain experts. Property based testing is the third piece that completes the solution framework of developing complete domain models, and it strengthens this view of increased involvement of the domain people in building the software. At the end of the day, leave everything to the respective experts: the domain people specify properties and invariants, the developer implements the specification, and the underlying test framework does the heavy lifting of generating enough data and automating the execution process.