Functional Programming in JavaScript

Introduction

Recently I went through the fascinating article Professor Frisby's mostly adequate guide to Functional Programming, and I would like to summarise my understanding in this post.

f(x)

In imperative programming, you get things done by giving the computer a sequence of tasks and then it executes them. While executing them, it can change state.  In purely functional programming you don’t tell the computer what to do as such but rather you tell it what stuff is.

// Imperative

var original = [1,2,3,4,5]
var squared = []

for(var i = 0; i < original.length; i++) {  
  var squaredNumber = original[i] * original[i]  
  squared.push(squaredNumber) 
} 

console.log(squared) //=> [1, 4, 9, 16, 25]

// Functional

var original = [1,2,3,4,5]

var squared = original.map(n => n * n);

console.log(squared) //=> [1, 4, 9, 16, 25]

Functions are first-class

In JavaScript, functions are "first class": they are values just like any other, such as a Number or a String, and can be stored in variables, passed as arguments, and returned from other functions.
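
A small sketch of my own to illustrate:

// Functions can be stored in variables...
var greet = function(name) {
  return 'Hello, ' + name;
};

// ...passed as arguments to other functions...
var shout = function(fn, name) {
  return fn(name).toUpperCase();
};

// ...and returned from functions.
var makeGreeter = function(greeting) {
  return function(name) {
    return greeting + ', ' + name;
  };
};

console.log(shout(greet, 'FP'));         // 'HELLO, FP'
console.log(makeGreeter('Hi')('there')); // 'Hi, there'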

Functions are better if they are pure

A pure function is a function that, given the same input, 
will always return the same output and does not have any 
observable side effect.

Side effects include, but are not limited to:

Changing the file system
Inserting/Updating/Deleting a record into a database
Making a http call
Mutations
Printing to the screen/logging
Obtaining User Input
Querying the DOM
Accessing system state
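
Coming back to purity, the classic slice versus splice comparison makes the difference concrete (a small sketch; both are built-in Array methods):

// Impure: splice mutates its input array (a side effect), so repeated
// calls with the "same" input give different results.
var xs = [1, 2, 3, 4, 5];
xs.splice(0, 3); // => [1, 2, 3], and xs is now [4, 5]
xs.splice(0, 3); // => [4, 5], and xs is now []

// Pure: slice returns a new array and leaves its input untouched,
// so the same input always yields the same output.
var ys = [1, 2, 3, 4, 5];
ys.slice(0, 3); // => [1, 2, 3]
ys.slice(0, 3); // => [1, 2, 3], and ys is still [1, 2, 3, 4, 5]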

8th grade math

From mathisfun.com:

A function is a special relationship between values: 
Each of its input values gives back exactly one output value.

Take a function which calculates square

function square(x) {
  return x * x;
}

If a function is just a special relationship between values, as mentioned above, we can visualize the function as a simple table:

[Table: each input x maps to exactly one output x * x]

There's no need for implementation details if the input dictates the output. Pure functions are mathematical functions and they're what functional programming is all about.

A pure function is:

  • Cacheable (see the memoize sketch below)
  • Self Documenting
  • Testable
  • Reasonable (Referentially transparent: a spot of code is referentially transparent when it can be substituted for its evaluated value without changing the behavior of the program.)
  • Can be made to execute in parallel
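
For instance, because a pure function's output depends only on its input, we can cache results keyed by the input. A minimal memoize sketch (illustrative; it assumes arguments that JSON.stringify can serialize):

var memoize = function(f) {
  var cache = {};
  return function() {
    var key = JSON.stringify(arguments);
    // Compute once per distinct input; later calls hit the cache.
    cache[key] = cache[key] || f.apply(null, arguments);
    return cache[key];
  };
};

var squareNumber = memoize(function(x) { return x * x; });
squareNumber(4); // => 16
squareNumber(4); // => 16, served from the cache this time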

Curried Functions

The concept of curried functions is that you can call a function with fewer arguments than it expects; it returns a function that takes the remaining arguments.

For example, a curried add function can be:

var add = a => b => {
    return a + b;
}

add(1)(2) // Result: 3

As explained earlier, a function is a special relationship between values: each of its input values gives back exactly one output value. Using curried functions, we can make new, useful functions on the fly simply by passing in a few arguments, and as a bonus we retain the mathematical definition of each input mapping to exactly one output despite there being multiple arguments. This is the reason why every function in Haskell officially takes only one parameter; functions that appear to accept several parameters are curried functions.
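
For example, new functions fall out of add simply by supplying the first argument (a small sketch of my own):

var add = a => b => a + b;

// New, useful functions made on the fly:
var increment = add(1);
var addTen = add(10);

increment(2); // => 3
addTen(2);    // => 12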

Composition – Holy Grail

compose function can be defined as:

var compose = function(f, g) {
  return function(x) {
    return f(g(x));
  };
};

f and g are functions and x is the value being piped through them.

Functions are the most important building block in FP. Composition allows us to combine these smaller building blocks into larger programs. If you look at the definition above, compose takes 2 functions f and g and creates a brand new function which takes a value x, calls g with it, passes the result to f, and returns the result.

Functional composition is associative.

// associativity

var associative = 
compose(f, compose(g, h)) == compose(compose(f, g), h);

// true

This means that it doesn't matter how we group our calls to compose; the result will be the same. This allows us to write a variadic compose (which is implemented in libraries like lodash, ramda, etc.).
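
A minimal variadic compose can be sketched with reduceRight (illustrative; the lodash/ramda implementations are more involved):

var composeAll = function(...fns) {
  return function(x) {
    // Apply right to left, exactly like the nested calls f(g(h(x))).
    return fns.reduceRight(function(acc, fn) {
      return fn(acc);
    }, x);
  };
};

var exclaim = s => s + '!';
var toUpper = s => s.toUpperCase();
composeAll(exclaim, toUpper)('compose'); // => 'COMPOSE!'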

The best analogy for the power of composition is UNIX pipes. Two of the important points in the Unix philosophy are:

  1. Write programs that do one thing and do it well.
  2. Write programs to work together.

For example:

find - walk a file hierarchy
cat - concatenate and print files
grep - file pattern searcher
xargs - construct argument list(s) and execute utility

Each program mentioned above does exactly one job and does it very well.
But when we combine/compose them together we get powerful programs.

find . -name "fp.txt" | xargs cat | grep "FP"

Find a file called "fp.txt" searching from the current directory
and pipe the contents to cat through xargs and then pipe the 
contents to grep to search for the text "FP"

In FP, we write pure functions that do one thing and do it well. We use composition to combine/compose functions so that they work together.

Hindley-Milner

In the functional world, it won't be long before we find ourselves knee deep in type signatures. Types are the meta language that enables people from all different backgrounds to communicate succinctly and effectively. For the most part, they're written with a system called "Hindley-Milner".

Type signatures for the below functions are as follows:

:t capitalize
capitalize :: String -> String

:t strLength
strLength :: String -> Number

:t head
head :: [a] -> a

:t reverse
reverse :: [a] -> [a]

:t sort
sort :: Ord a => [a] -> [a]

:t foldl
foldl :: Foldable t => (b -> a -> b) -> b -> t a -> b

Once a type variable is introduced, there emerges a curious property called parametricity. This property states that a function will act on all types in a uniform manner, which massively narrows the possible behavior of the function because of its polymorphic type.
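
In plain JavaScript these signatures are usually written as comments above the function. A small sketch of my own, which also hints at parametricity: head knows nothing about the element type a, so about the only sensible thing it can do is hand back an element of the list.

// capitalize :: String -> String
var capitalize = function(s) {
  return s.charAt(0).toUpperCase() + s.slice(1);
};

// head :: [a] -> a
// Parametricity: with a polymorphic a, head cannot inspect or
// transform the element, only return one.
var head = function(xs) {
  return xs[0];
};

capitalize('functional'); // => 'Functional'
head([1, 2, 3]);          // => 1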

Container/Box/Computational Context

We have seen how to apply functions to a value.

function addOne(x) {
  return x + 1;
}

var r = addOne(10) // r = 11

We will extend this idea by saying that any value can be in a context. For now we can think of this as a CONTAINER or BOX or COMPUTATIONAL CONTEXT. 

You can create this type with the following definition:

var Container = function(x) {
  this.__value = x;
}

Container.of = function(x) { return new Container(x); };

We will take a simple type called Maybe which can be in 2 different states: one state containing a value and the other containing nothing. In Haskell, this type is represented as

data Maybe a = Nothing | Just a

In Scala, this type is called Option, which will be in 2 different states: Some and None.

In JavaScript, in the example below we will just define a Maybe class which encapsulates both states. In practice, we should mirror Haskell or Scala and have separate type constructors for each of the states.

var Maybe = function(x) {
  this.__value = x;
};

Maybe.of = function(x) {
  return new Maybe(x);
};

Maybe.prototype.isNothing = function() {
  return (this.__value === null || this.__value === undefined);
};

Maybe.of("JS rocks!!"); // State containing a value
Maybe.of(null); // State containing null

Functor

The definition of functor:

map :: Functor f => (a -> b) -> f a -> f b

From the signature, we can see that a Functor defines a map method, and its implementors must provide an implementation of it. The map method allows us to do a data transformation of the value that is contained in the computational context. Another very important property of the map function is that it preserves structure.

A Functor is a type that implements map and obeys some laws

Maybe is a Functor and it provides a map implementation.

Maybe.prototype.map = function(f) {
  return this.isNothing() ? Maybe.of(null) : Maybe.of(f(this.__value));
};

// match is assumed to be a curried helper, e.g. match = curry((what, str) => str.match(what))
Maybe.of('Mayakumar').map(match(/a/ig));
// => Maybe(['a', 'a', 'a'])

Functor Laws

// identity
map(id) === id;

// composition
compose(map(f), map(g)) === map(compose(f, g));

Diagrammatically, what happens under the hood of Functor's map method can be shown as below:

[Figure: Functor map]

Monad

Pointy Functor Factory

The of method places values in what's called a default minimal context. It is part of an important interface called Pointed.

A pointed functor is a functor with an of method
Maybe.of(100).map(add(1));
// Maybe(101)

When the mapped function has a signature a => M[b], we get nested container types. To get to the value, we would have to map as many times as there are wrapping containers, which is not great from the caller's perspective.

We somehow need to remove the extra container/box/computational context, going from M[M[a]] to M[a]:

var mmo = Maybe.of(Maybe.of('value'));
// Maybe(Maybe('value'))

mmo.join();
// Maybe('value')
Monads are pointed functors that can flatten

Any functor which defines a join method, has an of method, and obeys a few laws is a monad. Defining join is not too difficult so let’s do so for Maybe:

Maybe.prototype.join = function() {
  return this.isNothing() ? Maybe.of(null) : this.__value;
}

We can call join right after map which can be abstracted in a function called chain.

//  chain :: Monad m => (a -> m b) -> m a -> m b
var chain = curry(function(f, m){
  return m.map(f).join(); // or compose(join, map(f))(m)
});

For Maybe:

Maybe.prototype.chain = function(f) {
  return this.isNothing() ? Maybe.of(null) : this.map(f).join();
}

chain nests effects and we can capture both sequence and variable assignment in a purely functional way.

For eg:

var result = Maybe.of(3).chain(a => {
   return Maybe.of(30).chain(b => {
      return Maybe.of(300).chain(c => {
        return Maybe.of(3000).map(d => a + b + c + d);
      });
   })
});

// => Maybe(3333)

Monad Laws

// associativity
compose(join, map(join)) == compose(join, join);

// identity for all (M a)
compose(join, of) === compose(join, map(of)) === id

var mcompose = function(f, g) {
  return compose(chain(f), g);
};

// left identity
mcompose(M, f) == f;

// right identity
mcompose(f, M) == f;

// associativity
mcompose(mcompose(f, g), h) === mcompose(f, mcompose(g, h));

Diagrammatically, what happens under the hood of Monad's chain method can be shown as below:

// half :: Number -> Maybe Number
// Returns half of x wrapped in a Maybe when x is even, Nothing otherwise.
function half(x) {
   if (x % 2 === 0) {
      return Maybe.of(x / 2);
    } else {
      return Maybe.of(null);
    }
}

[Figure: Monad chain]

Applicatives

Applicatives provide the ability to apply functors to each other.  We will take an example to understand better:

var add = a => b => {
 return a + b;
}

add(Maybe.of(2), Maybe.of(3));
// Not possible

var maybe_of_add_2 = map(add, Maybe.of(2));
// Maybe(add(2))

We have ourselves a Maybe with a partially applied function inside. More specifically, we have a Maybe(add(2)) and we’d like to apply its add(2) to the 3 in Maybe(3) to complete the call. In other words, we’d like to apply one functor to another.  We can achieve this using chain and map functions as defined below:

Maybe.of(2).chain(function(two) {
 return Maybe.of(3).map(add(two));
});

The issue here is that we are stuck in the sequential world of monads. We have two strong, independent values, and it should be unnecessary to delay the creation of Maybe(3) merely to satisfy the monad's sequential demands. Applicatives provide the functionality we need to apply functions within a computational context to values in a computational context when the values are independent, so there is no need to sequence them. A good example of where Applicatives are useful is running independent asynchronous computations in parallel and applying their results to a function contained in a functor.

Maybe.prototype.ap = function(other_container) {
 return this.isNothing() ? this : other_container.map(this.__value);
}

Maybe.of(add).ap(Maybe.of(2)).ap(Maybe.of(3));
// Maybe 5
An applicative functor is a pointed functor with an ap method

Applicative Laws

// identity
A.of(id).ap(v) == v

// homomorphism
A.of(f).ap(A.of(x)) == A.of(f(x))

// interchange
v.ap(A.of(x)) == A.of(function(f) { return f(x) }).ap(v)

// composition
A.of(compose).ap(u).ap(v).ap(w) == u.ap(v.ap(w));

So a good use case for applicatives is when one has multiple functor arguments. They give us the ability to apply functions to arguments all within the functor world. Though we could already do so with monads, we should prefer applicative functors when we aren’t in need of monadic specific functionality.
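
We can capture that pattern in a small lift-style helper (a sketch of my own; libraries such as ramda expose similar lift helpers), reusing the Maybe and the curried add defined above:

var liftA2 = function(f, m1, m2) {
  // Map the curried function over the first functor, then apply the
  // partially applied result to the second one with ap.
  return m1.map(f).ap(m2);
};

liftA2(add, Maybe.of(2), Maybe.of(3));
// => Maybe(5)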

Diagrammatically, what happens under the hood of Applicative's ap method  can be shown as below:

[Figure: Applicative ap]

map/of/chain/ap

We have explained the concepts behind Functor, Monad and Applicative. From the above examples we can see that Maybe is a Functor, a Monad and an Applicative. Many other structures satisfy these properties, including:

List
Maybe
IO
Task (https://github.com/folktale/data.task)
etc.,

If a type is a Monad it has to be both an Applicative and a Functor. If a type is an Applicative it has to be a Functor.

Summary

Professor Frisby's mostly adequate guide to Functional Programming is one of the best articles that I have read recently. In it, the author wonderfully shows how powerful and deep functional programming constructs are, and also explains how they can all be implemented in the JavaScript language.

References

  1. https://www.gitbook.com/book/drboolean/mostly-adequate-guide/details
  2. http://learnyouahaskell.com/
  3. http://adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html (The diagrammatic representation for Functor, Monad, Applicatives are inspired from this post).

Redux Middleware

Introduction

In this post, I will write about Redux Middleware by viewing it from 2 different angles: the Object Oriented way, and the functional way as it is implemented in the Redux source code.

Basics of Redux

The core function of Redux is:


dispatch(action)

When the client calls dispatch(action),  it internally calls the reducer function which is a pure function (a function which does not have any side effects) whose signature is (currentstate, action) => nextstate. The client can call getState to get the current state value any time.
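
As a minimal sketch (a hypothetical counter reducer, not the square calculator used later in this post):

var createStore = require('redux').createStore; // assuming the redux package

// reducer :: (currentstate, action) => nextstate, a pure function
function counterReducer(state, action) {
  if (state === undefined) state = 0;
  switch (action.type) {
    case 'INCREMENT':
      return state + 1;
    default:
      return state;
  }
}

var store = createStore(counterReducer);
store.dispatch({ type: 'INCREMENT' });
console.log(store.getState()); // => 1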

Middleware

As stated earlier, dispatch(action) is the core function of Redux, which does the next-state calculation using the reducer function. What if an application wants to log getState() before and after the call to dispatch(action), or would like to know how long dispatch(action) took to complete, or would like to perform an IO operation before calling the actual dispatch(action)? How do we solve this problem? I was reminded of a famous quote,

History doesn’t repeat itself but it often rhymes

I have seen a similar problem in a couple of instances in the Object Oriented world.

  • Java Filters: A filter is an object that performs filtering tasks on either the request to a resource (a servlet or static content), or on the response from a resource, or both.
  • Java IO: InputStream is an abstract class. Most concrete implementations like BufferedInputStream, GzipInputStream, ObjectInputStream, etc. have a constructor that takes an instance of the same abstract class. That’s the recognition key of the decorator pattern (this also applies to constructors taking an instance of the same interface).

Both the above examples are typically solved by a popular Gang of Four Design Pattern: Decorator Design Pattern. The basic idea of Decorator Design pattern is to augment (or) decorate the behavior of the underlying base component without the need to change/extend the base component.

So the original problem of how to augment/decorate dispatch(action) perfectly fits the bill of the GoF Decorator pattern. The GoF Decorator pattern provides the solution in an Object Oriented way, but its general motivation maps directly to how we solve the problem of implementing middleware in Redux, whether in the Object Oriented way or the functional way.

Square Calculator

To understand middleware, we will try to view it through the lens of a very simple example: a square calculator. If the input is x the output is x*x, which basically forms the pure reducer function. Redux stores the last computed square value as its state. Additionally we would like to do 3 middleware operations:

  1. Validate the input to make sure it is a number by calling a service (extremely hypothetical example), but in essence would like to sprinkle some impurity through an IO operation.
  2. Calculate the time taken to perform the original dispatch(action)
  3. Log the getState() before and after the original dispatch(action)

[If you look closely, not only is 1 impure because of an IO operation; 2 and 3 are also impure, since accessing the system clock to get the time and writing to the console are impure operations as well.]


Implementation of Square Calculator – Object Oriented Way

As stated earlier, it is implemented using the Decorator design pattern: Source Code in Java Using OO way

The UML diagram for the crux of the implementation can be shown as below:

[UML diagram: Thunk, TimerStore and LoggingStore decorating the base Store]


Store globalStore;
globalStore = new Thunk(new TimerStore(new LoggingStore(baseStore)));

As shown above, we are adhering to an important principle, "Code to an interface rather than an implementation": the client ties to the Store interface, the implementation is a fully constructed chain of decorators wrapping the base Store, and the client is not aware of any of the implementation details.

Implementation of Square Calculator – Functional Way

If we look at the Redux source code, the implementation of middleware uses a lot of functional constructs. I have created a slimmed-down version of it to implement the logic for the square calculator: Source Code in Javascript Using Functional way

I would like to touch upon some of the important functional programming constructs that are being employed to construct the wiring for middleware.

If you look at the above source code you will see the following important properties being employed:

  1. In Functional Programming, functions are first class citizens. Meaning, functions can be stored in variables and can be passed around like numbers, strings etc.
  2. Use of Higher Order Functions – Functions can take functions as parameters and return functions as return values. A function that does either of those is called a higher order function. It turns out that if you want to define computations by defining what stuff is instead of defining steps that change some state and maybe looping them, higher order functions are indispensable. They’re a really powerful way of solving problems and thinking about programs.
  3. Currying – If you take a simple function like adding 2 numbers, its signature as a curried function would be var add = a => b => a + b. The caller of this function would then do add(1)(2) to get 3. A good analogy: "If I can give you a carrot for an apple and a banana, and you already gave me an apple, you just have to give me a banana and I'll give you a carrot." Translating to the add example, if we call add with just 1 and store the result, var addOne = add(1), then a client that wants to add one to any number can simply do addOne(2). If I can give you a 3 for a 1 and a 2, and you already gave me a 1, you just have to give me a 2 and I will give you a 3.
  4. Use of map function in the List – Usage of Functor
  5. Use of folds through reduce function – extremely powerful abstract API.
  6. Use of functional composition – In computer science, function composition (not to be confused with object composition) is an act or mechanism to combine simple functions to build more complicated ones. Like the usual composition of functions in mathematics, the result of each function is passed as the argument of the next, and the result of the last one is the result of the whole.

All the above 6 properties are used in just 30 lines of code 🙂

function compose(...funcs) {
  funcs = funcs.filter(func => typeof func === 'function')

  if (funcs.length === 0) {
    return arg => arg
  }

  if (funcs.length === 1) {
    return funcs[0]
  }

  return funcs.reduce((a, b) => (...args) => a(b(...args)))
}

function applyMiddleware(...middlewares) {
  return (createStore) => (reducer, preloadedState, enhancer) => {
    var store = createStore(reducer, preloadedState, enhancer)
    var dispatch = store.dispatch
    var chain = []

    var middlewareAPI = {
      getState: store.getState,
      dispatch: (action) => dispatch(action)
    }
    chain = middlewares.map(middleware => middleware(middlewareAPI))
    dispatch = compose(...chain)(store.dispatch)

    return {
      ...store,
      dispatch
    }
  }
}
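
A middleware that plugs into this wiring is a curried function of shape middlewareAPI => next => action => result. For example, a logging middleware for operation 3 above might look like this (a sketch of my own, not the code from the linked repository):

var logger = function(middlewareAPI) {
  return function(next) {
    return function(action) {
      console.log('state before dispatch', middlewareAPI.getState());
      var result = next(action); // call the next dispatch in the chain
      console.log('state after dispatch', middlewareAPI.getState());
      return result;
    };
  };
};

// Wiring it in: applyMiddleware(logger)(createStore)(reducer) returns
// a store whose dispatch is wrapped by the logger.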

Conclusion

As you can see, Redux Middleware is a very simple but powerful concept, where the core idea of decorating the base behavior of the dispatch(action) function with wrappers is seen in many other instances in the Object Oriented world as well.

Referential Transparency, Redux and React

Introduction

I started my career doing C/C++ programming for embedded systems and then transitioned to Java/J2EE for a very long time. For the last year or two, I have been exposed to the world of functional programming through Scala, and for the last couple of months I have been actively developing in the front-end stack, using Node.js on the server side and Redux/React on the client side. It has been a great experience, and I would like to share the connections I see between the concept of "referential transparency" in the functional programming world and Redux/React.

Referential Transparency

[Source Used for reference: Functional programming in Scala book]

Functional programming (FP) is based on a simple premise: we construct our programs using only pure functions – functions that have no side effects. What are side effects? A function has a side effect if it does something other than simply return a result. For example:

  • Modifying a variable
  • Throwing an exception or halting with error
  • Making IO operations

Functional programming provides many techniques to do the above set of operations as well without side effects. I have written about a couple of patterns here:

Kleisli Monad Transformer

Magic of State Monad

The key idea is that the majority of the logic that deals with side effects lives at the edges of the application, while the core remains pure. Why do we need "pure functions" in the first place? Well, the answer turns out to be simple: pure functions are easier to reason about. A function f with input type A and output type B (a function of type A => B) is a computation that relates every value a of type A to exactly one value b of type B, such that b is determined solely by the value of a.

We can formalize this idea of pure functions using the concept of "referential transparency" (RT). This is a property of expressions in general and not just functions. For example, take the expression 2 + 3 appearing in different parts of the program. This expression has no side effect, and evaluating it results in the same value 5 every time. In fact, if we saw 2 + 3 in a program we could simply replace it with the value 5 and it would not change a thing about the meaning of the program. The same is true if, for example, we have a function add(int, int): int which takes 2 integers and returns an integer. If we saw add(2, 3) in different parts of the program, we could again simply replace it with the value 5 and the meaning of the program would stay the same.

Referential Transparency examples
String class and its methods in Java are examples of referential transparency, as they do not have any side effect.

String s1 = "Hello"
int length = s1.length() // always going to be 5

Non Referential Transparency examples
StringBuilder class and its methods in Java are examples of non referential transparency, as they have side effects.

StringBuilder s1 = new StringBuilder("Hello");
s1.append(" , World"); // side effecting operation

Pure functions, as mentioned earlier, are easier to reason about and also have an additional important property: they are inherently thread safe and very easy to parallelize. In the above example, StringBuilder is not thread safe as it has side-effecting operations.

Redux

Redux is a predictable state container for JavaScript apps. The core philosophy of Redux is to have a "single state tree" and a reducer function whose signature is (currentstate, action) => nextstate [ReduxSourceCode], where the reducer has to be a "pure function". This basically means that the reducer function is referentially transparent: wherever we see the reducer function call, we can replace the call with its result and the meaning of the program stays exactly the same.
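
A small sketch of my own to make the substitution concrete (the counter reducer here is hypothetical):

// reducer :: (currentstate, action) => nextstate
var counter = function(state, action) {
  return action.type === 'INCREMENT' ? state + 1 : state;
};

counter(41, { type: 'INCREMENT' }); // => 42

// Because counter is pure, the call above can be replaced by the value 42
// anywhere in the program without changing the program's meaning.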

In the case of UI, state comes from 2 sources:

  • State produced by making a backend service call, which is an IO operation; this is solved by Thunk Middleware. Thunk Middleware keeps side effects at the edges of the program within Redux, while the reducer function in the core remains pure.
  • UI state like checkbox selected, button clicked etc.,

Because reducers are pure functions, and thus referentially transparent, the UI application state becomes much easier to reason about.

React

React, a JavaScript library for building user interfaces, follows a declarative and component-based approach. Browser DOM operations are impure and cause side effects, which makes reasoning extremely hard. React brings in the notion of a virtual DOM through "ReactElement", which is a light, stateless, immutable, virtual representation of a DOM element. A React component is expected to have a render(props, state) function to describe its UI. The signature is not completely accurate, but it captures the essence of the render function. Props are read-only, and if we push state from the React component into the Redux state tree, then the render function becomes render(props), which is a pure function and therefore referentially transparent. These are called "stateless functional components" in React. So in essence we can have a referentially transparent React component render method on top of a referentially transparent Redux reducer function.
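
For example, a stateless functional component is just a function from props to a UI description (a sketch of my own, using React.createElement to avoid JSX):

var React = require('react'); // assuming the react package

// A stateless functional component: a pure function of its props.
var Greeting = function(props) {
  return React.createElement('h1', null, 'Hello, ' + props.name);
};

// Given the same props, Greeting always describes the same UI,
// so it is referentially transparent.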

React Redux, which provides React bindings for Redux, provides a function called connect(mapStateToProps, mapDispatchToProps) which acts as a bridge: it connects the Redux state to the React component's props and dispatches actions from the React component to trigger the next state calculation using the reducer function, making sure the React component UI stays up to date when its relevant state changes.
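
A typical binding sketch (of my own; the state shape and action type are hypothetical), wrapping a presentational component such as the Greeting sketch above:

var connect = require('react-redux').connect; // assuming the react-redux package

var mapStateToProps = function(state) {
  return { name: state.name }; // Redux state -> component props
};

var mapDispatchToProps = function(dispatch) {
  return {
    rename: function(name) {
      dispatch({ type: 'RENAME', name: name });
    }
  };
};

// connect returns a higher order component that keeps Greeting's props
// in sync with the Redux store.
var ConnectedGreeting = connect(mapStateToProps, mapDispatchToProps)(Greeting);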

The mental model of how React, Redux, React-Redux fits in can be shown as below:

[Diagram: how React, Redux and React-Redux fit together]

Conclusion

As we can see, the core philosophy of "referential transparency" from the world of functional programming is being applied to client-side programming through React/Redux, which makes UI code very easy to reason about.

Kleisli Monad Transformer

Introduction

In the Scalaz library, there is a wrapper called Kleisli for functions of type A => M[B], where M is a Monad. This blog post is an explanation of this wrapper.

Why need Kleisli?

Since an example is always a great starting point for conveying any concept, I will explain a simple use case where we can use Kleisli. As explained above, Kleisli is a wrapper for functions of type A => M[B] where M is a Monad. Imagine an enterprise application which serves a request URL /foo/bar?a=1 and returns a response after talking to 3 services S1, S2, S3 and performing 2 DB operations D1 and D2 in sequence. So we essentially do 5 IO operations to fulfill this client request. As you know, if a function has a side effect it makes the function impure and harder to compose, and functional composition is the core of doing functional programming. How do we get around this problem? The solution is to "describe the IO computation and then finally execute it". We describe IO operations in an IO Monad [yes, Scalaz has an IO Monad type; by looking at the type, we know that there will be a side effect], but the key takeaway is that we just describe the IO computations, combine/compose them, and finally execute all of them in one shot using the unsafePerformIO method on the IO Monad. Now coming back to the original problem: we are doing 5 IO operations in sequence, so how do we compose/combine functions which take some input of type A and return IO[?]? The answer is to use Kleisli, as it is a wrapper for functions of type A => M[B].

Code Example

The code sample below shows a simple use case: "Given some key, we will get some name, assuming that we need to make a database call, querying with the key, to get the Something details":

DAO


import scalaz._
import Scalaz._

trait MSomething[M[_]] {
 def findSomethingByKey(key: String): M[Option[Something]]
}

object MSomething {
 def findSomethingByKey[M[_]](key: String)(implicit M: MSomething[M]): M[Option[Something]] =
 M.findSomethingByKey(key)
}

trait MSomethingMongo extends MSomething[({type λ[α] = Kleisli[IO, AppConfig, α]})#λ] {
  def findSomethingByKey(key: String) =
    // You return a Kleisli, which wraps a function taking a config
    // and returning an IO Monad (config => IO).
    Kleisli(config =>
      IO {
        // Assume that we make an actual Mongo db call
        Some(
          Something(id = key, name = s"the $key, queried from ${config.s}")
        )
      }
    )
}

object MSomethingMongo extends MSomethingMongo

Intermediate Layer accessing DAO


import scalaz._
import Scalaz._
import effect._

object Intermediate {
  def get[M[+_]: Monad : MSomething](somethingName: String): M[Option[Something]] = {
    for {
      something <- MSomething.findSomethingByKey[M](somethingName)
    } yield something
  }
}

Model


sealed case class Something(
  id: String,
  name: String
)
// AppConfig is like the uber configuration which every IO monad gets access to.
case class AppConfig(s: String)

Runner – Main Application


import scalaz._
import Scalaz._
import effect._
import kleisliEffect._

object Runner extends App {
  val appConfig = AppConfig("appConfig")
  println(MSomethingMongo.findSomethingByKey("Hello")(appConfig).unsafePerformIO())
  implicit val a = MSomethingMongo   

 type KIO[+A] = Kleisli[IO, AppConfig, A]

  val resultKleisli: KIO[Option[Something]]  = Intermediate.get[KIO]("Hello")
  println(resultKleisli(appConfig).unsafePerformIO())

}

Magic of State Monad

Introduction

I have been doing Object Oriented Programming in Java for almost a decade. For the past six months, I have been doing Functional Programming in Scala. This blog is about the reasoning behind why we need the "State Monad".

Modifying a variable – SIDE EFFECT

If “Design Patterns: Elements of Reusable Object-Oriented Software” is the bible for Object Oriented Design Patterns, I would say “Functional programming in Scala” book is the bible for learning Functional Programming.

“FP in scala” book starts by defining, “Functional Programming (FP) is based on a simple premise with far-reaching implications: we construct our programs using only pure functions – in other words, functions that have no side effects. What are side effects? A function has a side effect if it does something other than simply return a result, for example: Modifying a variable is a “SIDE EFFECT”.

When I first read that modifying a variable is a "SIDE EFFECT", I was completely surprised and puzzled, since we do it all the time in OOP. For example, i++ is a side-effecting operation. I was wondering how we make state changes in FP. Well, the answer is the State Monad.

State Monad

Chapter 6 “Purely functional state” in “FP in Scala” book, starts with the problem of Random number generation.

Random Number with Side effect:

val rng = new scala.util.Random // Create an instance of Random
rng.nextInt  // Call nextInt function to get a random integer
rng.nextInt // Call nextInt function to get a random integer

Every time you call the nextInt function, it doles out a different random value, because rng has some internal state that gets updated after each invocation; otherwise we would get the same value each time we called nextInt.

Basically, if you think about the implementation, it holds an internal STATE using which it generates a NEW RANDOM VALUE when you call the function. To make the implementation pure, we accept the STATE as a function parameter, which the caller passes in every time they need a value, and we return the next state along with the generated value.

Random Number without Side effect:

trait RNG {
  def nextInt: (Int, RNG)
}

case class SimpleRNG(seed: Long) extends RNG {
  def nextInt: (Int, RNG) = {
    val newSeed = (seed * 0x5DEECE66DL + 0xBL) & 0xFFFFFFFFFFFFL
    val nextRNG = SimpleRNG(newSeed)
    val n = (newSeed >>> 16).toInt
    (n, nextRNG)
  }
}

The common abstraction for making stateful APIs pure is the State Monad. The essence of its signature:

case class StateMonad[S, A](run: (S => (A, S)))

Basically, it encapsulates a run function which takes a STATE argument and returns a TUPLE capturing the (VALUE, NEXT STATE). The problem has been inverted: the client passes the state in each time, and receives the next state along with the value.

SIMPLE USE CASE

Assume that there is a CRUD application to create, update, find and delete employees, using a MySQL database for persistence. If you open a SINGLE database terminal and issue CRUD operations against the database, what is essentially happening is a STATE TRANSITION. Say there is an "EMPLOYEE" table with zero records, which can be thought of as the initial state of the database D. Now if you issue INSERT INTO EMPLOYEE VALUES('VMKR'), a new record gets inserted, which can be thought of as a new state D' of the database, with the value produced being the employee record. So it is a database transition from D => (VALUE, D').

Now assume that you wanted to write unit test case to test this CRUD API. Obviously you would not want to hit the database and would ideally mock it. A simple way to mock a database is to use an in-memory map.

Initial State: Empty Map
Create an Employee:   (Empty Map) => (Map with 1 record, Employee value)
Update an Employee: (Map with 1 record) => (Map with 1 record, Employee value)
Find an Employee: (Map with 1 record) => (Map with 1 record, Option[Employee])
Delete an Employee: (Map with 1 record) => (Empty Map, Unit)

BINGO! We can use the “State Monad” abstraction to solve this problem. I have listed the source code below:

package com.fp.statemonad

import StateMonad._

import scala.collection.immutable.TreeMap

case class StateMonad[S, A](run: (S => (A, S))) {

  def map[B](f: A => B): StateMonad[S, B] =

    StateMonad(s => {

      val (a, s1) = run(s)

      (f(a), s1)

    })

  def flatMap[B](f: A => StateMonad[S, B]): StateMonad[S, B] = StateMonad(s => {

    val (a, s1) = run(s)

    f(a).run(s1)

  })

}

case class Employee(id: Int, name: String)

trait MEmployee[M[_]] {

  def createEmployee(id: Int, name: String): M[Employee]

  def updateEmployee(id: Int, name: String): M[Employee]

  def findEmployee(id: Int): M[Option[Employee]]

  def deleteEmployee(id: Int): M[Unit]

}

object MEmployee extends MEmployeeInstances {

  def createEmployee[M[_]](id: Int, name: String)(implicit M: MEmployee[M]): M[Employee] = M.createEmployee(id, name)

  def updateEmployee[M[_]](id: Int, name: String)(implicit M: MEmployee[M]): M[Employee] = M.updateEmployee(id, name)

  def findEmployee[M[_]](id: Int)(implicit M: MEmployee[M]): M[Option[Employee]] = M.findEmployee(id)

  def deleteEmployee[M[_]](id: Int)(implicit M: MEmployee[M]): M[Unit] = M.deleteEmployee(id)

}

trait MEmployeeInstances {

  // TEST INSTANCE

  implicit def MEmployee[M[+_], S] = new MEmployee[({ type λ[α] = StateMonad[Map[Int, Employee], α] })#λ] {

    def createEmployee(id: Int, name: String) = StateMonad(m => {

      val e = Employee(id, name);

      (Employee(id, name), m.+((id, e)))

    })

    def updateEmployee(id: Int, update: String) =

      StateMonad(m => {

        val e = Employee(id, update);

        (Employee(id, update), m.+((id, e)))

      })

    def findEmployee(id: Int) =

      StateMonad(m => {

        val o = m.get(id);

        (o, m)

      })

    def deleteEmployee(id: Int) =

      StateMonad(m => {

        ((), m - id) // remove the employee record from the map

      })

  }

}

object Run extends App {

  import StateMonad._

  type TestState[M] = StateMonad[Map[Int, Employee], M]

  val state = for {

    c1 <- MEmployee.createEmployee[TestState](1, "Mayakumar Vembunarayanan")

    c2 <- MEmployee.createEmployee[TestState](2, "Aarathy Mayakumar")

    u1 <- MEmployee.updateEmployee[TestState](1, "vmkr")

    f1 <- MEmployee.findEmployee[TestState](1)

    _ = println("Found Employee: " + f1)

    _ <- MEmployee.deleteEmployee[TestState](1)

  } yield ()

  println(state.run(Map()))

}

The code does the following:

  • case class StateMonad[S, A](run: (S => (A, S))) is the crux of making state transitions pure.
  • It has map and flatMap making it a “FUNCTOR” and “MONAD”. To understand more about “FUNCTOR” and “MONAD” read here: FUNCTOR_AND_MONAD
  • case class Employee(id: Int, name: String) is the Employee data model with an id and a name.
  • trait MEmployee[M[_]] defines the CRUD APIs. Think of it as an "interface" in Java parlance.
  • object MEmployee is the companion object. Its purpose is to make the usage of trait MEmployee easier for the client. The clients can simply call MEmployee.createEmployee. It will work as long as there is an implementation implicitly available as defined by the api: (implicit M: MEmployee[M]).
  • The test implementation of using the State Monad with in-memory Map is provided by MEmployeeInstances.
  • Cryptic syntax: ({ type λ[α] = StateMonad[Map[Int, Employee], α] })#λ is called TYPE_LAMBDAS
  • The absolute magic of the State Monad in the above example is that the code execution actually happens only when we run the State Monad, which happens here:

    println(state.run(Map()))

  • One other important point: flatMap implicitly passes the state (in this example, the Map) to each of the subsequent functions after the first createEmployee, since the for-comprehension on a Monad is syntactic sugar for chained flatMap calls, finally yielding a value using the map function.

Output of running the program:

Found Employee: Some(Employee(1,vmkr))

((),Map(2 -> Employee(2,Aarathy Mayakumar)))

My learnings from “The Dhandho Investor”

In this blog post, I will share my learnings from the book “The Dhandho Investor” written by famous value investor Mr. Mohnish Pabrai.

Abstraction

“Abstraction” in Wikipedia.

Abstraction in its main sense is a conceptual process by which general rules and concepts are derived from the usage and classification of specific examples, literal (“real” or “concrete”) signifiers, first principles, or other methods.

I have always been fascinated by the power of “Abstraction“. Several related concrete examples and use cases can be unified under a generic abstraction.

I am a software engineer by profession, and I have encountered a few powerful abstractions like:

  • Abstractions in Object Oriented design patterns – Design Patterns: Elements of Reusable Object-Oriented Software is one of the best books that I have read in software engineering; it discusses recurring solutions to common problems in software design. Each design pattern is applied over and over again to several concrete examples.
  • Abstractions in Functional programming – I have been learning functional programming for the past couple of months and there again I am seeing generic abstractions like Monoids, Monads and Applicatives which are used over and over again in various concrete applications.

Dhandho

Excerpt from “The Dhandho Investor”

Dhandho (pronounced dhun-doe) is a Gujarati word. Dhan comes from the Sanskrit root word Dhana meaning wealth. Dhan-dho, literally translated, means "endeavors that create wealth". The street translation of Dhandho is simply "business". What is business if not an endeavour to create wealth?

However, if we examine the low-risk, high-return approach to business taken by the Patels, Dhandho takes on a much narrower meaning.

The key abstraction that Mr. Pabrai points out across various examples is the "low-risk high-return" approach.

Excerpt from "The Dhandho Investor"

If an investor can make virtually risk-free bets with outsized rewards, and keep making the bets over and over, the results are stunning.

The entire book revolves around this simple yet powerful abstraction, “low-risk, high-return” approach.

This key point of minimising risk triggered a few associations in my head:

Warren Buffett said

Rule No.1:  Never lose money. Rule No.2: Never forget rule No.1.

In a fascinating interview of Professor Sanjay Bakshi by Mr. Vishal Khandelwal, the professor talks about minimizing permanent loss of capital. The best/magical line to me is below.

The excerpt can be found here: value-investing-sanjay-bakshi-way-part-3

So, in a sense, one could get exposure to positive black swans embedded in the drug pipeline business of Piramal Healthcare (Taleb) by making a sidecar investment alongside a man with great capital allocation and complementary skills (Zeckhauser) on extremely favourable terms (Graham) and have practically no risk of permanent loss of capital (Buffett).

I was more than convinced that "low-risk" is a very important factor in making investing decisions.

Real world examples for “Low-risk high-return” approach

With the key abstraction set, Mr. Pabrai talks about 5 real world examples: Mr. Patel, Mr. Manilal, Mr. Virgin, Mr. Mittal and himself. The wonderful thing about these examples is how the same "low-risk high-return" abstraction is clearly visible in every one of them.

This reminded me of the "latticework of mental models" one should possess, where real world examples can be mapped back to fundamental models:

Excerpt from Mr. Charlie Munger's Elementary Worldly Wisdom

What is elementary, worldly wisdom? Well, the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ’em back. If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form. You’ve got to have models in your head. And you’ve got to array your experience—both vicarious and direct—on this latticework of models. You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and in life. You’ve got to hang experience on a latticework of models in your head.

I got a mental model out of the first few chapters: "Dhandho – the low-risk, high-return approach".

Dhandho framework

We have the abstraction of "low-risk high-reward", but what framework/tools would I use to apply it to the investing world? Mr. Pabrai talks about the Dhandho framework, which is listed below:

  1. Focus on buying an existing business
  2. Buy simple businesses in industries with an ultra-slow rate of change
  3. Buy Distressed business in distressed industries
  4. Buy businesses with a durable competitive advantage – THE MOAT
  5. Bet heavily when the odds are overwhelmingly in your favour
  6. Focus on arbitrage
  7. Buy businesses at big discounts to their underlying intrinsic value
  8. Look for low-risk, high-uncertainty businesses
  9. It is better to be a copycat than an innovator.

I could see a strong correlation with the 4 filters which Mr. Warren Buffett and Mr. Charlie Munger used to run Berkshire Hathaway. In the below video, Munger talks about the 4 filters.

The 4 filters mentioned by Mr. Munger in the interview are listed below:

  1. We have to deal in things that we are capable of understanding
  2. Once we are over #1, Look for business with intrinsic characteristics, that gives durable competitive advantage
  3. Once we are over #2, Look for management in place with a lot of integrity and talent
  4. Once we are over #3, No matter how wonderful the business is, it is not worth an infinite price. We have to have a price that makes sense which gives a margin of safety considering the natural vicissitudes of life.

Cloning

Mr. Mohnish Pabrai talks about the power of cloning and cites 3 case studies: McDonald's, Microsoft and Pabrai Investment Funds. Many times, Mr. Pabrai mentions that he cloned a lot of ideas from Mr. Warren Buffett and Mr. Charlie Munger. I am more than convinced that "cloning is a powerful mental model and it works for sure when we clone from the best".

If I have seen further than others, it is by standing upon the shoulders of giants. – Sir Isaac Newton

Wonderful connections from Mahabharata

When I was a kid, I was fascinated by the great Indian epic "The Mahabharata", and the fascination continues to this day. Mr. Pabrai provides a framework for buying/selling a stock by invoking Abhimanyu, a warrior who knew how to enter a Chakravyuh but not how to exit it, and as a result got killed on the battlefield. Mr. Pabrai touches on a very important point: an investor should have a framework both for entering into an investment and for exiting out of it. He also talks about the importance of being laser focused when making an investing decision, citing Arjuna, who was regarded as the greatest warrior.

Giving it back

Excerpt from “The Dhandho Investor”

I do urge you to leverage Dhandho techniques fully to maximize your wealth. But I also hope that, well before your body begins to fade away, you will use some time and some of that Dhandho money to leave this world a little better place than you found it. We cannot change the world, but we can improve this world for one person, ten people, a hundred people, and maybe even a few thousand people.

Mr. Mohnish Pabrai has a foundation called the "Dakshana Foundation" for giving back to society.

Website of the Dakshana Foundation: http://www.dakshana.org/

Dhandho – For LIFE (My own subjective thought., and not from the book)

I was fascinated by the "low-risk high-reward" approach for investing that is discussed right throughout the book, and I wondered whether I can apply the same approach to LIFE in general. To me, "Dhandho – For LIFE" is an endeavour to live a happy life. I would consider the below points as risky and would like to keep them as low as possible.

  1. Being unhealthy is a huge risk – I lost my mother to cancer when I was 9 years old, and my dad suffered a brain hemorrhage within the next six months. It took more than a decade for our family to get back to normalcy. It was a random event (a negative black swan event) for sure, but the important point to remember is that good health is extremely important, which is often discounted in this fast-paced world. We should at least do the things we can control: have a balanced diet, exercise regularly, avoid other bad habits, be thankful and hope for the best.
  2. Being in debt is a huge risk – As mentioned in #1, in spite of the negative black swan event that occurred to our family, we were in a way lucky because we did not incur a huge debt. So I would like to keep debt as low as possible, and if possible eliminate it.

    Rather go to bed supperless, than rise in debt – Benjamin Franklin

  3. Possessing envy/jealousy is a huge risk – I would like to be contented with what I have and not be forced into any action just because someone else is doing it, and not to fall for "social proof", especially when making big financial decisions.

There will definitely be many more points, but the key to "Dhandho – For LIFE" is to keep major risks as low as possible. This section is a very subjective thought of mine and is not mentioned in the book.

Final thoughts

It was one of the best books I have read, and I thoroughly enjoyed it. I have listed some of the points from the book that I considered important to me; there are many more wonderful concepts in the book which I have not listed here. Also, it is not just a book on value investing; it touches on a lot of simple yet powerful ideas that one can adopt in their life. I have become a huge fan of Mr. Mohnish Pabrai and will try my best to clone some of his great qualities.