Do You Ever Wonder What Functional Programming Really Is?



Functional programming is a funny paradigm. On the one hand, everyone knows about it, and everyone likes to sprinkle pattern matching and lambdas here and there; on the other hand, few people actually write in a pure FP language. As a result, the common understanding of it drifts toward myths and urban legends that are very far from the truth, and people settle on the opinion that “FP is fine for fractal-drawing programs isolated from real life, but for real problems there is OOP, with a proven track record, tested in battle.”

People do usually recognize the convenience of FP features; it’s much nicer to write:

int Factorial(int n)
{
    Log.Info($"Computing factorial of {n}");
    return Enumerable.Range(1, n).Aggregate((x, y) => x * y);
}

than a terrible imperative program like this:

int Factorial(int n)
{
    int result = 1;
    for (int i = 2; i <= n; i++)
    {
        result *= i;
    }
    return result;
}

Right? On the one hand, yes. On the other hand, it is the second program, not the first, that is functional.

  • How so, isn’t it the other way around? A beautiful fluent interface, data transformations, and lambdas are functional, while dirty loops that mutate local variables are a legacy of the past? It turns out: no.

Why? Because, according to the generally accepted definition, a program is considered to be written in a functional style when it consists only of pure functions. So let’s write that down:

  • A functional program is a program consisting of pure functions.
  • Ok, we knew that, but what is a pure function? A pure function is a function whose call is referentially transparent. Or, formally:

The function f is pure if the expression f(x) is referentially transparent for all referentially transparent x.

And here the definition starts to diverge from what people usually picture as a “pure function”. Isn’t a pure function one that doesn’t mutate state? Or one that doesn’t touch global variables? And what is this “referential transparency” anyway? These things do correlate with purity, but the essence of purity is not the absence of mutation; it is this very transparency.

So what is it?

Referential transparency is the property that replacing an expression with the computed result of that expression does not change the desired properties of the program.

This means that if somewhere we have written var x = foo(), we can always replace it with var x = result_of_foo and the behavior of the program will not change. That is precisely the main requirement for purity; FP imposes no other requirements (such as immutability). The only philosophical point is what counts as “program behavior”. Intuitively, it is the set of properties we consider critically important to observe. For example, if executing the code generates a little more or a little less heat on the CPU, we probably don’t care (and if we do, we can treat heat as observable behavior too). But if our program stopped going to the database and kept serving one stale cached value, that definitely worries us!
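A minimal illustration in C# (Add here is a hypothetical pure helper):

// Pure: these two lines are interchangeable in any program.
var x1 = Add(2, 3); // the call...
var x2 = 5;         // ...or its precomputed result: no observable difference

// Impure: these two are not interchangeable.
var t1 = DateTime.Now;             // reads ambient, ever-changing state
var t2 = new DateTime(2020, 1, 1); // freezing one observed result changes behavior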

Let’s get back to our examples and check whether the first function satisfies this rule. It turns out it doesn’t: if we replace Factorial(5) somewhere with 120, the behavior of the program changes, because the information that used to be written to the logs no longer is. (If we take the position “to hell with the logs” and don’t consider them desired behavior, the function can be considered pure; but we probably didn’t write that line for nothing, and we do want to see the logs, so this point of view seems unlikely.)

  • What about the second option? There everything holds: you can replace every occurrence with the result of the function and nothing will change.

It is important to note that this property must also work in the opposite direction: we must be able to change every var x = result_of_foo back to var x = foo() without changing the behavior of the program. This is called “equational reasoning”, that is, reasoning in terms of equivalence. Within this paradigm, functions and values are the same thing, and you can swap one for the other completely painlessly.

An important consequence follows: a program does not have to work with immutable data to be considered functional. It is enough that the changes are not visible to an outside observer. Haskell even has a special mechanism for this called ST, which helps, at the type level, to prevent mutable state from accidentally leaking out. A typical example is writing an in-place quicksort and forgetting to copy the input array: ST turns that into a compilation error. Immutability is a convenient and important property, but nobody forces you to use only it; when needed, you can mutate to your heart’s content, the main thing is not to violate referential transparency.
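The same idea can be expressed in C# without any special mechanism, just with discipline; a minimal sketch of a function that mutates internally yet stays pure from the outside:

int[] Sorted(int[] input)
{
    var copy = (int[])input.Clone(); // forget this line and purity is broken
    Array.Sort(copy);                // in-place mutation, but only of our private copy
    return copy;
}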

Why is this necessary?

Probably the most important question. Why suffer like this? Copying data instead of changing it in place, wrapping objects in these STs of yours so that the changes (if any) don’t leak out, and so on… The answer: better composition. In its time, goto was disliked precisely because it made it very hard to understand how a program actually behaves and what the data and control flow really are, and it was hard to reuse a function written with goto, because it could happily jump straight into the middle of another function’s body.

With equational reasoning, it’s always easy to understand what is happening: you can replace a function with its result, and that’s it. You don’t need to think about the order in which functions are evaluated, and you don’t have to worry about what happens if you swap a couple of lines: the program simply passes the results of some functions to others.

As an example of why this is good, here is an incident from my own life that happened just a couple of months ago. I was writing the most typical OOP code in C#, and I had to touch an old piece where code like this lived (the example is simplified):

var something = function();
DoStuff(this.Field, something);

While working on the task I needed to refactor it a bit, which I did:

DoStuff(this.Field, function());

The changes passed the tests, passed code review, and were merged, after which strange crashes started appearing on the test environment. After a couple of hours of debugging, it turned out that the guts of function() did something like this:

// ...compute something
this.Field = GetUpdatedVersion(this.Field, localData); // oops
// ...keep computing and return the result

So where the compiler previously saw this:

var something = function();
var arg1 = this.Field;      // after calling function - a new value!
var arg2 = something;
DoStuff(arg1, arg2);

after the refactoring it saw this:

var arg1 = this.Field;      // before calling function - the old value remains!
var arg2 = function();
DoStuff(arg1, arg2);

So where DoStuff used to be called with the updated version of the field, after the refactoring it started being called with the old one.

What’s the moral here? “Don’t write functions that both mutate state and return data”? I agree, and I’ll note that referential transparency is the next logical step in that direction. In a functional program, swapping any two independent lines can never change the semantics of the program.

  • In general, FP aims to let you judge the behavior of a function by looking at that function alone. If, like me, you write C# in the usual imperative style, you also need to understand how your DI is wired, what function() or DoStuff specifically does, and whether it is safe to call this function from different threads. In FP, you look at one function and its data, and that information is enough to fully understand how it works.

In other words, this style aims at a cleaner separation of program parts from one another. That greatly simplifies understanding the code for people who didn’t write it; by tradition, I’ll note that in six months that someone can be you. The larger the project, the stronger the effect. As far as I have seen, on fairly large projects with hundreds of thousands of lines, people end up reinventing the same principles themselves, even when the language and platform push back quite hard, because it is simply impossible to debug large programs where everything interacts with everything. The purity of a function, whose call simply returns a result rather than writing epic logs or sending emails on the side, helps enormously here. Any developer on a large project will tell you that well-defined contracts and small stateless modules are the easiest and most pleasant to work with. The functional approach merely takes this idea to its logical conclusion: all functions must be pure and must not depend on any state.

How are these principles reflected in the code?

As a comparison, here is an example I took from the Scala “red book” (an absolutely gorgeous book that explains FP very clearly and engagingly, with great exercises). For clarity, I have adapted the text and code to C#.

Suppose we have a coffee shop and we want people to be able to order coffee. Nothing more; a very simple requirement.

The OOP version

Ok, as we were told, we write:

public class Cafe
{
    public Coffee BuyCoffee(CreditCard card)
    {
        var cup = new Coffee();
        card.Charge(cup.Price);
        return cup;
    }
}

The line card.Charge(cup.Price) is an example of a side effect. Charging a credit card involves interaction with the outside world: you may need to contact the card issuer through some web service, authorize the transaction, and so on. It is called a side effect because all of these actions are unrelated to creating the Coffee instance; they happen, as it were, off to the side of the function’s main result, “return a cup of coffee”.

As a result, the code is hard to test because of the side effect. Any experienced OOP developer will say: “So extract an interface for charging the money!” A reasonable request; let’s do it:

public class Cafe
{
    public Coffee BuyCoffee(CreditCard card, IPaymentProvider paymentProvider)
    {
        var cup = new Coffee();
        paymentProvider.Charge(card, cup.Price);
        return cup;
    }
}

Side effects notwithstanding, we can now test the purchase process: it’s enough to mock the IPaymentProvider interface in tests. But there are drawbacks.

  • First, we had to introduce IPaymentProvider, although if it weren’t for the tests, one concrete implementation would have suited us perfectly.
  • Second, using a mock that implements the necessary functionality can be awkward. A typical example is an in-memory DB where we stub the Insert/Save/… methods and then dig out the internal state (usually some lists) to check that everything was saved where it should be (see the sketch after the code below). Needless to say, inspecting the internal state of objects is not great. And yes, you can use some mocking framework that does most of this work for you, but not all of it, and dragging in a whole framework just to test that we can buy a cup of coffee looks like overkill.
  • Third, there are problems with reusing this function. Suppose we want to buy N cups of coffee. With the current interface we have no easy way to do that except writing an entirely new function (unless, of course, we want to hammer our payment gateway with N identical requests):
public class Cafe
{
    public Coffee BuyCoffee(CreditCard card, IPaymentProvider paymentProvider)
    {
        var cup = new Coffee();
        paymentProvider.Charge(card, cup.Price);
        return cup;
    }

    public Coffee[] BuyCoffees(int count, CreditCard card, IPaymentProvider paymentProvider)
    {
        // we now also need to handle the case of 0 cups,
        // so as not to accidentally issue a charge for 0
        if (count == 0) return Array.Empty<Coffee>();
        var cups = Enumerable.Range(0, count).Select(_ => new Coffee()).ToArray();
        paymentProvider.Charge(card, cups[0].Price * count);
        return cups;
    }
}
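To make the second drawback concrete, here is a sketch of such a mock-based test (a hand-rolled fake; every name beyond Cafe and IPaymentProvider is mine):

class FakePaymentProvider : IPaymentProvider
{
    // internal state that the test will have to dig into
    public List<(CreditCard Card, double Amount)> Charges = new();
    public void Charge(CreditCard card, double amount) => Charges.Add((card, amount));
}

// somewhere in a test (Debug.Assert stands in for your test framework):
var card = new CreditCard();
var fake = new FakePaymentProvider();
var cup = new Cafe().BuyCoffee(card, fake);
Debug.Assert(fake.Charges.Count == 1);             // inspect the fake's internals...
Debug.Assert(fake.Charges[0].Amount == cup.Price); // ...to verify the charge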

Even in such a simple case we had to copy-paste code. And while here it doesn’t matter much, with complex, weighty logic it can be far more painful.

The FP version

How do we write the code so that we don’t run into these problems? The functional approach: instead of actually charging the money, simply issue a charge description and let the calling code decide what to do with it. Then our function looks like this:

public class Cafe
{
    public (Coffee, Charge) BuyCoffee(CreditCard card)
    {
        var cup = new Coffee();
        return (cup, new Charge(card, cup.Price));
    }
}

Yes, it’s that simple. Now the calling code, if it is the real application, can open a transaction and charge the money. But if it is a test, it can just check all the properties of the returned Charge object. No mocks needed: we have separated issuing the charge from interpreting it. A Charge is a simple DTO that records which card should be charged and for how much. It is easy to see that our function has become pure: it simply returns a tuple of two objects that are plain descriptions of data. We can replace a call of this function with its result and the meaning of the program will not change. And at this level we no longer need any payment provider. Hooray!
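A test then needs no mocks at all; a minimal sketch (again, Debug.Assert stands in for your test framework):

var cafe = new Cafe();
var card = new CreditCard();
var (cup, charge) = cafe.BuyCoffee(card);
Debug.Assert(charge.Card == card);        // the right card is billed...
Debug.Assert(charge.Amount == cup.Price); // ...for exactly the price of the cup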

What about buying N cups of coffee? Since we got rid of the effects, we don’t need to worry that N calls to BuyCoffee will spam our payment gateway, so we simply reuse it:

public class Cafe
{
    public (Coffee, Charge) BuyCoffee(CreditCard card)
    {
        var cup = new Coffee();
        return (cup, new Charge(card, cup.Price));
    }

    public (Coffee[], Charge) BuyCoffees(int count, CreditCard card)
    {
        // Unzip is a small helper that splits a sequence of pairs
        // into two collections (a sketch is shown below)
        var (coffees, charges) = Enumerable.Range(0, count)
                                           .Select(_ => BuyCoffee(card))
                                           .Unzip();
        return (coffees, charges.Aggregate((c1, c2) => c1.Combine(c2)));
    }
}
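Unzip is not part of LINQ; here is a minimal sketch of such an extension method (the exact name and shape are my assumption):

public static class EnumerableExtensions
{
    // Splits a sequence of pairs into two arrays.
    public static (T1[], T2[]) Unzip<T1, T2>(this IEnumerable<(T1, T2)> source)
    {
        var firsts = new List<T1>();
        var seconds = new List<T2>();
        foreach (var (a, b) in source)
        {
            firsts.Add(a);
            seconds.Add(b);
        }
        return (firsts.ToArray(), seconds.ToArray());
    }
}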

And here is the Combine helper:

public class Charge
{
    public CreditCard Card { get; set; }
    public double Amount { get; set; }

    public Charge(CreditCard card, double amount)
    {
        Card = card;
        Amount = amount;
    }

    public Charge Combine(Charge other)
    {
        if (Card != other.Card) 
            throw new ArgumentException("Can't combine charges to different cards");
        return new Charge(Card, Amount + other.Amount);
    }
}

Moreover, this helper lets us do many other nice things. For example, we can now minimize the number of interactions with the payment gateway by coalescing charges per card:

IEnumerable<Charge> Coalesce(IEnumerable<Charge> charges) =>
    charges.GroupBy(x => x.Card).Select(g => g.Aggregate((c1, c2) => c1.Combine(c2)));
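A usage sketch (cardA and cardB are hypothetical):

var cardA = new CreditCard();
var cardB = new CreditCard();
var charges = new[] { new Charge(cardA, 3.0), new Charge(cardB, 3.0), new Charge(cardA, 4.0) };
var coalesced = Coalesce(charges); // two charges remain: 7.0 for cardA and 3.0 for cardB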

And this is just a short list of the benefits that purity provides. Note that the language is the same in both versions; the whole difference is in the approach.

I anticipate the objection that the problem hasn’t actually been solved: the calling code now has to do the charging itself, the logic has just been smeared around, and all we did was slightly simplify the tests of our Cafe class. In fact that’s not so, because the calling code can in turn pass the decision of what to do next further up, and that code further still, all the way to a service that will actually do something with this data (and that service can be made testable without mocks too, but more about that in another article).

  • A second objection may be that in the OOP version we could configure IPaymentProvider to handle batching of transactions itself. But there are difficulties there too: you need to configure timeouts, pick values so that batching is effective without hurting latency too much, and you still have to fear “bad” implementations that don’t batch at all, and so on. Whichever way you look at it, that approach is significantly worse.

Separating the execution of a task into creating a description of that task and interpreting it may look like pointless shuffling, but it is a very important idea whose value is hard to overestimate. Postponing the decision “what do we do with this data” opens up a lot of room to maneuver and makes many things, such as cancellation and retries, far more trivial. The concept, in my opinion, is comparable in power to RAII: one simple rule with a lot of far-reaching good consequences.

Is that all?

From the point of view of the very essence of FP, yes, that’s all. The absence of effects is the only requirement a program must satisfy to be functional. Historically, though, FP languages carry more extensive restrictions, and restrictions are usually invented for a reason: to gain something from them. Restricting the types of variables (you cannot put a string into an int variable) lets you write more reliable programs; restricting control flow (for example, banning goto) makes programs easier to understand; restricting templating (templates vs generics) makes it easier to write generic code and get better error messages; and so on.

One of the coolest advantages of mainstream FP languages, in my opinion, is how much value function and type signatures carry. Unlike “dirty” functions, a pure signature usually gives so much information that the number of possible implementations shrinks to a miserable few, and in extreme cases the compiler can generate the body of a function from its signature. Why doesn’t this work in imperative programs? Because there void UpdateOrders() and void UpdateUsers() have the same signature, () -> (), but completely different meanings. In FP they would have types () -> OrdersUpdate and () -> UsersUpdate. Precisely because functions are only allowed to compute a value (and not to do arbitrary mischief), we can confidently judge many of a function’s properties just by looking at its signature.
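A sketch of the same idea in C# (OrdersUpdate and UsersUpdate are hypothetical result types):

// Imperative style: identical signatures, completely different meanings.
void UpdateOrders() { /* writes to the orders table */ }
void UpdateUsers()  { /* writes to the users table  */ }

// Pure style: the signature itself names what comes out.
record OrdersUpdate(int RowsAffected);
record UsersUpdate(int RowsAffected);

OrdersUpdate ComputeOrdersUpdate() => new OrdersUpdate(42);
UsersUpdate  ComputeUsersUpdate()  => new UsersUpdate(7);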

What does this give us? Well, suppose for example we have this function (a Rust example):

// take a slice of objects plus one more object, and return a value of the same type
fn foo<T>(a: &[T], b: T) -> T { /* some body */ }

I don’t know what’s inside this function, but from the signature alone I can see that the result will be either one of the elements of the array or, say for an empty array, the element b that I passed in. How do I know? Because the function makes no assumptions about the type T, it cannot create an instance of T on its own. Therefore the only way for it to obtain a value of that type is to take one of the objects we gave it.

Accordingly, I can write a test like this:

let a = [1, 2, 3, 4, 5];
let b = foo(&a, 10);
assert!(b == 10 || a.iter().any(|&x| x == b));

This test will pass for any implementation of this function, provided it avoids undefined behavior and actually returns a value (does not panic and does not loop forever). And we can safely assume it does neither, because it is unlikely that anyone in their right mind would write a function that takes an array of arbitrary objects only to always panic (remember, the function knows nothing about the objects passed in, so it cannot panic based on their values, only unconditionally).

Now let’s remove the second parameter and see what happens:

fn foo<T>(a: &[T]) -> T { /* some body */ }

Note that for an empty array this function must panic, loop forever, or do something else bad. Formally speaking, it returns the bottom type ⊥. How do I know? Because the function is obliged to return a value of type T, and we didn’t pass it one, so its contract cannot be honored for every value of the argument a. The function is therefore partial: it is not defined for empty arrays. And on arguments outside their domain, functions usually panic.

In general, one look at this signature makes it clear that we should check the array for emptiness before calling such a function.

Why did I use Rust as the example and not good old C#? Because C#’s type system is not powerful enough to guarantee this behavior. Here is a function that fails the test:

T Foo<T>(List<T> list, T value) => default(T);

I would like to rely on the type system in C#, but I can’t: I have to go look at the function’s implementation. And the moment we go look at the implementation, we lose the main advantage that high-level languages give us, the ability to encapsulate complexity and hide it behind a nice interface. The less information we need to understand how a piece of code works, the easier it is to make changes, and the more reliable the software.

Do you know what the Rust function that returns the default value of T for an empty array would look like? Like this:

fn foo<T: Default>(a: &[T]) -> T { /* some body */ }

It can still panic on an empty array, but given that the author explicitly requested the ability to create a default value of the type, it is reasonable to assume that this is exactly what happens in the body. After all, the bound is extra typing, so if the author wrote it, they most likely use it, and the only reasonable use of it is to return the default value when the array is empty. And we see that requirement right in the signature. Excellent! Let me remind you that in C# you would have to go into the function body and spot the default(T) call there.

  • In the functional paradigm, in 99% of cases you only need to look at a function’s signature to understand how it works. That may sound like implausible bragging, but it’s true. The Haskell community took this idea to the absolute and built a search engine, Hoogle, that lets you search library functions by signature. And it works great.

For example, (a -> Bool) -> [a] -> [a] (a function that takes two arguments, a predicate and a list, and returns a list of elements of the same type) finds, as expected, the filter and takeWhile functions.

To drive the point home, a small riddle. What can this function be? It takes a string and returns absolutely any type.

fn bar<T>(s: String) -> T { ... } // Rust version
bar :: String -> a                -- Haskell version
Answer:

If you think about it, we have no way to construct an object whose type we know nothing about, so the only thing this function can do is never return a result, that is, produce the familiar ⊥ by looping forever or panicking. But remember that the function also takes a string argument. There is no point passing a string to an infinite loop, so you can be almost certain this function panics with it:

fn bar<T>(s: String) -> T {
    panic!("{}", s);
}

If you thought of reflection and constructing the type at runtime, that is in principle a possible answer (although neither Rust nor Haskell has that kind of reflection), but then it is unclear why the string parameter is needed. Still, with some effort you can imagine such a function, so if that was your answer, feel free to award yourself a point: it is a possible outcome in languages that allow it.

The skill of inferring what a function does from its signature is very useful, because you don’t need to dive into function bodies to understand what they can and cannot do. Even if foo from the example above is 1000 lines long, it is still obliged to return either one of the elements of the passed array or the second argument. There are no other options, and you don’t need to read 1000 lines to know it. You know it just by looking at the signature.

Can a purely functional language do something useful?

This question has bothered me ever since I learned about functional languages. “Damn,” I thought, “but I need to query a database, make an HTTP request, write to the console after all. A pure language doesn’t allow any of that. It must really be suitable only for factorials.”

As it turns out, a pure FP language by itself really can’t do any of this. But then some smart folks figured out how to get around it. They said: “Okay, the program cannot perform dirty actions. But what if we separate creating a description of a computation from interpreting it (just like in our cafe example)? Then the whole program stays pure, and the runtime that executes it does all the dirty work!”

  • What does it look like? Take, for example, the IO type, responsible for interacting with the outside world. It is the same kind of thing as our Charge from the example above, but instead of describing a card charge it describes input/output. By itself IO does nothing: if we write print “Hello world” somewhere and don’t use it, nothing happens in Haskell. But if we write main = print “Hello world”, the text magically appears on the screen. How?

The trick is that the Haskell runtime interprets this IO. All the described actions happen outside the main function: out of our entire program we assemble one giant state machine, which the runtime then starts interpreting. The runtime is allowed to do “dirty” things: query the database, print to the screen, whatever. But at the level of our code, we never perform anything ourselves.

If we want to query the database in Haskell, we create a “go to the database” value, which by itself does nothing. But when the interpreter executing main encounters this value, it physically goes to the database.
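The same separation can be sketched in C# (every name here is hypothetical): the program only builds descriptions, and a tiny “runtime” interprets them:

// Descriptions of effects: plain data that executes nothing by itself.
abstract record Effect;
record PrintLine(string Text) : Effect;
record QueryDb(string Sql) : Effect;

static class Runtime
{
    // The single impure place: the interpreter that runs the descriptions.
    public static void Run(IEnumerable<Effect> program)
    {
        foreach (var effect in program)
            switch (effect)
            {
                case PrintLine p: Console.WriteLine(p.Text); break;
                case QueryDb q: /* actually talk to the database here */ break;
            }
    }
}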

To use an analogy: a Haskell program is an algorithm written on a piece of paper, and the runtime is a robot that executes it. By itself the paper does nothing; it just lies there. From inside the algorithm we cannot perform anything, we can only produce another piece of paper with a different set of commands. Until the robot comes along to interpret our notes, the paper remains completely inert.

I may have only confused you with this analogy, so let’s look at an example. Here is a Rust program:

fn main() {
    let _ = print!("Hello ");
    println!("world!");
}

It prints “Hello world!”. Now let’s write a similar program in Haskell:

main :: IO ()
main = do
  let _ = print "Hello "
  print "world!"

It prints only “world!”. The difference in behavior between these two programs is the quintessential difference between a pure and an impure program. In Haskell we created a “print Hello” descriptor but never used it; it was never interpreted, and nothing appeared on the screen. From main we returned only the descriptor that prints “world!”, and that is the one that was executed. In Rust, on the other hand, the call to print! is itself an action, and we cannot cancel it or transform it in any way.

It is this ability to work with effects as values (to throw away the fact that we wanted to print something) that makes life much simpler and makes bugs like the one I showed in the first section impossible. When people talk about “effect control in FP”, this is exactly what they mean. Looking ahead: you can describe a function’s effects in the style of “this function writes to the database (and only to that table), uses HTTP (but only through this proxy, and only to this host), and may write logs and read configs.” All of this is checked at build time, and an attempt to hit the wrong host, or to read a config without declaring that possibility in the signature, becomes a compile-time error.

Conclusion

As you can see, the whole opposition of OOP and FP is entirely artificial. You can write in either style in the same language, and in principle even combine them. The real question is whether the language encourages a given style or resists it. For example, writing object-oriented ANSI C is possible but very painful, while in Java it’s easy. Writing purely functional Java, by contrast, is tough, while in Scala or Haskell it’s easy. So really there are two tools: one widespread and supported by many languages, the other more interesting in a whole range of properties but not supported everywhere. And then it is your choice as a developer which tool suits the situation best.

  • Personally, I see a lot of advantages in the functional paradigm when it comes to maintaining code. I am thoroughly tired of the fact that swapping two unrelated lines in one place can break something in a completely third place. I’m tired of configuring mocks and DI. I don’t want to catch runtime errors like “the method was not mocked” / “the type was not registered” / “…”; after all, that is not why I chose a statically typed language.

Of course, FP is not a silver bullet; it has its own limitations and its own room to grow. But in my opinion it is much more interesting than the current mainstream approaches. The “goodies” of FP languages, lambdas, pattern matching, ADTs and the rest, stopped being surprising in mainstream languages long ago. But all of that is just the husk; it becomes a really powerful tool only in combination with the most important idea of FP: the idea of referential transparency.