Tail recursion (87)

1 Name: #!/usr/bin/anon 2006-02-21 02:27 ID:Gi6DzwCb

Been thinking about tail recursion lately. I think compilers should automatically make function definitions like this tail recursive:

fac 0 = 1
fac n | n > 0 = n * fac (n-1)

They can do this by noting, each time the function recurs, that the only work left after the recursive call returns is a multiplication; by the second recursion, two multiplications are pending. Those two can be simplified into one right then and there. Like this:

fac 4:       (fac 4 = 4 * fac 3) =>
(4*) (fac 3 = 3 * fac 2) =>
(4*) . (3*) (fac 2 = 2 * fac 1) =>
(12*) (fac 2 = 2 * fac 1) =>
(12*) . (2*) (fac 1 = 1 * fac 0) =>
(24*) (fac 1 = 1 * fac 0) =>
(24*) . (1*) (fac 0 = 1) => 24

This can compute factorials of arbitrarily large n while using constant stack space. The big deal here is that the compiler has to know when it's able to simplify two composed functions into one; in this case, it has to know that a*(b*c) = (a*b)*c, i.e. that * is associative.
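
To make the bookkeeping explicit, here's the idea written out in Haskell itself (just my sketch of what the compiler would be doing, not something any compiler actually emits): carry the pending work as a composed continuation. Without the simplification step the continuation grows by one closure per call; with associativity, it collapses to a single number as it goes, as in the trace above.

facK :: (Integer -> Integer) -> Integer -> Integer
facK k 0 = k 1                       -- apply all the pending multiplications
facK k n = facK (k . (n *)) (n - 1)  -- tail call, but k grows by one closure per step

fac :: Integer -> Integer
fac = facK id

Comments?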

2 Name: #!/usr/bin/anon 2006-02-21 02:29 ID:Heaven

Damn, I was trying to keep that short so it would all fit on the index page. I guess there's a penalty line for each code block.

3 Name: #!/usr/bin/anon 2006-02-21 13:03 ID:Heaven

Nobody uses recursion and compilers at the same time, so the idea is fairly meaningless!

4 Name: #!/usr/bin/anon 2006-02-21 15:13 ID:Heaven

>>3 is utter nonsense.

5 Name: #!/usr/bin/anon 2006-02-21 16:53 ID:Gi6DzwCb

>>3
Maybe I didn't provide enough context. I suppose recursion is not used very much in languages like C, C++, ... I had in mind functional languages, more like Lisp and Scheme, and particularly Haskell, which my example code is designed to look like.

Often in these languages, a routine like factorial will be optimized by adding an accumulator, like this:

facAcc a 0 = a
facAcc a n = facAcc (n*a) (n-1)
fac = facAcc 1

This allows the use of tail recursion, which is like "goto with arguments", so that it doesn't eat up stack space. But I don't think adding an accumulator is in the spirit of declarative programming, since it tells how, rather than what, to compute. Hence, the proposal in >>1.

6 Name: #!/usr/bin/anon 2006-02-21 21:37 ID:Heaven

But as I said, nobody compiles Lisp or Scheme.

Also, nobody uses Haskell.

8 Name: #!/usr/bin/anon 2006-02-22 13:44 ID:Heaven

>>6

Right, and I'm pretty sure nobody uses the first two.

I guess there's one person out there who uses Haskell, though!

9 Name: #!/usr/bin/anon 2006-02-22 13:45 ID:Heaven

Also I meant to say >>7, but didn't because I can't count any further than three! Sorry!

10 Name: dmpk2k!hinhT6kz2E 2006-02-22 20:49 ID:Heaven

I think >>6 was joking.

11 Name: #!/usr/bin/anon 2006-02-23 06:30 ID:YvFqDFYd

Proper tail recursion, including mutual tail recursion, is quite simple when you have continuations. Perl 6's VM, Parrot, is based mostly on continuations.

Continuations make tail recursion optimisation, exception handling, and co-routines quite easy to implement, and as such, Perl 6 will have tail recursion optimisation. :)
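
To illustrate in the thread's own terms (a Haskell sketch of the style, nothing to do with Parrot's actual internals): rewrite fac in continuation-passing style and every call becomes a tail call; the "stack" lives in the continuation instead.

facC :: Integer -> (Integer -> r) -> r
facC 0 k = k 1
facC n k = facC (n - 1) (\r -> k (n * r))  -- every call is in tail position

fac :: Integer -> Integer
fac n = facC n id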

Why do I care? I dunno. Tail recursion is stupid, recursion is stupid, and the stack is a precious resource, not something to throw away willy-nilly.

12 Name: #!/usr/bin/anon 2006-02-23 09:23 ID:MWxl9wNd

What are your thoughts on recursion on Itanium, then? Not only does it have multiple huge stacks, but it has a metric assload of registers to use instead of stack space for args.

13 Name: #!/usr/bin/anon 2006-02-23 13:13 ID:Heaven

Pretty much no matter what you do, replacing a loop with recursion will make your memory usage go from O(1) to O(N). As >>1 shows, in special cases you can apply optimizations to get back to the same performance as a loop.

It's incredibly wasteful and silly.
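
For instance (my own toy example, not anything from a real codebase): summing a list with plain recursion keeps a stack frame per element, while the tail-recursive version is a loop in disguise.

sumR :: [Integer] -> Integer
sumR []     = 0
sumR (x:xs) = x + sumR xs   -- the (+) is still pending after the call: O(N) stack

sumT :: [Integer] -> Integer
sumT = go 0
  where go acc []     = acc
        go acc (x:xs) = let acc' = acc + x in acc' `seq` go acc' xs   -- strict tail call: O(1)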

14 Name: #!/usr/bin/anon 2006-02-23 15:14 ID:MWxl9wNd

Ok, you do your octree traversals iteratively then. I'll write them in 8 lines recursively.
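
Something like this, give or take the exact data type (a sketch, not anyone's production code):

data Octree a = Leaf a | Node [Octree a]   -- eight children per Node in practice

visit :: (a -> IO ()) -> Octree a -> IO ()
visit f (Leaf x)  = f x
visit f (Node cs) = mapM_ (visit f) cs   -- the recursion follows the tree's shape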

15 Name: #!/usr/bin/anon 2006-02-23 16:31 ID:Heaven

I don't know about you, but I write for loops a hell of a lot more often than I write octree traversals. And you know what? I can still do my octree traversals recursively even if I do all my for loops iteratively!

"Best tool for the job", maybe? Possibly?

16 Name: #!/usr/bin/anon 2006-02-23 22:25 ID:MWxl9wNd

>>15 ahahah you mean people would actually use logic instead of blindly following rhetoric? Oh how naive you are.

I was responding to "recursion is stupid" and "the stack is a precious resource".

17 Name: 15 2006-02-23 23:01 ID:Heaven

>>16

I think you missed the part where recursion is still stupid in the vast majority of cases, and an esoteric example like traversing an octree every once in a blue moon is not really a refutation.

What was being attacked was not recursion in general, but functional programming languages' insistence that recursion is the only construct one should ever use.

18 Name: #!/usr/bin/anon 2006-02-24 19:25 ID:Gi6DzwCb

>>17
Functional programming languages don't insist that. Common Lisp has its loop macro, Scheme has iterative "do" syntax, and in Haskell the way you'd define factorial in practice is

fac n = product [1..n]

although you can do it iteratively too, if you must:

fac n = result (for init next done)
  where init = (0,1)
        next (i,m) = (i+1, m * (i+1))
        done (i,_) = i == n
        result (_,m) = m
        for i n d = until d n i

Iteration is available in all these languages. It's just not the best tool for the job as often as you'd think.

19 Name: #!/usr/bin/anon 2006-02-27 09:45 ID:Heaven

>>18
that's got to be the clumsiest code I've ever seen. is haskell usually like that, or does it just look insane there because you're doing something the "wrong" way?

20 Name: dmpk2k!hinhT6kz2E 2006-02-27 10:43 ID:Heaven

Functional languages fit naturally with recursion, not iteration, so of course it'd look ugly.

There most certainly are benefits to using a functional language; it's just that most people don't need them (yet). These benefits wouldn't exist if iteration were a normal feature.

21 Name: #!/usr/bin/anon 2006-02-28 01:02 ID:YvFqDFYd

>>19

He presents two ways; the first is one line. The second is the clumsy, non-Haskellish way of doing it.

22 Name: #!/usr/bin/anon 2006-02-28 12:27 ID:Heaven

>>21
Yeah, I know. And I'm talking about the second, clumsy way, since the first example is one line (and one line is never any kind of indication of what a language looks like). I'm asking whether Haskell normally looks that nasty, or whether it only does here because he's doing something the language clearly wasn't intended to do frequently. Or, for that matter, whether he's intentionally making it more complicated than normal to "prove" how clean its recursion is.

23 Name: #!/usr/bin/anonymous : 2006-02-28 20:17 ID:qq6fgBY7

I ripped off that code from "The Evolution of a Haskell Programmer" (http://www.willamette.edu/~fruehr/haskell/evolution.html). It's an example of what someone thinking in Pascal might write.

The one-line version actually does give a pretty good indication of Haskell. Lots of things can be done in one or two lines, like finding the minimum element of a list:

minlist = foldl1 min

Don't take my word for it though, try it.

24 Name: Lulu : 2006-02-28 21:15 ID:Wlv8rAFE

y f = f (y f)

fac = y (\f n -> if n == 0 then 1 else n * f (n-1))

i like this

25 Name: #!/usr/bin/anonymous : 2006-03-01 16:09 ID:gTAQ7/+t

>>23

So basically, if there is a built-in special-case construct ("product"), that will look a whole lot tidier than doing it by hand. This is hardly an argument for or against recursion, loops, functional programming, or imperative programming.

I can do the same thing in C, by using the imaginary library libproduct:

int factorial(int n) { return product(sequence(1,n)); }

Now, the actual subject of this discussion is cases where you do have to write your own loops or recursions, for large amounts of data. Not special cases where you can use syntactic sugar, nor cases where you only have to iterate or recurse a handful of times. This is a very common situation in real-world code, and my claim, at least, is that recursive languages are ill-suited for it.

26 Name: #!/usr/bin/anonymous : 2006-03-01 21:23 ID:Heaven

product = foldr (*) 1

27 Name: #!/usr/bin/anonymous : 2006-03-03 06:20 ID:O1umhTAm

And for those who don't quite get it yet:

fac n = product [1..n]

is the same as:

fac n = foldr (*) 1 [1..n]

Where 'foldr' is as essential to writing a Haskell program as 'while' is to C.
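
To see why, note that a lot of the basic list functions are just folds. Quick examples of my own (not the actual Prelude definitions):

mapF :: (a -> b) -> [a] -> [b]
mapF f = foldr (\x acc -> f x : acc) []   -- rebuild the list, transformed

lengthF :: [a] -> Int
lengthF = foldr (\_ n -> n + 1) 0         -- count the elements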

28 Name: #!/usr/bin/anonymous : 2006-03-04 14:09 ID:Heaven

So foldr (the hell is up with that name?) is syntactic sugar for a certain kind of recursive loop, and hopefully implemented without recursion internally by the language?

29 Name: #!/usr/bin/anonymous : 2006-03-06 04:15 ID:O1umhTAm

This is the implementation of foldl and foldr:

foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f z []     = z
foldl f z (x:xs) = foldl f (f z x) xs

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)

You'll note that the type signatures are slightly different, but if you don't know how to read Haskell type signatures, that'll mean very little to you.

If you're especially quick, you'll notice that it is implemented using recursion.

And this is the difference:

foldl (+) 0 [1,2,3,4,5] means (((((0+1)+2)+3)+4)+5)
foldr (+) 0 [1,2,3,4,5] means (1+(2+(3+(4+(5+0)))))

i.e. one is to the 'left' and one is to the 'right'.

in Python, foldl is spelt reduce()

30 Name: #!/usr/bin/anonymous : 2006-03-06 12:49 ID:Heaven

But is it actually implemented like that, or is it implemented internally with a normal imperative loop? Because if you do implement it like that, it uses O(N) memory for something as simple as multiplying an array, which is ridiculous.

31 Name: #!/usr/bin/anonymous : 2006-03-06 16:12 ID:qq6fgBY7

The definition of foldl is tail recursive. I don't know about foldr, but GHC and Hugs seem to optimize those sorts of things in some way. Both compute fac faster and with less memory than facAcc. I have no idea how it works.
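
If I had to guess, the trick is strictness: Data.List also exports foldl', a left fold that forces the accumulator at each step, so the tail call runs in constant space with no pile of pending thunks. Roughly (from memory, so check Data.List before quoting me):

foldl' :: (a -> b -> a) -> a -> [b] -> a
foldl' f z []     = z
foldl' f z (x:xs) = let z' = f z x in z' `seq` foldl' f z' xs

fac :: Integer -> Integer
fac n = foldl' (*) 1 [1..n]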

32 Name: #!/usr/bin/anonymous : 2006-03-06 23:52 ID:O1umhTAm

Uh, that's the implementation. Right there. I copied it out of the standard file 'Prelude.hs'. No trickery. That's the exact implementation.

33 Name: dmpk2k!hinhT6kz2E : 2006-03-07 04:09 ID:Heaven

Iterative languages aren't the only ones with optimizing compilers (or sane interpreters). The people writing these things aren't idiots.

34 Name: dmpk2k!hinhT6kz2E : 2006-03-07 04:11 ID:Heaven

s/iterative/imperative/

35 Name: #!/usr/bin/anonymous : 2006-03-07 14:26 ID:gTAQ7/+t

>>32

So it does use O(N) memory for something that could be done in O(1) with an imperative loop?

I suppose it might be able to optimize something that simple, but what about a slightly more complex recursion? You'll end up having to pray that the compiler optimizes it properly even though you wrote a horribly inefficient algorithm. Of course, you always have to trust the compiler to some extent, but being forced to write badly performing code at the algorithmic level and praying the compiler fixes it doesn't seem very wise.

36 Name: #!/usr/bin/anonymous : 2006-03-08 02:16 ID:qq6fgBY7

These statements about trusting the compiler and using efficient algorithms are misguided. Fast code requires profiling to see where the time is actually spent, and theoretically inefficient algorithms are often better in practice (example: quicksort beats radix sort on most real-world inputs).

But you're right to some degree in this case, and I do wonder how those Haskell implementations manage to optimize that.

37 Name: #!/usr/bin/anonymous : 2006-03-08 06:41 ID:Wr4EVlhE

Wait a minute.

foldl is iterative (tail recursive) in Haskell. foldr is recursive.

Can anyone show me a way to do foldr iteratively?
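
(The best I've come up with myself, for finite lists at least: reverse the input and fold left. Whether that counts as "iterative" is another question.)

foldrIter :: (a -> b -> b) -> b -> [a] -> b
foldrIter f z = foldl (flip f) z . reverse   -- foldl is the tail-recursive one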

This thread has been closed. You cannot post in this thread any longer.