Tail recursion (87)

1 Name: #!/usr/bin/anon 2006-02-21 02:27 ID:Gi6DzwCb

Been thinking about tail recursion lately. I think compilers should automatically make function definitions like this tail recursive:

fac 0 = 1
fac n | n > 0 = n * fac (n-1)

They can do this by noting, each time they recur, that the only thing to do after returning from this recursion is to multiply, or on the second recursion, to multiply twice. On the second recursion, these two multiplications can be simplified into one, right then and there. Like this:

fac 4:       (fac 4 = 4 * fac 3) =>
(4*) (fac 3 = 3 * fac 2) =>
(4*) . (3*) (fac 2 = 2 * fac 1) =>
(12*) (fac 2 = 2 * fac 1) =>
(12*) . (2*) (fac 1 = 1 * fac 0) =>
(24*) (fac 1 = 1 * fac 0) =>
(24*) . (1*) (fac 0 = 1) => 24

This can compute factorials of arbitrarily large n while using constant stack space. The big deal here is that the compiler has to know when it's able to simplify two composed functions into one; in this case, it has to know that a*(b*c) = (a*b)*c, i.e. * is associative. Comments?
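In Haskell-ish terms, what I want the compiler to produce behind the scenes is something like this (a sketch; the names are invented for illustration). The pending chain (4*) . (3*) . ... is represented by the single factor it multiplies by, which is exactly what associativity licenses:

facOpt :: Integer -> Integer
facOpt = go 1
  where
    -- go c n evaluates the pending function (c*) applied to fac n;
    -- composing (c*) . (n*) collapses to ((c*n)*) because * is
    -- associative, so the "stack" of pending work is always one number.
    go c 0 = c
    go c n | n > 0 = go (c * n) (n - 1)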

2 Name: #!/usr/bin/anon 2006-02-21 02:29 ID:Heaven

Damn, I was trying to keep that short so it would all fit on the index page. I guess there's a penalty line for each code block.

3 Name: #!/usr/bin/anon 2006-02-21 13:03 ID:Heaven

Nobody uses recursion and compilers at the same time, so the idea is fairly meaningless!

4 Name: #!/usr/bin/anon 2006-02-21 15:13 ID:Heaven

>>3 is utter nonsense.

5 Name: #!/usr/bin/anon 2006-02-21 16:53 ID:Gi6DzwCb

>>3
Maybe I didn't provide enough context. I suppose recursion is not used very much in languages like C, C++, ... I had in mind functional languages like Lisp and Scheme, and particularly Haskell, which my example code is designed to look like.

Often in these languages, a routine like factorial will be optimized by adding an accumulator, like this:

facAcc a 0 = a
facAcc a n = facAcc (n*a) (n-1)
fac = facAcc 1

This allows the use of tail recursion, which is like "goto with arguments", so that it doesn't eat up stack space. But I don't think adding an accumulator is in the spirit of declarative programming, since it tells how, rather than what, to compute. Hence, the proposal in >>1.

6 Name: #!/usr/bin/anon 2006-02-21 21:37 ID:Heaven

But as I said, nobody compiles Lisp or Scheme.

Also, nobody uses Haskell.

8 Name: #!/usr/bin/anon 2006-02-22 13:44 ID:Heaven

>>6

Right, and I'm pretty sure nobody uses the first two.

I guess there's one person out there who uses Haskell, though!

9 Name: #!/usr/bin/anon 2006-02-22 13:45 ID:Heaven

Also I meant to say >>7, but didn't because I can't count any further than three! Sorry!

10 Name: dmpk2k!hinhT6kz2E 2006-02-22 20:49 ID:Heaven

I think >>6 was joking.

11 Name: #!/usr/bin/anon 2006-02-23 06:30 ID:YvFqDFYd

Proper tail recursion, including mutual tail recursion, is quite simple when you have continuations. Perl6's VM, 'parrot', is based mostly on continuations.

Continuations make tail recursion optimisation, exception handling, and co-routines quite easy to implement, and as such, perl6 will have tail recursion optimisation. :)
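For illustration (a rough sketch, names invented), continuation-passing style turns every call into a tail call, even across mutually recursive functions:

-- assumes n >= 0; in CPS nothing remains to be done after each call,
-- so an implementation with tail-call optimisation needs no stack growth
evenK, oddK :: Int -> (Bool -> r) -> r
evenK 0 k = k True
evenK n k = oddK (n - 1) k
oddK  0 k = k False
oddK  n k = evenK (n - 1) k
-- evenK 10 id == True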

Why I care? I dunno. Tail recursion is stupid, recursion is stupid, the stack is a precious resource, not something to throw away willy nilly.

12 Name: #!/usr/bin/anon 2006-02-23 09:23 ID:MWxl9wNd

What are your thoughts on recursion on Itanium, then? Not only does it have multiple huge stacks, but it has a metric assload of registers to use instead of stack space for args.

13 Name: #!/usr/bin/anon 2006-02-23 13:13 ID:Heaven

Pretty much no matter what you do, replacing a loop with recursion will make your memory usage go from O(1) to O(N). As >>1 shows, in special cases you can apply optimizations to get back to the same performance as a loop.

It's incredibly wasteful and silly.

14 Name: #!/usr/bin/anon 2006-02-23 15:14 ID:MWxl9wNd

Ok, you do your octree traversals iteratively then. I'll write them in 8 lines recursively.

15 Name: #!/usr/bin/anon 2006-02-23 16:31 ID:Heaven

I don't know about you, but I write for loops a hell of a lot more often than I write octree traversals. And you know what? I can still do my octree traversals recursively even if I do all my for loops iteratively!

"Best tool for the job", maybe? Possibly?

16 Name: #!/usr/bin/anon 2006-02-23 22:25 ID:MWxl9wNd

>>15 ahahah you mean people would actually use logic instead of blindly following rhetoric? Oh how naive you are.

I was responding to "recursion is stupid" and "the stack is a precious resource".

17 Name: 15 2006-02-23 23:01 ID:Heaven

>>16

I think you missed the part where recursion is still stupid in the vast majority of cases, and an esoteric example like traversing an octree every once in a blue moon is not really a refutation.

What was being attacked was not recursion in general, but functional programming languages' insistence that recursion is the only construct one should ever use.

18 Name: #!/usr/bin/anon 2006-02-24 19:25 ID:Gi6DzwCb

>>17
Functional programming languages don't insist that. Common Lisp has its loop macro, Scheme has iterative "do" syntax, and in Haskell the way you'd define factorial in practice is

fac n = product [1..n]

although you can do it iteratively too, if you must:

fac n = result (for init next done)
        where init         = (0,1)
              next (i,m)   = (i+1, m * (i+1))
              done (i,_)   = i == n
              result (_,m) = m
              for i n d    = until d n i

Iteration is available in all these languages. It's just not the best tool for the job as often as you'd think.

19 Name: #!/usr/bin/anon 2006-02-27 09:45 ID:Heaven

>>18
that's got to be the clumsiest code I've ever seen. is haskell usually like that, or does it just look insane there because you're doing something the "wrong" way?

20 Name: dmpk2k!hinhT6kz2E 2006-02-27 10:43 ID:Heaven

Functional languages fit naturally with recursion, not iteration, so of course it'd look ugly.

There most certainly are benefits to using a functional language, it's just that most people don't need them (yet). These benefits wouldn't exist if iteration was a normal feature.

21 Name: #!/usr/bin/anon 2006-02-28 01:02 ID:YvFqDFYd

>>19

He presents two ways; the first is one line. The second is the clumsy, non-Haskellish way of doing it.

22 Name: #!/usr/bin/anon 2006-02-28 12:27 ID:Heaven

>>21
Yeah I know. And I'm talking about the second, clumsy way, since the first example is one line (and one line is never any kind of indication of what a language looks like). And I'm asking if Haskell normally looks that nasty, or if it's only because he's doing something the language clearly wasn't intended to do frequently. Or, for that matter, if he's intentionally making it more complicated than normal to "prove" how clean its recursion is.

23 Name: #!/usr/bin/anonymous : 2006-02-28 20:17 ID:qq6fgBY7

I ripped off that code from "The Evolution of a Haskell Programmer" (http://www.willamette.edu/~fruehr/haskell/evolution.html). It's an example of what someone thinking in Pascal might write.

The one-line version actually does give a pretty good indication of Haskell. Lots of things can be done in one or two lines, like finding the minimum element of a list:

minlist = foldl1 min

Don't take my word for it though, try it.

24 Name: Lulu : 2006-02-28 21:15 ID:Wlv8rAFE

y f = f (y f)

fac = y (\f n -> if (n==0) then 1 else n * f (n-1))

i like this

25 Name: #!/usr/bin/anonymous : 2006-03-01 16:09 ID:gTAQ7/+t

>>23

So basically, if there is a built-in special-case construct ("product"), that will look a whole lot tidier than doing it by hand. This is hardly an argument for or against recursion, loops, functional programming, or imperative programming.

I can do the same thing in C, by using the imaginary library libproduct:

int factorial(int n) { return product(sequence(1,n)); }

Now, the actual subject of this discussion is cases where you do have to write your own loops or recursions, for large amounts of data. Not special cases where you can use syntactic sugar, nor cases where you only have to iterate or recurse a handful of times. This is a very common situation in real-world code, and my claim, at least, is that recursive languages are ill suited for it.

26 Name: #!/usr/bin/anonymous : 2006-03-01 21:23 ID:Heaven

product = foldr (*) 1

27 Name: #!/usr/bin/anonymous : 2006-03-03 06:20 ID:O1umhTAm

And for those who don't quite get it yet:

fac n = product [1..n]

is the same as:

fac n = foldr (*) 1 [1..n]

Where 'foldr' is as essential to writing a haskell program as 'while' is to C.

28 Name: #!/usr/bin/anonymous : 2006-03-04 14:09 ID:Heaven

So foldr (the hell is up with that name?) is syntactic sugar for a certain kind of recursive loop, and hopefully implemented without recursion internally by the language?

29 Name: #!/usr/bin/anonymous : 2006-03-06 04:15 ID:O1umhTAm

This is the implementation of foldl and foldr:

foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f z []     = z
foldl f z (x:xs) = foldl f (f z x) xs

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)

You'll note that the type signatures are slightly different, but if you don't know how to read haskell type signatures, that'll mean very little to you.

If you're especially quick, you'll notice that it is implemented using recursion.

And this is the difference:

foldl (+) 0 [1,2,3,4,5] means (((((0+1)+2)+3)+4)+5)
foldr (+) 0 [1,2,3,4,5] means (1+(2+(3+(4+(5+0)))))

i.e. one is to the 'left' and one is to the 'right'.

in python, foldl is spelt reduce()
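A quick way to see that the order matters: try an operator that isn't associative.

foldl (-) 0 [1,2,3]   -- ((0-1)-2)-3 == -6
foldr (-) 0 [1,2,3]   -- 1-(2-(3-0)) ==  2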

30 Name: #!/usr/bin/anonymous : 2006-03-06 12:49 ID:Heaven

But is it actually implemented like that, or is it implemented internally with a normal imperative loop? Because if you do implement it like that, it uses O(N) memory for something as simple as multiplying an array, which is ridiculous.

31 Name: #!/usr/bin/anonymous : 2006-03-06 16:12 ID:qq6fgBY7

The definition of foldl is tail recursive. I don't know about foldr, but GHC and Hugs seem to optimize those sorts of things in some way. Both compute fac faster and with less memory than facAcc. I have no idea how it works.

32 Name: #!/usr/bin/anonymous : 2006-03-06 23:52 ID:O1umhTAm

Uh, that's the implementation. Right there. I copied it out of the standard file 'Prelude.hs'. No trickery. That's the exact implementation.

33 Name: dmpk2k!hinhT6kz2E : 2006-03-07 04:09 ID:Heaven

Iterative languages aren't the only ones with optimizing compilers (or sane interpreters). The people writing these things aren't idiots.

34 Name: dmpk2k!hinhT6kz2E : 2006-03-07 04:11 ID:Heaven

s/iterative/imperative/

35 Name: #!/usr/bin/anonymous : 2006-03-07 14:26 ID:gTAQ7/+t

>>32

So it does use O(N) memory for something that could be done in O(1) with an imperative loop?

I suppose it might be able to optimize something that simple, but what about a slightly more complex recursion? You'll end up having to pray that the compiler will optimize it properly even though you wrote a horribly inefficient algorithm. Of course, you always have to trust the compiler to some extent to optimize properly, but forcing you to write badly performing code on the algorithmic level, and praying the compiler fixes it, doesn't seem very wise.

36 Name: #!/usr/bin/anonymous : 2006-03-08 02:16 ID:qq6fgBY7

These statements about trusting the compiler and using efficient algorithms are misguided. Fast code requires profiling to see where the time is actually spent, and theoretically inefficient algorithms are often better in practice (example: quicksort beats radix sort on most real-world inputs).

But you're right to some degree in this case, and I do wonder how those Haskell implementations manage to optimize that.

37 Name: #!/usr/bin/anonymous : 2006-03-08 06:41 ID:Wr4EVlhE

Wait a minute.

foldl is iterative in haskell. foldr is recursive.

Can anyone show me a way to do foldr iteratively?

38 Name: #!/usr/bin/anonymous : 2006-03-08 14:44 ID:gTAQ7/+t

>>36

Well, exactly, the way to get fast code is to do proper testing to see what works best, working from your knowledge of algorithms and optimization, which is why you definitely don't want to intentionally write a horrible algorithm because the language is designed to only use horrible algorithms, and then hope the compiler will clean up your mess for you.

39 Name: #!/usr/bin/anonymous : 2006-03-08 17:41 ID:qq6fgBY7

>>37

foldr f z ls = foldl (flip f) z (reverse ls)

For reference, flip and reverse are defined as:

flip f x y = f y x
reverse = foldl (flip (:)) []

40 Name: #!/usr/bin/anonymous : 2006-03-08 19:54 ID:Wr4EVlhE

Both foldl and foldr in the Prelude are O(N) time. foldl is O(1) space; foldr is O(N) space.

>>39
You reversed a list, which takes O(N) time and space. Then did foldl on it which takes O(N) time, O(1) space.

The foldr in the Prelude goes forward in the list, creating a huge function, which is O(N) time and space. Then it goes backwards to simplify the function, which takes O(N) time.

Your foldr is basically the same as the one in the Prelude. The difference is that yours has two lists and traverses both of them once. The Prelude's has one list and a huge function. It traverses through the list and simplifies the function. I don't know what goes on in the compiler.

41 Name: dmpk2k!hinhT6kz2E : 2006-03-09 02:02 ID:Heaven

> you definitely don't want to intentionally write a horrible algorithm because the language is designed to only use horrible algorithms

This is leaving the realm of the reasonable, and entering the realm of religion. Recursion does not imply horrible algorithm. In fact, iteration is (provably) just one case of recursion.

Those problems that are iterable are also easy to optimize in functional languages using tail recursion. No active frame is created on the stack for either, unless the language implementation is braindead.

42 Name: #!/usr/bin/anonymous : 2006-03-09 14:35 ID:gTAQ7/+t

> In fact, iteration is (provably) just one case of recursion.

Does this proof take into account the memory usage of stack frames in recursion?

> Those problems that are iterable are also easy to optimize in functional languages using tail recursion.

This statement might or might not be true, but it is not trivially obvious that it should be. Please back up this claim.

43 Name: #!/usr/bin/anonymous : 2006-03-09 22:54 ID:JgtJdHur

>>42

Do you know what tail recursion is?

44 Name: dmpk2k!hinhT6kz2E : 2006-03-10 01:25 ID:Heaven

> Does this proof take into account the memory usage of stack frames in recursion?

Provably the same means that they are identical. If you can optimize out the use of stacks in iteration, you can do the same for the same case in recursion since it's the same thing.

> Please back up this claim.

For once I think "go google it" is apt. If you're going to argue a point, at least research it a little. Here's the first three links returned in a google for "tail recursion", all which include examples:

First link: http://en.wikipedia.org/wiki/Tail_recursion
In computer science, tail recursion is a special case of recursion that can be easily transformed into an iteration. It is used in functional programming languages where the declarative approach and explicit handling of state emphasize recursive functions that rapidly fill the stack. Replacing recursion with iteration, either manually or automatically, drastically decreases the amount of stack space used and improves efficiency.

Second link: http://www.cis.ksu.edu/VirtualHelp/Info/develop/cmu-user.info.Tail_Recursion.html
Tail recursion is interesting because it is form of recursion that can be implemented much more efficiently than general recursion. In general, a recursive call requires the compiler to allocate storage on the stack at run-time for every call that has not yet returned. This memory consumption makes recursion unacceptably inefficient for representing repetitive algorithms having large or unbounded size. Tail recursion is the special case of recursion that is semantically equivalent to the iteration constructs normally used to represent repetition in programs. Because tail recursion is equivalent to iteration, tail-recursive programs can be compiled as efficiently as iterative programs.

Third link: http://www.nist.gov/dads/HTML/tailrecursn.html
Some examples in C. The others all have examples too, but this one is in a language you're probably familiar with.

45 Name: #!/usr/bin/anonymous : 2006-03-10 03:40 ID:Wr4EVlhE

This is pretty stupid, fyi. Tail recursion is basically the same as a while loop if the compiler does it right. In functional languages (they usually don't have while loops), tail recursion is called iteration (ex: foldl) and is different from actual recursion (ex: foldr).

46 Name: #!/usr/bin/anonymous : 2006-03-10 12:53 ID:gTAQ7/+t

Yes, tail recursion can be optimized. But tail recursion is a very limited subset of all possible recursions, and it is also a very limited subset of things that can be implemented in O(1) memory with an iterative loop. It's not obvious that all the things you can do with an iterative loop can be transformed into tail recursion in O(1) memory.

This very thread contains this example: >>37,39,40. I don't see a resolution for how to do foldr with tail recursion. With a loop, I can trivially change the order I traverse the elements in, including some pretty non-trivial orderings like taking the odd elements first and the evens second.

What I asked for is some proof that all this, and not just a subset, can be transformed into tail recursion in O(1) memory and O(n) time.

47 Name: dmpk2k!hinhT6kz2E : 2006-03-10 13:09 ID:Heaven

> But tail recursion is a very limited subset of all possible recursions

Yes.

> and it is also a very limited subset of things that can be implemented in O(1) memory with an iterative loop.

No. Please read the links I provided.

48 Name: #!/usr/bin/anonymous : 2006-03-10 14:05 ID:Heaven

I did read them, if not all that thoroughly, but I didn't see a general proof of this. For instance, how do I, with recursion, do:

  1. >>37,39,40?
  2. Take a list of the numbers 0..n, and shuffle them in random order in O(n) time and O(n) memory? (Preferably without using any more memory than the list already uses, and without running through it more than one time.)
  3. Perform a Shell Sort? (http://en.wikipedia.org/wiki/Shell_sort)
  4. Perform a Pigeonhole Sort? (http://en.wikipedia.org/wiki/Pigeonhole_sort)

Maybe I am getting ahead of myself since these are mostly built around array indexing, but that's pretty much what iterative loops are all about anyway.

49 Name: #!/usr/bin/anonymous : 2006-03-10 21:52 ID:Heaven

There is no way to do foldr in O(1) space and O(N) time.

50 Name: #!/usr/bin/anonymous : 2006-03-10 22:54 ID:JgtJdHur

Ugh...

Okay. An iterative loop:

while (x) {
    do y;
    /* hold memory in z for the next iteration */
}

Now, since you're using a bounded amount of space for z, you can imagine that all the information in z is held in a struct.

Tail recursion:

function tail(input, z) {
    if (done) return answer;
    else return tail(input for next iteration, z for next iteration);
}

When you do the recursive call, all the information required is in the arguments, and you can drop everything else, thus using the same stack area.
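If it helps, here is the same schema sketched in Haskell (illustrative only; 'loop' and 'facLoop' are names I made up):

-- the loop state z becomes an argument; each step is a tail call, so an
-- implementation with TCO reuses one frame, exactly like the while loop
loop :: (z -> Bool) -> (z -> z) -> z -> z
loop done step z
  | done z    = z
  | otherwise = loop done step (step z)

-- factorial over the state (i, acc), counting i down from n:
facLoop :: Integer -> Integer
facLoop n = snd (loop (\(i, _) -> i == 0) (\(i, acc) -> (i - 1, acc * i)) (n, 1))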

51 Name: #!/usr/bin/anonymous : 2006-03-10 23:05 ID:JgtJdHur

Oh, and if you want to be pedantic about the matter, there is no such thing as O(1) space, because you need log N bits to hold the pointer to the input in memory.

52 Name: #!/usr/bin/anonymous : 2006-03-11 03:21 ID:Wr4EVlhE

>>51
wot? a pointer is just a number. it doesn't grow or shrink.

53 Name: #!/usr/bin/anonymous : 2006-03-11 03:34 ID:Heaven

>>46
You cannot write foldr with a simple imperative loop because the lists in question are one-way and cannot be traversed in reverse order. Similarly, the problems listed in >>48 cannot be solved efficiently in Haskell, not because Haskell forbids imperative loops, but because lists in Haskell are one-way and immutable.

54 Name: #!/usr/bin/anonymous : 2006-03-11 04:48 ID:Heaven

>>53
And some other data structure can be used instead, right?

55 Name: #!/usr/bin/anonymous : 2006-03-11 06:57 ID:Heaven

Yes, but there are some restrictions, because in Haskell basically all data structures are immutable. When you want to modify an array, you must make a copy of the entire array. You can use mutable arrays, but then you must explicitly sequence their manipulations using monads. There is also a mechanism that presents an immutable interface on top of a mutable array, but the time behavior of the resulting arrays is somewhat complicated.

56 Name: #!/usr/bin/anonymous : 2006-03-11 20:54 ID:LH3uSuz4

>>50-51

I don't think you understand what is being discussed. Nobody is confused about what tail recursion means. We are discussing where it can be applied, and where not.

>>53-55

So basically my complaint in >>35 is justified - the language does not allow you to implement certain efficient algorithms, due to its inherent limitations?

What if you assume you have functions like "readarray" and "writearray" to do random access to memory arrays? Which points in >>48 could be implemented efficiently then?

57 Name: #!/usr/bin/anonymous : 2006-03-12 00:52 ID:Heaven

>>56
Your complaint on >>35 is not justified. foldr on a singly linked list cannot be done in O(1), imperative language or not.

58 Name: #!/usr/bin/anonymous : 2006-03-12 04:42 ID:/8YW0amH

>>56

WTF? >>50 answers all your iterative problems.

59 Name: #!/usr/bin/anonymous : 2006-03-12 05:44 ID:Heaven

>So basically my complaint in >>35 is justified - the language does not allow you to implement certain efficient algorithms, due to its inherent limitations?

No. Using mutable arrays, you can implement any algorithm that can be implemented using imperative languages. Lists are commonly used because they are convenient, but whenever performance matters, you can use mutable arrays to optimize your program.

60 Name: #!/usr/bin/anonymous : 2006-03-12 12:59 ID:LH3uSuz4

>>57

O(1) memory.

>>58

>>50 doesn't even answer >>37,39,40, the easiest of the problems in >>48. >>53 already agreed that it's impossible, when using lists.

>>59

How would that be done in practice, then? I have no idea what "you must explicitly sequence their manipulations using monads" means.

61 Name: #!/usr/bin/anonymous : 2006-03-12 16:01 ID:qq6fgBY7

>>60
foldr can't be done in O(1) memory and O(n) time in any language, because the lists are singly-linked.

I suppose you could do it with O(1) memory if you didn't mind using O(n^2) time. Evaluate the function on elements n-1 and n, then re-search from the beginning of the list for element n-2, then evaluate the function on n-2 and the previous result, re-search for element n-3, and so on.

If you had a way to specify (or better yet, allow the compiler to infer) that a binary function was commutative and associative, then foldr could be implemented the same way as foldl for such functions. I don't know how much use this would be in practice.
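Sketched out (a toy; 'foldrSlow' is a made-up name, and "O(1) space" here ignores the list itself and Haskell's laziness):

foldrSlow :: (a -> b -> b) -> b -> [a] -> b
foldrSlow f z xs = go (length xs - 1) z
  where
    -- walk indices from the last element back to the first; xs !! k
    -- re-traverses from the front each time, hence O(n^2) time total
    go k acc
      | k < 0     = acc
      | otherwise = go (k - 1) (f (xs !! k) acc)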

62 Name: #!/usr/bin/anonymous : 2006-03-12 16:46 ID:LH3uSuz4

>>61

Which is why nobody would use singly-linked lists as some sort of generic data type, due to them being far too limited. I guess the problem isn't so much using a functional language as using singly-linked lists in the first place.

63 Name: #!/usr/bin/anonymous : 2006-03-13 02:50 ID:Heaven

oh lawd

64 Name: #!/usr/bin/anonymous : 2006-03-13 07:21 ID:Heaven

>How would that be done in practice, then? I have no idea what "you must explicitly sequence their manipulations using monads" means.

First, two primitive operations on mutable arrays are provided.

readArray :: STArray s i e -> i -> ST s e
writeArray :: STArray s i e -> i -> e -> ST s ()

Here, STArray s i e is something like a pointer to an array, i is the index type of the array (usually an integer), e is the type of the array elements, and ST s foo is an action whose result is of type foo. For example,

writeArray arr 4 'c'

is an action that, when performed, writes 'c' to the 4th element of the array arr.

Next, you can sequentially combine these actions, producing another action. For example,

writeArray arr 4 'c' >> writeArray arr 5 'd'

is an action that, when performed, writes 'c' to the 4th element and then writes 'd' to the 5th element of the array arr. Likewise, the following is a function that produces an action that swaps the ith and jth elements of a specified array.

swap arr i j =
    readArray arr i >>= \x ->
    readArray arr j >>= \y ->
    writeArray arr i y >>
    writeArray arr j x

Actions that can be combined like the above are called monads.
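And here is how such an action actually gets run, assuming GHC's Data.Array.ST module (the example array and names are mine; swap is the function above, rewritten in do-notation):

import Control.Monad.ST (ST)
import Data.Array (Array, elems)
import Data.Array.ST (STArray, newListArray, runSTArray)
import Data.Array.MArray (readArray, writeArray)

swap :: STArray s Int e -> Int -> Int -> ST s ()
swap arr i j = do
    x <- readArray arr i
    y <- readArray arr j
    writeArray arr i y
    writeArray arr j x

-- runSTArray builds the mutable array, performs the actions, and
-- freezes the result into an ordinary immutable Array
swapped :: Array Int Char
swapped = runSTArray $ do
    arr <- newListArray (0, 3) "abcd"
    swap arr 0 3
    return arr
-- elems swapped == "dbca"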

65 Name: #!/usr/bin/anonymous : 2006-03-13 09:44 ID:Heaven

>>60

I give up.

66 Name: #!/usr/bin/anonymous : 2006-03-13 12:34 ID:LH3uSuz4

>>64

Well, I see, even though I don't quite get the significance of monads. I suppose I will have to change my opinion to thinking that the main problem is not the recursion but the use of singly-linked lists. Why on earth is that a preferred data type?

>>65

Give up what?

67 Name: #!/usr/bin/anonymous : 2006-03-13 16:32 ID:y5a64iNe

>>66
Singly linked lists are probably preferred because they're the simplest persistent data structure.

Haskell sucks a lot at arrays. Though there is an interesting module in GHC called PArr, parallel arrays, which is like list comprehensions for arrays.

68 Name: #!/usr/bin/anonymous : 2006-03-13 19:25 ID:LH3uSuz4

>>67

Well, yeah, I've written quite a number of singly-linked lists too out of laziness because they got the job done, but I wouldn't for a second think to use them as a general data structure. They're just about the most useless data structure out there.

69 Name: #!/usr/bin/anonymous : 2006-03-13 23:54 ID:y5a64iNe

>>68
what do you propose?

70 Name: #!/usr/bin/anonymous : 2006-03-14 07:23 ID:wgzx+Fh5

>>69

I have nothing constructive to add, but 69 is very similar to tail recursion.

71 Name: #!/usr/bin/anonymous : 2006-03-14 13:15 ID:Heaven

>>69

Well, arrays, obviously. Preferably mutable ones, although I'm sure that would rub somebody's idea of language aesthetics the wrong way.

72 Name: #!/usr/bin/anonymous : 2006-03-16 02:52 ID:qq6fgBY7

>>64
Nice explanation, thanks.

>>71
See >>64. Haskell is purely functional, so nothing in it can really be mutable. The way you write a real-world program is, roughly, the function "main" evaluates to a sequence of instructions which are then run. That's what monads do; they're basically sequences of instructions, plus a strategy for combining those instructions.

I might not have explained that clearly, though. I just barely understand it all myself.

73 Name: #!/usr/bin/anonymous : 2006-03-16 12:42 ID:Heaven

> Haskell is purely functional, so nothing in it can really be mutable.

Except that >>64 does show how to use mutable arrays, so your statement is pretty silly. They're not first-class language constructs, though, which I think they ought to be, because without mutable arrays, there are large subsets of problems that you can never solve efficiently.

74 Name: #!/usr/bin/anonymous : 2006-03-16 17:42 ID:qq6fgBY7

>>73
Not really, though. What >>64 shows is a way to produce instructions, and, when sequenced and eventually bound to "main" in the Main module and executed, those instructions can use mutable arrays (or read input, or produce output, or whatever else they want to do). But evaluating expressions can't have side effects.
Note the return types of readArray and writeArray:

readArray :: STArray s i e -> i -> ST s e
writeArray :: STArray s i e -> i -> e -> ST s ()

Most of the time, you can think of these functions as doing something. But what's really going on is that they are pure functions (no side effects; outputs completely determined by inputs) with return values that are instructions for things to do in the ST monad, as indicated by the "ST s e" and "ST s ()" in the return types.

In a sense, a Haskell program is an expression that evaluates to an imperative program, which is then run. I hope that makes a little more sense.
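A tiny concrete version of that picture (nothing here but the standard putStrLn):

main :: IO ()
main = putStrLn "hello"
-- 'main' is itself a pure value of type IO (); evaluating it performs
-- nothing. The runtime system is what finally executes the action.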

75 Name: #!/usr/bin/anonymous : 2006-03-16 22:28 ID:Heaven

But what is that other than semantic games? Your expression has no side effects, but it doesn't give you an answer either. To get your answer, you have to run the generated code with all its side effects.

This is looking more and more like another functional programming eruv to me.

(http://en.wikipedia.org/wiki/Eruv, http://blog.wfmu.org/freeform/2005/06/why_my_hometown.html)

76 Name: #!/usr/bin/anonymous : 2006-03-17 00:22 ID:qq6fgBY7

> But what is that other than semantic games?

Quite a lot. First, all of the functions written in the language are pure, whether in the IO or ST monad or not. Second, in practice, the vast majority of code you write is not in these monads. When you have a function with the type "Foo -> Bar", you can be sure it won't surprise you by doing I/O.

77 Name: #!/usr/bin/anonymous : 2006-03-17 01:05 ID:Heaven

But what's the value in being "pure" if you don't generate an actual answer?

It seems, to make use of a silly analogy, like a clothing store that proudly claims "We don't harm animals!", and yet sells leather products, and if you try to buy one, you are handed a list that reads "1) Kill a cow..."

78 Name: #!/usr/bin/anonymous : 2006-03-17 13:09 ID:Heaven

I don't think whether the language is pure or not matters so much. Rather, the important thing is what programming style it supports, or what algorithms it can be used to implement. From this point of view, it would be reasonable to say that Haskell has mutable arrays.

Still, I think the default data structure should be a persistent one, because it greatly simplifies programming. It may not be lists, though.

79 Name: #!/usr/bin/anonymous : 2006-03-17 22:38 ID:Heaven

>>77 confuses me. I fail to see how functions that compute and return a value (and that make up the majority of Haskell code) can be said to "fail to generate an answer." The fac function in >>1 is purely functional, and it generates an answer...

80 Name: #!/usr/bin/anonymous : 2006-03-18 02:06 ID:Heaven

>>79

From what I understood of the earlier explanation, a function using monads wouldn't return a direct answer, only a program that will compute the answer for you. Yes? No?

81 Name: #!/usr/bin/anonymous : 2006-03-18 17:10 ID:Heaven

>>80
Sorta, yeah, if it's in the IO or ST monad (monads actually do more than that). Yet, most of the functions one will write are not in these monads. As it says in "A Gentle Introduction to Haskell" (http://haskell.org/tutorial/io.html):

> So, in the end, has Haskell simply re-invented the imperative wheel?
> In some sense, yes. The I/O monad constitutes a small imperative sub-language inside Haskell, and thus the I/O component of a program may appear similar to ordinary imperative code. But there is one important difference: There is no special semantics that the user needs to deal with. In particular, equational reasoning in Haskell is not compromised. [...] An experienced functional programmer should be able to minimize the imperative component of the program, only using the I/O monad for a minimal amount of top-level sequencing. The monad cleanly separates the functional and imperative program components. In contrast, imperative languages with functional subsets do not generally have any well-defined barrier between the purely functional and imperative worlds.

82 Name: #!/usr/bin/anonymous : 2006-03-18 22:15 ID:Heaven

> In contrast, imperative languages with functional subsets do not generally have any well-defined barrier between the purely functional and imperative worlds.

I don't see why this would be a positive thing. You've already lost the theoretical advantage of functional languages, no matter how much of a "barrier" you put up between imperative and functional code, and for practical code, you'd be much better off with less of a barrier, because in practice, that barrier will just get in the way of actually solving your problem.

83 Name: #!/usr/bin/anonymous : 2006-03-19 04:11 ID:Heaven

>>82
How much practical Haskell programming have you done before arriving at this conclusion?

84 Name: #!/usr/bin/anonymous : 2006-03-19 18:55 ID:Heaven

>>83

None. I am merely speaking from general experience with a wide array of programming languages and their quirks and limitations, and the main thing I've learned is that nothing gets in the way of solving a problem efficiently as much as programming language design dogma.

85 Name: #!/usr/bin/anonymous : 2006-03-19 23:50 ID:Heaven

THIS IS A HACK

I can tell because of the language design, and from having seen many hacks in my time.

86 Name: #!/usr/bin/anonymous : 2006-03-20 01:37 ID:Heaven

>>84
It depends on what kind of problems you're trying to solve. For quick hacks, Perl is the better tool. For programming large systems, the hardest part is managing complexity, and being able to reason effectively about your program is a huge help. That's where the dogma of purity and referential transparency comes in very, very handy.

87 Name: #!/usr/bin/anonymous : 2006-03-20 13:26 ID:Heaven

>>86

Maybe, but for large projects you will also be solving a lot more different and unrelated problems, and once again programming language flexibility will help you.

This thread has been closed. You cannot post in this thread any longer.