I know the basics of C, C++ and Python, but I wanted to actually learn some programming language more deeply, so I picked Perl. I read a few books and completed many exercises, but when I actually wanted to code something serious to improve my skills, I found nothing I could do. SourceForge didn't offer much for Perl; the languages used most there were C and C++.
tl;dr Should I continue with Perl? Is it still used, or am I just wasting my time and should I learn only enterprise languages?
Best regards
Learn Java.
Learn Ruby instead and improve your Python and C skills.
If you want to work on your skills you should read SICP and learn Lisp, but only if you want a language that is strictly for improving skill. Don't learn Java.
>>4
I want a language that does something productive, so I could work on system tools or programs.
>>5
Learn Python or Ruby if you want languages that are both worth learning and used in the real world. Since you already started learning Python, unless you dislike it, you should consider mastering it.
If you're the kind of person who goes to a programming board, you're not the kind of idiot who'll shit Java 9 to 5 then try to forget about his horrible job, so don't waste your free time with ENTERPRISE languages. Whatever you can learn by mastering a good language will transfer to your general ability, regardless of the tools.
>>6
Alright then, I will look into proceeding with Python. Should I continue with Perl?
>>6
One more question if you don't mind: What's the best way of gaining experience beyond book exercises? I couldn't find anything I was actually capable of doing on Sourceforge.
Python and Perl are relatively similar languages. If you can't find something to do with Perl, you won't find it with Python either.
I'm a bit surprised you can't find something to do with Perl. Being able to use TCP from a language presents you with more possibilities than you'd be able to think of in a lifetime. And Perl's crown jewel is that massive CPAN.
Also, while I think Python is a nicer language, it's hamstrung in many ways. If you learn Perl as well you're less likely to be an idiot savant.
Think of some piece of software you'd like to write, and try writing it. Even if you know in the back of your mind that you don't have the skills yet, it will still be a valuable experience. One of the first programs I tried to write was a role-playing game, knowing full well I couldn't do it. But it taught me a lot about C way back then.
Thanks for your answers.
>>8
Depends on what you do, but if you're interested in challenging problems, try paying a visit to http://projecteuler.net/
It doesn't require a deep knowledge of your language, just some serious thinking.
>>12
And a math background.
>>13
I slept through every high school math class and solved 30 of them, and I can do more before I have to find some math refreshers. Many are just about efficient number-crunching.
>>14
I find that a lot of them can be brute forced, yes. But soon enough that becomes infeasible and you need to know your shit; thinking about the problem all day won't help.
>>15
Yeah, but because of the scale of most of the problems, I think you can tell soon enough whether or not you're doing it right, since the correct solution to every problem is supposed to run in under a minute.
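Problem 1 (sum of the multiples of 3 or 5 below 1000) makes the point: brute force works fine at this scale, but a bit of math keeps it fast at any scale. A quick Python sketch (function names are my own):

```python
# Project Euler Problem 1, solved twice: once by brute force,
# once with a closed form. Both finish instantly for N = 1000,
# but only the second stays fast when N grows by orders of magnitude.

def brute_force(n):
    # O(n): just check every number below n.
    return sum(k for k in range(n) if k % 3 == 0 or k % 5 == 0)

def closed_form(n):
    # O(1): inclusion-exclusion over arithmetic series.
    def series(m):
        # Sum of all multiples of m strictly below n.
        c = (n - 1) // m
        return m * c * (c + 1) // 2
    return series(3) + series(5) - series(15)

assert brute_force(1000) == closed_form(1000) == 233168
```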
Haskell, because of lazy evaluation, point-free programming, monads and potentially some new weird abstract stuff to deal with in the future.
Prolog, because of logic variables, unification, non-determinism and database side effects.
>>17
He seems to prefer stuff that is used in the real world (note: usable ≠ actually used).
>>18
Oh, my bad. I do think Haskell will be more widely used in the near future, though, because of the increasing number of optimizations being added to GHC lately, and because of easier GUI programming in Haskell through FRP (Grapefruit and such).
>>17
You missed "controlled effects" and "powerful type system" (both somewhat intertwined in Haskell).
>>20
Yes, I should at least have mentioned type inference. I should also have mentioned that "monads" implies controlled effects, non-determinism, letting the caller of a function select its kind of error signaling (return value or exception), and a whole bunch of other stuff, much of which I don't know about.
> because of lazy evaluation, ..., monads
Lazy evaluation as a default seems to imply the necessity of something like monads. The question is whether lazily evaluating everything is a good idea; I haven't seen many problems in the wild that are referentially transparent in nature (i.e. purely algorithmic).
Based on the whole monad & monoidal wankery I'm seeing, I'm inclined to think that it's not (yet). You shouldn't try to be too clever when programming, because it'll come back and bite you when maintaining.
Haskell is a research language, and probably should remain there until they hash all these issues out -- in a simple and orthogonal manner.
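For contrast, strict languages let you opt into laziness only where it pays off. A quick Python sketch with generators (my example, not Haskell semantics):

```python
from itertools import count, islice

# Opt-in laziness: nothing here is computed until demanded, so we
# can describe an "infinite" structure and then force just a prefix.
squares = (n * n for n in count(1))     # never fully materialized
first_five = list(islice(squares, 5))   # forces exactly 5 elements
assert first_five == [1, 4, 9, 16, 25]
```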
> Lazy evaluation as a default seems to imply the necessity of something like monads.
If the assertion is that monads only have use in lazy languages then I disagree with that assertion. Otherwise, indeed, monads are a powerful abstraction for expressing computation of many kinds.
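As a sketch of that point in a strict language, here is a Maybe-style bind in Python (the helper names are my own invention, not a library API):

```python
# Chain computations that may fail, without writing the None-check
# at every step. This is the Maybe monad's bind, strictly evaluated.
def bind(value, fn):
    return None if value is None else fn(value)

def parse_int(s):
    try:
        return int(s)
    except ValueError:
        return None

def recip(n):
    return None if n == 0 else 1 / n

# "0" parses but has no reciprocal; the failure threads through.
assert bind(bind("4", parse_int), recip) == 0.25
assert bind(bind("0", parse_int), recip) is None
assert bind(bind("xyz", parse_int), recip) is None
```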
> The question is whether lazily evaluating everything is a good idea; [...]
To what is this question relevant? The usefulness of Haskell? Haskell has an active userbase and community; that it is useful is a given.
> I haven't seen many problems in the wild that are referentially transparent in nature (i.e. purely algorithmic).
Some programs cannot be written that are completely referentially transparent, yes. However, their implementations can be composed of referentially transparent constructs. I don't think the Haskell language claims that all programs are to be purely referentially transparent; indeed, Simon Peyton Jones calls Haskell his favourite imperative language. He also claims that most of the work of a Haskell program is in the pure code, which makes sense.
> Based on the whole monad & monoidal wankery I'm seeing, I'm inclined to think that it's not (yet).
I am not sure how to interpret this remark.
> Based on the whole monad & monoidal wankery I'm seeing, I'm inclined to think that [lazy evaluation is not a good idea] (yet).
Perhaps, but more elaboration on why monads are indeed “wankery” should be provided for this claim to hold meaning.
> Haskell is a research language, and probably should remain there until they hash all these issues out
Which issues? A list of them -- in a simple and orthogonal manner -- would be helpful. Perhaps I could forward them to the Haskell mailing list.
>>23
I haven't, but it sure sounds enticing. I shall read it.
> If the assertion is that monads only have use in lazy languages then I disagree with that assertion.
Do you think they're worth their cognitive weight in strictly-evaluating languages? The main use of monads appears to be for threading state, particularly IO, although there are other varieties.
> Haskell has an active userbase and community; that it is useful is a given.
You can make anything useful with enough effort (hi, Java, C++!); popularity isn't an ideal metric. I'm more interested in the amount of leverage a language gives: how much can you achieve on an arbitrary task given a fixed amount of time?
I believe type inference is a win here -- and I'm looking forward to the results of the recent interest in gradual typing and universal types. I have my doubts about default lazy evaluation though. It's not just IO, there appear to be problems with reasoning about the size of the live set.
> Some programs cannot be written that are completely referentially transparent, yes.
Almost all can't be. On the user side they're obviously not, and even on the server they're usually not either (network IO, file IO, DB). This leaves problems that are embarrassingly parallel in nature (rendering, sequencing, encoding and their ilk), which are relatively uncommon. So I wonder if we're not trying to fit a square peg into a round hole.
Of course, I admit that this is a bit of a red herring: as you imply, a better question might be the relative amount of necessary pure to impure code. Your typical video compression utility is mostly algorithmic, even if it needs to access the file system from time to time.
> I am not sure how to interpret this remark.
The language appears to fight its adherents. Witness the debate over monad transformers. Instead of solving external problems, the developers are contorting their brains over how to get their problem to fit the language. Category theory? Egads, that's research all right.
> Perhaps I could forward them to the Haskell mailing list.
They're aware of it, and in far greater depth than I ever will: http://lambda-the-ultimate.org/node/2749#comment-41078
By the way, since STM was brought up: http://groups.google.com/group/comp.lang.functional/msg/3790727a8146daca
>>25
Brainfuck has an active userbase. That doesn't make it useful.
>>28
Wikipedia says otherwise: http://en.wikipedia.org/wiki/Software_transactional_memory#Performance
>>30
Say what otherwise? Reread the link I posted:
> IMO, 4 and 5 are basically hopeless. They provide an abstraction that is out of touch with the underlying hardware, AND is harder to program. Why would anyone possibly want this?
I agree with him. A lot of implementation complexity for what gain? If you're dealing with anything other than low-contention resources you'll be raped by redos and cache-line RFOs. And this completely ignores NUMA -- AMD already has it, and with Nehalem so will Intel. It's the future; message or ownership passing can take advantage of it.
Also, what is this, April 1st? Link to papers, not wikipedia. Wikipedia is a starting point for your research, not a definitive resource.
>>31
Thanks for the LTU link, by the way. That Disciple thing seems very interesting.
If by harder to program he meant it's harder to develop an application based on STM, then Wikipedia does say otherwise and I agree with it. However, if he meant it's harder to develop an STM implementation, then I misunderstood him.
Isn't NUMA an interprocessor issue? On a single multicore processor architecture, NUMA is meaningless and STM shall prevail due to greater ease of use and good enough performance. On a NUMA multiprocessor architecture (or on a cluster), though, then yeah, message passing is really the way to go.
Also, by linking to wikipedia, I indirectly linked to various papers LOL.
> If by harder to program he meant it's harder to develop an application based on STM, then Wikipedia does say otherwise and I agree with it.
Actually, in this regard, so do I. It beats traditional concurrency primitives easily. Just keep it away from high contention.
But if you're going down the concurrency road, might as well try and get it right the first time, lest ye end up with a kitchen sink. Okay, I don't know if message passing is the right thing, but it arguably has fewer problems than STM. STM doesn't scale that well, and now that I think about it, how do you avoid thread starvation?
Another problem is the GC (like he mentions with Concurrent ML): if you have any shared state between cores, you have no choice but to wait until all threads on all cores join before starting your mark phase. You don't need to keep the threads locked during the entire mark phase, but that joining and initial halt will put an upper bound on any speedup you can get (see: Amdahl's Law). I'm not aware of any algorithm that allows you to avoid this, but if you come across one please tell me -- I'm mulling over a design for Gambit-C's GC that's also useful for Termite, and I could really use this sort of knowledge.
Message passing avoids all this mess. As a bonus, when you send a message it no longer matters -- latency aside -- whether the receiver is on the same node or on the other side of the planet. Erlang's fault tolerance is based on this. The price to pay is copying, of course. BUT! Copying has its advantages (see below).
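The shape of it in Python's standard library, with the caveat that in-process queues pass references rather than copies, so this only mimics the share-nothing discipline (true isolation, as in Erlang, needs process boundaries):

```python
import queue
import threading

# Share-nothing style: the worker owns its state (total), and the
# only interaction with it is via messages on its inbox.
def counter(inbox, outbox):
    total = 0
    while True:
        msg = inbox.get()
        if msg == "stop":
            outbox.put(total)  # report final state and exit
            return
        total += msg

inbox, outbox = queue.Queue(), queue.Queue()
threading.Thread(target=counter, args=(inbox, outbox)).start()
for n in (1, 2, 3):
    inbox.put(n)
inbox.put("stop")
assert outbox.get() == 6
```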
Having said that, I'm no Simon Peyton Jones. Perhaps he sees something I don't; I'm just the peanut gallery. If you know of any good papers, feel free to link 'em.
> Isn't NUMA an interprocessor issue?
Well, it used to be. Not anymore.
The critical part to understanding NUMA is the non-uniform bit. Remember that a single Opteron processor has HyperTransport and on-die memory controllers for each core? That means that memory banks are now split up among cores. So, some memory is local to each core.
There are advantages to this, but what if you try to access memory that's not local to your core but to another one? Your core pretends it has the data but actually sends a message to that other core. The other core does its thing and returns the data to your core. Ignoring potential bandwidth problems, this introduces latency.
If you want to know more about memory architecture and optimization, I highly recommend this: http://people.redhat.com/drepper/cpumemory.pdf It's not the greatest writing, but it is the best single paper I've seen on the topic. I need to read it again.
> > If the assertion is that monads only have use in lazy languages then I disagree with that assertion.
> Do you think they're worth their cognitive weight in strictly-evaluating languages?
Yes.
> The main use of monads appears to be for threading state,
One of the uses, yes. (Counter example: the list monad).
> particularly IO,
IO is one of the uses, yes.
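The list-monad counterexample, sketched in Python (this `bind` is my own helper, not a library function):

```python
# The list monad models non-determinism: bind applies a function to
# every alternative and flattens the resulting lists of alternatives.
def bind(xs, fn):
    return [y for x in xs for y in fn(x)]

# All pairs (a, b) with 1 <= a < b <= 4, built by nested binds --
# no state threading, no IO, just enumerating possibilities.
pairs = bind(range(1, 5), lambda a:
        bind(range(a + 1, 5), lambda b:
        [(a, b)]))
assert pairs == [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```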
> > Haskell has an active userbase and community; that it is useful is a given.
> You can make anything useful with enough effort (hi, Java, C++!); popularity isn't an ideal metric. I'm more interested in the amount of leverage a language gives: how much can you achieve on an arbitrary task given a fixed amount of time?
I suggest learning Haskell to find out. Judging based on FUD and second-hand accounts will not provide, I opine, a satisfactory evaluation of the usefulness of any language. It is difficult to measure usefulness, easiness and “powerfulness”, although people like Paul Graham are trying to do that.
> > Some programs cannot be written that are completely referentially transparent, yes.
> Almost all can't be. On the user-side they're obviously not, and even on the server they're usually not either (network IO, file IO, DB). This leaves problems that are embarrassingly parallel in nature (rendering, sequencing, encoding and the ilk), which are relatively uncommon. So I wonder if we're not trying to fit a square peg in a round hole.
“Some programs cannot be written that are completely referentially transparent, yes” was to be taken with a hint of jest. Of course programs have to output things and do side-effects or, as SPJ says, “the computer just gets hot”.
> Of course, I admit that this is a bit of a red herring: as you imply, a better question might be the relative amount of necessary pure to impure code. Your typical video compression utility is mostly algorithmic, even if it needs to access the file system from time to time.
Indeed.
> > I am not sure how to interpret this remark.
> The language appears to fight its adherents. Witness the debate over monad transforms. Instead of solving external problems the developers are contorting their brains on how to get their problem to fit the language. Category theory? Egads, that's research all right.
I have found monads to be useful in expressing computations. Consider a CGI monad, which lets you express all CGI actions such as cookies, reading/writing the headers, outputting content, etc. Now, I might derive my own instance of this CGI class called Web, to which I can add some new abstractions, such as session state or continuations or whatever I need. Monad transformers, too, I have used: to create an IRC monad for an IRC daemon, which needed the IO monad to write to a socket stream and to provide abstract operations on the IRC server (send a message, write to the log, schedule a client's K-LINE, etc.).
> > Which issues? A list of them -- in a simple and orthogonal manner -- would be helpful. Perhaps I could forward them to the Haskell mailing list.
> They're aware of it, and in far greater depth than I ever will: http://lambda-the-ultimate.org/node/2749#comment-41078
The issues have not been stated; I have no interest in following these links for this conversation.
>>29
If something is being used then it is by definition useful. The question is "what for?": are people using Haskell for laughs, like Brainfuck, or for serious applications?
> > IMO, 4 and 5 are basically hopeless. They provide an abstraction that is out of touch with the underlying hardware, AND is harder to program. Why would anyone possibly want this?
I have not found STM harder to program, but this is an issue of contention that I don't expect to resolve nor have interest in pursuing. The implementation is hard, as admitted by the proponents; it is the high-level usage of it that is the interesting idea.
However, how are the various implementations of simultaneity relevant to Haskell's usefulness? I suggest we stay on topic for fear of a red-herring argument ("STM is bad, therefore Haskell is not useful").
> Yes.
Details, man. Why do you think this? Give an example in Python/Ruby/Perl/whatever.
> One of the uses, yes.
In practice, what are most monads found in Haskell code used for?
> Judging based on FUD and second-hand accounts will not provide
You know who Oleg Kiselyov is, right? Also, what is wrong with his criticism?
> The issues have not been stated; I have no interest in following these links for this conversation.
Classy.
> However, how are the various implementations of simultaneity relevant to Haskell's usefulness?
They're not, geez. This isn't some gargantuan "everything-is-wrong-with-your-pet-language-just-cuz" fest.
STM was raised, and STM was discussed. You're not the only person on this board.
> Give an example [of monads in another language]
Monadic parsers like Parsec (and indeed Parsec itself) have become of interest in C#.
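The core trick behind a Parsec-style parser fits in a few lines of Python (a sketch only; real Parsec adds choice, error reporting and much more):

```python
# A parser is a function from input to a list of (result, rest)
# pairs; bind sequences parsers, threading the leftover input.
def char(c):
    # Parser that succeeds iff the input starts with character c.
    return lambda s: [(c, s[1:])] if s.startswith(c) else []

def bind(p, fn):
    # Run p, then run the parser fn(result) on the remaining input.
    return lambda s: [pair for (r, rest) in p(s) for pair in fn(r)(rest)]

def unit(x):
    # Parser that consumes nothing and returns x.
    return lambda s: [(x, s)]

# Parse "a" then "b", returning both characters as a tuple.
ab = bind(char("a"), lambda a: bind(char("b"), lambda b: unit((a, b))))
assert ab("abc") == [(("a", "b"), "c")]
assert ab("xbc") == []
```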
> In practice, what are most monads found in Haskell code used for?
Monads are for expressing computation. Here are some examples: http://en.wikipedia.org/wiki/Monads_in_functional_programming#Examples
> > Judging based on FUD and second-hand accounts will not provide, I opine, a satisfactory evaluation of the usefulness of any language.
> You know who Oleg Kiselyov is, right? Also, what is wrong with his criticism?
I know that Oleg Kiselyov is being used as an authoritative view. His criticism is a second-hand account. Second-hand accounts, as previously stated, will not, I opine, provide as satisfactory an evaluation of a language as first-hand experience.
> > The issues have not been stated; I have no interest in following these links for this conversation.
> Classy.
> This isn't some gargantuan "everything-is-wrong-with-your-pet-language-just-cuz" fest.
> You're not the only person on this board.
I have no interest in furthering this discussion.