Sometimes when writing programs I forget to put in any comments at all. Then later I come back to the code and I'm like, "What the F is this shit?" Anyone else have this problem?
>Kids these days, they need to trick themselves into discipline.
Can't argue with that. I do need to trick myself, and writing the outline of my method first is the least boring way I've found. I dislike writing comments just as I finish coding, and too often I find myself writing them at the end of the day, or every other day, in bulk. It may be a failure of my character, but hey, at least I try to compensate for it this way.
>I believe writing comments using as I understand PPP would prevent you from changing your mind and improving the implementation while you write it.
I'm just lousy at explaining, then. Comments are never good if they are at the level of implementation, and PPP comments are no exception: they should document the intent behind a portion of code.
As for improving the design as you go, I like to do most of my thinking beforehand, so it rarely happens to me. When it does happen, well, I update the comments as I go. I don't think anyone would go "ah, but I can't change this otherwise my comments won't be appropriate anymore!" when it's so quick to change a line of comment. And since you only write the outline of your code, most improvements won't affect your comments since they will be too low-level.
Descriptive naming helps. In algorithms I put a summary of each step in a comment, like
# 1. Read user input
# 2. Perform bounds checking
is this "literate programming" thing just another name for PPP?
That first comment is only helpful if you're using a language where reading input is more than one or two lines (which it might well be if you're writing C, etc).
The second comment would be more useful if you explained what bounds you're actually supposed to be checking. Let the comment say what the code is supposed to do -- if it then turns out to be doing something rather different in some edge case, you have an easy reference for the correct behaviour and can fix the code to match the comment. Without documentation of what the bounds actually are, next time you look at that bit of code the only thing you have to go on is what the code says -- and if the reason you're looking at the code is that it's doing something wrong, you're pretty much shit outta luck if you can't remember the actual bounds you were trying to check.
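To make that concrete, here's a hedged Python sketch (the function name, the angle parsing, and the [0, 360) bounds are all made up for illustration):

```python
def read_angle(raw):
    """Parse and bounds-check a user-supplied angle."""
    # 1. Read (well, parse) the user input.
    angle = float(raw)
    # 2. Bounds check: the angle must lie in [0, 360).  Stating the
    #    actual bounds in the comment gives you a reference for the
    #    intended behaviour if the code below ever drifts from it.
    if not 0.0 <= angle < 360.0:
        raise ValueError(f"angle must be in [0, 360), got {angle}")
    return angle
```

Now when the check misbehaves on an edge case, the comment tells you which side is wrong.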
Nowadays with GUIs and increased functionality, data from user input usually has to be decoded, pulled apart, and run through a filter. The most common example is reading an HTTP request.
Too much blabber makes the actual code harder to organize and scan through quickly. If you really need to be held by the hand then maybe you shouldn't be debugging someone else's code (this is where descriptive variable naming, etc helps).
No, writing your own HTTP code is what makes it harder to organise and scan through quickly. Unless your project is an HTTP library or a web server, there's no reason to parse HTTP by hand. Especially in a GUI application.
No, a request with parameters has to be parsed. Plus the data may be encoded or may correlate to information stored in a database.
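For example (a Python sketch; the thread doesn't name a language, so treat the details as illustrative):

```python
from urllib.parse import parse_qs

# A raw query string as it might arrive in an HTTP request: the data
# is percent-encoded and has to be decoded and pulled apart.
query = "name=Alice%20B&tags=a&tags=b"

params = parse_qs(query)
# params == {'name': ['Alice B'], 'tags': ['a', 'b']}
```

The point stands either way: the decoding is real work, but it's work a library can do for you.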
Your "write javadoc-equivalent first" technique fails at making sure the comments are maintained.
It naively assumes the comments themselves won't have bugs. (Hey, if it doesn't show up in a debugger, it's not a bug, right?)
It's an issue of philosophy. If you view code as something to be written, then let sit, your code will be buggy and your comments will suck. But if you come back and read your code -- and, in so doing, make improvements as you deem necessary -- then you tend to get quality, maintainable code with meaningful comments.
Just look at various open source projects for evidence of this. Most self-proclaimed "Linux developers" take the former approach, and their code is crap, whereas BSD-based operating systems tend to take the latter view (OpenBSD developers being the most vocal about it), and if you read their code, it really is a world of difference. And those OSes are usually much more stable.
i picked up a copy of flash a couple of months back and i've gotten the hang of the basics of the program. but i was wondering if there were any websites that had instructions/tutorials/tips for improving my animation. any suggestions?
That's not really code. Try /tech/.
What about it? >>1 didn't ask about ActionScript.
Geez. This board is full of idiots ready to pounce on people. "He doesn't know about ActionScript!" Of course I know about ActionScript, but that wasn't in the question, dolt.
Yes, exactly like me. Really, it just pissed me off because when posting >>2 I thought, "Won't someone say 'what about ActionScript'? ...nah, nobody who will post here is that stupid," but I was completely wrong, as usual. Someone just decided to be a wise-ass without saying anything relevant.
Anyway, thread hijacked. This thread is now about ActionScript. I've never used it myself, but I hear it's vaguely C-like. Is it any good?
Schoolwork is keeping me from doing anything with this particular project, but some years ago I decided that I'm going to learn how to program.
I've decided to learn Perl first. Please tell me why this is a horrible idea! The more interesting reasoning, the better!
Ruby 2.0 probably isn't coming any time soon.
> You need Ruby command to build YARV :)
sounds like it'll probably take a few years just to build it...
Most of YARV is in C. The only bit Ruby seems involved in is generating the Makefile.
I like PHP, because I enjoy making little web doodads. And it seems to be pretty damn easy to do just about anything in PHP.
Silly >>72! Don't you know that any mention of PHP on this board is met with replies of ill-conceived panic and h8orade?
I wouldn't mind hearing well-thought-out h8 at PHP
See the "PHP users are dumb" thread. I guess PHP is okay for "little web doodads", though.
My, that was enlightening. Thanks for bringing that to my attention. I was unaware of the security risks and such.
I've never written anything big in PHP, though. I do bigger things in Python.
Do you know of good frameworks/libraries to create PHP database-centric applications with flexible forms, master-detail forms, etc. a la Oracle Developer? I don't mean noob-simple, inflexible code generators, but something real. Rich client-side controls are a major plus.
That seems to be for Python, not PHP. Even so (I can do Python), does that do what I want? I can't see any mention of that anywhere on zope.org, which is hueg. The installation was simple, but you're left without a clue of what to do, and the directory structure looks like bloated enterprise software.
Wait, don't tell me that was some kind of meme... I haven't been lurking this board
I wrote one. It's 663 lines and about 12K of C code, implementing a crappy Forth-ish language. Anyone else like to make these?
fabrice bellard (author of tinycc) entered a one page C interpreter in the obfuscated C code contest. I think there were other 1 or 2 page interpreters and compilers in there.
Note the contest entry is unformatted. You have to look for the human-readable, documented version. And Fabrice has a website with tinycc and its predecessors.
Oh yeah? Show me PHP code that does the same thing as perl's <> and is as well established, as well documented, as completely foolproof and as trivial to use again and again (because you will) without messy copy-and-pasting. Go on.
>"magic" like <>
??? lol wtf. The diamond operator is probably one of the best things about perl, it's hecka awesome. By the way, early PHP was programmed in Perl, so you are DQN.
> eval while<>
> eval for<>
you can also do
A new human verification system uses kittens instead of text:
Brilliant. This is thinking outside of the box; not only could it work better than OCR-able captcha systems, but it was probably easier to code too.
This is not accessible, so it's still shit.
That's also completely useless as soon as the kitty database becomes available.
I don't understand why this post is so popular and why people think it's something new. It's not, and it's well known why. If it can protect a small site like this one, good for the author, but as soon as somebody worth attacking uses it, this system will be a complete joke.
A CAPTCHA is like a confirmation e-mail with a link: a large inconvenience for users that loses you some users/posts/business. And it's a never-ending arms race. You can't win, and if you defend yourself at the expense of the legitimate users' comfort, you've already shown that you have lost.
Why not ask a simple riddle? It should be on par with kittenauth if you have a lot of riddles.
That would require me to think, making it even worse than captcha.
I like dot kde's solution, although it's probably simple enough to get around.
It's as effective as a button saying "click this button to post", or a text field saying "Type 'kittens' here:". It will stop an automated spam spider, but anyone attacking it specifically will go through it like a giant razor flying horizontally through a skyscraper made of butter.
>>17 None of those buttons are labelled "m00t" ...
>>16 That's only good for people good enough in English. Many riddles depend on language far too much to be solved by not so advanced speakers. That's also true for kitten auth with more than kittens. I for one didn't know what porcupines are until a moment ago.
"Wow"? Maybe you should try actually learning a language sometimes. One of the hardest things is learning animal and plant names. These are usually completely different and unrelated in every language.
How many of the foreign-language animal names at http://members.tripod.com/Thryomanes/animals1a.html would you get from just seeing the word written?
Anyone played with any concatenative languages, such as Joy and Factor?
Joy is more conducive to playing with. I like it a lot. Factor is a "practical" type of language according to those who practice it.
Forth is also considered a concatenative programming language.
Sounds like more "let's do things different BECAUSE WE CAN!" functional-language wankery. I don't see it doing anything any better than any other language, just different for the sake of being different.
Don't be such a curmudgeon. I'm trying to get some discussion going here.
Anyway, concatenative languages are sort of like reverse Polish notation (RPN). You write 2+5 as '2 5 +', where each number pushes itself and the + operator pops two operands and pushes its result on the stack. But concatenative languages are not the same as RPN. You can square whatever number happens to be on top of the stack by doing 'dup *', where dup is the operator that duplicates the object on top of the stack. This is not possible with RPN calculators. In Forth, the word (Forth calls subroutines "words") to square the number on top of the stack is defined by:
: square ( n -- n ) dup * ;
"( n -- n )" is a comment indicating that the word consumes one value from the stack and leaves one new value.
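The evaluation model above can be sketched in a few lines of Python (a toy illustration, not real Forth):

```python
def run(program, stack=None):
    """Evaluate a tiny concatenative program: numbers push
    themselves, words pop operands and push results."""
    stack = [] if stack is None else stack
    words = {
        "+":   lambda s: s.append(s.pop() + s.pop()),
        "*":   lambda s: s.append(s.pop() * s.pop()),
        "dup": lambda s: s.append(s[-1]),
        # 'square' is just the concatenation 'dup *':
        "square": lambda s: run("dup *", s),
    }
    for token in program.split():
        if token in words:
            words[token](stack)
        else:
            stack.append(int(token))
    return stack

run("2 5 +")     # [7]
run("7 square")  # [49]
```

Note how 'square' is defined purely by concatenating existing words, with no named parameters anywhere; that is the sense in which these languages are "concatenative".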
Forth dates back to the 1970s, but the author of Joy was first to use the term "concatenative", probably in the late 90s or early 00s. Joy is a more functional language, with quoted programs (basically functions) that can be pushed on the stack and used by various combinators. Factor on the other hand is intended for practical programming and as an evolution of Forth. Although it borrows quoted programs from Joy, it doesn't have the recursion combinators.
No, I didn't spend much time doing that, because the presentations of the language made no attempt to convince me there was any value in doing so. There was no suggestion as to why making this fairly significant effort would be of any benefit to me. It just seemed like the same old functional-programming academic thought experiments that I've seen a thousand times already. They're like obfuscated programming languages with less humor and more delusions of importance.
Now, I only looked at Joy, so I don't know about Factor. If it's designed for "practical programming", that's obviously better, but for the life of me I can't see why you'd ever design for anything else.
"Practical programming" languages take new ideas from academic-type languages and put them to real-world use, e.g. Factor contains concepts borrowed from Joy. Without the more theoretical programming languages, we'd all be using a very nicely polished COBOL.
Academic languages are also good for thinking about computation: the Scheme REPL and so on. If that's "wankery", well, what field of mathematics isn't?
I'm backing WAHa up on this one: for the last week, I've been attempting to understand the whole concatenative language paradigm. None of the Forth tutorials I've read can offer up a good reason why stack-based mental gymnastics are better than letting the compiler do all that work. I've made inquiries at concatenative language lists, and the only answers I've gotten have been:
a) Expand your mind! (with nothing to back that statement up), or
b) Forth allows you to micromanage the stack for efficiency (which is funny, because the "blazing fast" GForth compiler is usually about an order of magnitude slower than C over at the Great Computer Language Shootout).
The final straw came while reading "Thinking Forth", the definitive Forth book, where the author demonstrates something that "could only be done in Forth!" which could just as easily have been done in C with pointers fifteen years ago.
The main reason I've heard (at least for Forth and Factor; Joy is closer to Lisp) is that these languages allow you to factor your code into really short words. Supposedly, this allows you to manage complexity and write less code. I've no comment on this personally as I've barely tried either language, but I'm surprised you didn't hear that justification. It's one of the major selling points of Factor and is mentioned on Chuck Moore's colorForth website (Chuck Moore is Forth's inventor).
Yeah, "smallness" by eliminating redundant code is the main FORTH advantage I've heard about.
The interpreter is small, the code you write is small, and Chuck Moore is such an extremist at eliminating redundancy he produces completely functional systems (OS, API, GUI, apps) in a few megabytes.
I rejected that as pointless for a long time. Now I'm thinking this militant anti-redundancy approach might be a solution to code complexity and bloat. How many times have you read programmers' complaints about maintaining huge, undocumented, poorly written apps? If they were SMALL, NON-REDUNDANT poorly written apps, it wouldn't be as difficult.
Next step: check if FORTH really does guide a lazy programmer to avoid redundant code.
#9 was saged, but I want to see if anyone has any real experience with FORTH.
Hey, want to hear something really weird about ISS? Here goes.
I received a bug report from one of our users; apparently one ASP file suddenly started to ask for a username and password (we use integrated Windows authentication to restrict user access in our application). Well, I did the logical thing and checked the ACL and ISS config, but nothing seemed to be wrong, so I tried to give all possible rights for that file to all possible users. Still, it just sits there and asks me for a username and password.
What to do, what to do? I started to experiment with the content of that file. And guess what? When I remove "width=300,height=350" from somewhere in the middle of that file, it works again! Yes, I removed few characters and it works! I'm shitting you not! What the fucking hell is up with that?! I mean, really, what do some ECMAScript parameters have to do with file access rights?! Gawd! Am I missing something here?
I think I'll pull a math test answer on this one. Not enough information to solve problem.
> I removed few characters and it works
What happens if you put them back?
>>4 Sounds like an NTFS file-permission problem.
You need to give read permission to 'Everyone' for the physical files.
You run pages on the International Space Station? >_>
I know this isn't traditional coding or scripting, but this seems like a good place to look.
I need to get a set of outputs from SPSS where it will allow me to manually set ranges for a variable on a table output.
For example, I need descriptives on variable "Degrees" broken down by "300-325" "326-350" etc., except that I don't have a variable to tell me those ranges, so I need to code it by hand into either a filter or my syntax.
Is there any way to write a script so that I can at least get it to spit out a couple things at a time versus me having to go in and change the tags on three variables every time I do an output?
This wouldn't be a problem, except that I need 128 separate descriptives, so any form of automation would be appreciated.
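I don't use SPSS, but since the bands are regular you could generate the syntax instead of editing it by hand. A hedged Python sketch (the variable name "Degrees", the band edges, and the exact SPSS commands are assumptions; check TEMPORARY/SELECT IF/DESCRIPTIVES against your SPSS version):

```python
def make_syntax(var, start, stop, width):
    """Emit one TEMPORARY/SELECT IF/DESCRIPTIVES group per band."""
    lines = []
    lo = start
    while lo <= stop:
        hi = lo + width - 1
        lines.append("TEMPORARY.")
        lines.append(f"SELECT IF ({var} >= {lo} AND {var} <= {hi}).")
        lines.append(f"DESCRIPTIVES VARIABLES={var}.")
        lo = hi + 1
    return "\n".join(lines)

# Paste the result into a syntax window; widen the range to cover
# all 128 bands instead of editing filters by hand each time.
print(make_syntax("Degrees", 300, 350, 26))
```

TEMPORARY makes each SELECT IF apply only to the next procedure, so every band gets its own descriptives without permanently filtering the data.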
Of course, major geek points can be earned by being one of the first to hack in an obscure language before it hits the big time (if it ever does). According to this article, one language that may have a good shot is Dylan:
While I think the "market" for new mainstream langs is more crowded than ever now, this one looks pretty neat. Have you tried it? Your thoughts?
Hm. Bastard child of Haskell and Ruby with naming conventions from GNU Arch. But why are members "slots"? Kind of neat, though.
Every language needs its gimmick, and since all the good gimmicks have been done already, they need to come up with new names for the existing ones.
Compare: "instance variable", "data member", "field", "slot", etc; compare "method", "member function", etc. They're just rebranding the wheel like everyone else.
> "slot" ... "method", "member function"
Am I the only one seeing the sad sexual innuendo here?
Who needs Dylan when you have Lisp?
Hey, if you've got enough time to design programming languages, you probably need every bit of innuendo you can lay your hands on.
Dylan is the greatest singer-songwriter of all time.
True as that may be, he isn't a very good programming language.
I tried the Gwydion implementation of Dylan a while back. My experience: the compiler was incredibly anal about where certain overtly verbose yet seemingly free-form module headers go, and its error messages about them were next to useless. Additionally, on a 1.9GHz Athlon XP, the compiler was so slow that any recent GHC is actually faster, even if you exclude the time it takes Gwydion to link things (which is like 80% of the wall-clock time).
My verdict is that Dylan must've been a byproduct of the "make code readable to simpleton managers" movement of the nineties. No wonder Apple pushed so hard for it in the days of Mac OS 7 (or was it 8? or Copland or some such?). As a serious language, however? I'll take Common LISP over this bullcrap any day.
LISP was a hard sell because I've had very few co-workers willing to learn it.
I had better luck with Python because it's easier to pick up, and the ctypes add-on lets you call C libraries without writing binding code.
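For instance, calling straight into libc with no glue code at all (a minimal sketch; assumes a Unix-ish system where find_library can locate libc):

```python
import ctypes
import ctypes.util

# Load the C standard library and call strlen() directly -- no
# hand-written binding code, which is the selling point above.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"hello")  # 5
```

Declaring argtypes/restype isn't strictly required for a case this simple, but it keeps ctypes from guessing at the C signature.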
The disadvantage is we can't automatically convert our final code into C for maximum performance and source obfuscation.
Looking into Euphoria language for that.