At first, I decided to try learning C++, then I heard C was "better", now I'm hearing Lisp is good. I'm getting annoyed with indecisiveness, so I'm going to ask this one last time... Which language would be best to learn out of these three? My intentions in programming will be small projects, game mods, network applications and other things of that sort. What would you all recommend out of those three, and why? Not looking for anything outside the above three.
Please halp.
You're a moron
Nuruqo
Speaking as a Lisper, if you want to write that sort of stuff I'd recommend you learn C first.
Better to just get on and learn something than fuck about wondering what to learn.
> Better to just get on and learn something than fuck about wondering what to learn.
Nicely said.
Or do something instead of just learning. This is my problem.
>>7
>>2 here
Here's my reply:
Learn C, then lisp.
When you're done, and you feel quite familiar with the languages, learn C++. Or, fuck that, just learn common lisp instead of C++.
Common lisp has most of C++'s features that C doesn't have.
Paradigms, features, etc are more important than syntax.
When you're done, study something more OS-related, or networking, or encryption, or 3D, or whatever floats your boat.
>>8
Alright, I'll try that too. I've pretty much got all the basic-to-intermediate things down with C, so since that won't be my "main" language, I'll look at Lisp like you said.
Also, I has a question:
Some people say different languages are good for different tasks... what do they mean by that? Aren't "general purpose" languages (like C, C++, etc) good for everything, in that case?
>>9
What the hell is a "general purpose language"?
There's no such term. A language can be imperative or functional, and it can support various features: safe strings, regexes, namespaces, strict typing, whatever.
I also mentioned that in my post:
> Paradigms, features, etc are more important than syntax.
and no, you don't know C. fuck the "basic-to-intermediate".
You either know or don't know a language, and in your case you don't know C so get your ass working and learn it properly.
You have to know everything about the language. (with exception to complex numbers and that stuff, which are almost never needed)
Well, you didn't have to be fucking rude about it. In case you can't fucking read properly, I said C wasn't going to be my main language. I'm not going to learn a full fucking language if I'm not going to use it, asshole. Try again. Thanks.
> You have to know everything about the language.
Knowing everything about a language is a bit over the top, methinks. When was the last time someone needed to know trigraphs when programming C?
I'm also not a fan of overinvesting in a single language since they all suck in various ways. That way leads to religion because of the investment involved.
> Some people say different languages are good for different tasks... what do they mean by that?
Some languages are better at some things than others. Languages have different focuses.
Perl has great support for processing text, Erlang makes distributed computation and failover simple, the Lisps dominate metaprogramming, C is the closest thing to a portable assembler...
Let's say you need to munge some text. C is a perfectly capable language for doing that, but it will be many times easier to use Perl instead. But if you're writing performance code and twiddling bits, C is the better choice.
Nothing wrong with learning all three. It is always good for a programmer to have multiple tools at his disposal.
As for order, learn C first. Then do Scheme (the clean Lisp). If you learn the latter first, C might seem like a step backward to you.
Then learn the dirty ones: Common Lisp and C++. My prejudiced judgment suggests skipping C++, but it is very heavily used in game programming.
FORTH LEARN .
C++ if you want to get into game programming. it really depends on what you want to achieve.
>>10
I only read K&R. C99 can stick its complex numbers up its anus.
I have the opposite gripe: C99 might have been too much in some areas, but it was far too little overall.
What we really need is a C 2.0.
It's never going to happen though. >:(
>>19
C 2.0? What do you think that is missing from C?
Where was C99 too much?
I for one would really like a "typeof" macro.
Things I'd love to see:
* Null-terminated strings eliminated in favour of Pascal-style strings. In other words real strings, not an error-prone pointer hack.
* The standard library cleaned up. Since strings know their length and are 8-bit clean, you can throw away some functions and merge others. Also, function names and arguments can be made consistent.
* Get rid of trigraphs.
* Some form of local type inference.
* I'd love to see computed gotos and labels-as-values become standard.
It'd still be C (other than the type inference), but it'd be a less error-prone and more pleasant language to use. The only reason I'd call it "2.0" is because the new standard library wouldn't be backward-compatible.
I'm undecided if strings should support Unicode. Unicode's the future, and it'd be nice to put unicode in string literals and have it Just Work, but have you seen the size of ICU? An alternative is to make strings use machine words instead of bytes and leave ICU separate.
I'm positive there are projects working on "clean" or "safe" subsets of C; naturally, I can't remember their names, but Googling would certainly turn up something. If I encounter anything, I'll post links here.
What exactly would be the point of a "clean" or "safe" subset of C, in your mind?
There's cyclone at http://www.research.att.com/viewProject.cfm?prjID=67 so I don't think you or >>21 are alone, but I also doubt cyclone's usefulness as well...
Do trigraphs really hurt anyone being there?
I too would love to see computed gotos become part of the standard, but I don't worry about it much. GCC is everywhere that C matters, so I just use that.
I disagree on the pascal-style strings. You can't merge the tails of the strings, you still need all those library functions in order to deal with strings longer than 256 octets (or whatever your very-small limit would be). Trading one buggy kind of string for two buggy kinds of strings doesn't sound like a win- just another way to confuse programmers into using the wrong data structure.
What exactly do you mean by local type inference?
> Do trigraphs really hurt anyone being there?
They make the compiler more complicated, and nobody uses them today. C has a number of obscure corner cases that make parsing it a lot harder than it needs to be. Complexity adds up; death by a thousand cuts.
> GCC is everywhere that C matters, so I just use that.
Have you seen clang? If I want to use another compiler for whatever reason, I can't rely on GCC extensions. It's a good idea so it should be in the standard, dammit. I want tail-calls too while I'm at it, so I can use C as an intermediate language for more advanced languages without the performance hit of trampolines. Portable assembly my arse.
> You can't merge the tails of the strings
Make your own datatype. If you need something obscure like this, that's why ADTs exist. Or be crazy and use a GC so you can slice strings.
Why have we suffered a long and sordid history of buffer overflow vulnerabilities since C's inception? Null-terminated strings are a bug-prone default, thanks to some performance benefit that only applied to the VAX. Not to mention that non-8-bit-clean strings just suck.
> whatever your very-small limit would be
Use a machine word instead of a byte. The string can be as large as your memory space. This is the minimun that the rest of the world uses. Hell, even the masochistic C++ people hate C's strings, although they can't completely escape them.
> local type inference
Put types in the parameter list, like normal. Locals inside the function figure out what they should be based on the signatures of the local and called functions. See what newer versions of C# and D are doing.
> Use a machine word instead of a byte. The string can be as large as your memory space.
Eaah. That makes common routines like strchr (or indexOf if you prefer to call it something else) much more expensive to implement. It also wastes some memory, and means that very large strings (and bitmaps) will render the CPU caches useless because all the libraries keep bouncing back to the beginning to check the length.
Making this work in practice would mean changing the ways we presently make cpus...
>> You can't merge the tails of the strings
> Make your own datatype. If you need something obscure like this, that's why ADTs exist.
Uh, then you don't mean pascal strings at all? Or maybe you can qualify this better?
> Put types in the parameter list, like normal.
That's a huge ABI change, but one I've recently looked at. Consider the fact that you'll be doubling at a minimum, stack use on CISC systems, and wasting many registers on RISC systems. All for the "occasional" time that you actually want this information?
Tagged pointers are a much cheaper way to do it, and don't require ABI changes.
> Why do we suffer under a long and sordid history of buffer overflow vulnerabilities since C's inception?
I think this statement says more about where you're coming from, but I'll bite.
Because programmers are stupid? It seems like a good idea to try to get the language to protect the programmer from their own stupidity, but frankly programmers demonstrate their own stupidity even in safe languages- such as with sql injection or other quoting problems. I simply don't think it's possible, and in fact trying to increase the safety (without making it absolute) just serves to make the programmer think they can get away with more.
A better solution is to make the wrong way to program generate wildly inaccurate results. That means designing APIs that fail quickly, such as wiping memory during free(), or padding buffers with a printable character instead of NUL.
Some of your users will complain about your APIs being hard to use and that they "just can't figure it out", but really, this is why secure programs are so hard to find; so many people "just figuring it out", instead of actually writing secure software.
> Have you seen clang?
Yes. It supports many GCC extensions and promises the support of more in the future.
Seeing as how it's very immature and its code generation still sucks awfully right now, I don't see why you'd want to contort yourself for it.
>>21
Sorry for taking so long to reply.
> Things I'd love to see:
> * Null-terminated strings eliminated in favour of Pascal-style strings. In other words real strings, not an error-prone pointer hack.
There are open source standard C libraries for that. Why don't you use those? I disagree with you on this.
http://bstring.sourceforge.net/
> * The standard library cleaned up. Since strings know their length and are 8-bit clean, you can throw away some functions and merge others. Also, function names and arguments can be made consistent.
strings know their length? I don't know what that means. Unless you take point 1 as implemented, but I disagree with point 1. I have to disagree with point 2 as well.
strings are not 8-bit clean. CHAR_BIT in ISO 9899:1999 is required to be at least 8, but it can have a value greater than that. POSIX.1-2001 guarantees CHAR_BIT to equal 8.
> * Get rid of trigraphs.
I disagree. They are not that annoying.
> * Some form of local type inference.
What exactly is local type inference? Can you give a C example of what you have in mind? I did google it, but the results I got from wikipedia were disappointing.
> * I'd love to see computed gotos and labels-as-values become standard.
Once more I do not understand. What makes you think you can not have a "computed goto" in C? What exactly do you have in mind? Example again please.
> It'd still be C (other than the type inference), but it'd be a less error-prone and more pleasant language to use. The only reason I'd call it "2.0" is because the new standard library wouldn't be backward-compatible.
Your only issue, as I see it, is strings. Use bstring then. For me strings were never an issue, but C was my first language. Where do you come from?
> I'm undecided if strings should support Unicode. Unicode's the future, and it'd be nice to put unicode in string literals and have it Just Work, but have you seen the size of ICU? An alternative is to make strings use machine words instead of bytes and leave ICU separate.
string literals do "support" unicode.
> That makes common routines like strchr much more expensive to implement
On old machines the opposite was the case (use (e)cx and loop to forgo one of the explicit comparisons). On modern machines it makes little difference.
My assembly is horribly rusty, but it's something like:
next: mov bl, byte ptr [edi] ; load character from string
cmp bl, al ; al contains the character we're looking for
jz found ; first conditional branch
inc edi
cmp bl, 0 ; check for null termination
jnz next ; second conditional branch
jmp not_found
versus
next: mov bl, byte ptr [edi]
cmp bl, al
jz found
inc edi
dec ecx ; instead of cmp we have dec, which will set zero flag
jnz next
jmp not_found
They're both the same except for one instruction.
However, Pascal-style strings make strlen much faster, and by extension any string function that allocates a new string.
I think this is a tangent though. I'm a fan of getting it correct, then getting it fast. C's long history of buffer-overflows is argument enough for me against C-style strings. The various strn* implementations are a band-aid to the real problem.
> It also wastes some memory
I don't think that's relevant on modern machines. If you're working with many millions of very short strings then there's always an ADT.
> means that very large strings (and bitmaps) will render the CPU caches useless because all the libraries keep bouncing back to the beginning to check the length
But that's the point of a cache: it stores hot data. There are a few normal cases:
There might be one bad performance case: you have a huge number of strings longer than a cache line, where you mostly linearly access the end of each string, but only rarely per string, and most of your program's time is spent accessing these strings. Other than being rare, this might be offset by out-of-order execution (I'm not sure).
I'm fairly confident that for every catastrophic performance case due to cache effects that someone can raise against Pascal-style strings, I can do the same for C-style; it'd be a race between microoptimization versus algorithm.
> Or maybe you can qualify this better
A struct with a length and an array of bytes. I should have said Pascal-style, not Pascal.
> Consider the fact that you'll be doubling at a minimum, stack use on CISC systems, and wasting many registers on RISC systems
I believe you're thinking of something different. If you're referring to dynamic typing, this isn't it; types are all handled at compile time. The machine instructions generated are identical to what you'd get with manifest typing (the stuff in C/C++/Java/Pascal/etc), but there's less typing for the meatbag between the keyboard and chair.
As an aside, dynamic typing doesn't have to increase memory if you're willing to reserve bits on data words themselves. It's a tradeoff between space and time.
> Put types in the parameter list, like normal. Locals inside the function figure out what they should be based on the signatures of the local and called functions. See what newer versions of C# and D are doing.
Is it similar to what I asked for? A typeof keyword?
continued...
> It seems like a good idea to try to get the language to protect the programmer from their own stupidity, but frankly programmers demonstrate their own stupidity even in safe languages- such as with sql injection or other quoting problems
Of course, but note that with C you can have both buffer overflows and SQL injection attacks. With higher-level languages it's only SQL injection attacks. Some make it very difficult to do even the injection attacks, due to string tainting.
Part of my job involves security (can you guess?), so I'd really appreciate fewer potential attack vectors. Let's put it this way:
Let's say you have two boxes: one with dozens of ports open to the internet, and one with only a single port open.
You can argue that if the software handling each port was written properly it wouldn't be a problem, but do you really want to rely on that? What if your job depends on it? What if your whole business relies on it?
I hope you see the parallel here. One box has fewer potential points of attack. Likewise, a language can have fewer points of attack.
> A better solution is to make the wrong way to programs generate wildly inaccurate results.
Or make it difficult to do it the wrong way at all.
Let's look at an example from C and D. What is the value of:
int c;
In C it's whatever was already at that location. Let's say it's a local variable on the stack. Then the value is probably from a chain of functions called previously by a parent function.
In D it's 0. Always. It's explicitly initialized this way.
I've had this problem with C bite me in the ass a number of times in the past, and bugs caused by forgetting to initialize can be hard to track down without static analysis. So I appreciate this.
But wait! What if you really need every bit of performance, you're sure your code is correct, and you're willing to bypass this safety valve provided by D?
No problem!
int c = void;
You're now back to C's behaviour. As an added bonus, if another programmer comes along later they can be sure that, yes, not initializing it was intentional. It's not that you forgot.
I don't particularly like D, but it was designed by a former aircraft engineer. Some of the design decisions visibly reflect that.
> I don't see why you'd want to contort yourself for it.
Yes. But I'll want to one day. Or maybe PCC. Who knows.
I'm not a fan of barriers to portability. The Lisp world is having problems right now because the Common Lisp standard hasn't evolved to match the times. Let's not do that to C too.
Can we have a longer post limit? D:
> Why don't you use these?
Because string literals are still null-terminated, so now we need two sets of string functions instead of one. That's a recipe for bugs galore.
This is why the standard library should be revamped: let's make something like bstring the standard and chuck out the current mess of fail.
> They are not that annoying.
They're useless.
> What exactly is local type inference?
Here's how we do it now:
int foo( int x )
{
    int bar = x;
}
Here's how we could do it:
int foo( int x )
{
    var bar = x;
}
The compiler figures out that var should be int, so you no longer have to provide the type for each local variable when you define it. It's a bit like a primitive form of Damas-Hindley-Milner type-inference in Ocaml or Haskell that only applies within a function.
Looks silly in this trivial example, but it's really nice for anything bigger.
As mentioned in the other posts I just made, C# 3.0 and D both make use of it. So will C++0x.
> string literals do "support" unicode.
UTF-8. Now let's say I want to index or concatenate that. Whoops.
Sorry, I missed this:
> What makes you think you can not have a "computed goto" in C?
It can. GCC does.
I'd like every C compiler to support it.
>>33
So it's actually what I asked for, typeof().
Though what you ask for is only part of what I ask for; not only that, but what you ask for is almost useless.
> This is why the standard library should be revamped: let's make something like bstring the standard and chuck out the current mess of fail.
I disagree. bstring, for ME and MY projects, is bloat. Do you honestly think bstring can run efficiently, without you noticing, in an embedded system? Where most C programming is now?
> UTF-8. Now let's say I want to index or concatenate that. Whoops.
I think you lack C skills. what the hell are you talking about?
man wcscat.
You can use [] to index.
> I don't think that's relevant on modern machines.
Well you're wrong; I just measured it. On large strings (more than a page big) the cost is significant. On modern systems that means flushing the CPU cache and more page faults. On PIC it would mean no C at all (or at least, no strings).
Go ahead and use Java or D or whatever if you think the costs don't matter.
> Part of my job involves security
> I've had this problem with C bite me in the ass a number of times in the past,
You're an inexperienced C programmer and you're working on security systems. No wonder you think it's a good idea.
You think that the only possibility is being more careful, and having the computer assist you at that (because clearly, being careful is hard).
Of course, the reality is that you shouldn't be writing C code because when you do it, you get security vulnerabilities. You just minimize the amount of C and hope for the odds, right?
When I write C, I don't get security vulnerabilities. That's not bragging, and it's not because I'm much more careful with bounds checking. I even use strcpy().
Real security comes from knowing exactly what the interactions are at the important moments, and failing fast. Compiler hiding makes that difficult, which is why I use C. Using Java or Perl would remove bounds checks, but it'd require knowledge about their innards instead of just the operating system itself.
I would, however, like to see O_TRUNC go away. Maybe renaming it O_TRUNC_THIS_DOESNT_DO_WHAT_YOU_THINK_IT_DOES would deter people...
> The Lisp world is having problems right now because the Common Lisp standard hasn't evolved to match the times.
Uh no. The lisp world is perpetually having problems because using lisp effectively requires a much higher skill level than the populace has.
>> What exactly is local type inference?
> example clipped
Having a "var" keyword figure out the type of things is a bad idea. And no, it isn't even remotely DHM typing; a simple string/symbol replacement is what it is. You can use typeof() for this right now if you want to try it out and see how awful it is.
> what you ask for is almost useless.
Why?
> man wcscat.
Yes, wchar_t. That's not UTF-8, which you can at least write legibly in literals. How do you plan to embed wide characters in a string literal? Give me an example of a wide string literal that contains: 双葉ちゃん
> You can use [] to index.
For wchar_t. Not for UTF-8. We keep coming back to problems with literals. And of course wide strings still aren't clean; they still have a null terminator.
> Do you honestly think bstring can run efficiently without you noticing in a embedded system?
Then use C 1.0 on your system. Let's just hold back the closest thing there is to portable assembly for that.
May I point out that embedded systems are pretty beefy today? Unless you're programming on a HC11 you have more computing power and memory than on my old 286, where Pascal did fine. This will increasingly be the case.
What platform are you writing to?
> I just measured it
What are you waiting for? The source for your test, please!
> You're an inexperienced C programmer and you're working on security systems. No wonder you think it's a good idea.
First, I don't work on C anymore. Not at work anyway. I would appreciate if you do not make any claims about my skill. Other than being crass, you don't know me.
My arguments will stand on their own merits.
> You think that the only possibility is being more careful
A strawman. I'm well aware that security is layered. But such simple buffer overflows are just one more pointless source of vulnerabilities, as has been demonstrated thousands upon thousands of times. We come back to my box-with-open-ports analogy; there are local escalation exploits aplenty.
> And no, it isn't even remotely DHM-typing
I never said it was. Please read my post more carefully:
> It's a bit like a primitive form
Note the like.
wchar_t is worse than useless. You use L"string" to make wchar_t literals, but don't do that.
> What are you waiting for? The source for your test, please!
Are you serious?
struct str { int len; char s[1]; };
struct str *m;
int i, n;
for (n = 0; n < 4096*1024; n += 1024) {
    m = malloc(sizeof(struct str)+n);
    for (i = 0; i < n; i++) m->s[i] = 0x80|(n&127);
    m->len = i;
    /* flush cache here */
    fun(m);
}
Make sure your compiler doesn't cheat so you're actually measuring cache fetches. I used GNU lightning to flush the cpu cache before continuing.
Then play with passing (volatile)i versus access to (volatile)m->len to fun() so you can compare. I used something like this:
void fun2(const char *s, int len) {
    int i;
    for (i = 0; i < len; i++) {
        if (s[i] == 0) break;
    }
}
void fun(struct str *x) {
    fun2((const char *)x->s, (volatile int)x->len);
}
This is entirely repeatable. You'll note that if you instead do something like this:
void fun(struct str *x) {
    int i;
    for (i = 0; i < x->len; i++) {
        if (x->s[i] == 0) break;
    }
}
it's a lot slower, because every loop iteration can cause a page fault. Obviously you'll need to preallocate however much memory is in your system to force it to swap.
If your results don't match, try increasing the number of cycles, or increasing the load on your system. The figures I posted above give me a difference of greater than 10 seconds.
> I would appreciate if you do not make any claims about my skill. My arguments will stand on their own merits.
You brought up your own skill level as being relevant when you brought up security. Don't do that.
Your argument was that safety was important for security and security is important to you because it's a big part of what you do. You seem to believe that inexperienced programmers can be careful enough to write secure systems if they get enough help from the compiler.
I think that's retarded.
> I never said it was [DHM typing].
No, you said it was like it. It isn't. It's not even close.
C types aren't really types. They're simply convenient accessors. Accessing using "var" is pointless because "var" doesn't specify the size of the type. Morphing the type as the value changes, as C# does, isn't C.
> May I point out that embedded systems are pretty beefy today?
No you may not. I still use embedded systems with memory measured in bytes. A C 2.0 should be able to replace a C 1.0 if you're trying to correct defects in C 1.0.
On the other hand, if you're just singing wishes, you might as well get behind something like D or C++- something without any possibility of overtaking C.
ugh, wakaba fucked up my formatting.
> Are you serious?
Of course. :)
Thanks for providing the code. I'll poke at it when I get home. At the very least I'll learn something new.
> You brought up your own skill level as being relevant when you brought up security.
It wasn't meant that way. Rather, it's why I'm so concerned about buffer overflows. All the software we use here is either written in C, uses a C library, or is built on C. Most of the security advisories we've had to worry about were related to C's handling of strings.
> Accessing using "var" is pointless because "var" doesn't specify the size of the type.
No, but the compiler knows what var should be. I'm not sure I see the problem here. Can you give an example?
This is what we have now:
int foo( int x, unsigned char y, char *z )
{
    int a = x;
    unsigned char b = y;
    char *c = z;
}
This is what I'd like, for no other reason than it makes code visually less cluttered:
int foo( int x, unsigned char y, char *z )
{
    var a = x;
    var b = y;
    var c = z;
}
This is entirely at compile time. The types do not change. As you said, it's a bit like string substitution. So what am I missing?
As a bonus, if you change the type signature you don't need to worry as much about local types (e.g. going from assigning from a signed int to a signed int, to assigning from an unsigned int to a signed int because you forgot to change something in the body).
> Morphing the type as the value changes as C# does isn't C.
At compile time? Run time?
Why shouldn't C be able to do something like this at compiler time? What problems will it cause?
> I still use embedded systems with memory measured in bytes.
Like an HC11. It has 256 bytes of internal RAM.
I realize this is a brush-off, but I think assembly or Forth is more suitable for a device that is so constrained.
Okay, so your machine has very little memory. Chances are its word is 16 bits. This amounts to an extra eight bits per string, which is mostly in ROM. If you really can't afford that overhead in RAM... ADT.
Sorry, I missed this:
> You seem to believe that inexperienced programmers can be careful enough to write secure systems if they get enough help from the compiler.
Not at all.
I think it makes a class of possible vulnerabilities a lot less likely. It can't prevent them. It won't do anything else about the myriad other possible problems you get with a low-level language. But it'll dramatically reduce a common class of problems.
It's because I don't want to rely on fallible developer discipline that I believe we should have a safer default. Developers will make mistakes, so let's give them fewer chances to make a catastrophic one.
> It wasn't meant that way. Rather, it's why I'm so concerned about buffer overflows. All the software we use here is either written in C, uses a C library, or is built on C. Most of the security advisories we've had to worry about were related to C's handling of strings.
Then I apologize!
I simply disagree; I think it has to do with the irresponsible handling of strings. If you'll note, that can easily include injection and quoting attacks as well.
> At compile time? Run time?
> Why shouldn't C be able to do something like this at compiler time? What problems will it cause?
I'm talking about compile time as well; it's a readability thing. When I do audits, I use ctags to track the sizes of the accessors when checking for pointer escapes (off-by-one errors, etc). If I see this:
do { char *i; /* 50 lines later */ something-with-i } ...
do { int i; /* 50 lines later */ ...
then it's hard to tell what the type of "i" is by looking at it.
I personally use very short functions, but I frequently have to look at code like this. var would work exactly in this way except I would have to follow the assignment to see what its type is.
Compilers can do amazing things, and the cost (as you've noted) isn't always in run-time cycles.
But as I said, you can try using var right now with some macros. Try something like this:
#define var(x,y) typeof(y) x=y;
to see how you like it. It'll be easier to talk about what a good idea it is when you've tried it out in real code for a while.
I find it very difficult to follow, and it seems obvious to me that this would be a good way to hide complexities and costs from the programmer. Something that I think contributes to a lot of defects and security bugs in the first place.
> I realize this is a brush-off, but I think assembly or Forth is more suitable for a device that is so constrained.
I agree, most of the time. Unfortunately, I have customers afraid of forth...
> I think it makes a class of possible vulnerabilities a lot less likely. It can't prevent them. It won't do anything else about the myriad other possible problems you get with a low-level language. But it'll dramatically reduce a common class of problems.
Trading one kind of problem for another isn't really winning; if programmers learn that s[-1] contains the length of a string, you'll start seeing code like this: fill_buffer(s,0,s[-1]); and while fill_buffer will certainly be able to check that length <= s[-1], it won't know that this is wrong.
>>37
If you are going to reply to two or more people please put their number quoted like >> N.
> Yes, wchar_t. That's not UTF-8, which you can at least write legibly in literals. How do you plan to embed wide characters in a string literal? Give me an example of a wide string literal that contains: 双葉ちゃん
const wchar_t *foo = L"双葉ちゃん";
AGAIN: what the HELL are you talking about?
By the way, yes I'm convinced that you are not a skilled C programmer, because of your wrong terminology. A skilled C programmer would have read the standard. I don't know whether you're good at what you do, and I won't guess. But you're not a C expert.
man wc scat
> what the HELL are you talking about?
Something I'm wrong about. Why ask the rhetorical question?
> But you're not a C expert.
I never said I was; part of the reason I start language flamewars is they're an entertaining way to find gaps in my knowledge.
Anyway, here's another question: wchar_t is two bytes, not four, on Windows. Unicode needs 21 bits to represent without surrogate pairs. How will the indexing work now?
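To make the surrogate-pair problem concrete, here's a tiny sketch using char16_t (so the 16-bit case is portable rather than Windows-specific); the helper name is made up for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// A character outside the Basic Multilingual Plane needs two UTF-16
// code units (a surrogate pair), so indexing by unit no longer matches
// indexing by character. char16_t is used instead of wchar_t so the
// 16-bit case behaves the same on every platform.
inline std::size_t utf16_units(const std::u16string &s) {
    return s.size();   // counts code units, not characters
}
```

With u"双葉" (two BMP characters) this returns 2, and with a single character beyond the BMP it also returns 2, which is exactly why naive 16-bit wchar_t indexing breaks.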
>>42
I just bumped into this: http://erikengbrecht.blogspot.com/2008/07/love-hate-and-type-inference.html
Parts of it match up with your arguments against type inference.
Interesting, though note that I'm not against type inference per se; I just think inferring this particular thing that also happens to be called a type is a bad idea.
SBCL's type inference has found a significant number of bugs in my own code- especially in code paths that I wouldn't otherwise test heavily.
I did a quick search, and found myself doing a lot of this:
if (strchr(s+1,'#')) { ... }
Your C2.0 with a native string type would have me write this:
if (strchr(s,'#',1)) { ... }
Not necessarily a small change, but certainly not a reduction in arguments. Things like strncmp()
would double in argument-count- can you imagine calling a string-function with 5 arguments and not finding an error?
> Things like strncmp() would double in argument-count-
I was thinking you'd only pass the strings, and strncmp() would internally access the length of each string. So it'd be a variadic function requiring two string arguments and an optional third length argument if you want to compare less than the full lengths.
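A minimal sketch of what that variadic version might look like, assuming a hypothetical length-carrying struct (nstr) standing in for the imaginary native string type. A variadic function has no way to discover its own argument count, so an explicit count comes first:

```cpp
#include <cassert>
#include <cstdarg>
#include <cstddef>
#include <cstring>

using std::size_t;

// Hypothetical length-carrying string: not from any real library.
struct nstr { const char *data; size_t len; };

// Compare two nstrs; an optional trailing size_t caps the comparison.
// Callers pass pointers (varargs and struct passing don't mix well).
int nstrcmp(int nargs, ...) {
    va_list ap;
    va_start(ap, nargs);
    const nstr *a = va_arg(ap, const nstr *);
    const nstr *b = va_arg(ap, const nstr *);
    size_t n = a->len < b->len ? a->len : b->len;
    size_t cap = (size_t)-1;            // "no cap" by default
    if (nargs > 2)
        cap = va_arg(ap, size_t);       // the optional third argument
    va_end(ap);
    int c = memcmp(a->data, b->data, n < cap ? n : cap);
    if (c != 0 || cap <= n)
        return c;                       // decided before lengths mattered
    return (a->len > b->len) - (a->len < b->len);
}
```

Note the call site still has to state the argument count, which is the objection raised later in the thread.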
> think inferring a particular thing that is also called a type a bad idea.
What if the inference information was made available to the editor? One much-touted benefit of clang is all the compile-time information made easily accessible to the rest of the world.
A clang or GCC plugin could emit a ctags index file with the inferred types.
>>49 What if you want to start from offset=1 for string "a", and offset=2 for string "b"?
You clearly need a four-argument strcmp() - which means your length would be argument number five.
>>50 What if you try it for a month? Or convert an existing application to it?
As I already pointed out, you can try using var
as a macro right now, and then you'll be in a better position to talk about it.
I think it's unnecessary at best, and costly at worst. I think it's a bit early to defend it with editor support when it's not even clear there's a benefit to it.
>>53 I agree, but I'm playing along: dmpk2k seems genuinely interested in this thought experiment. He started with "this is obviously a good idea", and has come back from that somewhat.
I'm sure you can interact on this subject without hyperbole.
It's hard to hold a position when there's good evidence against it. :)
In any case, food for thought.
You don't watch the news, do you?
>>56
I don't believe the people to whom I believe you're making reference are as well-educated and rationally-thinking as the average /code/ regular.
>>57 is better than those people because he's a programmer.
>>49
Variadic functions need to be able to figure out how many arguments there are.
So a variadic strncmp would need 3 args for no length, or 4 for a max length number.
That's hardly an advantage over strcmp with 2 and strncmp with 3.
Anyway, enough fail. Too many people want to add generic types to C, basically a typeof(void*) that returns what struct it is.
But the usual way is simply to have the first field of the struct be an id (an int or string): you cast the pointer to a struct like that, read the id, and then cast appropriately.
It's the thing that people forget. You just do it yourself in C.
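A sketch of that hand-rolled tagged-struct idiom; all the names here are illustrative, not from any real library:

```cpp
#include <cassert>

// Every struct starts with the same tag field, so a void* can be
// inspected via the common header and then cast appropriately.
enum shape_kind { KIND_CIRCLE, KIND_RECT };

struct shape  { int kind; };                 // common header
struct circle { int kind; double radius; };
struct rect   { int kind; double w, h; };

double area(const void *p) {
    const shape *s = (const shape *)p;       // read the tag first
    switch (s->kind) {
    case KIND_CIRCLE: {
        const circle *c = (const circle *)p;
        return 3.14159265358979 * c->radius * c->radius;
    }
    case KIND_RECT: {
        const rect *r = (const rect *)p;
        return r->w * r->h;
    }
    }
    return -1.0;                             // unknown tag
}
```

This only works because the standard guarantees a pointer to a struct can be converted to a pointer to its first member's type, which is why the tag must come first in every variant.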
Don't like C's strings? Then write your own... I've written my own "buffer" library, called dynbuf, which I use in my webserver.
Here's some of the header:
typedef struct
{
char *data;
size_t size;
size_t start_offset;
size_t end_offset;
} dynbuf;
dynbuf *dynbuf_create (size_t initial_size);
void dynbuf_append (dynbuf * db, const char *data, size_t size);
void dynbuf_append_from_file (dynbuf * db, FILE ** file, size_t read_size);
/* Successive calls will consume more and more of the buffer. The returned string points into the buffer, and is good until the buffer is fully consumed.*/
char *dynbuf_gets (dynbuf * db);
void dynbuf_grow (dynbuf * db, size_t size);
size_t dynbuf_length (dynbuf * db);
/* Return the data pointer+start_offset. The buffer will grow before it fills up, and it will always be NUL-terminated.*/
char *dynbuf_show (dynbuf * db);
/* Om nom nom precious data bytes I must eat them. */
void dynbuf_consume (dynbuf * db, size_t size);
void dynbuf_destroy (dynbuf * db);
This is what C is about, not complaining that someone didn't do 2 seconds of work for you already, and then force you to use it by making it the standard part of the language that all std functions use.
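For completeness, here is a guess at how a few of those functions might be implemented, assuming the grow-only, never-shuffle behavior the post goes on to describe. This is a sketch, not the poster's actual code:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <cstring>

typedef struct {
    char *data;
    size_t size;
    size_t start_offset;   // how much has been consumed
    size_t end_offset;     // how much has been written
} dynbuf;

dynbuf *dynbuf_create(size_t initial_size) {
    dynbuf *db = (dynbuf *)calloc(1, sizeof *db);
    db->size = initial_size ? initial_size : 16;
    db->data = (char *)calloc(1, db->size);
    return db;
}

void dynbuf_append(dynbuf *db, const char *data, size_t size) {
    while (db->end_offset + size + 1 > db->size) {  // +1 keeps room for NUL
        db->size *= 2;                              // grow, never shrink
        db->data = (char *)realloc(db->data, db->size);
    }
    memcpy(db->data + db->end_offset, data, size);
    db->end_offset += size;
    db->data[db->end_offset] = '\0';
}

char *dynbuf_show(dynbuf *db) {       // unconsumed data, NUL-terminated
    return db->data + db->start_offset;
}

void dynbuf_consume(dynbuf *db, size_t size) {
    db->start_offset += size;
    if (db->start_offset == db->end_offset)         // fully consumed: rewind
        db->start_offset = db->end_offset = 0;
}

void dynbuf_destroy(dynbuf *db) { free(db->data); free(db); }
```

Because consumed bytes are only skipped over (never memmoved out), a buffer that is never fully drained keeps growing, exactly the downside discussed next.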
>>59
Oh, and it's FILE ** file because it will close the file and set your copy to NULL when it reaches the end. That suits the way I use it better.
And then there's a less control-freak append_from_descriptor function which I use for sockets mainly, etc.
So this is normal C programming, and there will NEVER be buffer overflows!
The only downside comes from the fact that data is never shuffled: there are no in-memory copies.
So if you have a long-running buffer, and are using it as some kind of FIFO pipe or something, and it is never fully consumed, then its size is just going to grow and grow and grow. And it never reallocs down, only up.
This is because it never starts from the beginning unless it is "empty", obviously. But then I don't use it for stupid things, because as a programmer I use the best tool for the job. Simple.
> This is what C is about, not complaining that someone didn't do 2 seconds of work for you already
Dear Anonymous,
Arguments for programmer discipline apply to PHP too, with the resulting never-ending stream of SQL injection attacks. By comparison, it's rarely seen in Python or Perl. PHP advocates tend to advance the same argument: if you don't like it, build it yourself.
Of course I can write my own string handling, although it'll take much longer than two seconds since I'm not you, but that's really not the problem, is it?
Also, please read a thread before feeling the urge to add to it. Someone else covered that already a couple weeks ago, and in a much nicer manner.
>Arguments for programmer discipline apply to PHP too, with the resulting never-ending stream of SQL injection attacks.
Isn't that a different issue though? That sounds like people are not planning ahead and just coming up with an ad hoc design as they go along, and failing to consider everything as they do so.
So they aren't really building anything at all, just screwing around, perhaps to explore and learn or what have you. Now if they hand in the results of screwing around as a finished product then lol. Of course.
Anyway I think that sometimes people blur the difference between a language and a framework too much. Several languages come with a framework of sorts, as a convenience. Just like in Java you don't have to use their ADT implementations or their GUI classes, you can just roll your own (but you'll need to link some C in there to get to openGL or GDI or whatever you use to present your own GUI to the user).
If a language is going to include a framework as part of the actual language's standard specification, then there are 2 ways to do it. PHP is a half-arsed pile of crud, as it has been incrementally expanded over the years.
One way to go is to not force or expect anything of the programmer, because you don't want to force them to do something that they otherwise wouldn't do just to use the implementation of the language, and the other is to force them but try to make it useful / not ACTUALLY in the way.
This second means that they have to target some specific kind of area / audience, and if they miss the mark then it fails.
A proper programming language would not be aimed at any one specific area unless doing so would not hinder any other aspect of it; otherwise it isn't general, or will rub some people the wrong way.
If you pay attention to what you are actually typing, C is this language. But it doesn't come with a whole lot so you'll be writing a LOT of stuff yourself.
Now if you aren't the kind that writes stuff yourself then you'll be using other people's libraries, in which case you might as well be using a different language that provides such things anyway, if you feel like it. But remember that chances are, the things it provides are themselves wrappers around a C library.
As a side note, this is the reason why the VAST majority of useful libraries are coded in C, not C++.
>By comparison, it's rarely seen in Python or Perl.
Someone hasn't been on the Internet for very long.
> By comparison
>>65
That doesn't mean what you think it means.
Specifically, it doesn't mean ``relatively''.
Then what did I mean?
"Rare" is always relative to something.
Think a moment before dragging out the pedant hammer.
Ohhh C++.
If you go with C++ then you will most likely be using its object-oriented features.
Then it will confuse you that variables holding objects always hold the value of that object. In an OO language the variable should be a reference to the instance of the object, something C++ doesn't do. You have to do that yourself. And if you do things right, you will be jumping through these hoops a lot.
C++ fails at the model it's most used for in the most far-reaching and basic way.
>>68
It's not about 'stack'.
C doesn't have a stack. The function itself is required to know the number of arguments.
So such code is possible:
int f(size_t argnum, ...)
It really has nothing to do with a 'stack'. You either talk about a specific machine or the ISO standard.
> Then it will confuse you as to why variables that hold objects are always the value of that object.
Do you mean... references?
> C++ fails at the model its most used for in the most far reacing and basic way.
C++ indeed fails for its size and complexity.
OOP fails for its design. OOP FAILS HARD.
C++ fails at OOP.
Having reference variables (variables that always hold a reference to an object) makes sense. Most modern OOP languages do that. C++ does not. You can certainly get a reference to an object, but C++'s default variable behavior is not to automatically provide the reference as it should.
>>71
what behavior? oh god, you're another fucking idiot.
It's like saying C should automatically provide pointers to objects. Are you too lazy to type *?
No, you are the idiot for not thinking through the ramifications of what I have said.
C shouldn't automatically supply pointers to objects. C++ should, but can't, because it's more than OOP. This flaw makes it poorly suited to OOP, yet that is the main kind of programming it is used for.
If you pass an object to a function then it shouldn't be the value of the parameter by default. It should be a reference to the object.
If you don't understand why you would always want to work with a reference, then you don't know OOP.
This is why C++ fails: because it doesn't enforce a basic tenet of OOP, it just accommodates it with extra syntax.
>C shouldn't automatically supply pointers to objects. C++ should
That would make the two languages more inconsistent and more confusing to use.
>If you pass an object to a function then it shouldn't be the value of the parameter by default. It should be a reference to the object
...and what syntax would be necessary should one want the value? The indirection *? Reference parameters in functions are confusing enough, let's not add more "features" to an already excessive set.
>>70
You're right in that C does not specify that parameters go on a stack. But that's where it ends. In common C ABI specifications, register-based parameter passing conventions (like with amd64, powerpc and sparc) behave exactly the same way, i.e. the caller manages the parameter stack.
And for most people, the ABI is a part of the target they are programming for. Thus practically inseparable from C the language.
>>1 here. I thought this topic would be dead, by now. Anyway, as someone suggested, I stayed on C for a while. I've learned quite a bit, but the problem is I don't know what to do next. Are there some fun libraries to poke around with? If so, recommendations would be nice.
Also, book recommendations for CL and Lisp would be awesome too. :D
>And for most people? Citation please. What do you mean most people? Most people don't know C.
Pedant. You know precisely what I mean. And I do not see any counterarguments coming from you.
Are you perhaps one of those people who, contrary to readily available evidence, believe that it is impossible to write a working program in C?
>>78
I'm one of those people that know C well.
C does not have a stack. Any other information is not related or inseparable from/with C.
>>79
You are also an insufferable pedant, and the sort of person for whom it is of paramount importance to always be right. Regardless of what this does to the general usefulness of the conversation at hand. Fuck you.
Indeed, the C standard does not specify a "stack" for the passing of parameters. This is very much true. As with many things, the C standard doesn't specify the down-and-dirty method of implementation for e.g. automatic variables, alloca()
or varargs functions. However, can you present a mechanism that satisfies the C standard's requirements for parameter passing, automatic variables and varargs functions without being, in effect, a parameter stack?
Thus, as is usual for the C standard, it stays just the minimum amount on the side of not specifying a stack-based mechanism. I claim that a future C standard, were it to specify a stack-based mechanism, would differ from the current standard only in its explicit use of the word "stack".
For all intents and purposes, the conventions of the target architecture that actual people program for are inseparable from the language as it is seen by the programmer. Thus the average C programmer does not give a rat's ass whether the standard specifies a stack for yada-yada or not: practically every implementation of C manages frames on the stack, alloca()
s memory on the stack and passes parameters for varargs functions on the stack.
>>80 Sure! I can be pedantic too!
ZetaC didn't use a parameter stack for arguments. It used a heap.
However an "expert C programmer" familiar with the kind of C you see on unix-like ABIs (Windows, Linux, MacOS, and so on) would have lots of problems with ZetaC- which although strictly conforming to C's specification, did strange things in order to be useful to the surrounding lispm.
>That would make the two languages more inconsistent and more confusing to use.
Yes it would, so given C++'s OOP usage I say it's fundamentally flawed.
>...and what syntax would be necessary should one want the value? The indirection *? Reference parameters in functions are confusing enough, let's not add more "features" to an already excessive set.
You are not getting what I am saying. There should be zero syntax for the default of passing objects as parameters. It is the default and correct behavior to pass by reference, so it should require zero syntax to accommodate (known as common sense).
Should there be syntax to pass an object by value? Maybe. Or you could just create a new object to pass in by reference, because that's what the compiler is going to do anyway. Either way, those cases would happen so infrequently that the extra syntax or hoop would be acceptable. And please notice I said passing objects by reference; other variables should be passed by value by default.
> It is the default and correct behavior to pass by reference
Elaborate?
Yeah, I think pass-by-reference is about the dumbest part of most languages that support it.
In FORTRAN it was so bad you could accidentally change the value of CONSTANTS like "4" if you weren't careful...
On OOP, you create and work with an instance of an object. If you need to work with the instance of the object it makes complete sense that you would pass a reference to that 1 true instance of that object to a different scope.
If you pass an object by value what you are doing is creating another copy of that instance that is separate from the original instance. So if you work with a copy, then you need to sync up those copies at some point, or do some other such extra work.
C++ makes you pass in the value of the reference to that object (because everything is pass by value), which is extra syntax. However, when it comes to objects you will always want to be working with the instances you create, so most of the time (if you are doing it correctly) you are using extra syntax to properly work with an object.
While I never pass the value of objects, because it's just bad OOP, it might be needed in some weird case, so it should be accounted for and should require the extra syntax.
I think it's just the poor implementation in FORTRAN. Modern OOP languages that have a distinction between value and reference variables make working with OOP more effective than C++.
C++ is an extension to C to accommodate many different programming models, so I understand why this can't be. But C++ would be a good language if they ditched the reliance on C and went full OOP (as C++'s most popular use is OOP).
MS did this with C#. So if someone wants to learn a C-syntax language that is truly OOP then they should go with C#. Also C# is great because of the way it implements templates. C++ is still better than Java at runtime with templates, but it's still lacking as it's basically just a macro. (And don't go on a trip about how C# is locked in to Windows and the desktop, because it really isn't.)
C++ programmers typically pass pointers to objects, and do not normally copy the object itself. However, they do have the option of pointer, copy, and reference access:
void foo_pointer (Object *x);
void foo_copy (Object x);
void foo_reference (Object &x);
They don't usually refer to these things as "call by value" or "call by reference" - those terms are uncommon amongst C++ programmers, but common amongst Visual Basic programmers where there are no pointers.
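A small illustration of what those three signatures mean at the call site; Object here is a made-up stand-in:

```cpp
#include <cassert>
#include <string>

struct Object { std::string name; };

void foo_pointer  (Object *x) { x->name = "pointer"; }   // caller's object changes
void foo_copy     (Object x)  { x.name  = "copy"; }      // a copy changes; caller unaffected
void foo_reference(Object &x) { x.name  = "reference"; } // caller's object changes, invisibly
```

Note that foo_pointer(&o) advertises the mutation at the call site via the &, while foo_reference(o) looks identical to pass-by-value from the outside, which is one reason many C++ programmers prefer pointers for mutating parameters.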
> If you pass an object by value what you are doing is creating another copy of that instance that is seperate from the original instance. So if you work with a copy, then you need to sync up those copies at some point or some other such extra work.
Could you provide an example of this?
I'm a fan of being explicit about mutation and restricting possible scope of change. I think passing by reference isn't worth the hazard it presents, at least in a high-level language.
If everything has pass by value semantics, I can be confident in the state of an object, even if I pass it to other methods; it will never change unless I explicitly assign to it.
If globals are generally a bad idea, I don't see why pass by reference should be any different. I think the latter is a restricted from of the former, and should be marked explicitly -- here be dragons.
Well, better than being plain stupid.
> As with many things, the C standard doesn't specify the down-and-dirty method of implementation for e.g. automatic variables, alloca() or varargs functions.
It does specify crystal clear "vararg" functions.
Are you perhaps confused with K&R1 which refused to explain how one would define a function similar to printf?
>C++ programmers typically pass pointers to objects, and do not normally copy the object itself.
Yes, and my gripe is the extra syntax required to do something that should be the default behavior.
>They don't usually refer to these things as "call by value" or "call by reference" - those terms are uncommon amongst C++ programmers, but common amongst Visual Basic programmers where there are no pointers.
It is not just VB programmers, its any programmers of modern OOP languages like VB, VB.Net (which is different from VB, just has the same style of syntax), C#, Java and others.
VB properly abstracts pointers. Passing any object to another scope passes a pointer (well the value of a reference) to that scope by default. It takes extra syntax to copy an object to mimic passing a value because its not something one would typically do in OOP.
The idea here is that once you create an instance of an object, when you work with that object you are always working with that instance. Let's say I have a car object and need to pass it to the paint function. Paint takes the car to paint and the color to paint it as parameters. If I pass car by value, a new copy of the 1 car I need to paint is made, it is then painted and then..... uhhhh mmm. What I wanted to do was paint the car, not a copy. So my paint function will need to return a painted car, and I will then need to sync up the state of the returned car and the car I passed in. I will need to do extra work outside of the paint function, which makes little sense because the paint function needs to paint the car and be done.
If a reference is passed in, then the 1 car I am working with is painted and the paint function doesn't need to return anything. Once it is done executing I have a painted car.
As for globals, there are many reasons they are bad and the opposite is true for reference type variables.
One reason globals are bad and reference type variables are not is that a programmer can tell when a reference variable is likely going to be changed but can't for a global.
My paint function would have a signature that includes the car and the color. Just from the signature of the function (the parameters it requires) I can tell that the car can be modified. If my paint function modified a global variable I cannot tell from its signature that it has anything to do with a global and have to check it line by line. Any function can modify a global, but only function that have a signature with a car in it can modify an instance of a car, and it will only modify the instance that is passed in to it.
Also, syncing state between 2 objects that initially start off as copies is needlessly complex, error prone, and requires more memory.
> So my paint function will need to return a painted car, I will then need to sync up the state of the returned car and the car I passed in.
painted_car = car.paint()
Unless you're multithreading, there is no need to sync an object; what you get back from the call is the newest version of the object. And only crazy people want to use references with threads.
> One reason globals are bad and reference type variables are not is that a programmer can tell when a reference variable is likely going to be changed but can't for a global.
How can they tell if a referenced variable will change? Here's a method call:
foo.bar( baz )
Will baz change or not? You don't know, unless you look at bar(), and all method calls inside bar() that use baz, and all children calls in turn that use baz. I feel pretty strongly that's bad news.
By comparison, if it's a value, you know if baz will change: since you haven't used assignment here, no.
> Also, syncing state between 2 objects that initially start of as copies is, needlessly comples, error prone, and requires more memory.
The only one I agree with is the increased memory usage, and even that can be mitigated. Note that I said semantics. In a high-level language, what the machine code does underneath isn't really a concern; if the compiler can prove that no modifications will be made -- which is trivial with copy semantics -- then it'll pass a reference along. If it can't, use a copy-on-write scheme. Or just copy it.
If you really need actual references as an optimization, you can use it. I just disagree it should be the default.
> It is not just VB programmers, its any programmers of modern OOP languages like VB, VB.Net (which is different from VB, just has the same style of syntax), C#, Java and others.
VB has references:
Sub Foo(ByRef X As String)
X = "Foo"
End Sub
C++ has references:
void Foo(string &X) {
X = "Foo";
}
Perl has references:
sub Foo {
$_[0] = "Foo";
}
FORTRAN only has references. Java does not. Smalltalk does not. Common-Lisp does not. Most lisps do not. Python does not. Most C++ programmers don't use them. Perl goes through enormous contortions to detect problems at run-time caused by references. As far as I know the only language with the encouraged and pervasive use of references is Visual Basic.
Perhaps you're confusing references with something else?
Is paint a static method of car? Cars don't paint themselves so having a paint member on car wouldn't make sense. In a proper model, car would be part of some carFactory class or some helper function in an appropriate namespace. In OOP just because you want to do something to an object does not always mean that class should be the one doing it.
>How can they tell if a referenced variable will change? Here's a method call: ...
I can tell you that the bar member will use baz (I mean really it should) and can modify it. To know if it does change you do of course have to inspect bar. To see everywhere that baz will change you have to inspect all the functions that have baz in the signature (or the scope its created in of course). Now if we have a global, it could be changed in bar. It could be changed anywhere. I have no idea where in the program that global is going to be used, it can be used and changed anywhere.
Changes to the reference are limited and easy to identify where they are potentially going to happen. Changes to globals can happen anywhere and the entire code needs to be inspected.
>By comparison, if it's a value, you know if baz will change: since you haven't used assignment here, no.
The problem with the value of baz is baz is now not the object you passed in.
A reference variable is one that evaluates to a pointer. Your examples are true enough, but what I am talking about is objects and OOP.
In VB, VB.Net, C#, Java and some other languages there are 2 types of variables. Value types and reference types. Value types evaluate to the value of the variable (things like numbers and strings) and reference types evaluate to a pointer to the value (all object variables).
In VB you would never want to do this:
Sub Foo(ByRef X As Object)
X = New Object
End Sub
What you have just told it to do is pass a reference to the reference passed in. What you want is the value of the variable to pass in, because all object variables are references. So you would want the signature to read: "ByVal X As Object" to get the reference to the object being passed. (The VB compiler actually isn't that dumb and will treat the above function as if the X variable is passed by value automatically, and strangely without a warning.)
Value type variables can be passed either by the value or by a reference to it.
The distinction between the 2 types of variables makes OOP easier and less prone to mistakes, as the language and compiler treat the variables properly. C++ does not and cannot have this distinction, so it is up to you to add the extra syntax to properly work the OOP way.
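For C++ readers, the ByVal/ByRef-with-reference-types distinction maps roughly onto pointers, since a VB object variable is effectively a pointer. These names are illustrative only:

```cpp
#include <cassert>
#include <cstddef>

struct Object { int v; };

// "ByVal X As Object": the pointer is copied, so the callee can mutate
// the object but cannot rebind the caller's variable.
void byval(Object *x)        { x->v = 1; }
void byval_rebind(Object *x) { x = nullptr; }  // rebinding is local only

// "ByRef X As Object": the pointer itself is passed by reference, so
// the callee can rebind the caller's variable.
void byref(Object *&x)       { x = nullptr; }
```

This is why the post above says ByRef on an object parameter is a reference to a reference: it adds a second level of indirection on top of the one every object variable already has.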
Also, all variables in Python are references.
> Cars don't paint themselves so having a paint member on car wouldn't make sense.
Sure, but that's besides the point -- if you don't like the example, substitute foo/bar/baz. I draw your attention to the assignment.
> To see everywhere that baz will change you have to inspect all the functions that have baz in the signature
Indeed. That's a big problem. Maintain any non-trivial codebase and this is a disaster waiting to happen. Let's say there are two method calls that use the same object and you have call-by-reference: does their order matter? You don't know without knowing the innards of all its children. If you ask me, that's taking the principle of least knowledge in the back alley and gang-raping it.
> It could be changed anywhere.
Right. And references can be changed anywhere in your child call hierarchy, which is a bit of an improvement, but leaves a lot to be desired for code comprehensibility. Now how about restricting it to local scope so reasoning about it becomes pretty easy?
For example, random code:
x = [ "hay", "guyz" ]
y = foo( x )
z = bar( x )
puts( x.join )
What will it print if we're using call-by-value semantics? Call-by-reference?
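The same question transcribed into C++, where the two semantics are explicit in the signature. With pass-by-value, x's final contents are knowable at the call site without reading the callee; with pass-by-reference they are not. The functions are hypothetical stand-ins:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

using strings = std::vector<std::string>;

// Both bodies mutate their parameter identically; only the parameter
// declaration decides whether the caller's x is affected.
std::size_t foo_by_value(strings v)      { v.push_back("???"); return v.size(); }
std::size_t foo_by_reference(strings &v) { v.push_back("???"); return v.size(); }
```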
> The problem with the value of baz is baz is now not the object you passed in.
You're going to have to demonstrate how this is a problem. If you change baz inside a method/function, and you want to keep the changes, return it and assign back to baz -- or better yet, give it a new variable with a descriptive name. It's very clear to any maintenance programmer that something might have changed with baz. It's very clear to you too in several month's time.
> VB, VB.Net, C#, Java
None of these are very good examples of proper OO languages. Assuming that what they do is the "proper OO way" is pretty naïve.
C++ is far from a proper OO language too, but your argument that it's bad because it doesn't work like the Java-inspired language family is really not valid in any way.
Hijacking terminology makes talking with you very difficult, and you're using definitions that other people in this field don't use.
References are not the same thing as pointers. Variables aren't "evaluated" except in interpreted languages.
You're complaining that C++ makes you write:
void Fun(Object *Foo);
when you want to write:
void Fun(Object Foo);
despite the fact that would confuse C and Objective-C programmers. Neither of those are references. Saying "reference" to someone who knows C++ makes people think you are talking about this:
void Fun(Object &Foo);
which is identical to VB's ByRef
which stands for by reference. It just so happens that C++ and VB share a definition of Reference.
If you wanted to be understood, you would say "I hate that C++ doesn't automatically make all class-variables pointers to classes by default"
Then we could have a meaningful discussion about what's involved in that, why that would be good, and why it would be bad.
Instead you come off as critiquing something you don't understand, and you really don't know what you're talking about. Saying things like "the OOP way" and "proper OO languages" reinforces this.
It makes it seem like you believe that Object Oriented Programming Languages never existed before Visual Basic. Now by confusing references and pointers, Java and Python can be Object Oriented languages too- but these are also very young indeed!
> Sure, but that's besides the point -- if you don't like the example, substitute foo/bar/baz. I draw your attention to the assignment.
If I need to pass some object foo to a function bar, and that function needs to change foo, it would make the most sense to give that function foo to change. Giving it foo, returning baz, and then assigning baz back to foo outside of the function means the function didn't accomplish what it needed to.
I should be calling foo(bar).
Calling bar = foo(bar) isn't really all that OO. And if I need to return a result from foo, such as whether it was successful or not, I need to add more complexity.
if(foo(bar))
is better than
bar = foo(bar, &baz)
if(baz)
(or, even worse, checking for expected results of the foo call on bar.)
Both will work and are readable enough, but the first one follows OO design better. It encapsulates what foo is doing much better. Unless a function creates a new instance of an object or passes instances between application tiers, returning objects from functions isn't good OO design.
>Let's say there are two method calls that use the same object and you have call-by-reference, does their order matter?
I would say its as easy as knowing what each call accomplishes. You don't need to examine the code.
I didn't hijack anything. I am not sure if you understand the difference between value type and reference type variables.
>void Fun(Object &Foo);which is identical to VB's ByRef which stands for by reference.
For example, that statement is not true.
void Fun(Object &Foo);
In VB.Net would be:
Sub Fun(ByVal Foo As Object)
It is ByVal because Foo is a reference (because it is an object, which makes it a reference-type variable), and you wouldn't pass a reference by reference. You pass in the value of the reference.
>If you wanted to be understood, you would say "I hate that C++ doesn't automatically make all class-variables pointers to classes by default"
No, because pointers wouldn't be the answer. What I am saying is that C++ isn't very good at OOP because it does not treat objects, the basis of OOP, differently from other variables in a way that lends itself to the style of OOP.
>It makes it seem like you believe that Object Oriented Programming Languages never existed before Visual Basic.
I am using commonly used OOP languages as examples because it makes for a more practical discussion.
Also, VB and VB.Net are very different languages. VB isn't very OOP because it doesn't cover other basic OOP concepts like inheritance well, so sticking to VB.Net is better.
And just to add, C++'s multiple inheritance is fucking evil.
> Giving it foo and returning baz and then making baz = to foo outside of the function means the function didn't accomplish what it needed.
It did. The change is available in the object being returned. There is no functional difference except that one is being explicit about change.
> Calling bar = foo(bar) isn't really all that OO.
I like purity, but I'm more interested in what works. The example above isn't OO (it's procedural), but let's run with the idea. Why do you care if it's OO or not? Think carefully about why OO exists, and we'll argue about it.
> is better than
Actually, I think both are poor pieces of code. For the first, why are you mutating an object like that inside a comparison? The same applies with foo() in the second: you're trying to do too much with one function.
Also, sane languages allow multiple return values, but if you're getting multiple return values -- which is what you're attempting with the second example -- that's a code smell.
> It encapsulates what foo is doing much better.
How? Both have exactly the same external effect: change bar and return a status about the change.
> I would say its as easy as knowing what each call accomplishes.
Well then, what does each one print? Give it a try and tell me what you'll get with value and reference semantics.
> I didn't hijack anything.
You're redefining terms to mean something other than their accepted meaning.
> void Fun(Object &Foo);
>
> In VB.Net would be:
>
> Sub Fun(ByVal Foo As Object)
No it wouldn't, because if Fun modifies Foo by assignment, that is using:
Foo = Bar;
then the C++ version modifies the Foo as seen by the caller, whereas it doesn't modify the Foo as seen by VB.NET's caller.
http://www.cprogramming.com/tutorial/references.html
> What I am saying is that C++ isn't very good at OOP because it does not treat objects, the basis of OOP, differently from other variables in a way that lends itself to the style of OOP.
I don't think you have any idea what you're talking about. You clearly do not understand C++.
> I am using commonly used OOP languages as examples because it makes for a more practical discussion.
You're using VB and VB.Net because you don't know any other object oriented languages. I cut my teeth on Simula 67, so I give the term "Object Oriented" a quite wide berth.
> Also, VB and VB.Net are very different languages. VB isn't very OOP because it doesn't cover other OOP basic concepts like inheritence well so sticking to VB.Net is better.
sigh
Demonstrating you know something about VB and VB.Net doesn't demonstrate that you know anything about C++.
> And just to add, C++'s multiple inheritance is fucking evil.
Like this, for example. Perl supports multiple inheritance. Python supports multiple inheritance. CLOS supports multiple inheritance. Eiffel supports multiple inheritance (sort of).
There's nothing wrong with multiple inheritance: it solves very real problems, which is why C# and Java have added interfaces, which solve some of those problems without the ability to share code.
> What I am saying is that C++ isn't very good at OOP because it does not treat objects, the basis of OOP, differently from other variables in a way that lends itself to the style of OOP.
Some languages approach OOP with a much greater focus on message passing instead of objects. I was introduced to OOP via C++, but when I got into languages like Lisp, Smalltalk, et al., I realized you can do OOP in a variety of ways. I think you should look into these sometime and expand your view of OOP and how it can work.
>There is no functional difference except that one is being explicit about change.
Yes, but the big difference is the scope of where the change is taking place. It's better encapsulated to change the object in the function that is responsible for making the change, instead of creating a copy of the object and setting it in the scope of the call.
>Why do you care if it's OO or not? Think carefully about why OO exists, and we'll argue about it.
I care in this case because it's a discussion of C++'s OOP abilities.
>For the first, why are you mutating an object like that inside a comparison? The same applies with foo() in the second: you're trying to do too much with one function.
You know, I thought about that after I wrote it. For clarity it should set the value of some bool that was created to store the result.
>How? Both have exactly the same external effect: change bar and return a status about the change
Almost. Bar changes foo. Or Bar changes a copy of foo and you need to set foo to Bar's return in the calling scope. The actual changing of the passed-in parameter happens in Bar if it's a reference. The actual change of foo happens in the calling scope if it's a value and Bar returns the result.
>You're redefining terms to mean something other than their accepted meaning.
Not exactly. The term "evaluate" does not always mean a function that executes arbitrary code at runtime, especially in the context I used it. I used the word evaluate because Sun's Java docs used it.
>then the C++ value modifies the Foo as seen by the caller whereas it doesn't modify the Foo as seen by VB.NET's caller.
Are you saying that if I passed in an object Baz in VB.Net to:
Sub Fun(ByVal Foo As Object)
Dim Bar As New Object
Bar.Color = "blue"
Foo = Bar
End Sub
by calling something like:
Dim baz As New Object
Baz.Color = "red"
Fun(baz)
Print Baz.Color
that it will print "red" as the color? Because it will print "blue". Just as a similar block written in C++ with "void Fun(Object &Foo);" would.
I hope you chose to modify Foo by assignment to illustrate a point, because that is just not practical in most cases and usually a bad coding choice.
>I don't think you have any idea what you're talking about. You clearly do not understand C++.
I do and you need to remember I am focusing on its OOP abilities. I understand it is limited in many respects by its compatibility with C and it other programming paradigms. This is what I am pointing out.
>You're using VB and VB.Net because you don't know any other object oriented languages.
I was using VB.Net because another poster brought it up.
I was not claiming C++ is the only language with multiple inheritance. But comparing it to Python is a little odd, as Python's implementation is limited (but still troublesome, as I feel all implementations of multiple inheritance are).
>There's nothing wrong with multiple inheritence
I would say there are problems with the way it's implemented most of the time, such as in C++. After working with OOP languages that don't support it, I find that it allows for better class creation. I know there are times I wish I could use it but am much happier with the hierarchy after not doing so. I have found the members of the hierarchy to be more extensible down the line, instead of trying to cram it all into the fewest number of classes.
> I care in this case because its a discussion of C++'s OOP abilities.
Which you have yet to justify as anything less than "I don't like typing the asterisk", without explaining what the real problem is.
I program in CL most days and I don't like typing the parenthesis. As soon as someone comes up with a way for me to get some of the flexibility I get out of CL without doing it I'll be a happy guy. Until then, I keep my bitching by-and-large to myself.
> Especially in the context I used it. I used the word evaluate because Sun's Java docs used it.
No they don't you liar. I challenge you to find a place on sun.com that says "A reference variable is one that evaluates to a pointer."
> Are you saying that if I passed in an object Baz in VB.Net to
> (snippets omitted)
No.
I said:
Sub Foo(ByRef X As String)
X = "Bar"
End Sub
...
Dim Y
Foo(Y)
Print(Y)
and:
void Foo(string& X) {
X = *new string("Bar");
}
...
string Y;
Foo(Y);
cout << Y << endl;
are equivalent, and that this is what a C++ programmer calls a reference. The following:
Sub Foo(ByVal X As String)
X = "Bar"
End Sub
and:
void Foo(string* X) {
X = new string("Bar");
}
are also equivalent; they do nothing except waste memory and time because the above change to "X" doesn't affect the caller of Foo. These are what C++ programmers call pointers and what VB programmers call Objects. Some very misguided tutorials refer to these as reference types. Those tutorials are usually written by non-programmers.
> I do and you need to remember I am focusing on its OOP abilities. I understand it is limited in many respects by its compatibility with C and it other programming paradigms. This is what I am pointing out.
You have no idea what you're talking about. CLOS, Smalltalk and Simula pioneered OOP in three completely separate respects: method resolution, message passing and data hiding. Visual basic is about as object-oriented as a bucket of rocks; barely meeting some very loose definition of the term "Object".
C++ uses objects in the simula-sense. Java uses them in the smalltalk-sense. Saying one is more object-oriented than the other is retarded.
C++ has a lot of problems, but being "less object oriented than visual basic" is a crap load of shit.
> I would say there are problems with the way its implemented most of the time such as in C++.
And that matters how?
Either justify a broad statement like "C++'s multiple inheritance is fucking evil." or shut the fuck up, and at this point I'd prefer the latter; you don't seem to have anything interesting to add.
> After working with OOP languages that don't support it, I find that it allows for better class creation.
You're wrong.
> I know there are times I wish I could use it but am much happier with the heirarchy after not doing so.
Double wrong. You've never used it before. You're a big fat liar.
> I was not claiming C++ is the only language with multiple inheritance. But comparing it to Python is a little odd, as Python's implementation is limited (but still troublesome, as I feel all implementations of multiple inheritance are).
Liar liar pants on fire.
> I have found the members of the heirarchy to be more extensible down the line instead of trying to cram it all in to the fewest number of classes.
Wronger than wrong.
Multiple inheritance creates more classes, not fewer. It gives you the ability to hook method implementations into your interface classes, and specifies a method resolution order for interacting with that data.
It's not evil, but it is surprising if your mixins interact with local data. This is why C++ programmers recommend you don't do that.
> It's better encapsulated to change the object in the function that is responsible for making the change,
What details about the change are you exposing by assigning from a return?
> instead of creating a copy of the object and setting it in the scope of the call.
This makes explicit that there was change, but doesn't say what was changed in the object or how. I don't see what you gain by hiding this fact.
> The actual change of foo happens in the calling scope if its a value and bar return the result.
Sure, but as I've argued above I believe this is superior. It guarantees that the fact there was a change is known, although not what the change was. There's much less chance you'll break something by reordering calls.
Hell, I'll go even further and say that single-assignment is a good idea. Then there's zero chance you'll break something by reordering, since you can't reuse a variable name. Of course, that's mutually exclusive with loops, so it only works in languages that rely solely on recursion.
> As soon as someone comes up with a way for me to get some of the flexibility I get out of CL without doing it I'll be a happy guy.
Forth!
I kid. I have hopes for Factor though. It's actually growing useful libraries.
> No they don't you liar.
Let's keep this civil? D:
I do enjoy Forth a lot, but sadly I don't think quite as well in Forth as I do in CL.
I have a hard time getting excited about Factor because it seems to combine the worst parts of both languages, and it seems far more like postscript than like Forth.
Although the real reason I haven't given Factor a real effort is that it doesn't work very well on my machine (Xv), and I haven't heard quite enough praise to work past the technical problems.
>Which you have yet to justify as anything less than "I don't like typing the asterisk", without explaining what the real problem is.
As I have stated multiple times, the default behavior of C++ does not treat object variables as an instance of the object in various scopes. It is left up to the programmer to implement extra syntax and logic to treat object instances properly.
>No they don't you liar. I challenge you to find a place on sun.com that says "A reference variable is one that evaluates to a pointer."
Well a quick search shows that the word evaluate is used in many different contexts in Sun's Java docs.
>No.
>I said:
>...
>Some very misguided tutorials refer to these as reference types. Those tutorials are usually written by non-programmers.
This is where there is a disconnect. Your example uses strings which are value types in VB.Net and not treated the same as objects. I am talking about objects which are reference types.
Here is an early article by Jeffrey Richter about those types.
http://msdn.microsoft.com/en-us/magazine/cc301569.aspx
Who is Jeffrey Richter? He has contributed both design and code to the following products: Windows (all 32-bit and 64-bit versions), Visual Studio .NET, Microsoft Office, TerraServer, the .NET Framework, "Longhorn" and "Indigo".
So I think he knows what he is talking about.
>Visual basic is about as object-oriented as a bucket of rocks;
I have already agreed with you on that. My statement was regarding C++.
>Saying one is more object-oriented than the other is retarded.
It is possible to say a language is more OOP than another.
>Either justify a broad statement like "C++'s multiple inheritence is fucking evil." or shut the fuck up, and at this point I'd prefer the latter; you don't seem to have anything interesting to add.
Let's focus on one thing at a time. First we need to clear up your misconceptions about reference variables.
>What details about the change are you exposing by assigning from a return?
The very detail that the change occurred to the object passed in.
>This makes explicit that there was change, but doesn't say what was changed in the object or how. I don't see what you gain by hiding this fact.
Making the change through assignment won't tell you what in the object changed either. What you gain is less memory overhead and less repetition. If a function takes an object and changes that object, one would expect the object passed in to be the one changed. Why would one write a function that takes an object, creates a copy of it, changes that copy, and returns the copy? If one did not want to change the original instance of the object, a copy of that object should be made by the programmer in the scope of the call, and the copy passed in.
>Sure, but as I've argued above I believe this is superior. It guarantees that the fact there was a change is known, although not what the change was.
The implementation of the function should let one know that the object was changed. In the case where one knows nothing about the code they are looking at and what it does, in the scope of the call it is more obvious that the object may have changed. But at the same time, you need to know a little about the code you are working with. One doesn't just start calling functions without knowing what they do first.
>There's much less chance you'll break something by reordering calls.
Reordering calls to what exactly?
If I had:
foo = one(foo)
foo = two(foo)
foo = three(foo)
I could do the same (as a reference) with:
one(foo)
two(foo)
three(foo)
Re-ordering those calls in either case wouldn't affect anything the other doesn't affect; at the end of each call, foo is the same.
> It is possible to say a language is more OOP than another.
Sure. It's possible to say the sky is green and grass is blue. It isn't meaningful to say those things though, and you're not communicating with other people when you do.
As far as I know, you're arguing that the C++ "*" in variable and function prototypes means C++ is less object-oriented than Visual Basic. I parse that to mean you dislike the asterisk. You keep arguing with me about it though.
You've also made baseless arguments that C++ is less object oriented than Visual Basic because of multiple inheritance. You have so far refused to substantiate that.
> This is where there is a disconnect. Your example uses strings which are value types in VB.Net and not treated the same as objects. I am talking about objects which are reference types.
Use objects then:
Sub Foo(ByRef X As Object)
Set X = CreateObject()
End Sub
versus:
Sub Foo(ByVal X As Object)
Set X = CreateObject()
End Sub
The former modifies the caller's idea of X and is thus called a reference. This definition meshes with what C++ users refer to as reference.
"Reference types" versus "Value types" is uncommon amongst C++ programmers. You can point to Jeffrey Richter all you want, but you still haven't explained yourself. What exactly is Jeffrey Richter supposed to be supporting you on? That Visual Basic has things called reference types and value types? Or that C++ does?
Care to provide an exact citation and explain exactly what you're arguing?
Care to justify your baseless arguments about how something is more or less OOP?
> Well a quick search shows that the word evaluate is used in many different contexts in Sun's Java docs.
You made it sound like you already had the appropriate sound-bite ready and were citing Sun as an authority using that term. I accept them as an authority, but you still need to provide the evidence that they define pointers in that way. Finding the passage "A reference variable is one that evaluates to a pointer." would do that.
> Making the change through assignment won't tell you what in the object changed either.
So encapsulation is preserved.
> Why would one write a function that takes an object, creates a copy of it, changes that copy, and returns the copy?
Because correctness is more important than efficiency. People maintain code, therefore to improve the likelihood that code changes are correct you trade off efficiency. What's the point of a super-fast algorithm if it gets the answer wrong?
If you really need that performance in a profiled chunk of code, references are still there, just not the default. You know what you're getting into.
In addition, for a high-level language, just because it has pass-by-value semantics doesn't mean it's doing copying under the hood. As I mentioned earlier, it's trivial to determine if a basic block will modify a variable. Why should a programmer waste time worrying about a detail that a compiler is guaranteed to get correct (unlike the person)?
And last, with NUMA architecture, actual copying will eventually become a performance advantage. You want to keep data you're modifying local to your processor, otherwise RFOs will kill any chance at near-linear scaling with cores.
> I could do the same (as a reference) with:
Yes, but what if you start with the second. You're faced with the following code:
x = 1
foo( x )
bar( x )
baz( x )
What is the value of x at the end? Is it safe to reorder the calls? If you have this:
x = 1
foo_x = foo( x )
bar( foo_x )
baz( foo_x )
You know the following at the end:
>That Visual Basic has things called reference types and value types? Or that C++ does?
That VB.Net, C#, Java, etc. make a distinction of how object variables are treated in OOP. That distinction means the programmer is always working with instances of the objects they create. It is OBJECT Oriented Programming.
An object variable in a language that has reference variables is ALWAYS going to be referring to the instance of the object created, in any scope it is passed to. C++ certainly can do this, but this is not its default behavior. You have to create the reference or pointer to give it to different scopes. It takes extra steps in C++ to do a very basic OOP thing properly. So as an OOP language, I do not see C++ being that great at it. The language itself should abstract the proper way to work with objects. I feel the proper way is to always refer to the instance of an object (as I understand OOP and working with objects in it); these other languages just handle it with references to that instance, and the solution is a good one. C++ already abstracts many things from the programmer, so it should abstract this basic concept of OOP (but can't, of course, lest it break its other paradigms).
>Finding the passage "A reference variable is one that evaluates to a pointer." would do that.
Yes, I did a quick search for that again. I can't remember the first search I did to find that. It is in there somewhere, but the documentation is extensive.
>So encapsulation is preserved.
Except implementing the change to the instance is left up to the calling scope. If I pass a car to a paint function, I want that function to paint the car I gave it. I don't want it to paint another car, and then make that other car the car I just told the function to paint.
>In addition, for a high-level language, just because it has pass-by-value semantics doesn't mean it's doing copying under the hood.
That could be true, but now the programmer needs detailed knowledge of the compiler optimizations, making that not so high level.
>Yes, but what if you start with the second. You're faced with the following code:...
Your example makes sense. But I would contend that if someone else were to modify the code, they would at least need to know what those functions do. If they had no idea, they would most likely have no need to reorder those calls or insert calls in between them that would affect later calls. If that person does not understand that chunk of code, then they wouldn't be modifying it, as their modifications wouldn't have anything to do with it; a person would have an understanding of the modifications they are trying to accomplish.
> That distinction means the programmer is always working with instances of the objects they create. It is OBJECT Oriented Programming.
> The language itself should abstract the proper way to work with objects.
Whoops, accidentally posted without finishing my message. I'm a moron. :(
Anyways--I was quoting those to say that you seem to believe there is one "proper" way to approach object-oriented programming, and that simply isn't true. As I said in >>100, you should look into other languages that present OOP in different ways. As it is, your view of OOP seems very narrow since, like I said, you keep making mention of "proper" OOP. There are all kinds of ways to do OOP, and it will make you a better programmer to learn about them. I also believe that would really change your point of view about the reference topic at hand.
But even if it doesn't, it won't hurt to expand your view.
> (babbling removed)
> The language itself should abstract the proper way to work with objects.
> (babbling removed)
Just so we're clear: Your objection is that C++ is less object-oriented because you have to type the asterisk * as part of the type name.
There are a number of benefits in making the copy-constructor the default: RAII is impossible without it, and garbage collection can be done on a per-class basis (think about how you would implement automatic reference counting without the copy constructor).
> if someone else where to be modifying the code, they would at least need to know what those functions do
Right. So instead of tracking just what the function is supposed to return, they need to also track what effects there are on every argument.
Documentation isn't enough, because what if I change a sub-call to mutate a reference?
> because that person would have an understanding of the modifications they are trying to accomplish
With references you need to understand a whole lot more -- in a degenerate case the whole codebase -- in order to make the exact same change.
> That could be true, but now the programmer needs detailed knoweldge of the compiler optimizations making that not so high level.
No, they do not. Their code works just as they intended, no matter how the compiler implements it.
It is only if they really need to optimize the code they need to know anything about the internals of the compiler, but that is true of any kind of optimization, not just this particular one!
I am not contending that there is 1 true way to OOP. Just that C++ doesn't do it well.
>Just so we're clear: Your objection is that C++ is less object-oriented because you have to type the asterisk * as part of the type name.
To oversimplify it, yes.
>There are a number of benefits in making the copy-constructor the default: RAII is impossible without it,
Not really. In C# and VB.Net there is the using statement and IDisposable interface, along with manual garbage collector control (a topic many seem to be confused about, because even though the GC is nondeterministic and automatic, you can easily control when it collects and what it collects). RAII is completely possible in .Net with its reference variables.
>and garbage collection can be done on a per-class basis (think about how you would implement automatic reference counting without the copy constructor).
You mean collect all instances of a given class in scope? Not that it sounds like a bad idea; I just can't think of where I have seen that or why someone would want to do that.
>Right. So instead of tracking just what the function is supposed to return, they need to also track what effects there are on every argument.
Yes, it would make sense that if one were going to modify some code that calls a function, they would need to know what that function is doing.
>Documentation isn't enough, because what if I change a sub-call to mutate a reference?
I would say you need to stick to the idea that each object variable should always refer to an instance of the object.
>No, they do not. Their code works just as they intended, no matter how the compiler implements it.
So you're coding and you think to yourself: "This makes it clear, but doubles the memory footprint and invokes extra initialization instructions in code, but it doesn't matter because the compiler will know I am functionally working with 1 instance of this object."
or
"Since I am working with 1 instance of an object, my code will reflect that."
> it would make sense that if one were going to modify some code that calls a function, they would need to know what that function is doing.
This has already been covered with the last paragraph in >>113.
> I would say you need to stick to the idea that each object variable should always refer to an instance of the object.
I don't know what this means; you're going to have to rephrase it. D:
> RAII is completley possible in .Net with its reference variables.
http://www.hackcraft.net/raii/
disagrees with you. Please justify your claim.
Furthermore, .NET needs to not just be able to express RAII idioms, but be able to do so better than C++ in order to back this claim: "C++'s OOP usage I say its fundamentally flawed."
> > and garbage collection can be done on a per-class basis
> You mean collect all instances of a given class in scope?
No. I mean what if you want some classes to collect their garbage automatically, and others that you want to collect explicitly?
> I am not contending that there is 1 true way to OOP.
Fine. Look here:
> Just that C++ doesn't do it well.
If you meant there were multiple ways to do OOP, you would say "Just that C++ doesn't do any of them well".
That is of course a challenge for us to find one thing OOP that C++ does well. I've already suggested RAII and object-specific garbage collection. If you manage to find some language that can do either of these better than C++, I can probably come up with more things.
>disagrees with you. Please justify your claim.
There are a lot of people confused about the .Net GC. Its default behavior is to be non-deterministic, as in you don't know when it will run. But what few people seem to know is you can manually control it and deterministically clean up your objects whenever you would like.
>Furthermore, .NET needs to not just be able to express RAII idioms, but be able to do so better than C++ in order to back this claim: "C++'s OOP usage I say its fundamentally flawed."
A language does not have to be able to effectively implement RAII to be OOP, as RAII is an OOP design pattern.
>No. I mean what if you want some classes to collect their garbage automatically, and others that I want to collect explicitly?
Again, in .Net you certainly can do that.
>If you manage to find some language that can do either of these better than C++, I can probably come up with more things.
I am not trying to get down to some nitty gritty things. I am focusing on how C++ treats objects, the basis of OOP.
> But what few people seem to know is you can manually control it and deterministically clean up your objects whenever you would like.
You can implement LISP in nearly any language. That doesn't mean any language is as powerful as LISP.
> A language does not have to be able to effectivley implement RAII to be OOP as RAII is an OOP design pattern.
No, but you made a broad claim that C++ does OOP poorly. You have offered a single example (you don't like typing asterisks, and you feel they shouldn't be the default), to which I've offered two counterexamples why they should.
RAII is something C++ is going to do better than Visual Basic. RAII is useful, and it's an OOP design pattern.
> Again, in .Net you certainly can do that.
Can is not the same thing as do. People don't do that in .NET languages because it is difficult at best.
> I am not trying to get down to some nitty gritty things. I am focusing on how C++ treats objects, the basis of OOP.
You're unfocused.
You said C++ does OOP worse than Visual Basic because the asterisk should be the default. I provided two reasons that the asterisk should not be the default. Since you haven't argued against them being good reasons, I'll take it you accept my premise.
Therefore, because they do not have copy constructors, Visual Basic and .NET languages do OOP poorly. ∎
>You can implement LISP in nearly any language. That doesn't mean any language is as powerful as LISP.
What does a LISP implementation have to do with the .Net GC?
>You have offered a single example (You don't like typing asterisks- and you feel they shouldn't be the default), to which I've offered two counterexamples why they should.
That is the symptom you are focusing on. I am taking issue with the broader approach of how C++ lacks object instance abstraction. You choose to focus on one aspect of that.
>Can is not the same thing as do. People don't do that in .NET languages because it is difficult at best.
Difficult according to who? Implementing IDisposable to give you a function to do cleanup and calling GC.Collect is not difficult.
>I provided two reasons that the asterisk should not be the default. Since you haven't argued against them being good reasons, I'll take it you accept my premise.
I have, you just dismiss them because you are so stuck in thinking that C++ is so good at OOP.
>Therefore, because they do not have copy constructors, Visual Basic and .NET languages do OOP poorly.
There seems to be this obsession with copying object instances. You keep bringing up Visual Basic, which I have already said is kind of moot because VB.Net is more OOP (in that it implements more OOP idioms) and Visual Basic's next version is still in the works and not released.
If you really want to copy an instance of an object, the .Net framework does have the ICloneable interface.
But the copying of objects is something to generally be avoided. C++'s STL string classes go to great lengths to only copy the instance on write, so it can be passed around by reference internally at other times. .Net does something similar, but its goal is to treat the string object as a value type, making the string reference immutable.
But because making all these string copies on write incurs a huge performance overhead, there is the .Net StringBuilder class for working with mutable strings.
So in the end, all copying object instances does is produce complexity and performance issues. And providing a default copy constructor to do it, one that you are probably going to override anyway, doesn't make a language better at OOP.
> What does a LISP implementation have to do with the .Net GC?
What does .Net's theoretical capabilities have to do with what C++ makes simple?
> That is the symptom you are focusing on. I am taking issue with the broader approach of how C++ lacks object instance abstraction. You choose to focus on one aspect of that.
I keep asking you to explain yourself. Here's your chance again.
I am only focusing on what you bring up specifically. Broad claims get broad counterclaims.
> Difficult according to who? Implementing IDisposable to give you a function to do cleanup and calling GC.Collect is not difficult.
More difficult than an asterisk.
> I have, you just dismiss them because you are so stuck in thinking that C++ is so good at OOP.
Excuse me?
Where did I say that C++ is good at OOP?
I was arguing that Visual Basic isn't "better" at OOP than C++, and perhaps that C++ isn't obviously bad at OOP.
> There seems to be this obsession with copying object instances.
Copying things is a good way to do a lot of things. You have even agreed that some of those things are good.
> You keep bringing up Visual Basic, which I have already said is kind of moot because VB.Net is more OOP (in that it implements more OOP idioms) and Visual Basic's next version is still in the works and not released.
And I think you're a jerk for taking a term like "OOP" and redefining it such that there are no languages that implement "OOP" according to your private definition (that you have still refused to share with us) until Visual Basic.NET.
Specifics, please!
> If you really want to copy an instance of an object, the .Net framework does have the ICloneable interface.
Does every object implement ICloneable?
> But the copying of objects is something to generally be avoided.
Says you. If copying is the default, and I can override the copying semantics, I can get COW trivially.
Taking away programmer power by adding restrictions on the programmer is not something to be done lightly. Here is a way that C++ is more powerful than .Net, and it is something that is useful to expose to objects- it lets you use a natural method of interacting with the objects, and yet still lets you provide a high-performance implementation.
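A rough sketch of what "COW trivially" might look like in C++, using `std::shared_ptr` as the shared buffer. The `CowString` class is invented for illustration, not a real library type: copies just share the buffer, and a write detaches first.

```cpp
#include <memory>
#include <string>

// Copy-on-write sketch (hypothetical class, invented for illustration):
// copies share one buffer until someone writes.
class CowString {
    std::shared_ptr<std::string> data_;
public:
    explicit CowString(const std::string& s)
        : data_(std::make_shared<std::string>(s)) {}
    // The implicit copy constructor copies the shared_ptr: cheap, shared.
    const std::string& str() const { return *data_; }
    void append(const std::string& s) {
        if (data_.use_count() > 1)                  // shared? detach first
            data_ = std::make_shared<std::string>(*data_);
        *data_ += s;                                // now safe to mutate
    }
    bool shares_with(const CowString& o) const { return data_ == o.data_; }
};
```

The design choice being argued is exactly this: because C++ lets the class author override the copy semantics, callers keep the natural copy syntax while the class provides the cheap shared implementation underneath.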
> But because making all these string copies on write incurs a huge performance overhead, there is the .Net StringBuilder class to work with mutable strings.
What the hell are you talking about now?
Who is talking about copying strings? Why are you talking about copying strings?
I was talking about minimizing copies by making it easy to share strings. Exposing that at the language level makes it easy to have a "string" class that equally-well works on large (multi-terabyte) strings and small in-memory chunks. Such a beast requires string-operations to be an interface.
.Net cannot implement this, but C++ can. That seems like a very useful object-oriented thing.
>What does .Net's theoretical capabilities have to do with what C++ makes simple?
So your point is that the .Net garbage collector has something to do with implementing LISP in C++. That makes no sense.
>I keep asking you to explain yourself. Here's your chance again.
And as I have repeated many, many times: reference variables always refer to an instance of an object in any scope they are in or are passed to. In OOP, one primarily works with instances of objects. C++ does not abstract the object instance references; it is left up to the C++ programmer to do that themselves. If they are doing OOP, that means they are doing this extra work a lot.
>More difficult than an asterisk.
So if an object allocates resources, cleanup of those resources is automatic when passing a pointer? You have no idea what is going on here, buddy.
>I was arguing that Visual Basic isn't "better" at OOP than C++, and perhaps that C++ isn't obviously bad at OOP.
I hope you meant VB.Net because I have stated more than once that Visual Basic isn't good at covering the OOP bases.
>Copying things is a good way to do a lot of things. You have even agreed that some of those things are good.
I understand the need to do it sometimes, which is why I point out that focusing on working with object instances does not mean you have to sacrifice the ability to copy where it is appropriate. But at the same time, the copying constructs in the C++ STL and in .Net's string object make the string object very complex. That complexity is needed for a library object like string, but it is not complexity one is going to be implementing in many, if any, other classes they make. And few classes have the ubiquity of string.
>And I think you're a jerk for taking a term like "OOP" and redefining it such that there are no languages that implement "OOP" according to your private definition (that you have still refused to share with us) until Visual Basic.NET.
As I have said many times before, I am focusing on object abstraction. It's Object Oriented Programming. So by definition, the programming style is oriented towards working with objects, and I have not needed to delve much deeper at this point.
>Does every object implement ICloneable?
It would not make sense for every object to implement ICloneable, as the members of ICloneable would in fact be members of Object, since every type implicitly inherits Object.
Object has a protected MemberwiseClone function that creates a shallow copy of the object (which should actually be exposed by ICloneable). If you want to make a deep copy, then you write the code you would have put in a copy constructor.
Since we aren't ever passing reference variables by value, there isn't a whole lot of need for a copy constructor. You can easily replicate that sort of thing by calling the Clone method when passing an object as a parameter. C++ likes to make copies of objects in lots of different places. .Net does not; it would prefer you work with a single instance of an object. So if you need to work with a copy, you've got to create it.
C++:
function(Object myObject)
C#:
function(myObject.Clone())
do the same thing. And writing your Clone function is not any more difficult than writing custom logic in a copy constructor, and it's only trivially annoying if you want a shallow copy, as you have to call and return the protected MemberwiseClone member of Object, where in C++ you don't have to write anything.
>Taking away programmer power by adding restrictions on the programmer is not something to be done lightly.
As explained above, doing this is trivial in .Net.
>Here is a way that C++ is more powerful than .Net, and it is something that is useful to expose to objects- it lets you use a natural method of interacting with the objects, and yet still lets you provide a high-performance implementation.
The natural way to work with an object is to work with one instance of that object.
And I just covered in my last post how the string object's copying (in .Net and C++) is not very performant when you want to change the value of a string. Creating extra copies of what is essentially one thing does not lead to performance. Extra memory, plus extra instructions to create the copy, plus extra instructions to sync the copy and the original instance, plus extra instructions to handle the memory of the copy once it is no longer needed, is not performant.
If you want to make a copy of an object, then you should be making another instance of that object based off of the first and treating the new object as a completely separate instance. In other words, copies are two separate instances. Copies should not be multiple instances of an object that eventually become one instance of an object. It's like building a car. They don't make three new full cars that are exactly the same, then modify two of them, then take parts from those two and put them on the first car, destroy what wasn't used, and call that making one new car.
>Who is talking about copying strings? Why are you talking about copying strings?
In the context given, why not to copy instances of objects. Strange you didn't understand that. I thought I would bring up a very common class that uses copies of its instances internally to produce horrible performance.
If you know your stuff, you should know that concatenating strings using the + operator with the STL string class in C++ (especially when the number of concatenations is unknown at design time) is not very performant. It is the same story in .Net and Java because they work with strings in a similar manner. To make them mutable, they've got to copy their contents.
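As a rough illustration of that cost in C++ (the two helper functions are invented for this sketch, not STL API): repeated `+` builds a fresh temporary string on every iteration, while appending in place to a reserved buffer does the StringBuilder-style thing with one allocation.

```cpp
#include <string>
#include <vector>

// Two ways to concatenate N pieces (illustrative helpers, not STL API).
// The first copies the accumulated string on every '+', giving quadratic
// copying; the second reserves once and appends in place.
std::string concat_slow(const std::vector<std::string>& parts) {
    std::string out;
    for (const auto& p : parts)
        out = out + p;            // each '+' allocates and copies 'out' again
    return out;
}

std::string concat_fast(const std::vector<std::string>& parts) {
    std::string::size_type total = 0;
    for (const auto& p : parts) total += p.size();
    std::string out;
    out.reserve(total);           // one allocation up front
    for (const auto& p : parts)
        out += p;                 // append in place, no intermediate copies
    return out;
}
```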
>I was talking about minimizing copies by making it easy to share strings.
Well that's strange because that is my whole point of abstracting objects as references.
>.Net cannot implement this, but C++ can. That seems like a very useful object-oriented thing.
That's just wrong. In .Net, and in C++'s STL, intermediate instance copies are still made. They both suck at it, and they both require one to use a class built for the purpose when working with mutable strings, or to write their own.
Now please pay attention so I don't have to repeat myself again.
> C++ does not abstract the object instance references. It is left up to the C++ programmer to do that themselves. If they are doing OOP, that means they are doing this extra work a lot.
You seem to be under the impression C++ programmers usually write code like this:
Foo *x = new Foo(1234);
x->Moo("Bar");
delete x;
They don't. Instead it looks like this:
Foo x(1234);
x.Moo("Bar");
And yet you insist:
function(Object myObject)
is somehow worse than typing:
function(Object myObject.Clone)
> So if an object allocates resources, cleanup of those resources is automatic when passing a pointer? You have no idea what is going on here, buddy.
If an object allocates a file, you similarly have no idea. It just so happens one resource is automatically managed.
Automatic resource tracking isn't anathema, but it is in fact very, very expensive. Requiring it is like calling the programmer stupid.
> The natural way to work with an object is to work with one instance of that object.
That's one kind of object-orientedness; the kind C++, CL (sort of), Simula, Perl, and Python use.
Another variety is the kind Smalltalk, Ruby, Java, and C# use which is the message-passing model, whereby you're not working with an instance of an object at all but instead communicating with an object by-way-of messages.
>You seem to be under the impression C++ programmers usually write code like this:...
The function signatures are very different.
>Automatic resource tracking isn't anathema, and is in fact very very expensive. Requiring it is like calling the programmer stupid.
So then what was your point?
>That's one kind of object-orientedness; the kind C++, CL (sort of), Simula, Perl, and Python use.
Another variety is the kind Smalltalk, Ruby, Java, and C# use which is the message-passing model
They use different techniques to accomplish the same thing: working with object instances. However, in C++ it is left up to the developer to explicitly add extra operators to work with object instances in different scopes. The latter languages abstract working with object instances to the point that you are always doing so. OOP wants you to always work with object instances. An object is defined as an instance of a class, so you could call it class-instance-oriented programming. If variables by default are not oriented towards working with class instances, then they are not doing OOP well. C++ is just doing OOP, not doing it well.
> The function signatures are very different.
So what?
The function signatures for similar VB or C# code is different too.
> They use different techniques to accomplish the same thing,
Uh no. No they don't.
Smalltalk-based systems can have doesNotUnderstand messages, and CLOS objects don't have methods or messages.
> C++ is just doing OOP, not doing it well.
You keep saying that, and I keep demonstrating that object-oriented covers a very wide berth. For you to be correct, C++ cannot do any aspect of OOP very well.
You offer a single example that C++ does OOP poorly by showing that it performs a single task poorly, and saying that task means OOP. For you to be correct, that would mean OOP must not have existed until VB and .Net, because that task is meaningless in Smalltalk and in CLOS.
I think it's pretty clear you are complaining about typing an asterisk. The burden still remains why this is essential to OOP, or you need to demonstrate that C++ doesn't do any aspect of OOP very well.
Alternatively, you can co-opt the term "OOP" to only include languages that have message-passing semantics and a distinction between machine and object types.
>The function signatures for similar VB or C# code is different too.
Different in that you don't need extra syntax to pass an object instance into the function's scope. There is syntax for passing non-object variables by value, but that syntax is moot for objects because no matter what you try, it's passed by reference.
>Uh no. No they don't.
Your response in context indicates that you are trying to say that C++ and Python (and the others mentioned) work with object instances the same way. I can tell you that just isn't true. All Python variables are references to the variable's value.
>You keep saying that, and I keep demonstrating that object-oriented covers a very wide berth. For you to be correct, C++ cannot do any aspect of OOP very well.
You keep drifting away from the aspects of how C++ treats objects and that object appears in the name OOP.
>You keep saying that, and I keep demonstrating that object-oriented covers a very wide berth. For you to be correct, C++ cannot do any aspect of OOP very well.
It is pretty common to say that some languages are poor at OOP. Visual Basic is said to be a poor OOP language because of the way it handles inheritance and some other things. So for you to be correct, you would have to contend that all languages that implement some OOP idioms do so well because they just implement them. How they are implemented is irrelevant. I would say that's not correct.
>I think it's pretty clear you are complaining about typing an asterisk. The burden still remains why this is essential to OOP, or you need to demonstrate that C++ doesn't do any aspect of OOP very well.
As stated before, the asterisk is a symptom of how C++ fails to abstract object instances and leaves that to the developer. Something as basic as working with instances should be inherent to the language.
>Alternatively, you can coopt the term "OOP" to only include languages that have the message-passing semantics and a distinction between machine and object types.
Well, it would be nice if there was one true term to describe all of OOP. So we can narrow the main languages of discussion here down to Class Based OOP (which includes C++, and C++ does not try to implement any other style of OOP, like Prototype Based OOP). In Class Based OOP, object identity is fundamental. C++ does not abstract an object's instance as being unique by default. The way it implements identity is to create copies when passing to other scopes, destroying the uniqueness of the identity. Uniqueness is part of the definition of identity in Class Based OOP.
So for C++ to do Class Based OOP well, it should handle identity as unique by default and not leave it up to the developer to put in to place extra syntax and procedures to ensure this basic idiom of Class Based OOP.
Is that specific enough?
> As stated before, the asterisk is a symptom of how C++ fails to abstract object instances and leaves that to the developer. Something as basic as working with instances should be inherent to the language.
You don't always want to abstract away instances, as people have pointed out in other posts. Not every language abstracts everything. Your complaint seems to be that C++ makes you type a little more than other languages, and that's true. That verbosity provides finer degrees of control. If you want finer control over things you use a language like C++. If you don't, use something else.
This all just seems to be an argument of personal preferences. You don't like typing an asterisk, so you don't like C++. There's nothing wrong with that, but you should just say so instead of turning it into this issue of C++ having some sort of fundamental problem.
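For what it's worth, the "finer control" being argued here is visible right in the C++ signatures. A minimal sketch with a toy `Counter` type (invented for illustration): the declaration itself states whether the callee gets a copy or the caller's object.

```cpp
// Toy type, invented for illustration.
struct Counter { int n; };

// The signature says exactly how the object crosses the call boundary.
void by_value(Counter c)  { c.n += 1; }   // works on a private copy
void by_ref(Counter& c)   { c.n += 1; }   // works on the caller's object
void by_ptr(Counter* c)   { c->n += 1; }  // same, but the call site shows '&'
```

The trade-off each side is arguing: the by-value default means mutations can't leak back to the caller unless the signature says so, at the cost of writing `&` or `*` when sharing is what you want.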
> that syntax is moot for objects because no matter what you try, it's passed by reference.
No it isn't.
I've already established what that means. By reference means using ByRef in VB and & in C++. The only language I'm aware of where all objects are passed by reference is Fortran.
Stop co-opting terminology. I'd prefer you use made-up words if you don't know what they mean.
> Your response in context indicates that you are trying to say that C++ and Python (and the others mentioned) work with object instances the same way. I can tell you that just isn't true. All Python variables are references to the variable's value.
Then get your fucking eyes checked.
I said the OOP that Smalltalk supports and the OOP that CLOS supports doesn't map comparatively to C++ or Python. I also said that C++'s OOP doesn't map comparatively to Visual Basic.
I've been saying there are many kinds of OOP. For some reason, you keep arguing with this.
> So for you to be correct, you would have to contend that all languages that implement some OOP idioms do so well because they just implement them.
The onus isn't on me to do so. I'm not saying any particular language is poor at OOP. I'm not saying any particular language is good at OOP either.
You're the one making blanket statements that you cannot back up with anything but an asterisk complaint.
> Something as basic as working with instances should be inherent to the language.
C++ works with instances just fine, look:
x.bar = 1234;
See?
There are many kinds of OOP. For some reason, you keep arguing with this. You need to focus on exactly what is wrong with C++'s kind of OOP instead of trying to redefine object-oriented programming to exclude every programming style and language except your Visual Basic dot net.
> So for C++ to do Class Based OOP well,
Rejected. "Class based OOP" means something very specific to programmers, as does "By Reference". You've already attempted to coopt the latter to be synonymous with "pointer", so I reject this entire paragraph on the grounds you don't have any fucking clue what you're talking about.
Point to an external reference that describes class-based oop the way you just did if you want to argue about it.
> Is that specific enough?
No it isn't. You just said C++ doesn't support OOP because x == x might not be true. Python therefore doesn't support OOP either. Neither does Smalltalk, for that matter.
Just shut the fuck up already. You clearly don't have any idea what you're talking about.
>You don't always want to abstract away instances, as people have pointed out in other posts.
You do when the very basis of Class Based Object Oriented Programming is to maintain unique identity of class instances.
I have not seen any valid reasons to refer to the value of an object instance.
>Not every language abstracts everything. Your complaint seems to be that C++ makes you type a little more than other languages, and that's true. That verbosity provides finer degrees of control.
Class Based Object Oriented Programming languages should abstract the very basic workings of Classes in Object Oriented Programming.
How is greater control accomplished when working with object instances in a practical manner?
>This all just seems to be an argument of personal preferences.
It is a discussion about how C++ drops the ball on a fundamental of Class Based Object Oriented Programming.
>but you should just say so instead of turning it into this issue of C++ having some sort of fundamental problem.
Except that I have explained some of the fundamentals of Class Based Object Oriented Programming and how C++ does not adhere to those fundamentals by default.
> You do when the very basis of Class Based Object Oriented Programming is to maintain unique identity of class instances.
You keep using that word. I do not think it means what you think it means.
You're clearly arguing something you're very impassioned about. It just doesn't make any sense beyond you "don't like the asterisk".
Class-based object oriented programming is where inheritance is derived through taxonomy ("class" being short for classification). See: http://en.wikipedia.org/wiki/Class-based_programming for more details.
Identity comes from mathematical identity and refers to a comparison that remains true regardless of (or without comparing) any of the constituent variables. See http://en.wikipedia.org/wiki/Identity_(object-oriented_programming) for more details.
While C++, Smalltalk and Java are all "class-based", that doesn't mean that their object models are identical; it is impossible to implement some algorithms in C++ that are trivial to implement in Smalltalk, especially those that depend on #doesNotUnderstand:.
> I have not seen any valid reasons to refer to the value of an object.
See >>108.
> How is greater control accomplished when working with object instances in a practical manner?
By "control" I meant being able to decide when I want to pass by value and pass by reference. By "verbosity" I meant that damned pesky asterisk. I don't entirely understand your question though; I work with objects in "practical manners" all the time and pass them by value.
> It is a discussion about how C++ drops the ball on a fundamental of Class Based Object Oriented Programming.
See >>131.
> Except that I have explained some of the fundamentals of Class Based Object Oriented Programming and how C++ does not adhere to those fundamentals by default.
No you haven't. All you have explained is that C++ doesn't pass objects by reference by default, and you act like this makes proper OOP impossible with C++.
>I've already established what that means. By reference means using ByRef in VB and & in C++.
And as I have corrected you, that is not true.
The signatures:
Function(ByVal thing As Object)
Function(ByRef thing As Object)
When compiled in VB.Net (or VB), they mean the same thing, which is Function(ByVal thing As Object).
thing holds a reference to a value of type Object. Passing thing ByRef would pass a reference to the reference of thing's value, because thing is a reference type variable. This is something that has no practical application, so the compiler ignores ByRef and treats it as ByVal. ByVal and ByRef are used for value type variables (which is basically anything that is not an object).
>The only language I'm aware of where all objects are passed by reference is Fortran.
Like C++, VB.Net, C#, Java, etc. are pass-by-value languages. The confusion comes in because the latter languages have reference type variables, where the value of the reference is what you are passing. So I would just avoid the whole "pass by value"/"pass by reference" terminology, because that just adds confusion, and I wouldn't want to co-opt what those terms mean.
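The C++ analogue of that point, sketched with a plain pointer (the `Node` type is invented for illustration): the pointer itself is passed by value, so the callee can mutate the pointee, but reseating the pointer is invisible to the caller. That is exactly the "the value of the reference is what you are passing" behavior.

```cpp
// Toy type, invented for illustration.
struct Node { int v; };

// The pointer is itself passed by value: the callee can mutate the object
// it points at, but reassigning the local parameter changes nothing for
// the caller.
void mutate(Node* p) { p->v = 42; }   // visible to the caller

void reseat(Node* p) {
    static Node other{7};
    p = &other;                       // local only; caller's pointer unchanged
}
```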
>I've been saying there are many kinds of OOP. For some reason, you keep arguing with this.
Which is why I went on to further define the type of these languages as Class Based OOP which has a more specific definition.
>I'm not saying any particular language is poor at OOP. I'm not saying any particular language is good at OOP either.
So your counter-point is that C++ implements object instances correctly? Or is it that there is no correct way for a language to implement object instances.
If it is the former, then you need to understand that my point is that while a programmer can correctly implement object instances in C++, C++ as a language does not correctly implement them for the programmer. Since OOP features are high-level features of the language, it should include this abstraction (but of course it can't).
If it is the latter, then you must be contending there is no basic definition of any style of OOP or its constructs. But if that was true, anything could call itself OOP.
You seem to want to weasel out of any rational discussion of this by saying that there is no definition of anything regarding OOP, and any language can implement whatever it wants and call itself OOP.
>C++ works with instances just fine, look:
Now pass that object to another scope. The language will break the instance's unique identity unless you specifically tell it not to. The language should always maintain instance identity.
>You need to focus on exactly what is wrong with C++'s kind of OOP instead of trying to redefine object-oriented programming to exclude every programming style and language except your Visual Basic dot net.
Its kind of OOP is Class Based OOP. Look up exactly what that means yourself from a reliable source. And focus on how it defines identity.
>"Class based OOP" means something very specific to programmers
See, at least you admit that it means something specific. Now you must learn what those specifics are.
>as does "By Reference". You've already attempted to co-opt the latter to be synonymous with "pointer"
I have not. I understand that C++ does not have a direct correlate for a reference type variable. C++ pointers and references are used to accomplish functionality similar to what reference variables do, but I have never said they were exactly the same, just comparable.
>Point to an external reference that describes class-based oop the way you just did if you want to argue about it.
Well, Wikipedia came up first; you can Google it yourself if you don't like it.
http://en.wikipedia.org/wiki/Class-based_programming
"The most popular and developed model of OOP is a class-based model, as opposed to an object-based model. In this model, objects are entities that combine state (i.e., data), behavior (i.e., procedures, or methods) and identity (unique existence among all other objects)."
Note the words identity and unique. Combine this uniqueness with the default copying behavior when passing objects to other scopes: copies of the same instance are not unique, hence the use of pointers and references, which the language does not abstract, to enforce uniqueness. It requires the programmer to implement uniqueness (a fundamental) themselves. Identity is fundamental to this style of OOP, and the language should be the thing implementing it, not the programmer.
>You clearly don't have any idea what you're talking about.
You clearly need to take more time to learn these concepts.
> You seem to (gibberish snipped) by saying that there is no definition of anything regarding OOP, and any language can implement whatever it wants and call itself OOP.
That's exactly what I'm saying!
I keep asking you to define OOP; I said we cannot have a meaningful discussion about what is less OOP without one, and that's what you seem to want to do.
The definitions you've provided exclude Smalltalk and Common Lisp, which I think takes an enormous amount of hubris.
Then you harp about "class-based object oriented programming" in a way that makes no sense, and have the nerve to call me a weasel.
You say:
> And as I have corrected you, that is not true.
and yet, to what end? You cannot compare what C++ calls a reference with what VB.NET calls a reference if they aren't the same thing. You clearly seem to know this:
> I understand that C++ does not have a direct correlation for a reference type variable.
and yet they both have something similar to what C++ calls a reference. You then use a term called "call by reference", which has a shared meaning amongst both C++ and VB programmers, and have co-opted it into a term that is only meaningful amongst VB programmers.
Your sole appeals to authority include some vague reference to a Java documentation that you can't provide a link to, and a VB programmer who doesn't agree with you either.
I'm happy to have a meaningful discussion about what's wrong with C++, but so far all you've got is a dozen posts with nothing more meaningful than that you don't like the asterisk.
Then I noticed this:
> (an object oriented) language should always maintain instance identity.
Says you. Unjustified. Without example, or citation.
I've offered a number of reasons why a language shouldn't do that, and we've demonstrated that a number of algorithms are hard to express when a language does that.
dmpk2k even pointed out that it makes debugging a lot easier, because it's harder for errors to creep backwards deep from within a function.
The onus is still on you to say why the copy-on-call-semantics are the antithesis of object-oriented programming.
OK, lets get some of these basics out of the way.
Do you agree:
That at a high level we can define Object-oriented programming (OOP) as a programming paradigm that uses "objects" and their interactions to design applications.
And at a high level objects are conceptual entities that generally correspond directly to a contiguous (to the program) block of computer memory of a specific size at a specific location. I say "contiguous (to the program)" because the memory may not be contiguous at run-time, but that implementation detail is abstracted from the developer, at least in the languages of discussion.
And that C++ implements a specific OOP style called Class Based OOP. Also, VB.Net, C#, Smalltalk and Java also implement this type of OOP. And that while there are many languages that could be discussed, it is better to just limit the discussion to only a few similar languages that implement these concepts in different ways.
And that Class Based OOP implements classes of objects. In this model objects are entities that combine state (data), behavior (procedures, or methods) and identity (unique existence among all other objects).
And that identity is realized through references.
And that a reference (at a high level, not a specific implementation of it yet) contains the information that is necessary for the identity property to be realized and allows access to the object with the identity.
So objects have an identity, and the identity is accessed through a reference to that identity.
And that reference is implemented in these languages by:
C++ - the use of pointers and references (which will be called C++ references to distinguish between the reference concept and C++'s implementation of it)
And that VB.Net, C#, Java implement reference by using the concept of reference type variables where the programmer works with a variable that is only a type-safe managed pointer to the actual object instance.
Do you agree?
> That at a high level we can define Object-oriented programming (OOP) as a programming paradigm that uses "objects" and their interactions to design applications.
No.
At least, not if "their interactions" requires the methods or receiver be associated with the object (or its class). CLOS for example, uses multiple-dispatch only, and I certainly consider it object-oriented.
C++ also has something like multiple-dispatch that they call "overloading" (common) or "dynamic dispatch" (less common).
> And at a high level objects are conceptual entities that generally correspond directly to a contiguous (to the program) block of computer memory of a specific size at a specific location. I say "contiguous (to the program)" because the memory may not be contiguous at run-time, but that implementation detail is abstracted from the developer, at least in the languages of discussion.
This is jibber-jabber, and means nothing substantial. What you've said is true of all von Neumann systems, and has nothing to do with object-oriented programming.
> And that C++ implements a specific OOP style called Class Based OOP. Also, VB.Net, C#, Smalltalk and Java also implement this type of OOP.
Accepted.
C++ also implements other OOP styles, including Dynamic Dispatch, and a form of Generic functions. It may implement other OOP styles as well.
> And that Class Based OOP implements classes of objects.
Rejected. Class-based means that the receiver or methods are actually inside the class instead of in the object, and specifically with regards to inheritance, Class-based means that when a message is not understood, or a method does not exist, the class has some number of superiors that are searched in place of the class.
> And that identity is realized through references.
Pleonastic jibber-jabber. C++ objects have a mathematical identity whether or not there are any references to them.
> And that a reference (at a high level, not a specific implementation of it yet) contains the information that is necessary for the identity property to be realized
Rejected. You haven't demonstrated it is necessary, and I can come up with at least one counter-example.
Smalltalk objects may have multiple references (by way of proxy objects) that point to the same object, but do not share the same identity.
> and allows access to the object with the identity.
Clearly not. C++ objects have a mathematical identity whether or not there are any references to them.
> So objects have an identity and the identity is accessed through a reference to that identity.
Rejected.
> And that reference is implemented in these languages by: C++ - the use of pointers and references (which will be called C++ references to distinguish between the reference concept and C++'s implementation of it)
Accepted, although your terminology is confusing.
> And that VB.Net, C#, Java implement reference by using the concept of reference type variables where the programmer works with a variable that is only a type-safe managed pointer to the actual object instance.
Accepted, although your terminology is confusing.
> Do you agree?
No.
>At least, not if "their interactions" requires the methods or receiver be associated with the object (or its class)
It does not. It's objects AND their interactions. Does not imply that the object explicitly owns any particular interaction.
>This is jibber-jabber, and means nothing substantial.
You're right, it is not substantial because it does not describe the implementation of an object, just a concept of an object at a high level. And it does have to do with OOP.
>C++ also implements other OOP styles, including Dynamic Dispatch, and a form of Generic functions. It may implement other OOP styles as well.
This I understand. It also implements other programming styles. But the discussion should be limited to Class Based OOP because it is arguably the most popular OOP style used by C++ developers.
>Rejected. Class-based means that the receiver or methods are actually inside the class instead of in the object
I will agree. The wording was poor. Do we say:
An object is an instance of a class. A class defines behavior (i.e., procedures, or methods) while the instance contains state (i.e., data), and identity (unique existence among all other objects).
>Pleonastic jibber-jabber. C++ objects have a mathematical identity whether or not there are any references to them.
Because all languages do not do everything in the same way, we need to define the concept, and then the implementation.
In this context a reference is the concept describing the means by which the programmer interacts with an object. Object instances inherently have an identity. Their identity is the fact that they exist and are a unique instance. A reference is how one gets the identity. Objects have identity, and references are how a programmer realizes that identity.
By mathematical identity do you mean like ≡ (not a congruence relation)?
>Rejected. You haven't demonstrated it is necessary
Without a reference to an object's identity, the program would be unaware of its existence.
>Smalltalk objects may have multiple references (by way of proxy objects) that point to the same object, but do not share the same identity.
You have just described an alternative way Smalltalk can realize reference. While the proxy objects will each have their own identity, the objects reference the identity of the object they are proxy for. It is just another implementation to meet a specific need, but it does not make a case against the concept that objects have identity and references realize that identity.
>Accepted, although your terminology is confusing.
To be clear.
The basic way C++ realizes the concept of reference is through pointers (* operator) and C++ references (& operator).
The basic way VB.Net, C# and Java realize the concept of reference is by distinguishing all objects as "reference type" variables. Object variables contain a type-safe managed pointer to the actual object instance. It is type safe because pointer arithmetic is not allowed. It is managed because the runtime that is executing the code is responsible for memory management and maintaining the pointer to the object instance.
These are not the only way these languages realize reference, but it is the most fundamental. (proxy objects, weak references and the like are another topic of discussion).
If you don't like some of these definitions you are gonna have to offer up your own.
> It's objects AND their interactions. Does not imply that the object explicitly owns any particular interaction.
Then the definition is meaningless; C meets this criteria. It simply doesn't let you create new methods, nor new object types.
> we need to define the concept (identity), and then the implementation.
Agreed.
Care to supply one?
> In this context a reference is the concept describing the means by which the programmer interacts with an object. Object instances inherently have an identity. Their identity is the fact that they exist and are a unique instance. A reference is how one gets the identity. Objects have identity, and references are how a programmer realizes that identity.
This definition fails for Smalltalk. There is no way to get the identity of an object in Smalltalk, because of doesNotUnderstand: and how common it is to use for proxies.
> By mathmatical identity do you mean like ≡ (not a congruance relation)?
Strict equivalence is often interchangeable with mathematical identity.
> The basic way VB.Net, C# and Java realize the concept of reference is by distinguishing all objects as "reference type" variables.
Okay.
> Object variables contain a type-safe managed pointer to the actual object instance. It is type safe because pointer arithmetic is not allowed.
This specifically disallows C++, and is thus tautological. You must demonstrate that C++ does object oriented poorly by defining what object-oriented is.
If your definition of object-oriented-programming requires garbage collection and managed pointers, you're on shaky ground. I don't think you can prove that object-oriented-programming requires those things.
Since Java and C# let you defeat the garbage collection system, does that mean they aren't object-oriented?
> Without a reference to an object's identity, the program would be unaware of its existence.
The program isn't an aware thing. What exactly do you mean? That access to the reference can be lost?
> These are not the only way these languages realize reference, but it is the most fundamental. (proxy objects, weak references and the like are another topic of discussion).
I disagree that it is the most fundamental. C++ programs don't usually interact with the references, as I've pointed out.
> If you don't like some of these definitions you are gonna have to offer up your own.
I already did; I keep pointing to Wikipedia. You kept coopting them. It seems important for you to refer to these things with visual-basic-like terminology, and I don't think it matters that much.
>Then the definition is meaningless; C meets this criteria. It simply doesn't let you create new methods, nor new object types.
Well if you can't design your own objects, then the language is not OOP.
>This definition fails for Smalltalk. There is no way to get the identity of an object in Smalltalk
Identity is the concept of the unique existence of an object instance. You can certainly work with object instances in Smalltalk. The various ways you work with object instances is the concept of reference.
>This specifically disallows C++
Of course it does, it explains how the other languages implement the concept of reference to an object identity. They don't implement it in the same way and they don't have to. And I am not contending there is 1 correct way to implement the concept of reference.
>The program isn't an aware thing. What exactly do you mean? That access to the reference can be lost?
Without a reference to an object identity, there is nothing for the language to work with. If you delete all references to an object that is left in memory, and then through some mechanism identify that block of memory as being that object, you are just implementing the concept of reference.
>I disagree that it is the most fundamental. C++ programs don't usually interact with the references, as I've pointed out.
So we say that C++ programs most typically work with objects by using object variables and pointers.
>I already did; I keep pointing to Wikipedia. You kept coopting them.
If you are going to reject something, offering your own definition is the best objection.
>It seems important for you to refer to these things with visual-basic-like terminology, and I don't think it matters that much.
When talking about Visual Basic implementations. Remember, let's get the definitions of the concepts hammered down and their implementations. The concept is not language specific.
> Well if you can't design your own objects, then the language is not OOP.
In C, objects are created with malloc(). C simply doesn't have the ability to create object types. CLOS doesn't either- all objects must be classified, but types cannot be used in dispatch specialization.
> Identity is the concept of the unique existence of an object instance. You can certainly work with object instances in Smalltalk. The various ways you work with object instances is the concept of reference.
So if I do:
1 print.
am I modifying the object "1" or a copy of it? How about this?
1 + 1.
How about this?
1 foo: 1
How about this?
x foo: bar
Fact is, with smalltalk, you cannot tell what object you're actually working with. You don't know if it's the only instance of an object, if it's making a copy, nor anything about it. Metaclasses can be used to make transparent copies like C++ does, or to forward the messages via proxy. No extra syntax required.
If you cannot see the identity (that is, there is no un-overloadable comparison operator like eq in CL), then it might as well not be there.
> Without a reference to an object identity, there is nothing for the language to work with.
What exactly does this mean in C++:
x.y();
Are we working with a reference to "x" in your twisted universe?
If so, then how exactly does this prove does C++ do OOP poorly?
> If you are goning to reject something, offering your definition is the best best objection.
I offered you other people's definitions.
>In C, objects are created with malloc(). C simply doesn't have the ability to create object types. CLOS doesn't either- all objects must be classified, but types cannot be used in dispatch specialization.
C as a language does not provide mechanisms oriented for working with objects. CLOS does.
>If you cannot see the identity (that is, there is no un-overloadable comparison operator like eq in CL), then it might as well not be there.
Smalltalk has the = operator for equality and the == operator for identity as these are 2 different concepts.
>What exactly does this mean in C++: x.y();Are we working with a reference to "x" in your twisted universe?
If x is an instance of an object, be it a pointer, C++ reference or a variable initialized to an instance of an object in scope then yes x is a reference to an object.
>I offered you other people's definitions.
You seem to be focusing on misunderstood implementations of these concepts to prove that they don't exist or don't mean what it is accepted that they mean.
How would you define identity and reference in OOP?
> C as a language does not provide mechanisms oriented for working with objects. CLOS does.
Unqualified. What are the most axiomatic mechanisms that a language must have in order to be considered "object oriented"?
Several languages are considered object oriented, but have orthogonal mechanisms including "message passing/sending", and "multiple dispatch", "dynamic dispatch" (vtables), "direct dispatch", and so on.
> Smalltalk has the = operator for equality and the == operator for identity as these are 2 different concepts.
You're wrong. They're both messages. They mean whatever the receiver thinks they mean. Proxy objects usually redefine the == operator in order to make their use completely transparent.
It's also important in order to implement storage/prevalence mechanisms, so you'll see those applications redefine == as well.
Smalltalk implementors could also choose to redefine == for Symbols if they were implementing Smalltalk in Self.
> If x is an instance of an object, be it a pointer, C++ reference or a variable initialized to an instance of an object in scope then yes x is a reference to an object.
What if "x" is an instance of an object, but isn't a pointer?
string x;
In C++: This is clearly not a pointer, and clearly not a reference. What is it if not an instance of a string?
> You seem to be focusing on misunderstood implementations of these concepts to prove that they don't exist or don't mean what it is accepted that they mean.
You are the one that made the extremely broad claim that C++ did OOP poorly, so it is your job to defend that. I made no such claim, so the onus is not on me to attack it.
I don't think the words you're using have been meaningful; they either have overloaded definitions, have been completely irrelevant, or even outright bogus. You're very clearly interested in demonstrating that "C++ does OOP poorly", but have offered nothing to support this except that you don't like the asterisk.
You've tried to make garbage-collection an essential component of object-oriented programming (by defining references as managed), but you've failed to do so as both Java and Smalltalk make it possible to defeat the garbage collector for the purpose of implementing proxy objects.
You've also tried to define object-oriented programming very narrowly, but won't commit to saying languages that fail your definition (like CLOS and Smalltalk) aren't object-oriented.
>Unqualified. What are the most axiomatic mechanisms that a language must have in order to be considered "object oriented"?
Well, let's not get ahead of ourselves here. I am pointing out one basic way that C++ does not do OOP well. So you need to focus on the OOP concept of reference/identity. You keep wanting to get away from this or go around in circles because I suspect you either just don't understand it, can't separate implementation from concept, or know that what I am saying is true.
>Proxy objects usually redefine the == operator in order to make their use completely transparent. They mean whatever the receiver thinks they mean.
Since == is referred to as the identity operator and is used to compare the identity of 2 references, using it any other way is what is known as fucking stupid. I could override the string concatenation operator + to delete the contents of the string and ignore the other string. But I would never do that because it's fucking stupid.
Just because the language allows you powerful mechanisms that can be misused by idiots does not mean that misusing those mechanisms (overriding the function of a specific operator to do something the operator is not intended for) invalidates the concepts that operator represents.
If a proxy object wants to override the identity operator to make its identity comparison transparent for the object it is being proxy for, then great. There is nothing wrong with that. The proxy object is implementing the concept of reference appropriate for what a proxy is. One would say that the proxy object is acting as a proxy for another object, go figure.
>What if "x" is an instance of an object, but isn't a pointer? string x;In C++: This is clearly not a pointer, and clearly not a reference. What is it if not an instance of a string?
Wow, you are really struggling here. x is a reference to a string object. It isn't a pointer and it isn't a C++ reference. It is a variable that is a reference to a specific object of type string that uses value semantics to work with the object. If you had used a pointer or C++ reference you would be working with reference semantics.
>You are the one that made the extremely broad claim that C++ did OOP poorly, so it is your job to defend that.
I am trying very hard, but I keep having to repeat myself. But before I state my reasons again, I need to teach you something about OOP.
>I made no such claim, so the onus is not on my to attack it.
That is true, you have only proven you fail to understand the reasons.
>You've tried to make garbage-collection an essential component of object-oriented programming (by defining references as managed), but you've failed to do so as both Java and Smalltalk make it possible to defeat the garbage collector for the purpose of implementing proxy objects.
And this really points out your lack of reading comprehension and inability to understand that different languages may implement the same concepts differently. This will be the third time that I point out that I clearly defined that variables that reference objects in VB.Net, C# and Java are distinguished as reference type variables that are managed type-safe pointers. I have never said or implied that other languages must implement these concepts with the same mechanisms.
>You've also tried to define object-oriented programming very narrowly, but won't commit to saying languages that fail your definition (like CLOS and Smalltalk) aren't object-oriented.
And you have yet to offer a counter-definition. So far all you have done is misapplied the implementation of OOP concepts in specific circumstances to specific languages to incorrectly say that there is no way to define OOP and that all languages are OOP and not OOP at the same time.
> I am pointing out one basic way that C++ does not do OOP well.
No you're not. You haven't even defined OOP in any way except as "that which C++ does poorly", or perhaps "without asterisks".
You cannot do that. You must offer a set of definitions that defines OOP, that supports existing languages that are accepted as OOP. I've required that it contain at least the following languages: CLOS, Smalltalk, and Python. I've specifically left Simula, Self, and Perl out of that list.
You have failed to do this.
> I am trying very hard, but I keep having to repeat myself. But before I state my reasons again, I need to teach you something about OOP.
STOP DOING THAT
Your reasons are so-far inadequate.
If you are incapable of explaining your broad statement "C++ does OOP poorly", then that is your failing, and not mine.
I've pointed to a very simple way that you can demonstrate your point. You can enumerate the things that make OOP, that include accepted OOP languages, that excludes C++.
I've also suggested you appeal to an authority we can agree on (like Alan Kay). However, I don't think there are any authorities that agree with you.
Finally, I've also suggested you learn something about what you're talking about. You clearly aren't that familiar with enough different programming languages in order to adequately and concisely compare them.
> Since == is referred to as the identity operator and is used to compare the identity of 2 references, using it any other way is what is known as fucking stupid.
I disagree. If you're writing a prevalence layer that records the contents of (say) a hashtable to the disk periodically, you may need to reload parts of those contents periodically. Without overriding the == to be aware of locations on the on-disk database, your prevalence layer isn't fully transparent.
If you're writing a proxy to copy messages over the network, say for a distributed server, you may want to push a comparator over the network. Without overriding == you'll need to intern every object that you receive from a remote host, in order for those comparators to be network-portable.
Now, calling it "fucking stupid" demonstrates you can't think of any reason why you would want to do it. It's not the same thing as proving there aren't any reasons.
By the way, C++ most certainly has an identity; you can use ((void*)x)==((void*)y) to test for it.
> Just because the language allows you powerful mechanisms that can be misused by idiots does not mean that misusing those mechanisms (overriding the function of a specific operator to do something the operator is not intended for) invalidates the concepts that operator represents.
Because == cannot mean that the pair occupy the same memory address, it also means it cannot satisfy your definition of identity.
If an object has identity, but is inaccessible to the program, how exactly does it matter if it has identity or not?
> I have never said or implied that other languages must implement these concepts with the same mechanisms.
Yes, you did. Here:
>>> Object variables contain a type-safe managed pointer to the actual object instance. It is type safe because pointer arithmetic is not allowed. It is managed because the runtime that is executing the code is responsible for memory management and maintaining the pointer to the object instance.
That sure reads a whole hell of a lot like you're defining object variables as type-safe managed pointers.
I don't think you meant it though: Once I pointed out what this meant, you backpedaled:
>>> I am not contending there is 1 correct way to implement the concept of reference.
Even though >>136 definitely contends there is one correct way to implement the concept of reference, and this is it.
Never mind. You indicate now, that managed pointers, and garbage collection don't have anything to do with object-oriented programming. I don't know why you brought it up then, but we can work with that.
So just to enumerate what object-oriented programming is:
That about right?
> And you have yet to offer a counter-definition. So far all you have done is misapplied the implementation of OOP concepts in specific circumstances to specific languages to incorrectly say that there is no way to define OOP and that all languages are OOP and not OOP at the same time.
You keep saying that the onus is on me to do so. You're the one making a broad claim. You seem to think it's all wrapped up in some magic definition of "identity" that you can't point to on another page; that you must describe in prose here, on 4-ch.
Why do you think no one has ever tried to define object-oriented in the way you are doing now? Why do you think you can't point to an authority (like Alan Kay) who agrees with you?
Why can you not use the definitions of anyone of considerable merit on this subject? Does this suggest no real programmer does object oriented programming? Or is it simply more likely that you have no idea what you're talking about?
>Why do you think you can't point to an authority (like Alan Kay) who agrees with you?
See, there you were able to give what you feel is a definition of OOP.
Here is a neat email exchange between him and an author:
http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en
>OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.
So there, Alan Kay says C++ isn't OOP to him because it doesn't do EXTREME late binding in all things.
I mean my whole thing was that C++ wants to create new unique instances of objects when passing to another scope unless the developer implements mechanisms to maintain an object's uniqueness, and that it would be better if it was the other way around because objects are a unique instance of a class.
But Alan Kay kills it with extreme late binding. That really limits the number of actual OOP languages to just a few.
> See, there you were able to give what you feel is a definition of OOP.
I did not.
I said I would accept Alan Kay as an authority. I chose him on purpose because his definition of OOP is incompatible with yours.
> Here is a neat email exchange between him and an author:
http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en
HTTP/1.1 403 Access Denied.
> I mean my whole thing was that C++ wants to create new unique instances of objects when passing to another scope unless the developer implements mechanisms to maintain an object's uniqueness, and that it would be better if it was the other way around because objects are a unique instance of a class.
First of all, C++ programmers don't do that. Stack allocation is very popular in C++. The heap isn't used for objects so much as it is used for buffers.
Second of all, that's a far cry from doing OOP poorly.
>http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en
HTTP/1.1 403 Access Denied.
Wow, that site doesn't like 4-ch
Google: alan kay object oriented programing definition
first result
>First of all, C++ programmers don't do that.
That is true, they have to use extra features of the language to work with objects properly.
Let's say I want to paint my house.
class Person {
    void Paint(House houseToPaint) {
        ...
    }
    ...
};
class House {
    ...
};
Person me;
House myhouse;
me.Paint(myhouse);
So what did I just do? I created a new house, painted it, and destroyed my new house, resulting in my actual house not being painted. Damn, the language sucks at working with objects. Let's try what someone else wanted to do...
class Person {
    House Paint(House houseToPaint) {
        ...
    }
    ...
};
class House {
    ...
};
Person me;
House myhouse;
myhouse = me.Paint(myhouse);
Not too bad, I only had to change a few things. But what did I just do? I built a new house just like the old one, moved into it, painted it, and destroyed my old house.
Clearly, that is more work than I wanted to do.
Let's do something seemingly more practical.
class Person {
    House Paint(House* houseToPaint) {
        ...
    }
    ...
};
class House {
    ...
};
Person me;
House myhouse;
me.Paint(myhouse);
Success! My house is painted without extra steps. All I had to do was type an asterisk (or I could have even done an ampersand).
The thing is, why does the language want me to work with myhouse with value type semantics in the scope it was declared and then implement reference type semantics myself when passing to another scope? The language should be oriented towards working with objects. The language clearly supplies me with the mechanisms to do so. But it is left up to me to implement those mechanisms each time I want to work with the object in a practical manner (which is using the instance of the object when I want to work with the instance of an object).
If I am working with objects, I will be working with instances of those objects. The language wants to create a completely new and unique instance unless I tell it not to. If the language was oriented to working with objects, when working with an object it should always use the instance unless I tell it not to.
Yeah, it's just a simple asterisk but it exposes the fact that the language does not treat object instances as unique entities. The programmer is left to do that on their own.
> Success! My house is painted without extra steps. All I had to do was type an asterisk (or I could have even done an ampersand).
You actually need an ampersand. What you wrote isn't valid C++. Person::Paint(House *foo) needs to be called as me.Paint(&myhouse). Inside Person::Paint(House *foo), foo can be manipulated with ->.
Many C++ programmers find it useful; if they're using -> it means they don't own the object and should be careful about stashing it anywhere.
> The thing is, why does the language want me to work with myhouse with value type semantics in the scope it was declared and then implement reference type semantics myself when passing to another scope.
I don't think you understand what you're saying. Inside Person::Paint(House &foo), foo is manipulated as if it were in the parent scope, using the . operator.
C++ programmers don't do this because it's confusing without full exact garbage collection.
> If I am working with objects, I will be working with instances of those objects. The language wants to create a completely new and unique instance unless I tell it not to. If the language was oriented to working with objects, when working with an object it should always use the instance unless I tell it not to.
You're running in circles. You can use the copy constructor to get whatever semantics are most useful in dealing with the kind of object. Without garbage collection, you need the copy constructor to be the default, or you cannot track access and usage of an object's resources.
You see, the copy constructor makes it possible to avoid the performance cost associated with object-oriented programming. You'll note that much of C++ is designed to make it possible to avoid performance costs. Turning more control over to the compiler means you get to avoid an asterisk, but it also means you're losing control.
> Yeah, it's just a simple asterisk but it exposes the fact that the language does not treat object instances as unique entities. The programmer is left to do that on their own.
So what? Several object-oriented languages don't treat instances as unique, as we've been over before.
>>150 "(To link to this page, please use the canonical URI "http://www.purl.org/stefan_ram/pub/doc_kay_oop_en" only, because any other URI is valid only temporarily.)"
Thanks >>152 , but I'm still getting access denied.
Any mirrors?
>>153
Just copy and paste the url, or turn off your referrer header. Works fine.