Dual core, Moore's Law (29)

1 Name: Sling!XD/uSlingU 2005-04-24 14:05 ID:syqoZM6O

Intel Pays $10000 for a Magazine
http://www.grabageek.net/modules/news/article.php?storyid=425
It was the magazine in which Intel co-founder Gordon Moore originally proposed the now universally worshipped Moore's Law, which dictated how silicon technology, as it then stood, would evolve.

Intel, AMD Dual-Core Scramble
http://www.boostmarketing.com/story.php?id=1005
Chip rivals Intel and AMD are rushing to get the first dual-core processors on the market

The way I see it, the chipmakers can't keep up with Moore's Law anymore (bad planning ahead? major projects botched? lack of funding for new fabs? has science hit a barrier at the microscopic scale? whatever), and are therefore resorting to dual cores to somehow keep their chips advancing.

Dual chip is a bad idea for several reasons.
One, it can and will break applications - not all apps are written with dual cores in mind.
Two, 2 chips != double the power. A lot is wasted coordinating the chips. At best it's probably 150%, if I had to guess.

It's a desperate measure, hopefully temporary.
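The "at best 150%" guess has a classical formalization: Amdahl's law. If a fraction p of a program's work can run in parallel, n cores give a speedup of 1/((1-p) + p/n). A quick sketch (the fractions are illustrative, not measurements):

```python
def amdahl_speedup(p, n):
    """Speedup on n cores when a fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# If half the work is serial, a second core buys only ~1.33x:
print(amdahl_speedup(0.5, 2))
# Even at 80% parallel, two cores give ~1.67x, so "150%" is plausible:
print(amdahl_speedup(0.8, 2))
```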

2 Name: CyB3r h4xX0r g33k 2005-04-24 15:31 ID:Heaven

2 CPUs is not double the power. When HyperThreading first came out, many had the same misconception, and the educated had to explain: "well, no, your OS might say it's got 2 processors, but it's only got one. HT != SMP". This is just version 2.0 of HyperThreading: dual (or more) processor machines are quite expensive, so we'll make technology that combats that, and use misinformation to drive more sales.

HOW WONDERFUL.

orz

3 Name: CyB3r h4xX0r g33k 2005-04-24 16:48 ID:3qlsfdq4

Have you ever used an SMP system? It's not much faster, but a lot more responsive. I'm all for it.

4 Name: cyrilthefish!ljAhqzG3aU 2005-04-24 17:39 ID:mVYsxlo1

>(bad planning ahead? major projects botched? lack of funding for new fabs? science has hit a barrier in the microscopic? whatever)

technical reasons. mainly that the smaller/faster a chip gets, the more power is wasted. as time goes on, proportionally more power is needed and less performance is gained.

e.g. Intel's P4 was supposed to scale up to 8GHz+ pretty easily, but is stuck at 3.8-4GHz and nearly melts itself in the process.

>Dual chip is a bad idea for several reasons.
>One, it can and will break applications - not all apps are written with dual cores in mind.

it will not break apps; the worst that'll happen is only one core being used. and as time goes on, most apps will be written for multiple threads anyway
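For what "written for multiple threads" looks like in miniature, here is a toy sketch in Python: one loop split across two worker threads. (CPython's interpreter lock serializes CPU-bound threads, so this shows the structure rather than a real dual-core speedup; in C or Java the same pattern does use both cores.)

```python
import threading

def partial_sum(lo, hi, out, idx):
    # Each worker computes its half and writes to its own slot,
    # so no locking is needed between them.
    out[idx] = sum(range(lo, hi))

results = [0, 0]
t1 = threading.Thread(target=partial_sum, args=(0, 500_000, results, 0))
t2 = threading.Thread(target=partial_sum, args=(500_000, 1_000_000, results, 1))
t1.start(); t2.start()
t1.join(); t2.join()

# The halves recombine to the single-threaded answer.
assert sum(results) == sum(range(1_000_000))
```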

>Two, 2 chips != double power. A lot is wasted in coordinating the chips. At best it's probably 150%, if I have to guess.

not really true: AMD's dual-core chips run in the same power output envelope as single-core chips.

the only real downside of dual-core chips at the moment is that they have a slightly lower clock speed, but it's not a huge difference

even for apps that don't use both cores, you'll still get a performance boost, as background services can run on the unused core instead of interrupting the one running the program :p

i can't wait for dual-core chips to become affordable. :D

5 Name: Sling!XD/uSlingU 2005-04-24 18:16 ID:syqoZM6O

>mainly that the smaller/faster a chip gets the more power is wasted. as time as goes on proportionally more power is needed and less performance is gained.

Don't you mean "less power is needed"? The smaller the chip, the less energy it uses, the faster it is.
Or are you saying that the official explanation is suspicious?

>it will not break apps,

I read reports already from gamers that some of their games don't work anymore.

>AMD's dual-core chips run in the same power output envelope as single-core chips.

By power I mean computing power, not electrical. Though I suspect more electricity will be needed to power these beasts too, even if they share some parts on-chip.

>you'll still get a performance boost

Yes, but it will be less than a real single core with the same 2x power. It's a makeshift solution the industry cooked up to make up for its failure to keep up with Moore's Law.

6 Name: CyB3r h4xX0r g33k 2005-04-24 19:30 ID:Heaven

Wait, doesn't Moore's law state that the number of transistors in a CPU will double every n years? Why the fuck do they have to keep up with that? If they can do more or the same with fewer transistors, why is that a problem?

7 Name: Sling!XD/uSlingU 2005-04-24 22:41 ID:syqoZM6O

Although Moore's law was initially made in the form of an observation and prediction, the more widely it became accepted, the more it served as a goal for an entire industry. This drove both marketing and engineering departments of semiconductor manufacturers to focus enormous energy aiming for the specified increase in processing power that it was presumed one or more of their competitors would soon actually attain. In this regard it can be viewed as a self-fulfilling prophecy.
http://en.wikipedia.org/wiki/Moore's_law

8 Name: Sling!XD/uSlingU 2005-04-24 22:44 ID:syqoZM6O

Also, same page:
Expressed as "a doubling every 18 months", Moore's law suggests the phenomenal progress of technology in recent years. Expressed on a shorter timescale, however, Moore's law equates to an average performance improvement in the industry as a whole of over 1% a week. For a manufacturer competing in the cut-throat CPU, hard drive or RAM markets, a new product that is expected to take three years to develop and is just two or three months late is 10 to 15% slower or larger in size than the directly competing products, and is usually unsellable.
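The weekly figure follows directly from the doubling period; a back-of-the-envelope check (18 months taken as roughly 78 weeks):

```python
weeks = 18 * 365.25 / 12 / 7          # ~78.3 weeks in 18 months
# Compound weekly growth rate that doubles over that span:
weekly_growth = 2 ** (1 / weeks) - 1
print(f"{weekly_growth:.2%} per week")

# A product that ships 12 weeks late lands this far behind the curve:
lag = 2 ** (12 / weeks) - 1
print(f"{lag:.1%} behind after a three-month slip")
```

The rate comes out just under 1% a week, and a three-month slip to roughly 11%, consistent with the 10 to 15% range quoted above.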

9 Name: dmpk2k!hinhT6kz2E 2005-04-26 07:33 ID:3mNrMR/p

It's cheaper for the chip manufacturers to use dual cores to increase performance. Instead of escalating complexity, you keep the same design and add more copies of it. The upside is that it's less error-prone. The downside is it pushes complexity onto compilers and developers.

10 Name: !WAHa.06x36 2005-04-26 12:28 ID:DmImOLh2

Yes, modern processors are getting too complex to handle. The first P4 design had a twenty-stage pipeline, where at least two of the stages were just delays to give the electrical signals time to travel across the chip. Later designs have only increased the complexity. The control logic to keep track of all the different units of the processor and its pipeline is also getting out of hand. Simplifying the processor core and adding more of them solves many design problems, and lets you get away with far less complex control circuitry, which saves on space and power consumption.

I seem to recall there are some interesting insights on this in this article: http://arstechnica.com/articles/paedia/cpu/cell-1.ars

11 Name: Sling!XD/uSlingU 2005-04-26 21:03 ID:syqoZM6O

I'm still waiting for Masamune Shirow's CPU chip. ^^
The basic idea is a chip that reprograms itself in response to demand. Is this area heavily used? Add more logic space to it. Is that other area hardly used? Reclaim its space.

12 Name: dmpk2k!hinhT6kz2E 2005-04-27 08:47 ID:509Vgqrn

There were some experiments with genetic algorithms and FPGAs about a decade ago. I'm wondering if they've been expanded on since.

I highly doubt anything that can respond like that in realtime will exist in the foreseeable future.

13 Name: Alexander!DxY0NCwFJg 2005-04-27 10:47 ID:Heaven

>The basic idea is a chip that reprograms itself in response to demand. Is this area heavily used? Add more logic space to it. Is that other area hardly used? Reclaim its space.

I'm amazed how no-one seems to care about Crusoe reprogramming itself and using 1W of power, when we are having huge problems with power/heat/flexibility.

14 Name: !WAHa.06x36 2005-04-27 13:49 ID:HQ8QltX2

Yeah, it really seems people are dragging their feet with the FPGAs. There's a simpler idea than having a processor that can completely reprogram itself, and that is to have part of the processor be reprogrammable, and having the OS or program reprogram it to better suit the task at hand. Some tasks are better suited to software, and some to hardware. Having the software redesign the hardware has the potential to get the best of both. People have been TALKING about this, but I haven't seen anything practical yet.

15 Name: Sling!XD/uSlingU 2005-04-27 13:55 ID:syqoZM6O

Wasn't the Crusoe a flop?
I have read it was a disappointment.
I see it's being bashed in that >>10 link too:
moving core processor functionality into software meant moving it into main memory, and this move put Transmeta's designs on the wrong side of the ever-widening latency gap between the execution units and RAM. TM was notoriously unable to deliver on the initial performance expectations

16 Name: Sling!XD/uSlingU 2005-04-27 13:56 ID:syqoZM6O

Ah, >>15 is addressed to >>13.

17 Name: Sling!XD/uSlingU 2005-04-27 14:11 ID:syqoZM6O

Terje Mathisen used to say, "all programming is an exercise in caching."
A "reprogrammable chip" would not need to have to reprogram itself entirely (expensive in term of circuitry, and probably tricky), but only to be able to change its cache sizes, IMO. Give me variable caching! ^^

18 Name: !WAHa.06x36 2005-04-28 15:40 ID:HQ8QltX2

If you can vary the cache, you might as well set it to the maximum size and leave it there.

Also, I really, really hate the way memory speed is lagging behind processor speed. It's just no fun to program low-level code when you have to worry about memory access being a slow operation.

19 Name: CyB3r h4xX0r g33k 2005-05-06 18:59 ID:vkJusCjW

>>18
Hey, it has introduced an entire new area of research. I love how the CPU/cache system is now better than my first computer.

Low level code is almost obsolete anyway. With the complexity of modern architectures the compiler is going to do a better job than the vast majority of people.

20 Name: CyB3r h4xX0r g33k 2005-05-07 05:10 ID:t8F3G+O7

>>19
lolz

21 Name: !WAHa.06x36 2005-05-07 17:26 ID:KiqV5/w2

>>19

But will it? How can you ever tell? That's why I hate this - I'm reduced to trusting the compiler to do things right, when I know it won't always, and I have no idea what I should do to help it. Besides, issues like cache sizes and memory latency have an impact at much higher levels of code, and even on algorithm design, beyond what the compiler deals with.
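One concrete example of a decision the compiler won't reliably make for you: traversal order over a 2D array. A toy sketch (in CPython the pointer indirection blunts the effect, but in C the same change of loop order can be worth several times the throughput):

```python
N = 512
matrix = [[1] * N for _ in range(N)]

def row_major(m):
    # Walks memory in storage order: each fetched cache line is fully used.
    return sum(m[i][j] for i in range(N) for j in range(N))

def col_major(m):
    # Strides across rows: successive accesses land in different cache lines.
    return sum(m[i][j] for j in range(N) for i in range(N))

# Identical result either way; only the access pattern (and, on real
# hardware, the cache behavior) differs.
assert row_major(matrix) == col_major(matrix) == N * N
```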

22 Name: CyB3r h4xX0r g33k 2005-05-08 11:17 ID:4wi3Cawn

All of you fools are subject to Assembler!

23 Name: dmpk2k!hinhT6kz2E 2005-05-08 12:27 ID:5gq0ET7l

Assembler has its place, but that place is becoming increasingly small.

24 Name: CyB3r h4xX0r g33k 2005-05-08 17:27 ID:3qlsfdq4

The thing I don't like is how when humans are involved, "optimization" tends to be mutually exclusive with "portability."

25 Name: CyB3r h4xX0r g33k 2005-05-09 06:35 ID:QTTwxMv3

>>21
Hey, it's what you get for fast and cheap computers. In any case, the key is that you might end up with worse code, but you didn't spend any time on it. Unless you're working with a real-time system, who cares? Programmer time is much more important than computing time now.

Does anyone know the relative access times (in cycles) for caches? Of course, it varies with the architecture, but there are general ranges. From what I remember, it's something like

L1: 2-6
L2: 10-15
Main memory: 60-100

And it's going to get much worse until Moore's Law expires (if it does).
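Those ranges look about right for the era. What programs actually feel is the average that the hit rates produce; a back-of-the-envelope average memory access time, with hypothetical hit rates:

```python
l1_lat, l2_lat, mem_lat = 3, 12, 80   # cycles, mid-range of the figures above
l1_hit, l2_hit = 0.95, 0.90           # hypothetical hit rates

# AMAT = L1 latency + L1 miss rate * (L2 latency + L2 miss rate * memory latency)
amat = l1_lat + (1 - l1_hit) * (l2_lat + (1 - l2_hit) * mem_lat)
print(f"average access: {amat:.1f} cycles")
```

With those hit rates the average comes out around 4 cycles; shave a few points off the L1 hit rate and it balloons, which is exactly why the widening latency gap bites.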

26 Name: dmpk2k!hinhT6kz2E 2005-05-09 13:05 ID:2B4KWK4Y

Just a pet peeve:

"Programmer time is much more important than computing time now" is only true in some cases, and not in others. If you make a custom app that will be running on few machines, yes. If it's more widespread...

27 Name: CyB3r h4xX0r g33k 2005-05-10 07:54 ID:VYRDDuLt

>>26
You have a point. But time to market is one of the key factors in many application domains, and in such cases you often have to sacrifice code efficiency for programmer efficiency. The first product people buy and use becomes the incumbent: it almost always wins. In places where competition is not an issue, such as Windows or various open source projects, you can take your time.

28 Name: dmpk2k!hinhT6kz2E 2005-05-10 08:33 ID:HL1MfW8e

That doesn't change the fact that "programmer time is much more important than computing time now" isn't necessarily true. For every example where you demonstrate that programmer time is more important, I can show a counterexample.

Like all things in life, nothing is absolute.

29 Name: !WAHa.06x36 2005-05-10 12:53 ID:HQ8QltX2

>>27

I hate that attitude too - not all programming is done strictly for profit. Volunteer work in open source code changes that completely, for instance. There you'll find people who want nothing more than to hand-optimize bottleneck code in libraries and the like.

And even from a strictly economic point of view, the vast majority of processors out there right now are in embedded systems, where well-optimized code means you can choose smaller and slower processors, which are cheaper and use less power - meaning direct savings from some investment in optimization.

This thread has been closed. You cannot post in this thread any longer.