In November, Intel engineer Timothy Mattson caused a stir at the Supercomputing 2010 conference when he told the audience that one of the company's Terascale research chips, the 48-core Single-chip Cloud Computer (SCC), could theoretically scale to 1,000 cores.

What would it take to build a 1,000-core processor?

The challenge this presents to those of us in parallel computing at Intel is: if our fabs [fabrication plants] could build a 1,000-core chip, do we have an architecture in hand that could scale that far? And if it were built, could that chip be effectively programmed? The architecture used on the 48-core chip could indeed fit that bill. Message-passing applications tend to scale at worst as the diameter of the network, which grows roughly as the square root of the number of nodes on the network.
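
The square-root claim is easy to check with a back-of-the-envelope calculation. The short Python sketch below assumes a square two-dimensional mesh with one core per node (a simplification, not the SCC's actual tile layout) and prints the worst-case hop count for a few core counts:

    import math

    def mesh_diameter(cores):
        # Worst-case (corner-to-corner) hop count on a square 2D mesh with
        # one core per node; the side length is roughly sqrt(cores).
        side = math.isqrt(cores)
        return 2 * (side - 1)

    for cores in (48, 256, 1000, 4096):
        print(cores, "cores -> diameter ~", mesh_diameter(cores), "hops")

Going from 48 cores to 1,000 roughly triples the worst-case hop count rather than multiplying it twenty-fold, which is why message-passing designs of this kind keep scaling.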

So I can say with confidence we could scale the architecture used on the SCC to 1,000 cores. There is no theoretical limit to the number of cores you can use.

But could we program it? Is that message-passing approach something the broader market could accept? We have shared memory that is not cache-coherent between cores. Can we use that together with the message passing to make programming the chip acceptable to the general-purpose programmer?
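
One way to picture that combination, purely as an illustration and not the SCC's actual programming interface (the SCC's own message-passing library is called RCCE), is to keep bulk data in a shared buffer whose consistency is managed by software, while small coordination messages are passed explicitly. A minimal Python sketch along those lines, using a queue for the messages and a shared-memory block for the data:

    from multiprocessing import Process, Queue, shared_memory

    def producer(msgs, shm_name, n):
        # Fill the shared buffer with bulk data, then tell the consumer it is
        # ready via a small explicit message; nothing but this protocol keeps
        # the buffer consistent between the two processes.
        shm = shared_memory.SharedMemory(name=shm_name)
        shm.buf[:n] = (bytes(range(256)) * (n // 256))[:n]
        msgs.put(("ready", n))
        shm.close()

    def consumer(msgs, shm_name):
        tag, n = msgs.get()                  # wait for the message first
        shm = shared_memory.SharedMemory(name=shm_name)
        total = sum(shm.buf[:n])             # only now is the data safe to read
        shm.close()
        print(tag, total)

    if __name__ == "__main__":
        n = 1024
        shm = shared_memory.SharedMemory(create=True, size=n)
        q = Queue()
        workers = [Process(target=producer, args=(q, shm.name, n)),
                   Process(target=consumer, args=(q, shm.name))]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        shm.close()
        shm.unlink()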

If the answers to these questions are yes, then we have a path to 1,000 cores. But that assumption leads to a larger and much more difficult series of questions.

Chief among these is whether we have usage models and a set of applications that would demand that many cores.

We have groups working on answers to that question. As I see it, my job is to understand how to scale out as far as our fabs will allow and to build a programming environment that will make these devices effective. I leave it to others in our applications research groups and our product groups to decide what number and combination of cores makes the most sense.

In a sense, my job is to stay ahead of the curve.

Is there a threshold number of cores beyond which it becomes too difficult to program them effectively? What is it: 100, 400?

It depends on, first, how much of the program can be parallelised and, second, how much overhead and load imbalance your program incurs. If S is the serial fraction, you can easily prove with just a bit of algebra that the speedup can never exceed 1/S, however many cores you add, so the number of cores you can use effectively is on the order of 1/S. So the limit on how many cores I can use depends on the application and how much of it I can express in parallel. It turns out that getting S below one percent can be very hard.
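
That bit of algebra is Amdahl's law: with serial fraction S and N cores, the speedup is 1 / (S + (1 - S)/N), which is bounded above by 1/S. A few lines of Python (standard textbook arithmetic, not Intel's figures) make the point for 1,000 cores:

    def amdahl_speedup(s, cores):
        # Amdahl's law: s is the serial fraction; the rest is assumed to
        # parallelise perfectly across the given number of cores.
        return 1.0 / (s + (1.0 - s) / cores)

    for s in (0.10, 0.01, 0.001):
        print("S = %.3f: ceiling ~ %4.0fx, on 1000 cores: %6.1fx"
              % (s, 1.0 / s, amdahl_speedup(s, 1000)))

Even a one-percent serial fraction limits 1,000 cores to roughly a 91x speedup, and a 0.1-percent serial fraction still only yields about 500x.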

For algorithms with huge amounts of "embarrassingly parallel" operations, such as graphics, this can be straightforward. For more complex applications, it can be prohibitively difficult.
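
A toy example of an embarrassingly parallel operation, illustrative only: a per-pixel brightness adjustment in which each output value depends on a single input value and nothing else, so the work splits across any number of cores with essentially no serial part.

    from multiprocessing import Pool

    def shade(pixel):
        # Purely local, per-pixel work: no pixel depends on any other pixel.
        return min(255, int(pixel * 1.2))

    if __name__ == "__main__":
        frame = list(range(256)) * 4      # stand-in for one row of image data
        with Pool() as pool:              # chunks can spread over all available cores
            brighter = pool.map(shade, frame)
        print(brighter[:8])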

Would Intel ever want to scale up to a 1,000-core processor?

That depends on finding applications that scale to 1,000 cores, usage models that would demand them, and a market willing to buy them.

We are looking very hard at a range of applications that may indeed require that many cores. For example, if a computer takes input from natural language plus visual cues such as gestures, and presents results in a visual form synthesised from complex 3D models, we could easily consume 1,000 cores. Speaking from a technical perspective, I can easily see us using 1,000 cores. The issue, however, is really one of product strategy and market demands. As I said earlier, in the research world where I work, my job is to stay ahead of the curve so our product groups can take the best products to the market, optimised for usage models demanded by consumers.

Would the process of fabricating 1,000 cores present problems in itself?

I need to be very clear about the role of the team creating this chip. Our job is to push the envelope and develop the answer to the question: what is possible? This is a full-time job. Our product roadmap takes our "what is possible" work and turns it into products. That is also a full-time job. What eventually ships may or may not look like the 48-core SCC processor. I have no idea.