Making the Leap to Parallel Computing
In the early 1980s, when I was teaching and
doing research at Yale’s computer science department and School of Management,
my colleagues and I dreamed about the great promises of artificial intelligence
and all the things it would do. We made radical forecasts, in particular, about
the promise of parallel computation — separate computers operating in sync,
providing more power than any single computer could bring to bear.
Thirty-five years later, the future we
foresaw is finally arriving. Computing is moving from serial processing, where
each step has to be completed before the next is started, to massively parallel
processing. The resulting leap in computer power will have a major disruptive
effect on business, one that has not been generally noticed and for which most
business leaders are not prepared.
Until a few years ago, all programs followed
the serial model. They constructed their solutions as a single complex
operation or group of operations, conducted on a single computer. The more
powerful the computer, the more complex the problems it could handle. But the
limits of a single machine also shaped the limits of the problems it could
handle.
In parallel processing, a problem is broken
down into a number of steps, each of which can be handled independently of the
others. This allows all of the steps to be processed at the same time — that is,
in parallel — on many separate computers or processors. The time required to
reach a solution is drastically reduced.
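The decomposition described above can be sketched in a few lines of Python. This is illustrative only: a thread pool stands in for the many separate machines the article describes, and `process` is a made-up per-item computation.

```python
from concurrent.futures import ThreadPoolExecutor

def process(item):
    # a hypothetical, independent per-item computation
    return item * item

def serial(items):
    # serial model: each step completes before the next begins
    return [process(x) for x in items]

def parallel(items):
    # parallel model: independent steps are dispatched to a pool of workers
    with ThreadPoolExecutor() as pool:
        return list(pool.map(process, items))
```

Both routes yield the same answer; the difference is that the parallel version's steps can run simultaneously, so wall-clock time shrinks as workers are added. (In CPython, threads illustrate the coordination pattern; process pools or separate machines provide true CPU parallelism.)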
Coordinating these computers and managing
parallel computation used to be a difficult problem. These days, the entire
burden can be handled through cloud-based services such as those offered by
Amazon, Google, and Microsoft. This makes implementation far simpler than it once was.
But there is more to parallel computing than
cloud-based coordination. The programs used for serial and parallel
computation, for example, are very different. One cannot simply take a serial
program and make it work in parallel. It needs a complete redesign. And our
legacy systems — including those in most corporations, banks, governments, and
hospitals — are coded and operate in serial. These represent billions of lines
of code.
In the past few years, more and more systems
with a parallel architecture have been released. They are orders of magnitude
faster and can handle significantly more data than their serial counterparts.
They have an additional advantage: elastic computing, which provisions only
as much storage and processing power as a task needs at a given time. Serial
computation, which typically requires fixed allocations of storage, cannot compete
effectively against parallel processing and elastic storage, which have
significant cost and speed advantages.
Large
enterprises, governments, and other organizations dependent on legacy programs
thus need to reengineer their systems to take advantage of parallel computing.
This is particularly true of banks, whose legacy systems have dragged them into
a quagmire that limits their productivity and leaves them vulnerable to disruption
by financial technology (fintech) startups. The startups operate in a parallel,
elastic world; their systems process data faster and at a fraction of the cost
of incumbents’.
Ask any large company how difficult it is to reengineer its systems, and you
will hear that it is extremely difficult — if
not impossible. Generally speaking, there is no way to simply tweak code
written for a serial machine and make it parallel. It needs to be rethought,
re-architected, and rewritten from the ground up — a gargantuan task.
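A toy illustration of why the rewrite is unavoidable: code whose steps depend on earlier results resists parallelization, while code with independent steps invites it. Both functions below are invented examples, not drawn from the article.

```python
def running_total(values):
    # loop-carried dependency: each step needs the previous result,
    # so the iterations cannot simply be run at the same time
    total, out = 0, []
    for v in values:
        total += v
        out.append(total)
    return out

def square_all(values):
    # independent iterations: safe to farm out to separate workers
    return [v * v for v in values]
```

The second function can be parallelized mechanically; the first must be rethought (or replaced with a different algorithm) before it can run on many machines at once — which is why serial legacy code cannot be "tweaked" into parallel code.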
Most senior executives have not yet come to
terms with this looming challenge. They often lack an adequate understanding of
the underlying technology and of what fixing their systems would entail. Similarly,
the boards of directors of these firms often lack members who can pose the right
questions.
But
there are examples of enterprises and institutions that have successfully made
the leap to parallel computing. One is the country of Estonia. All of the
nation’s computer systems now run on the Internet and are built on a
disaggregated, parallel architecture. Estonian cabinet
meetings that once averaged four to five hours
now take 30 to 90 minutes because many of the discussions and votes have taken
place asynchronously before the meeting — the easy issues have been solved.
Google and Facebook are themselves good
examples of firms that are leveraging the enormous power of parallel computing.
When you type a phrase into Google’s search engine, thousands of computers
immediately spring to life. This is much faster than if only one computer at a
time could be used for the task.
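The scatter-gather pattern behind that kind of search can be sketched as follows. This is a toy sketch with made-up index shards; real search engines are vastly more elaborate.

```python
from concurrent.futures import ThreadPoolExecutor

# hypothetical index shards: each worker searches its own slice of documents
SHARDS = [
    {"doc1": "parallel computing basics"},
    {"doc2": "serial processing history"},
    {"doc3": "parallel and elastic systems"},
]

def search_shard(shard, term):
    # one worker's job: scan only its own slice of the index
    return [doc for doc, text in shard.items() if term in text]

def search(term):
    # scatter the query to every shard at once, then gather the hits
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda s: search_shard(s, term), SHARDS)
    return sorted(hit for hits in results for hit in hits)
```

Because each shard holds only a fraction of the data, every worker finishes quickly, and the total latency is roughly that of the slowest shard rather than the sum of all of them.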
What should you do if you are an executive
facing this challenge? Start with some of the practices of design thinking.
Initiate a few blue sky sessions, to imagine your business operating in a
parallel world. From this you can design not only an architecture that could
achieve these goals, but also a view of how human resources would be
positioned in this new world. Much of the architecture can be bought or
licensed or taken from open source so that this transition need not be as
daunting as it might first seem. It does, however, require the right technical
skills, which may not exist inside your company at this time.
The move to parallel computing represents a
terrific opportunity to move away from your current, internally siloed operations.
You could leap to horizontal, customer-centric organizational groups, like
those at Google, Facebook, and Amazon. And the risks of not moving toward
parallel computing are serious, for the technology is here to stay. This time,
it’s not in the future. It’s now. If you don’t believe me, just ask banks why
they fear fintech startups.
Ron S. Dembo
http://www.strategy-business.com/blog/Making-the-Leap-to-Parallel-Computing?gko=e444a&utm_source=itw&utm_medium=20161117&utm_campaign=respB