|
|
|
|
|
Winston Prather must feel like he's experiencing déjà vu all over again. The head of R&D for the HP 3000 Commercial Systems Division (CSY) was working in HP's labs during the company's last push toward a new 3000 RISC architecture. That Spectrum project of the 1980s yielded today's 900 Series HP 3000s and MPE/iX. Now Prather has been given the all-clear to make the 3000 ready for another new chip architecture, the IA-64 line jointly designed by HP and Intel. Prather gave us the technical plans for the project transition, as well as notes on how moving to IA-64 will compare to Spectrum, the last successful migration for HP 3000s.
Does it feel like you're approaching the same kind of project that HP pulled off in the 1980s?
It's very similar, in that it's moving from one architecture to another. One thing that makes me feel good about it is that it's something we've done before. If you go back to the Spectrum program 10 years ago, it was a major unknown for HP or any other vendor. I think we pulled it off pretty successfully, and we learned quite a bit. We'll use some of the same learning and techniques as we move to the new architecture. That's one of the things that's given HP the confidence that we have. We can say not only "been there, done that," but "designed the technology underneath it." Comparing our position to where we think the competition will be, they will not have been able to say they've been there, done that. And they don't have the advantage of working with Intel for the past five years.
When you're thinking about competition, are you thinking of other people that are moving to Merced, like Sun?
That was my point: not a specific 3000 or 9000 competition, but Hewlett-Packard compared to Sun or Digital. We clearly feel we have a leg up on the competition.
Will the compilers and software be
doing a lot more work than
they were in the previous architecture?
From a technology point of view, compiler technology has really taken another leap forward. The idea that we're going to be reliant on the compilers is still the same, although the techniques and the complexity have grown. When you look at some of the dynamic code optimization that the compilers are going to do, it's even more impressive.
Will the HP 3000 customers have an experience like in the late 1980s, where they had an MPE V-Classic group of programs and MPE/XL programs for the RISC systems, and the two could interoperate, so long as they were willing to go to an equivalent of a Compatibility Mode on the new architecture?
Yes, that's the goal. The 3000 customers who experienced the move from Classic to XL know exactly what they'll be looking at as they move forward. There will be the same kinds of concepts: Native Mode IA-64 compilers and object code, PA-RISC compilers and object code, and translators. Our customers that have gone through this will understand exactly what this transition will feel like.
Is there anything radically
different in the strategy to move
from one architecture to the next
compared to the last time around?
One new concept is that when we moved from Classic to XL, the translators were static translators: you basically took an old program, ran it through a translator, and it emitted PA-RISC code. One of the concepts you will see when we move to the new architecture is that we're going to provide dynamic translation, where you won't have to run a program through an object code translator that spits out code you then execute natively. Instead, we'll just trap and do that translation for you on the fly.
One of the concepts that's being explored is dynamic translation, which means you wouldn't have to do any of that. You could just load the same program and say "go," and we will catch it and translate it and optimize it dynamically.
You only do that translation once?
I'm not sure how much of the translation is saved away for future use and how much can be done quickly enough that the saving isn't necessary. They can use run-time knowledge of how the program executes to change the program, and then make it execute faster. This does bring some challenges, for example debugging. We're exploring all sorts of new techniques for that. That's one of the more exciting technologies, the concept of dynamic translation.
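The trap-and-translate idea described above can be sketched as a translation cache: a legacy code block is translated the first time it executes, and the cached result is reused on later runs. The sketch below is illustrative only; all names are hypothetical, not HP's actual design.

```python
# Minimal sketch of trap-and-translate dynamic translation (illustrative,
# not HP's implementation): translate a legacy block once on first
# execution, cache the native result, and reuse it on later calls.

class DynamicTranslator:
    def __init__(self, translate):
        self._translate = translate   # e.g., legacy block -> native callable
        self._cache = {}              # translated blocks, keyed by address

    def execute(self, address, block):
        # First execution "traps" into the translator; later runs hit the cache.
        if address not in self._cache:
            self._cache[address] = self._translate(block)
        native = self._cache[address]
        return native()

# Toy "translation": wrap a legacy block (here, plain data) in a callable.
translator = DynamicTranslator(lambda block: (lambda: sum(block)))
print(translator.execute(0x1000, [1, 2, 3]))   # translated on first call
print(translator.execute(0x1000, [1, 2, 3]))   # served from the cache
```

A real translator would also use the run-time profile mentioned above to re-optimize hot blocks, which is where the debugging challenges come in.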
To look at the scope of the CSY commitment to IA-64, are there not only extra resources inside CSY but also concurrent resources inside HP's Computer Languages Lab?
The decision to put MPE on IA-64 is a company decision, not something one division can do by itself. We already leverage a lot of the activities throughout the Enterprise Systems Group [ESG, which handles the HP-UX and NetServer servers as well as the 3000]. This is clearly an ESG commitment, and that's why it was very important that all of the organizations were behind this.
What do you know today about the timing of all this? How long after 2000 should customers look for it?
We really don't have a timetable right now. I don't think customers are going to say, "When?" It goes back to my question about 30 percent performance. What they care about is that we're going to deliver the performance. We have some flexibility in different ways that we can deliver that. For example, PA is going to be around for quite some time. We'll continue to push the performance that way. At some point in that overlap period, IA-64 will be available, years before it's needed for performance. That gives customers a window of staying on PA to get 30 percent more performance a year, or they can move to IA-64.
Do you have plans yet to be in on the IA-64 Developers' Symposium at HP World?
We're scrambling right now to figure out if we can integrate into what they had already planned. That one is still open. It's not real clear, but if at all possible we need to be part of that.
Was there anything that happened
on the technical level that made
it easier to decide to commit to
doing this for the 3000?
I don't think there was any one technical thing, an "a-ha" that gave us a breakthrough and made it easier than we first thought. That wasn't really the thinking. For me, what this decision means to our customers is that it once again shows HP's commitment to the 3000's future. It's more of a customer satisfaction and commitment-to-customer decision, as opposed to a technology breakthrough. We've had our engineers working on IA-64 since its creation, working with Intel and working with the team creating the architecture, in anticipation of this type of migration. From a technical perspective, we've been working with them for quite a while to understand what it would take: what we'd have to do, what it means to compilers, and what it means to the operating system.
Are customers going to be looking at a world where some of their applications are built for 32 bits, and others are built for 64 bits? How will they make those two kinds of applications interoperate?
We still do have some technical decisions to make as we move forward, about trying to ensure that customers don't have to deal with multiple binaries of the operating system. Our goal is to make it not noticeable to customers. They should not have to care. They get a tape and they just install it. If we do get to a point where we do need multiple binaries, then they won't notice it.
There was that kind of a fork back
in 1987, because you had MPE
V and MPE XL.
Right. Even before this new architecture, with large memory and large files and 64-bitness, we were really committed to minimizing the impact that customers see. When we move to the new architecture, it will be even harder to minimize that impact. The probability that they will see multiple binaries goes up. When you're on a PA box, we want to avoid customers having to deal with multiple binaries.
Your reference to a PA box indicates there are going to be two different kinds of 3000s by then?
Our plan in moving to IA-64 will be very similar to the Unix side of the camp. For example, there will be a complete new platform available prior to IA-64. That would be a box upgrade for the high-end and midrange systems. That new platform will be able to run PA processors, and you can take out those PA processors and plug in IA-64 processors. So the transition, if you will, from a PA platform to an IA-64 platform is just a board upgrade.
The roadmap to the future for the 3000 customers will look something like this: when they need the performance power, they would upgrade either their midrange line or their high-end line to the new box, and run PA on it. They could continue to upgrade those boxes with PA processors, or at some time when they're ready, they could pull out their PA processor boards and put in IA-64 processor boards.
So at that moment in time they wouldn't even have to change to another version of the operating system?
That, I'm sure, will have to happen. It's a question of how non-intrusive it will be. I can pretty much guarantee that you would have to do some sort of upgrade of the operating system. There will clearly be another version of the operating system, like moving from Release X to Release Y to Release Z. The transition would look like: go to Release Z, swap processor boards, reboot, there you are. Very similar to what you did when you moved to Spectrum.
The major transition isn't plugging in the processor board, but getting up on the next operating system?
Replacing the operating system is it, and making that easier than we did before. When you moved from the Series 70s to the Series 930, that was a complete box swap, which made it a larger process. This would be just a board swap.
Are the people making MPE/iX
applications going to have help soon
in doing the work on IA-64
transitions? How much change are they
going to experience under the hood in
moving applications?
Remember, they wouldn't necessarily need to do anything. But they'll want to, in order to take full advantage of the new architecture. We've already started the process of working with the application providers, helping them understand what the transition path would look like, when they would need to get assistance, and what kind of assistance they want. We've started all that.
Do you think you want to go the
route you did in the Spectrum
project, and set up Technical
Assistance Centers?
That's still to be determined. We need to continue to talk to [developers] and see whether that's the right idea, or whether we could do it another way.
Well, some of the fundamental assumptions of working with computers have changed a bit since then. For one thing, the hardware is going to be more affordable than the new hardware was in 1986.
Right. One of the reasons why we went to the Access Centers before was to share the hardware. It's probably much less necessary now. But that's something we'll have to figure out as we move forward. Right now we just had a big reseller meeting in Venice, where we told the resellers what we were doing and started working even more closely with them on what it means to them and how we will work with them to move forward.
What does the picture look like for an application provider who's not doing work in Unix right now? Will the migration be easier for people with some kind of HP-UX version of their program, because the tools and programs have been in place a little longer for HP-UX?
If there's any advantage that they would have, it would be that they'd already gone through a little of the process on the Unix side. But I don't know that there's any other difference. They will have been through the process, but a lot of our application providers have been around a long time, and they're already going to know what the process looks like because they've already done it once. It will be very simple for them.
Do you think there's any advantage the MPE-only software shop would have in trying to make this transition?
I don't know that there's an advantage. A generic advantage that any vendor has is only working with a limited number of binaries. If you ship on 13 different operating systems, then that's 13 different migrations.
I'm just trying to figure out if it would be easier than coming from a Unix perspective.
I'm not sure. It wouldn't surprise me.
Can you talk about anybody who's participating actively in this kind of migration now?
I really can't yet, because I don't think we're at the point where I personally know which vendors are allowing us to show their commitment. I don't know what's public and what's not, so I'd rather not comment. I can say we're working with many of them.
Is it your feeling that the 3000
customer base is going to have
a lot more homegrown applications
they will be moving across than
a customer base in another
environment moving to IA-64? Will you
be tuning the migration program and
tools more toward people that
have homegrown things?
I think that's something we should look into. I don't know that we're far enough along to have that information. I agree that the 3000 installed base has a higher percentage of homegrown applications. I think your question is, "Will they need access to a set of tools that may not normally be available to end-user customers?" That's a good question, but I think we're a little early.
What do the customers have to do
to get this installed? Make sure
they have budget in place?
I think the customers shouldn't focus on the technology. They should focus on the fact that this is one of the ways that Hewlett-Packard is going to ensure we deliver on the performance commitment that we make. We're promising a 30 percent performance increase per year, and one way to get that is going to be PA for quite some time, and then there will be a transitional overlap period where they'll move to the new architecture, and that will require some recompilation if they want to achieve maximum performance. But the transition should be as smooth as the one they've done in the past.
The reaction I think customers are going to have to this announcement is not a technology reaction; it's going to be a confidence, commitment kind of reaction. I think they're going to say, "Great, glad to hear it; it makes me feel good about my current and future purchases." We've been trying to shift away from focusing on technology. We're focused on delivering the performance and functionality they need.
And we'll use whatever technology we need. We did the same thing with 64-bitness. Originally we said we didn't see a technology reason to move there. As the reasons developed (customers needed large memory for performance, large files for storage), we decided to use 64-bit types of technology. It's the same thing here. You need performance, and the performance is going to scale, and it's up to us to make sure we give you that performance. One way to do it is new architecture. There will be the technologists that want to know about speculation, prediction techniques, and the techniques that the compilers use. But I don't think that's the normal reaction.
Does it go without saying that
IMAGE is going to move all the
way?
We'll have to move the databases.
Is there anything you know of right now that won't make the cut which has a significant customer base?
Not really. I really believe the best way to think about this is "been there, done that once, gonna do it again." From the compiler point of view, technology's come a long way, and that's going to make it even easier.
So you're moving to the third distinct generation of HP 3000?
Absolutely. Not many computers can say they've been through that. It's interesting when you look at Hewlett-Packard and IA-64 in general, and how well we are positioned within the industry. It's going to be interesting, because IBM is the only major vendor that hasn't shown a commitment to IA-64. Sun is going to waffle right now and say they're IA-64 but they're also SPARC; c'mon, really? It's just a matter of time. And the same with DEC; although they talk about Alpha, they made a commitment to IA-64 from the NT perspective. How long can all of the other vendors producing chips do it? They're not going to have the volume. When you look at the volume Intel is going to have with the new chip set compared to some of these other platforms, it's going to be interesting to see how anyone can make inroads.
Did the delay of Merced have any
impact on what CSY is doing with
IA-64?
It really doesn't impact us at all. Remember, our commitment is to the performance levels, and we're still committed to those both on the 9000 side and the 3000 side. We have PA plans to ensure we're going to get those performance levels. Then IA-64 is an evolutionary thing that will start sometime during that overlap. It really wasn't a big deal for us. We'll still be able to deliver our performance levels. The whole show wasn't bet on Intel's schedules.
It appears that the NT and Unix
solutions from HP could really
have used the IA-64 horsepower to
meet performance goals, much
more so than the HP 3000. Is that so?
Even on the NT and Unix side, we still have plans in place to deliver the performance. I don't think it created that type of a problem for them, either. The customers that tend to buy Unix tend to be more technology focused, so they're more interested in the architectures and the whole IA-64 thing. On the 9000 side we are in no way at risk of not delivering the performance we need because of Intel's slip.
From a marketing perspective, on the Unix side they market that technology a lot more. That's what those customers want to hear: leading edge, bleeding edge. So it's put more pressure on them from that perspective.
From our side we already moved away from pushing technology two or three years ago. Our customers aren't as hyped about it. Our public announcement of IA-64 is going to be more of a confidence announcement than a product message.
Have you run any MPE/iX programs
on IA-64 simulators yet?
On some of the simulators, yes.
How does it feel to be going
through the second evolution of HP
3000?
The magnitude of the task feels like a similar process with more complexity in the compilers. From a personal perspective it feels very similar. I had one of the project managers tell me something when we finally decided we're going to do this. "One of the most exciting times of my entire life was when we went through that Spectrum period," he said. And here we go again. I'm ready for it. There's that level of excitement from the engineers. Personally, it feels really great. The lab is gearing up for a number of years and a lot of fun.
One of the things I have to keep the lab focused on is that we have a tremendous number of things on our plate right now that will come long before IA-64. We have a ton of performance work that's going on, another platform and more processors. All of that is the building blocks leading toward plugging in that IA-64 processor. If you look at what the lab is working on right now, from the growth perspective, I summarize it as getting the operating system out of the way of all of the performance that's going to come from the hardware. If you recall the one slide I flashed at the IPROF talk this year, that's a lot of performance. The operating system has a lot of scalability work that's got to be done, work on limits. Those are the kinds of things that are the building blocks.
That work is going to be done at first to benefit from the PA-RISC 2.0 horsepower, right?
The core design of the operating system will be the same as we move to IA-64. A lot of the code will be the same, just recompiled. The higher-level parts of the operating system will almost... it's incredibly oversimplifying, but you can think of it as this: we're going to do the same thing we ask our customers to do. We're going to recompile the operating system. The generic algorithms that the Dispatcher uses and the Memory Manager uses will be recompiled. All of the kernel will have to be recompiled and then modified slightly. All of the building-block work will leverage straight through to the version of the operating system that supports IA-64.
So the 64-bit work you're doing for PA-RISC will dovetail with IA-64 work?
Absolutely. It's all evolutionary.
On which end do you first see IA-64 becoming available for 3000 customers?
That's still to be determined. If I had to guess, I would say high-end, because the biggest benefit would be that continued growth curve. The primary objective would be for that high-end growth, although the midrange and low end would be able to take advantage of it too, with different chips as they become available.
If you're going to get ready to accommodate IA-64 in your 3000 shop, do you need to plan a couple of expenditures: one to get the system that can accept the new chips, and another to buy the processor boards themselves?
What the customer should be planning on is affording the performance they need. The fact that it comes from a box or a board upgrade is not really as relevant. When you look at it from the pricing point of view (and it's obviously too early to say), I would assume it will be the same kind of price for performance.
Are the technical similarities
between IA-64 platforms going to
help deliver more applications to the 3000?
I would say they make it easier. I wouldn't say it's a slam dunk where you can take any application from a Unix vendor. There's a database issue and an intrinsics issue, and then there's the bigger issue for a lot of the application providers: the number of binaries they have to support. The front end of the compilers would look exactly the same. It clearly would help.
Have you examined if Java can be a
good lever between IA-64 platforms?
We're not really sure. Java can be leveraged across any architecture. That's the whole concept: it doesn't matter. Because of the compiler technology that we'll have, I think that the Java virtual machine executing on an HP IA-64 platform will outperform other Java virtual machines.
|
|
|
|