September 2004

HP updates storage issues in Chicago
MPE/iX to handle bigger disks for 3000s, while HAFO improves availability

By Steve Hammond
Although its end-of-life date is not far over the
horizon, HP continues to offer availability and disk expansion
opportunities for HP 3000 users.
In a session at HP World 2004 in Chicago, HP's
Jim Hawkins detailed some of the improvements the company is making to MPE
storage.
In response to the Number 8 item on the 2003 System
Improvement Ballot, HP has investigated the 300GB limit
on hard drive size. "We have done some testing,"
Hawkins said, "and we were able to determine that MPE can handle
a drive up to 512GB."
The current limit remains 300GB, but in the near
future, HP will be issuing patches to allow the use of larger disks.
Unfortunately, 512GB will be the maximum disk size, because
increasing that limit would require significant operating system
changes. So ultimately, even when you connect a drive larger than
512GB, only 512GB of space on that drive will be utilized.
Hawkins also addressed the question of device driver
hardening. "We are currently investigating what needs to be done
to harden the MPE drivers," he said. "It may not be easy,
but when we stop supporting MPE, we want to be able to say what needs
to be done to harden the drivers, if it can be
done."
Another issue still under investigation is a SCSI
pass-through driver for MPE. No decision has been made on this one,
only a commitment to keep evaluating whether such a driver
is feasible.
Hawkins said in the session that even though MPE is
nearing its end of life, HP still has people on the team trying to
make improvements for users.
Higher availability
If you are thinking about clustering and high
availability for your 3000, then Walt McCullough has your back.
McCullough has been with HP for 23 years, almost all of it in the MPE
realm. He is now the MPE/iX High Availability R&D Architect, and at HP
World he gave details on how you can keep your 3000 applications
running through unplanned downtime.
The two solutions offered by HP are Cluster/iX and
HAFO/iX (High Availability Fail Over). Both carry hardware expenses,
but that cost must be balanced against the potential business losses
when unplanned downtime occurs. A study done in 1996 estimated that
downtime costs a brokerage house between $6 and $7 million an hour,
while a large retail enterprise can lose almost $3 million an hour in
credit card sales. Even a small catalog sales company can drop
$100,000 in that hour. A high availability environment can therefore
pay for itself in a single outage.
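The break-even arithmetic behind that claim can be made concrete. The sketch below is illustrative only: the hourly downtime figures come from the 1996 study cited above, while the HA setup cost is a hypothetical placeholder, not a number given in the session.

```python
# Illustrative break-even calculation using the downtime figures cited above.
# The HA setup cost is a hypothetical placeholder, not a figure from HP.
downtime_cost_per_hour = {
    "brokerage": 6_500_000,    # midpoint of the $6-7 million estimate
    "large_retail": 3_000_000,
    "small_catalog": 100_000,
}

ha_setup_cost = 250_000  # assumed cost of servers, arrays, and scripting

def hours_to_break_even(business):
    """Hours of avoided downtime needed to pay for the HA environment."""
    return ha_setup_cost / downtime_cost_per_hour[business]

for biz in downtime_cost_per_hour:
    print(f"{biz}: {hours_to_break_even(biz):.2f} hours of avoided downtime")
```

Even under these made-up setup costs, the brokerage and retail cases break even in a fraction of a single hour-long outage; only the smallest shops need to weigh the cost more carefully.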
McCullough did note in his presentation that running
both these products in the same environment is an unsupported
configuration.
Cluster/iX creates an environment that protects
against controller failure, severed cabling, FC switch failure, HBA
failure, software/application failure and OS failure. In combination
with Continuous Access XP, it can also keep your data and
applications available through a site power outage or catastrophic
equipment failure, and it can serve as the basis for a disaster
recovery site.
It is best implemented when the user community can
tolerate only minimal downtime, less than the time a system reboot
takes (one to 10 minutes); when you have some control over the
applications and are willing to make changes to accommodate the
clustering; and when the users want only one copy of the data. It
requires two servers, some scripting (which HP can do for an added
cost) and a higher level of administrative training. (In other words,
if you do things wrong, data corruption will occur.)
In short, there are two 3000s with a single set of
disks or arrays that are shareable and recognized by both servers.
Scripts run on both servers: a script on the primary creates a
heartbeat, which a script on the secondary listens for. If that
heartbeat fails, the secondary takes over; the script can also be
written to page someone and to keep monitoring the primary to
determine when its heartbeat returns. Switching control back is also
part of the scripting.
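The heartbeat logic described above can be sketched in miniature. This is a hedged illustration of the general pattern, not HP's actual Cluster/iX scripting: the timeout value and state names are invented for the example, and a real script would also take over the shared disks and page an operator on failover.

```python
class HeartbeatMonitor:
    """Secondary-side monitor: tracks the primary's heartbeat, decides
    when to take over, and notices when the primary comes back."""

    def __init__(self, timeout=15.0):
        self.timeout = timeout      # seconds of silence before failover (assumed)
        self.last_beat = None       # time of the most recent heartbeat
        self.failed_over = False    # True once the secondary has taken over

    def record_beat(self, now):
        # Called whenever a heartbeat arrives from the primary.
        self.last_beat = now
        if self.failed_over:
            # The primary's heartbeat has returned; switching control
            # back is a scripted, operator-driven step.
            self.failed_over = False
            return "primary-recovered"
        return "ok"

    def check(self, now):
        # Called periodically; returns the action the script should take.
        if self.last_beat is None or self.failed_over:
            return "waiting"
        if now - self.last_beat > self.timeout:
            self.failed_over = True
            return "failover"  # take over the shared disks, page someone
        return "ok"
```

With a 15-second timeout, a beat at t=0 keeps `check(10)` returning "ok", while `check(20)` triggers "failover"; a later `record_beat` reports "primary-recovered" so the scripted switch-back can begin.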
HAFO/iX addresses the same problem from a different
direction: you have one server, but redundant disks. It protects
against controller failure, severed cables, HBA failure and FC switch
failure. It is set up with dual active paths to both sets of disks;
HAFO/iX detects a component failure and redirects the data to a
redundant path.
Like Cluster/iX, HAFO/iX comes with a set of caveats:
it will not work in conjunction with Cluster/iX, all logical devices
must use similar connection technology, and it is not fault tolerant,
meaning any unplanned outage may become a planned outage to fully
recover. HAFO does add significant complexity to the operating
environment (which requires a far better understanding of the
system's characteristics and better planning for any system changes),
and it is highly dependent on performance expectations, which means a
high potential for false failovers.
McCullough concluded with his own caveat: HAFO/iX is
not some magic panacea of high availability. It should not be
implemented on a system that has inexperienced or part-time system
management. This is a complex environment that needs monitoring and
tuning; false failovers will occur if the system is not closely
observed. It solves problems, but the solution comes at a price that
management must be willing to absorb.