January 2004

net.digest tracks each month's message traffic on the 3000-L mailing list and the comp.sys.hp.mpe Internet newsgroup. Advice offered from the messages here comes without warranty; test before you implement. Edited by John Burke

This month we again had a number of lengthy politics and religion threads that threatened to hijack 3000-L, obscuring the many interesting technical and non-technical threads. A casual observer might think nothing of technical merit or interest was discussed this month, but he would be wrong. I enjoyed the reports and associated commentary about HP planning to enter the online music store market, about the recent defections from the HP executive ranks, and about what the hell HP's Virtual Adaptive Enterprise actually means. Then there was the thread about HP's new Webcast extolling the virtues of migrating to .NET, and Alfredo Rego's observation: "Does this mean that HP has now decided that there is no value (and that there are no benefits) associated with migrating your HP e3000 to HP-UX?" Finally, there was the verbal nuclear bomb dropped by OpenMPE board member Ken Sletten (covered elsewhere in this issue).

What about technical content, you ask? It is well represented below and in Hidden Value. I always like to hear from readers of net.digest and Hidden Value. Even negative comments are welcome. If you think I'm full of it, or goofed, or am a horse's behind, let me know. If you spot something on 3000-L and would like someone to elaborate on what was discussed, let me know. Are you seeing a pattern here? You can reach me at john@burke-consulting.com.
Automating FTP logons

Many of us have used FTP for years without really understanding some of its features. In response to a question about automating an FTP process, Donna Garverick gave a short tutorial on using netrc files.

"netrc files are simple ASCII files that make both FTP and users happy. The format of a netrc record is:

machine nodename login mgr.foobar password never,tell

"If this netrc file is named netrc and lives in the FTP initiator's home group, then FTP nodename will automatically log you onto nodename. Caveat: since so many users' home group is pub, it makes a tremendous amount of sense not to keep the netrc file in the pub group. And, since this file contains passwords, it makes sense to put some kind of security on it. My recommendation is to altsec it in some fashion (either for r/w access or with ACDs). If you put the netrc file into a non-home group, then the following file equation is needed:

:file netrc.[my_home_group] = [filename.group][.account]

"For example, :file netrc.pub = nodename.netrc. A single netrc file can hold logons for multiple nodes, but not multiple logons for a single server (you need multiple files for that). With this kind of system in place, users never need to know passwords. It's all hidden in the netrc files. For more information, read FTPdoc.arpa.sys."
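The netrc record format above is the same one POSIX tools use, so Python's standard-library netrc module can illustrate how such a file is parsed. This is a sketch for illustration only (the hostname and credentials are the placeholder values from the example, not real ones):

```python
# Sketch: parse a netrc record like the one shown above.
# Assumes the POSIX netrc format, which matches the MPE example.
import netrc
import os
import tempfile

record = "machine nodename login mgr.foobar password never,tell\n"

# Write the sample record to a private temporary file, since a real
# netrc file holds passwords and should never be world-readable.
with tempfile.NamedTemporaryFile("w", suffix=".netrc", delete=False) as f:
    f.write(record)
    path = f.name

try:
    auth = netrc.netrc(path)
    # authenticators() returns (login, account, password) for a host
    login, account, password = auth.authenticators("nodename")
    print(login, password)   # mgr.foobar never,tell
finally:
    os.remove(path)
```

An FTP client that finds such a record can log on without the user ever typing, or even knowing, the password.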
FTP: EXITONERROR not working correctly

A user wrote, "We have several batch jobs that get files from other machines. They worked fine under plain MPE/iX 7.0, but after PowerPatch 2 was applied, EXITONERROR now appears to exit and set the variables back to a successful state, instead of indicating why EXITONERROR activated."

James Hofmeister replied, "I duplicated this problem, and EXITONERROR is working properly. However, the problem is that the quit called internally on EXITONERROR operates the same as if the QUIT command were entered at the user command prompt. We need to make a code repair to the quit on EXITONERROR and avoid updating the FTPLASTREPLY variable with the results of quit."

Tim Cummings suggested, "The only way I have found to reliably determine whether FTP has completed the task is to set my own variables. Before you enter FTP, set a variable to indicate that your FTP has failed. Then, inside FTP, after you issue the PUT, GET, etc., follow it with a :setvar to indicate that the FTP completed successfully."

Joshua Johnson added, "I took this a step further and used setvar _FTP_lastcmd and :if to check the FTP variables, then set my own variables before I exit FTP."
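Tim's sentinel-variable pattern translates to any scripting environment: assume failure up front, and flip the flag to success only after the transfer step actually completes. Here is a minimal Python sketch of that pattern (the function name and the stand-in transfer callables are hypothetical, not part of any MPE or FTP API):

```python
# Sketch of the sentinel pattern: pessimistically assume failure before
# the transfer, and mark success only after it finishes cleanly.
# transfer_ok is our own flag, analogous to a CI variable set via :setvar;
# it is not anything FTP sets for us.

def run_transfer(do_transfer):
    """Run a transfer callable; return True only if it completed cleanly."""
    transfer_ok = False          # like :setvar FTPOK 0 before entering FTP
    try:
        do_transfer()            # e.g. ftp.storbinary("STOR report", fh)
        transfer_ok = True       # reached only if no exception was raised
    except OSError:
        pass                     # network/transfer error: flag stays False
    return transfer_ok

def simulated_failure():
    raise OSError("simulated network error")

print(run_transfer(lambda: None))      # True
print(run_transfer(simulated_failure)) # False
```

The point, as on MPE, is that the job checks a flag it controls itself rather than trusting status variables the FTP client may reset on exit.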
Why failure to prepare is preparing to fail

The following sad story was posted to 3000-L: "I was doing an archive of an IMAGE dataset this weekend, and I was disappointed at the performance of the PUTs. I had previously created a flat file of the records to add back; the database was set with AUTODEFER enabled, TPI turned off, and no IMAGE paths on the set (just OMNIDEX keys). The set had 101 million entries when I started. I unloaded the 40 million we wanted to keep via Suprtool into an SD file (this only took 58 minutes), erased and resized the set to 60 million via Adager, and then began the PUTs via Suprtool with the set locked up front. My throughput was 3.5 million PUTs per hour, and I was really disappointed. I truly thought that because I had no IMAGE keys, AUTODEFER was on, and TPI was off, the set should load very quickly. It turns out that even though I set TPI off, I still needed to de-install OMNIDEX. Sigh."
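The disappointment is easy to quantify from the numbers in the post: the unload of 40 million records took under an hour, but at the observed reload rate the PUTs stretch past eleven hours. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope arithmetic using the figures reported above.
records_to_reload = 40_000_000   # entries kept after the archive
puts_per_hour     = 3_500_000    # observed throughput with OMNIDEX still installed

hours = records_to_reload / puts_per_hour
print(f"{hours:.1f} hours")      # 11.4 hours
```

Over eleven hours for a load the poster expected to fly, against a 58-minute unload, which is why the still-installed OMNIDEX indexing was such an expensive oversight.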
CI functions SIB request update

This from HP's Jeff Vance: "As you know, we are implementing the CI Functions SIB request. The engineer responsible for the coding and design details (Hariprasad) discovered that the CI's evaluator treats '.' and '/' as token separators. The '/' isn't surprising, since '/' is the division operator and expressions such as a/b are perfectly valid. The '.' is more surprising, but since there are no predefined CI functions with a '.' in their name, no real values, no CI methods, and no CI structures, maybe it just turned out that way. Anyway, it is better, from the standpoint of eliminating regression failures, if we do NOT change the evaluator parsing rules in the implementation of CI functions. However, that would preclude a CI function from being qualified. For example, Myfunc.grp(), ../MyFunct(), /bin/functions/MyFunc() and MyDir/MyFunc() would all NOT be legal function names. This is inconsistent with the CI in that it allows qualified script names. However, the CI also supports unqualified POSIX names as script names. For example, myScript (case-sensitive), my_Script, my-Script, etc. are all legal script names and can be found in the POSIX namespace. So my question is: would the restriction of disallowing qualified user function names be a problem for you? If so, please give me some examples."

Basically, this means that scripts operating as functions could not be qualified and would have to lie on a path specified in HPPATH. A lively discussion ensued with some good suggestions offered, but it appears that if we are ever to get this enhancement, we will have to live with this restriction.
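To see why treating '.' and '/' as separators rules out qualified function names, consider a toy tokenizer. This is purely an illustration of the parsing issue Jeff describes, not the real CI evaluator: once those characters split tokens, a qualified name like Myfunc.grp can never survive as a single token, while a/b correctly splits into a division:

```python
# Toy illustration (NOT the actual CI evaluator): splitting on the
# separator characters '.' '/' '(' ')' shows why a qualified function
# name cannot be recognized as one token.
import re

def toy_tokens(expr):
    # Split on the separators, keeping them as their own tokens;
    # drop the empty strings re.split leaves between adjacent separators.
    return [t for t in re.split(r"([./()])", expr) if t]

print(toy_tokens("a/b"))           # ['a', '/', 'b']  -- division, as intended
print(toy_tokens("Myfunc.grp()"))  # ['Myfunc', '.', 'grp', '(', ')']
```

The evaluator sees five tokens, not one function name, so Myfunc.grp() could only be parsed as "Myfunc" dot "grp()", and changing that rule would risk regressions in every existing expression.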
Nike Arrays 101

Many 3000 homesteaders, as well as fence sitters (those who haven't begun to plan for migrations), are picking up used HP Nike Model 20 disk arrays. The interest comes from the fact that there is a glut of these devices on the market, meaning they are inexpensive, and they work with older models of HP 3000s. However, there is a lot of misinformation floating around about how and when to use them. For example, one company posted the following to 3000-L:

"We're upgrading from a Model 10 to a Model 20 Nike array. I'm in the middle of deciding whether to keep it in hardware RAID configuration or to switch to MPE/iX mirroring, since one can now do it on the system volume set. (It wasn't possible when the system was first bought, so we stayed with the Nike hardware RAID.) We're weighing the performance of keeping it Nike hardware RAID against the safety of MPE mirroring. Has anyone switched from one to the other? A side issue is that one can use the second Fast and Wide card on the array when using MPE mirroring, but not when using Model 20 hardware RAID. So, with hardware RAID, you have to consider the single point of failure of the controller card. If we split the bus on the array mechanism into two separate groups of drives and then connect a separate controller to the other half of the bus, you can't have the hardware-mirrored drive on the other controller (I'm told you can do this on HP-UX). It must be on the same path as the master drive, because MPE sees them as a single device. Using software mirroring you can do this, because both drives are independently configured in MPE. Software mirroring adds overhead to the CPU, but it's a tradeoff you have to decide to make. We are evaluating the options, looking for the best (in our situation) combination of efficiency, performance, fault tolerance and cost."

First of all, as a number of people pointed out, Mirrored Disk/iX does not support mirroring of the system volume set; it never did and never will.
Secondly, you most certainly can use a second FWSCSI card with a Model 20 attached to an HP 3000. Bob J. elaborated on the second controller: "All of the drives are accessible from either controller, but of course via different addresses. Your installer should set the DEFAULT ownership of drives to each controller. To improve throughput, each controller should share the load. Only one controller is necessary to address all of the drives, but where MPE falls short is in not having a mechanism for automatic failover from a failing controller. In other words, a SYSGEN reconfiguration would be necessary to run on a single controller after an SP failure in a dual-SP configuration. You could keep alternate configurations stored on your system to cover both cases of a single failing controller, but the best solution is to get it fixed when it breaks. The best news is that SP failures are not very common." There is a mechanism in MPE for failover, called HAFO (High Availability FailOver). Unfortunately for the original poster, it is only supported with XP and VA arrays, not on Nikes or AutoRAIDs (because it does not work with those).

Andrew Popay provided some personal experience: "We have seven Nike SP20 arrays, totaling 140 discs spread across all the arrays, using a combination of RAID 1 (for performance) and RAID 5 (for capacity). We use both SPs on all arrays, with six arrays used over three systems (two per system). One of our systems has two arrays daisy-chained. The only failures we have suffered on any of the arrays have been due to a disc mechanism failing. We have never found any issues with the hardware RAID; in fact, as a lot of people have mentioned, hardware RAID is much preferred to software RAID. Software RAID has several issues: system volume, performance, ease of use, etc. Hardware RAID is far more resilient. As for anyone concerned about single points of failure, I would not worry too much about the Nike arrays; I would say they are almost bulletproof."
"For those who require a 24x7 system and can't afford any downtime whatsoever, maybe they should consider upgrading to an N-Class with a VA or XP. Bottom line: the SP20s are sound arrays on the HP 3000s, easy to configure, set up and maintain."

John Burke uses more than 20 years of IT, HP 3000 and MPE experience to help 3000 sites through Burke Consulting (www.burke-consulting.com). Also contributing to this month's net discussions were Michael Berkowitz, John Clogg, Gilles Schipper and Goetz Neumann. Copyright The 3000 NewsWire. All rights reserved.