You can also look into either serial-to-parallel converter boxes, or software/hardware solutions involving third-party spooler packages and third-party serial print servers.
We need account managers to be able to ABORTJOB a background job stream within their own account. How do we enable this? The ALLOW command is difficult to use because it does not persist across logoff/logon sequences.
Lee Gunter replies:
Check out ALLOWXL on the Interex CSL tape. You set up a list of user IDs and associated ALLOWed commands for those users. The ALLOWXL program scans this list at logon (via a logon UDC).
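The logon hook might be wired along these lines. This is only a sketch: the UDC name is invented here, and the program name and its actual location should be checked against the CSL tape's documentation.

```
COMMENT  Illustrative system-wide logon UDC; names are assumptions.
ALLOWUDC
OPTION LOGON, NOBREAK
RUN ALLOWXL.PUB.SYS
*
```

The UDC file would then be activated with SETCATALOG at the system level so it fires for every logon.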
HP's Lars Appel adds:
Have you looked into JOBSECURITY LOW for this special need?
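For reference, the setting itself is a single CI command issued by the system manager. The trade-off is that LOW relaxes job-control checks for everyone, so weigh it against your security policy; the job number below is purely illustrative:

```
:JOBSECURITY LOW
:SHOWJOB
:ABORTJOB #J123
```

As I understand it, with JOBSECURITY LOW users can control jobs under their own sign-on, and account managers can control jobs within their account, without per-session ALLOWs.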
We're considering adding additional disk drives to our 969KS200. Presently we have 3 HASS enclosures containing 10 F/W SCSI 2GB disks on one channel mirrored with 10 F/W SCSI 2GB disks on another channel. I'm considering adding 2 x 4GB disks, but that will result in 11 devices on each mirrored F/W SCSI channel. The F/W SCSI maximum is 15 devices, but only 10 are recommended. Will this be OK?
Bill Lancaster replies:
Chances are you will be OK with 11, but as with any performance question, the answer is "It depends." Before you decide whether you can put the 11th drive on the channel, you have to see how much demand the other devices put on the channel. If you look at the sustained disk I/O rate on any given disk drive, you are probably sustaining only 1-5 I/O's per second. Adding it all up would indicate a maximum sustained disk I/O rate on the channel of only 50 I/O's per second. However, this is only the high average sustained I/O rate. You need to look at burst activity. If your burst activity is significantly higher, you may run into periodic disk I/O problems, most likely during a transaction manager checkpoint post to disk.
You may be able to live with the results. Remember that HP's performance recommendations are generally conservative. The channel can handle approximately 150 I/O's per second.
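The back-of-the-envelope arithmetic above can be sketched as a quick shell calculation. The figures are the assumptions from the reply (worst-case 5 sustained I/O's per second per drive, ~150 I/O's per second channel capacity), not measurements from any system:

```shell
# Assumed figures from the discussion, not measured values:
drives=11              # 10 existing disks plus the proposed 11th device
per_drive_ios=5        # worst case of the 1-5 I/O's/sec sustained range
channel_capacity=150   # approximate F/W SCSI channel capability

sustained=$((drives * per_drive_ios))
headroom=$((channel_capacity - sustained))

echo "sustained: $sustained I/O's/sec"
echo "headroom before saturating the channel: $headroom I/O's/sec"
```

Even at the worst-case sustained rate, 11 drives sit well under the channel's capacity; it is the bursts (e.g. transaction manager checkpoint posts) that could still pinch.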
Is there a way to rename a group?
Jeff Vance of HP replies:
There is no way in FOS MPE to rename a group. The RENAME command fails with FSERR 20 -- Invalid Operation. From the Posix shell, $mv /ACCT/GROUP intentionally fails with an implementation-dependent error.
The problem in renaming groups is that MPE groups have more attributes than a generic directory: capabilities, CPU limit, connect limit, file space limit, volume set pointer, security matrix, password, and others. Groups have their own directory object ID. For now, groups need to live immediately below an MPE account.
However, it seems reasonable to allow a group to be renamed to another group within the same account, and probably OK to rename a group to another group in another account. By the way, this is a current SIGMPE item.
Jeff Kell adds:
$mv "almost" works if the target group exists, by doing:
$mv /ACCT/GROUP/* /ACCT/GROUPB/
except that on database files (PRIV mode is probably the culprit) you get the error message:
mv: cannot rename "/a/b/c" to "/a/d/c": System call error
Finally, Chuck Duncan adds:
A little cumbersome, but I create the new group and then use MPEX from VESOFT:
rename @.oldgroup,@.newgroup
On the subject of halts and FLT error codes, where are these things documented?
Joe Searle replies:
Some of the system aborts and monitor halts are documented in the MPE/iX 5.0 Error Messages Manual, Volume II, Chapter 30.
HP's Lars Appel notes:
System Abort texts can also be looked up with MSGUTIL. Select M for Message Display when prompted, then use 98 as the "magic" subsystem code for system abort messages and enter the system abort number.
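A lookup session would run roughly as follows. The exact prompt wording varies by MPE/iX release, and the abort number shown is just a placeholder, so treat this as a sketch of the steps rather than a literal transcript:

```
:RUN MSGUTIL.PUB.SYS
  (choose M for Message Display at the menu prompt)
  (enter 98 as the subsystem code)
  (enter the system abort number you want the text for)
```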
We use back references to ENV files. This works fine on MPE/iX 5.0. We just upgraded to MPE/iX 5.5 and the back references no longer work. Here's an example:
file myenv=tt22.pub.sys
file p;dev=lp;env=*myenv
listf @,2;*p
This places an ENV file in the spoolfile on our 5.0 system but does not on our 5.5 system. How do we make our references work again?
HP's Larry Byler replies:
I entered SR 4701-339614 against this problem late last fall. We have a patch in the works, soon to be in beta test. Here (from the SR) is a possible workaround:
:setvar myenv "tt22.pub.sys"
:file p;dev=lp;env=!myenv