MULTICS TECHNICAL BULLETIN                                MTB-622

To:       MTB Distribution

Subject: Multics Automated Support System
Date:  05/23/83
                           - ABSTRACT -

   The contents of this MTB outline a system that will reduce the
shop  cost of  a DPS8M  and its  submodels.  It  is based  on the
ability  of  the  Dynamic  Maintenance Panel  (DMP)  in  the DPS8
Processors  to  control  the  CPU,  read  and  write  the  memory
connected to the processor and  to extract data from the internal
registers and control flags of the processor.

   This system  will make use  of the DMP's  ability to interface
over a  standard communication line  using a Multics  system as a
driver  to  control many  processors under  test.  The  number of
processors under test is limited only  by the size of the Multics
system.   The  only  'support  hardware'  required,  besides  the
Support System itself, is one SCU and 1 MW of memory for each CPU
under test.

          Please send comments via mail
          To:  Fawcett.Multics
          Subject:  MASS.MTB


Multics  Project  internal  working  documentation.   Not  to  be
reproduced or distributed outside the Multics Project.



   This  MTB describes  a Multics  Automated Support  System that
will replace the current methods used  to qualify a DPS8M CPU for
customer shipment.

   Over half of the DPS8M CPU shop cost is attributable to direct
labor  charges  and  test-cell  support  equipment.   The Multics
Automated Support System will reduce the manufacturing test cycle
to  1  or  2  weeks from  a  scheduled  cycle of  5  to  6 weeks.
Eventually  the  need  of  qualifying the  CPU  with  the Multics
operating system will be reduced or eliminated.

   This type of  design/functional verification methodology could
be expanded to other types of equipment (e.g., GCOS 870's, SCU's,
IOM's and  FUTURE PRODUCTS) as  well.  It may  also be used  as a
field support  tool, reducing the cost  of maintaining systems in
the field and improving  customer satisfaction by reducing repair
time.  However, these facets will not be addressed in this MTB.



   Currently, the only tools available to verify/test a DPS8M CPU
are the Multics Hardware Acceptance Test (MHAT) and Offline T&Ds.
MHAT  is  a  set of  absentees  that  are run  under  the Multics
operating system to exercise the hardware.  They have a very high
probability of detecting a problem if  one exists in the CPU, but
have  no diagnostic  capability.  The  problem here  is getting a
fully functioning Multics system.  Often problems with the CPU
under test manifest themselves in hardcore or system utilities,
requiring a technician knowledgeable in the hardware as well as
Multics and dump analysis (if a dump can be taken).  Many of
these  hardware problems  result in  a 'trashed'  file system and
full  system restores  are the  norm, not  the exception.   It is
worth noting that the system failures that produce readable dumps
are preferred to failures in  MHAT which, in essence, merely tell
the technician  that two files  do not compare.  At  least with a
dump  there is  a chance that  the failing sequence  of events or
instruction is available.

   The problems  with Offline T&D  need not be  mentioned in this
MTB, except  to say that they  do not test the  CPU in the manner
that Multics  uses it and even  if they did, a  broken box should
not be used to diagnose itself.

   There are at least 8 test cells in New Products Test that have
a  full  complement  of  support equipment  required  to  run the
Multics MHAT package (2 SCU's, 1 meg each, 1 IOM, 1 PR400, 1 MTP
with  4 Tape  drives, 1  dual channel  MSP with  4 MSU400s  and 1
DN355).  There are  6 other cells that have  less equipment, that
are used for T&D (less some MSU400's, SCU, DN355).  This could be
reduced to one cell to run the MHAT package, 10 or more
test-cells each containing the CPU under test and one SCU with
1 MW of memory, and one Support System.

   The minimum(1) configuration of  the Multics Automated Support
System should be:
     2 CPU's
     2 SCU's 1 MW each
     2 IOM's
     2 MSP's dual channel, 12 MSU451's
     1 MTP, 2 or 3 tape drives
     2 FNP's

   It appears that half the system could handle all 10 cells with
reasonable response.


(1) The  Support  System  is  fully redundant  to  meet  the high
    availability requirements for this application.


The  process that  a CPU  now goes  through in  New Products Test
(NPT) is:

  a) AUTO II, an L6-to-DMP interface that tests the CPU in
     GCOS mode.  This should be re-hosted on MASS.

  b) GCOS offline T&D.  This should be eliminated.

  c) GCOS Operating system using  the GCOS3 Acceptance Test.
     This  should be  reduced to only  one pass  to test the
     functionality of the GCOS mode, and should be run after
     the Multics activities.

  d) Multics Offline T&D.  This should be eliminated.

  e) Multics  operating  system as  single CPU  system using
     MHAT.   This  is now  a  scheduled 24  hour  run.  This
     should be eliminated.

  f) Multics  operating system  with dual  CPU configuration
     using MHAT.   This is now  a scheduled 48  hour run and
     should be changed to a quad run of 24 hours after the
     MASS verification.  Eventually this test could also be
     eliminated.

   The DPS8 CPU common boards go through a preliminary step,
the Fairchild Fault Finder and Teradyne (automatic process-error
detection tools), which find most process errors on boards before
the unit actually enters a test-cell.  The Multics unique boards
are scheduled to go through this activity as wire-wrap boards are
converted to the hard-copper technology.



   This  will  be  a  test  of  the  functionality  of  the DPS8M
appending  unit,   and  the  control  terms   that  provide  this
functionality.  It is assumed that other tests (e.g., AUTO II)
will be developed or are in place to check data and address
paths.

   There is a parallel development effort  on the L6 AUTO II that
should be merged into the Multics Automated Support System.



   The Multics Automated Support System  will allow test cases to
be run in a very controlled Multics process type environment.

   This  will be  achieved by  interfacing to  the DPS8M  CPU DMP
through the  FNP on the  Multics Automated Support  System over a
9600  baud  RS232  interface.    The  application  programs  will
communicate  to  this interface  via  the standard  tty_  DIM.  A
special terminal type will be generated for the TTF to define the
characteristics of  the DMP interfaces.   The test cases  will be
down line loaded into the memory  connected to the CPU under test
through the DMP.

   Each test case  will be defined in a  separate directory.  The
directory name  will be the  same as the  test case name  and may
have an addname of Test_n, where n is the test number.

   The test directory will contain  an info_seg which defines the
test  case.   The information  contained  in the  info_seg  is as

   o The name of the test case.

   o The path name of the  segment that holds the actions to
     be taken at various steps, if any.

   o The  path name  of the  segment whose  contents are the
     stored machine condition blocks for each step.

   o The  length  of  the  resultant  data,  absolute memory
     address of the resultant data  as well as the path name
     of the segment that holds what this data should be.

   o The  indicator register  value and the  offset into the
     stack_4 segment where the  indicator register is stored
     by the test.

   o The  path  name  of   the  segment  where  the  linkage
     information for the test case is to be found.

   o If the test case uses  segment numbers > 24 octal, then
     the segment  paths that contains  the sdw and  ptws for
     these segments.

   o The number and path names  of the segments that make up
     the test case.  These segments are in loadable format,
     one that can be loaded into the test memory via the
     DMP.

   o The path name(s) of the combined segment(s).  The
     combined segments are the concatenation of the loadable
     segments.  This is an optimization of I/O's to the DMP.
     There will usually be only one combined segment.
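
   The fields above can be pictured as one record.  The Python
sketch below is purely illustrative:  the authoritative declaration
is info_seg.incl.pl1, and these field names are invented for
clarity, not taken from that include file.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCaseInfo:
    """Illustrative rendering of the info_seg contents.  The real
    layout is defined in info_seg.incl.pl1; names here are invented."""
    name: str                          # name of the test case
    action_seg: Optional[str]          # actions at various steps, if any
    machine_conditions_seg: str        # stored MC block for each step
    result_length: int                 # length of the resultant data
    result_address: int                # absolute address of that data
    expected_result_seg: str           # what the data should be
    indicator_value: int               # expected indicator register
    indicator_offset: int              # offset into stack_4 where stored
    linkage_seg: str                   # linkage information
    extra_sdw_ptw_segs: List[str] = field(default_factory=list)  # segnos > 24 octal
    loadable_segs: List[str] = field(default_factory=list)
    combined_segs: List[str] = field(default_factory=list)
```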

   At  some  future  time  this  directory  hierarchy  should  be
replaced by a MRDS data base system.

   Control and result checking will  be performed by stopping the
CPU at various points in the test sequence and comparing the data
obtained by the  DMP with a set of known  data.  This data may be
internal CPU registers and  controls, machine condition blocks or
data stored by the CPU, depending on the test case.

   Test cases should be written with a tool that will make adding
test cases  easy, easy enough  for the average  technician on the
factory  floor.  At  present an extended  eis_tester_ exists that
will  convert  the  scripts  used  by  eis_tester_  into loadable
segments for MASS, and one to convert alm routines (ala test_cpu)
is currently under development.

   The current methodology of testing a processor is to first run
tests of a very simplistic nature and then progress to more
complex tests.  The reasoning for this is that the processor
under test is required to decide whether it passed or failed the
test.  Therefore this building-block approach must be used.  The
MASS system will run complex test cases because the processor
under test will not be responsible for the pass/fail decision.

   A large number of these test cases will be developed to ensure
as much comprehensiveness  in the package as is  reasonable to do
so.  At the very least, these tests will verify that the CPU will
function in  the Multics environment.   This will be  achieved by
the addition of test cases as appropriate.

   When a test case fails, a  subset of the failing test case can
be run for  isolation.  This will be done  by restarting the last
known good state  of the CPU from the  stored machine conditions.
The small failing loop will be in memory as well.



   Fault vectors will  be set up in the  test environment similar
to the Multics operating system  fault vectors.  The fault vector
pairs will contain the SCU  and TRA instructions with indirection
through  ITS  pairs.  The  ITS  pair used  by  the SCU  will have
further indirection  to another ITS  pair.  The ITS  pair used by
the  TRA will  point to  segment 3|0  (fault_handler).  This will
transfer to code  that will then store the  pointer registers, OU
registers, and  EIS P&L.  This machine  condition block format is
similar  to  the  machine  condition frame  in  the  PDS  or PRDS
segments in Multics.

   A round robin buffer for  the machine condition blocks will be
maintained to assist  in problem analysis.  They will  be kept in
segment number  2 (round robin  MC buffer).  The  pointers to the
current, and  last blocks will  be kept in the  fault handler and
updated only by the MASS.
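
   The bookkeeping for the round robin buffer is simple index
arithmetic.  The sketch below (Python; the class name and block
count are invented for illustration) shows the current/last
pointer discipline described above:

```python
class MCRing:
    """Sketch of the round-robin machine-condition buffer kept in
    segment 2.  'current' and 'last' mirror the pointers held in the
    fault handler, which are updated only by MASS."""
    def __init__(self, nblocks: int):
        self.blocks = [None] * nblocks
        self.current = 0      # index of the next block to be filled
        self.last = None      # index of the most recently stored block
    def store(self, mc):
        self.blocks[self.current] = mc
        self.last = self.current
        self.current = (self.current + 1) % len(self.blocks)
```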

   The  restarting of  a fault such  as a directed  fault will be
similar  to  the  Multics  environment.  The  CPU  state  will be
restored via the machine conditions and the RCU will use the data
stored by the fault vector SCU instruction.

   To  start  a  test  the  data  needed  are  the  dseg  and the
page_tables for  each segment.  SDW's  and PTW's should  all have
the fault  bit off, except  for those segments  required to start
the test case.

   The  start_restart utility  will start  a test  by loading the
DSBR and other  registers in absolute mode and then  do a TRA via
an ITS  pair to enter  appending mode.  The user  ring is entered
via  a  RTCD.   The  test  case  will  be  entered  via  a  CALL6
instruction.   All  test  cases  should  use  the  same procedure
segment for starting.  Segment 7 has been selected for this purpose.

The general sequence of events for a test case will be the following:

  1. Load  the procedure  and the data  segments.  This will
     include the SDW's (DSEG) and the PTW's.  Also the start
     and restart code.

  2. Set the CPU to stop on all "hardware faults".

  3. Execute the "start command".

  4. Poll the DMP for a stop condition.


  5. Check the stop condition (dis, sof, soa) for validity.

  6. Check the machine condition block for correctness.

        a. If the fault data is correct then take the
           corrective action (e.g., unfault the SDW or PTW, or
           snap the link pair).  Check for any actions to be
           taken at this step.  Execute the "restart
           command" to continue the test case.

        b. If incorrect, indicate the areas of mis-compare
           and stop the test case.

  7. At the end of the test case check all data.
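
   Seen from the driver side, steps 4 through 6 form a simple
compare/fix/restart loop.  The Python sketch below uses an invented
stand-in for the DMP channel (FakeDMP) and invented names
throughout; it only illustrates the discipline, not a real
interface:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Step:
    expected: Dict[str, str]            # stored machine-condition data
    fix: Callable[["FakeDMP"], None]    # corrective action for this step

@dataclass
class FakeDMP:
    """Invented stand-in for the 9600 baud DMP channel."""
    stops: List[Dict[str, str]]         # MC block seen at each stop
    idx: int = 0
    def read_machine_conditions(self):
        mc = self.stops[self.idx]
        self.idx += 1
        return mc
    def restart(self):
        pass                            # would issue the "restart command"

def run_steps(dmp: "FakeDMP", steps: List[Step]) -> bool:
    """At every stop, compare the machine conditions against the stored
    block; on a match take the corrective action and restart, on a
    miscompare stop the test case and report failure."""
    for step in steps:
        mc = dmp.read_machine_conditions()
        if mc != step.expected:
            return False                # miscompare: stop the test case
        step.fix(dmp)                   # e.g. unfault the SDW or PTW
        dmp.restart()
    return True
```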

   An important  point is that  ALL verification is  performed by
the  Multics  Automated  Support System.   Only  the instructions
under test and the supporting  fault restarting code are executed
in the CPU under test.



   The  DMP  has two  interface  modes.  The  "VIP" mode  and the
"TRANSPARENT"  mode.  The  "VIP" mode  is an  ASCII character set
type  of interface.   The "TRANSPARENT"  mode is  an "octal coded
hex" type.

   The DMP can be attached via tty_.  Terminal types can be built
for the TTF to define the characteristics of the DMP:  one for
"TRANSPARENT" mode and one for "VIP" mode.

   In  "TRANSPARENT" mode  ("TM") all  commands must  be given in
hex.  The data returned for read type requests is in "octal coded
hex".   This is  a hex  byte (8 bits)  where the  first nibble (4
bits) has a value of 0-7  (octal) and the second nibble will have
the  most significant  bit set to  a one  and value 0-7  in the 3
least significant  bits if this byte  is data.  If the  byte is a
control  character then  the most  significant bit  of the second
nibble  may be  off.  This case  precludes the use  of a standard
ring_0 translation mechanism.  Therefore,  when data is expected,
the  returned data  must be  checked for  an error  code (control
character) and  then the data  translated to real  octal (machine
word).   The translation  will be done  with a  small alm program
that executes an MVT instruction 9 bit to 6 bit.

   Commands sent to the DMP must be in hex.  Data sent to the DMP
must be in  octal coded hex with the most  significant bit of the
second nibble  "off".  For commands,  the hex data  can be mapped
into ASCII ("bit for bit").  For data, the problem will be solved
by taking the  data to be sent in octal  and converting it with a
small alm program that will execute a MVT instruction 6 to 9 bit.
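
   In Python terms the two translations, which the real
implementation performs with MVT instructions in a small alm
program, look like the following sketch (function names are
invented):

```python
EOD = 0x7E  # End Of Data byte: 76 hex, 176 octal, "~" in ASCII

def encode_octal_digits(digits):
    """Pack pairs of octal digits into "octal coded hex" bytes for
    transmission to the DMP: the most significant bit of the second
    nibble is left off, as the text requires for sent data."""
    if len(digits) % 2:
        raise ValueError("need an even number of octal digits")
    out = bytearray()
    for hi, lo in zip(digits[0::2], digits[1::2]):
        if not (0 <= hi <= 7 and 0 <= lo <= 7):
            raise ValueError("octal digits must be in 0-7")
        out.append((hi << 4) | lo)
    return bytes(out)

def decode_received_byte(b):
    """Decode one byte read back from the DMP.  A data byte carries an
    octal digit (0-7) in the high nibble and 8+digit in the low nibble;
    anything else is treated as a control character (error code)."""
    hi, lo = b >> 4, b & 0x0F
    if hi <= 7 and lo & 0x8:
        return (hi, lo & 0x7)    # two octal digits of real data
    return None                  # control character, not data
```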



   MASS is intended for use by the NPT technician and not for the
"normal Multics user".  The user interface will therefore be
designed for use by this technician, whose needs are much
different from those of a normal Multics user.



   Test cases must be completely defined in an info_seg contained
in a directory that has the same name as the test.  This info_seg
is defined in info_seg.incl.pl1.  The most important parts of the
info_seg are the path names to be loaded into the "test memory".

   The segments to be loaded  must be transmitted in "octal coded
hex".   The  DMP expects  the first  byte it  receives to  be the
command (e.g., write memory), the next six bytes (12
nibbles, 12 octal digits, one machine word) to be the address,
followed by the data to be loaded, six bytes for each machine
word of 36 bits, and the End Of Data byte (EOD) to terminate the
transmission.

   The utility that loads segments into the 'test memory' expects
that the  format of a  segment to be  loaded will be  as follows.
The first word  of the segment will be the  total number of ASCII
characters that will be sent to  the DMP.  The first character of
the second  word must be  the ASCII representation of  the DMP TM
mode  write memory  command (16h)  or (026o).   Starting with the
second  character of  the second word  for six  characters is the
"octal  coded hex"  of the  address to  be loaded.   Six of these
characters  will  represent  one  DPS8M machine  word.   The next
character,  fourth character  of the  third word,  will start the
"octal coded hex" representation of machine words, six characters
each.  At  the very end of  all this is one  character, EOD.  The
EOD is a 76h (176o or "~" in ASCII).  With this format the DMP
utility program can just do a put_chars with a pointer to the
data and the character count taken from the first word.
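
   The framing just described can be sketched as follows.  This is
Python for illustration only; helper names are invented, and only
the raw byte stream is shown, not the count-word wrapper of the
stored segment:

```python
WRITE_MEMORY = 0x16   # DMP TM-mode write-memory command (026 octal)
EOD = 0x7E            # End Of Data byte, "~" in ASCII

def octal_digits(value: int, n: int) -> list:
    """Split value into n octal digits, most significant first."""
    return [(value >> (3 * i)) & 0o7 for i in range(n - 1, -1, -1)]

def coded_hex(digits: list) -> bytes:
    """Pack octal-digit pairs into octal-coded-hex bytes, with the MSB
    of the second nibble off as required when sending data."""
    return bytes((a << 4) | b for a, b in zip(digits[0::2], digits[1::2]))

def write_memory_frame(address: int, words: list) -> bytes:
    """One TM-mode write-memory transmission: the command byte, six
    address bytes (12 octal digits), six bytes per 36-bit machine word,
    and the terminating EOD byte."""
    frame = bytearray([WRITE_MEMORY])
    frame += coded_hex(octal_digits(address, 12))
    for word in words:
        frame += coded_hex(octal_digits(word, 12))
    frame.append(EOD)
    return bytes(frame)
```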

   The  loading  of a  test  case into  the  test memory  will be
optimized  by  concatenating  all  the loadable  segments  into a
combined segment.  This will allow the  test case to be loaded in
one call.



There  are seven  preset segments  that must  be included  in the
info_seg as segments to load.   These segments must be loaded and
their segment numbers and memory addresses are reserved.  They are:

     seg0 (dseg)
     seg1 (fault_vectors)
     seg3 (fault_handler)
     seg4 (linkage_seg)
     seg5 (start_restart)
     seg6 (PTW_seg)

   Note the  lack of seg2 (machine_conditions).   This segment as
well  as its  memory address are  reserved.  This  is the segment
where the round robin machine  conditions are kept.  This segment
should not be loaded by the test case.

   These seven  segments comprise the  "Multics Environment" that
the test  cases will run  in.  Segments 3  and 5 are  the "ring_0
procedure" segments.  The others  are data segments.  The default
operating ring number that a test case will run in is 4.

   Segment  number 0  is the descriptor  segment.  It  is a paged
segment  that  contains  the  SDW'S  that  define  the  test case
environment.  Segment numbers 0 through 6 have the SDW faulted bit
on.  The rest  of the SDW'S have their faulted  bit off.  This is
done so that the test case can be started.  After the first fault
condition is seen  the faulted bits are at  the discretion of the
test case.  However  it is recommended that the  faulted bits for
the first  seven segments not  be changed, with  the exception of
the linkage_seg, segment number 4.

   The fault vectors are contained in segment number 1; they are
all identical.  However, the SCU  instruction will store data via
an indirect  ITS pair pointing  into segment number  3.  This ITS
pair  will be  managed by  the utility  in the  Multics Automated
Support System that deals with the machine condition block.

   The machine  condition block will be  stored in segment number
2.   This will  be a  round robin  buffer managed  by the Multics
Automated Support System.  The data  of a machine condition block
will be much like a standard machine condition block as stored by
the Multics fault handling software.   The exception will be that
the miscellaneous area where the fault time is stored will not be
filled in.

   Segment 3 is the fault handler.  This code will be entered via
the  fault  vectors  to  complete  the  storing  of  the  machine
condition  block.   At  this  point  a  DIS  instruction  will be
executed and the CPU will stop.  This segment will also be
responsible for restarting faults.  This part of the code will be
transferred  to from  the start-restart segment.   Segment 3 will
contain  the  pointers  into  the machine  condition  round robin
buffer.  These  pointers will be updated  directly by the Multics
Automated Support System.

   The linkage segment will be segment  number 4.  It can be used
by  the test  case to store  linkage pairs.   The normal sequence
will  be  that  the pair  is  faulted and  the  Multics Automated
Support System will store the correct ITS pair to snap the link.

   Segment 5 will be the code  that really starts or restarts the
test case executing in the CPU under test.  In the case where the
test  case  is  started,  this segment  will  start  executing in
ABSOLUTE mode via a DMP TM mode TRA command to load the DSBR,
and the CPU registers.  At this point a TRA instruction, indirect
via an ITS pair will be  executed to cause the processor to enter
the  APPENDING  mode.  An  EPP0  instruction through  the linkage
segment  will cause  a linkage  fault.  The  utility in  the test
support system  will convert this  into an ITS  pair.  After this
has been resolved and restarted,  a RTCD instruction will set the
ring level to that defined in the snapped link's ITS pair.  A
CALL6 modified  by PR0 will  be executed to start  the test case.
The  case  of restarting  the  test case  after  a fault  will be
performed by  another DMP TM  mode TRA command.   This will cause
the CPU to enter the ABSOLUTE mode, reload the DSBR and execute a
TRA through  an ITS pair  to the restart  part of segment  3, the
fault  handler,  to restore  the  machine state  and  restart the
fault.  This  same mechanism is  used for restarting  the linkage
fault at start time.

   The page tables for all the  paged segments will be in segment
6.   The  initial  Page Table  Words  (PTW's) will  all  have the
directed fault  bit set to  0 and the  fault type set  to 1.  The
exceptions will be  the PTW's for segment 0  (dseg) and segment 4
(linkage_seg).   The  first PTW  for each  segment will  have the
fault bit  on.  After the  test case has  started execution these
PTW's could be altered, depending upon the test case control
actions.



Segment numbers:

    0  dseg               (absolute address    6000 paged)
    1  fault_vectors      (absolute address       0 unpaged)
    2  machine_conditions (absolute address    1000 unpaged)
    3  fault_handler      (absolute address     400 unpaged)
    4  linkage_seg        (absolute address    4000 paged)
    5  start_restart      (absolute address     204 unpaged)
    6  ptw_seg            (absolute address   10000 unpaged)
    7  test_case          (absolute address   20000 paged)
    10 -> 17 any data     (segment 10 starts at 220000 paged)
    20 stack_0            (absolute address 2220000 paged)
    21 stack_1            (absolute address 2420000 paged)
    22 stack_2            (absolute address 2620000 paged)
    23 stack_3            (absolute address 3020000 paged)
    24 stack_4            (absolute address 3220000 paged)

 Segments 7 -> 24 are restricted to 64k
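
   The table is internally consistent:  segments 7 through 24
(octal) sit contiguously at 64K-word (200000 octal) intervals
starting at 20000.  A Python sketch of resolving a -segno/-offset
pair of convert_oct_hex into an absolute address, assuming that
layout (the function name is invented):

```python
# Absolute base address (octal) of each reserved segment, taken
# from the segment-number table above.
SEG_BASE = {0: 0o6000, 1: 0o0, 2: 0o1000, 3: 0o400,
            4: 0o4000, 5: 0o204, 6: 0o10000}

# Segments 7 through 24 octal are each restricted to 64K words
# (200000 octal) and are assumed contiguous from 20000 octal,
# which reproduces every address in the table.
for segno in range(0o7, 0o25):
    SEG_BASE[segno] = 0o20000 + (segno - 0o7) * 0o200000

def seg_addr(segno: int, offset: int = 0) -> int:
    """Resolve a -segno/-offset pair into an absolute memory address."""
    return SEG_BASE[segno] + offset
```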

 linkage segment format:

 offset 0 => link pair used by start_restart.
 offset 2 => link pair for procedure segment of test case.
 offset 4 => start of link pairs for the test case if needed.



build_action_seg, bas

Syntax:  build_action_seg TEST_CASE_PATH

   This command will  build the segment that holds  the action to
be  performed at  a selected step.   This command  will query the
user on  the actions to  build.  The action can  be selected from
the following:

   page_fault:   This will  cause the  ptw of  the indicated
     segment  and computed  address to be  faulted.  This is
     done at the action step indicated.

   seg_fault:   This  will cause  the  sdw of  the indicated
     segment to be faulted.

   deactivate:   This will  cause the  sdw and  ptws for the
     segment selected to be faulted.

   ring_alarm:  This  will cause the ring  alarm register to
     be loaded with the value of 1.

   new_sdw:   This will change the value of the selected
      segment's sdw to the new values.  The new values will
      be the ones used to build the sdw after any segment
      faults on this segment.

   restore:  Reload a segment into the test memory.

   unlink:  This will cause a link  pair to be reverted to a
     link fault state from the ITS pair state.

   no_ldbr:   This will  set a  flag which  will inhibit the
     loading of the dbr register  after each step, until the
     ldbr key word is encountered.

   ldbr:  This will reset the flag that inhibits the loading
     of the dbr register.  This is the normal setting.

NOTE:  More than one action can be done at a step.


convert_oct_hex, coh

Syntax:  convert_oct_hex IN_PATH OUT_PATH -control_args

   Performs the conversion from octal to "octal coded hex",
calculates the total number of ASCII characters that are in the
new segment, and inserts the correct control bytes.


       IN_PATH is the path name of the segment to be converted.

        OUT_PATH is the path name of the output segment.


 -in_path path
     use path as the input path name.

 -out_path path
     use path as the output path name.

 -in_start offset
     start  converting  at the  given octal  offset of  the input
     segment.  (DEFAULT 0).

 -mem address
     address is  the absolute memory location  where this data is
      to start.  Incompatible with -segno and -regs.

 -range nn
     convert nn number of words in octal.  (DEFAULT is calculated
     from the bit count of the input segment).

 -segno nn
     this  will generate  the absolute  memory address  to be the
     absolute  memory address  of segment  nn, where  nn is octal
      number 0 - 24.  Not compatible with -mem or -regs.

 -offset nn
     if -segno is given then the nn octal offset will be added to
     the absolute memory address of the segment.  Only compatible
     with -segno.

 -regs
      Used for generating a segment to overlay the default values
      used by the start segment (the range will be 30 octal).  The
      pointer registers must be first, followed by the arithmetic
      registers.  This is not compatible with -mem, -segno, or
      -offset.


     One  and only  one argument may  be specified  to define the
     absolute address.



Syntax:  dmp$add_attach

   This command will add the attach description used by the
standard Multics module tty_ to a file; the file path name is
hard coded.  tty_ is the module that interfaces to the FNP
channel.  The attach description for a direct-connected DMP would
be:

     tty_ a.h001 -resource "line_type=none"

      a.h001 is the fnp channel 01 on hsla 0 of fnp a.

   Other tty_  descriptions can be  built.  See tty_  in "Multics
Subroutines and I/O Modules" AG93-04.

   dmp$add_attach will prompt with
     "Input CPU name -> ".
At this point the symbolic name of the CPU under test is
entered.  Next the prompt will be
     "Input CPU attach description "
at which time the tty_ attach description should be entered.  The
program will loop back to "Input CPU name -> ".  To exit the
program, type ".q".


   List the names and attach descriptions built by
dmp$add_attach.


mass_build, m_b

Syntax:  m_b {PATH} {-control_args}

   Converts the object  segment PATH into an info_seg  for a test
case.   If  the PATH  is  not supplied  on  the command  line the
command will query the user.  The command will query the user for
the  parent  directory,  if  it  is  not  supplied  as  a control
argument.  The test case directory  will be created and named the
same as the segment name in PATH.


     This   will  inhibit   the  printing   of  the  linkage
     information in the object segment.
 -ring N
     Set the operating ring number  for this test case to N.
     (Default 4).
     This will  allow the user to  select the segment number
     for each  segment and build  the sdw and  ptws for each
     segment.   This is  the only  way that  segment numbers
     larger than 24 octal can be generated.
      This will indicate the parent directory for the test
      case.



Syntax:  mass_et PATH {-control_args}

   This  command  will  convert  a  script  compatible  with  "et
scripts".  The PATH is the path name of the segment that contains
the  script.  The  test case name  will be  the instruction under
test concatenated with its position  in the segment PATH.  If the
-test_dir argument is not given  the command will function as the
Multics command "et".


 -do N
     Converts only the Nth test in the segment PATH.
 -from N
     Starts converting test cases at the Nth one in the
     segment PATH.
 -lg, -long
     Will print a description of the test case.
     This will  not store the  test case.  Only  useful with
     the -nox control argument.
 -to N
     Stops converting after the Nth test case in the segment
     PATH.
 -test_dir PATH
     This indicates the parent directory for the test case.



Syntax:  TEST_CASE_PATH {-control_args}

Function:
   Prints information on the selected test case.

TEST_CASE_PATH is the path name of the test case.


 -action, -act
     Print  the action  steps and the  type of  action to be
     taken at that step.
     List the path  names of the segments that  will be used
     as  the  initial  loadable segments.   This  segment is
     comprised  of the  loadable segments in  a format which
     will allow the complete test case to be loaded with one
     IO to the DMP.
     Print all the stored condition steps for this test case
     as well as the machine conditions interpreted and octal
     Print  all  the stored  condition  steps for  this test
     case, but only display a  brief form of the interpreted
     scu data from the machine conditions.
 -data
      Display the test case result data.
     Query the user to change the loadable segment paths.
 -linkage, -link
     Display the linkage pairs used by this test case
 -list, -ls
     List the loadable segments
 -long, -lg
     Used  with  the  -list  and -seg  options  to  dump the
     loadable segments.  If -list then the sdws and ptws are
     also displayed.
 -ptw
      Display the sdws and ptws of the segment numbers
      greater than the standard (24 octal), if any.
     Like -ptw except only the sdws are displayed.
 -seg N
     Display  the  Nth loadable  segment.  The  -long option
     will dump the segment.

Default mode:  -act -data -linkage -list.