Accessing mass data under UNIX

Mass data are primarily kept on MVS. By FOPI convention, raw and DST data are stored in such a way that MVS and the different UNIX systems can share the same data pool. See the corresponding section for how to copy data from MVS. There are in principle three ways to access mass data.

  1. On disks connected to some UNIX server (including locally attached disks):
    raw/i/file /data01/run_0815.lmd              # accessing raw data
    dst/scan   /data01/run_0815.dst          ! Y # overview  of DST data 
    dst/i      /data01/run_0815.dst          ! Y # DST input
    dst/o      /data01/run_0815.dst <run-no> ! Y # DST output
    Please do not keep mass data in user file systems, and do not place mount points to mass storage there.
  2. On computers equipped with SCSI tape drives (Exabyte, DAT, ...), data can be read from ANSI-labelled (level 3) tapes:
    raw/i/file    TAPE:run_0815.lmd              # accessing raw data
    dst/scan      TAPE:run_0815.dst          ! Y # overview  of DST data 
    dst/i         TAPE:run_0815.dst          ! Y # DST input
    dst/o         TAPE:run_0815.dst <run-no> ! Y # DST output
    where you first have to define the TAPE environment variable:
    export TAPE=/dev/rmt0      # AIX, first attached tape
    export TAPE=/dev/rmt5h     # ultrix, tape at SCSI ID 5
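    For scripts that must run on both systems, the two definitions above can be folded into one helper. This is a minimal sketch: the AIX and Ultrix device names are the ones quoted above, while the fallback branch is an assumption.

```shell
# select_tape prints the conventional tape device for a given OS name
# (as reported by 'uname -s'). The AIX and Ultrix device names are taken
# from the text above; the default branch is an assumption for other systems.
select_tape() {
  case "$1" in
    AIX)    echo /dev/rmt0  ;;  # AIX, first attached tape
    ULTRIX) echo /dev/rmt5h ;;  # Ultrix, tape at SCSI ID 5
    *)      echo /dev/rmt0  ;;  # fallback -- adjust for your drive
  esac
}
export TAPE="$(select_tape "$(uname -s)")"
```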
  3. Via NFS mount of the MVS data pool. This is convenient because it allows you to read data directly on your local machine without explicitly copying anything to disk or tape. However, it has some inconveniences due to the incompatibilities between the MVS and UNIX operating systems. Here is how to use it:
    1. Create "high level" mount points on your local machine (only once)
      mkdir /kp09
      mkdir /kp09/dstx
      mkdir /kp09/lmdv
      Please do not use mount points in user directories! They are not necessary, and they cause nothing but trouble, in particular when user file systems are to be backed up.
    2. To mount the MVS data pool, there is a shell script in $FOPI/bin. As superuser, run:
      mvsdata.sh
      At present these mounts are not permanent; they do not survive a shutdown of your local machine or of the NFS server on MVS.
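      Because the mounts are not permanent, a batch job may want to verify them before starting. A hedged sketch; checking that the mount point exists and is populated is an assumption, not something mvsdata.sh itself does.

```shell
# ensure_mvs_mount checks that a mount point exists and contains entries,
# and complains otherwise. A populated directory is taken as a sign that
# the NFS mount is up -- an assumption; one could also inspect 'df' output.
ensure_mvs_mount() {            # ensure_mvs_mount <mount-point>
  if [ -d "$1" ] && [ -n "$(ls -A "$1" 2>/dev/null)" ]; then
    return 0                    # mount point exists and is populated
  fi
  echo "MVS data pool not mounted at $1; run mvsdata.sh as root" >&2
  return 1
}
```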
    3. In contrast to pure UNIX NFS, MVS requires an additional step for security reasons: you have to perform a kind of login (similar to FTP). If you have not already done so for FTP, now is the time to create a file ".netrc" in your home directory containing the lines
      machine mvs128 login <mvs-account> password <mvs-password> # via token ring, RS/6000 only
      machine mvs    login <mvs-account> password <mvs-password> # via ethernet
      To authorize yourself, just type
      mvslogin mvs128
      or
      mvslogin mvs
      As with a normal TSO session, there is a timeout of several hours if you do not perform any NFS file operations, so it may be necessary to repeat that command from time to time, or at the beginning of a batch command file.
    4. The file name correspondence is, for example:
      MVS:   KP09.LMDV.S018.RAW0815
      UNIX: /kp09/lmdv/s018.raw0815
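      This mapping can be expressed as a small shell helper. A sketch only: the rule (lowercase everything, turn the first two qualifiers into directories) is generalized from this single example and may not hold for every dataset.

```shell
# mvs2unix maps an MVS dataset name to the corresponding UNIX path under
# the NFS mount, e.g. KP09.LMDV.S018.RAW0815 -> /kp09/lmdv/s018.raw0815.
# The rule (lowercase; first two qualifiers become directories) is an
# assumption generalized from the example above.
mvs2unix() {
  lc=$(echo "$1" | tr 'A-Z' 'a-z')   # MVS names are upper case
  q1=${lc%%.*}; rest=${lc#*.}        # first qualifier  -> top directory
  q2=${rest%%.*}; tail=${rest#*.}    # second qualifier -> subdirectory
  echo "/$q1/$q2/$tail"
}
```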
    5. Another difference from UNIX is that MVS datasets can reside either on disk or on 3480 tape cartridges. In the current implementation a tape-migrated dataset is recalled when it is requested explicitly, but the NFS client does not wait for the recall to complete. To work around this, in particular for batch processing, I have provided two UNIX commands:

       hsmtrig <filename>        # triggers recall and returns control

       hsmwait <filename> [ <timeout> [ <timeinc> ] ]
                                 # Waits for the specified file to be
                                 # recalled.
                                 # Fails after <timeout> seconds
                                 # have expired (optional).
                                 # Retries every <timeinc> seconds
                                 # (optional).
      These commands allow an asynchronous recall and a synchronization at the point where you actually need the dataset. See the example below.
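      As described, hsmwait behaves like a polling loop. The following is an illustration of those semantics only, not the implementation shipped in $FOPI/bin; checking plain readability and the default intervals are assumptions.

```shell
# wait_for_file illustrates the described semantics of hsmwait: poll until
# a file becomes available, fail once <timeout> seconds have expired,
# retry every <timeinc> seconds. Checking readability stands in for the
# real recall-state check; the defaults (600 s / 30 s) are assumptions.
wait_for_file() {               # wait_for_file <filename> [timeout] [timeinc]
  file=$1; timeout=${2:-600}; timeinc=${3:-30}; waited=0
  while [ ! -r "$file" ]; do
    [ "$waited" -ge "$timeout" ] && return 1   # timeout expired
    sleep "$timeinc"
    waited=$((waited + timeinc))
  done
  return 0
}
```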
    6. Finally, here is an example of how to use all of this in a PAW session.
       shell mvslogin mvs128                       # authorize yourself
       shell hsmtrig /kp09/dstx/s018c.dst0681      # trigger  1st file
       shell hsmwait /kp09/dstx/s018c.dst0681 600  # wait for 1st, max 600s
       shell hsmtrig /kp09/dstx/s018c.dst0682      # trigger  2nd
       dst/i         /kp09/dstx/s018c.dst0681 ! Y  # open     1st

       dst/a 10000                                 # analyze  1st
       shell hsmwait /kp09/dstx/s018c.dst0682 600  # wait for 2nd
       shell hsmtrig /kp09/dstx/s018c.dst0683      # trigger  3rd
       dst/i         /kp09/dstx/s018c.dst0682 ! Y

       dst/a 10000                                 # analyze  2nd
       shell hsmwait /kp09/dstx/s018c.dst0683 600  # wait for 3rd
       shell hsmtrig /kp09/dstx/s018c.dst0684      # trigger  4th
       dst/i         /kp09/dstx/s018c.dst0683 ! Y

       dst/a 10000                                 # analyze  3rd
       shell hsmwait /kp09/dstx/s018c.dst0684 600  # wait for 4th
       dst/i         /kp09/dstx/s018c.dst0684 ! Y
       dst/a 10000
      This method overlaps the local analysis of the n-th file with the recall delay on MVS for the (n+1)-th file and hence should improve your overall throughput. The timeout value of 600 seconds is a guess; you will probably have to gain your own experience.
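      The unrolled session above can be generalized to a shell loop over an arbitrary file list. hsmtrig and hsmwait are the commands introduced above, while the loop itself and the per-file analyze command are illustrative assumptions; inside a single PAW session you would unroll the steps as shown.

```shell
# prefetch_run triggers the recall of file n+1 before analyzing file n,
# generalizing the hand-unrolled PAW example. The analyze command is a
# placeholder supplied by the caller; the 600 s timeout is the guess
# used in the example above.
prefetch_run() {                # prefetch_run <analyze-cmd> <file>...
  cmd=$1; shift
  next=$1
  hsmtrig "$next"               # start recall of the first file
  while [ $# -gt 0 ]; do
    cur=$next; shift
    next=$1
    [ -n "$next" ] && hsmtrig "$next"  # overlap: recall n+1 while analyzing n
    hsmwait "$cur" 600 || return 1     # synchronize before opening file n
    "$cmd" "$cur"                      # analyze the current file
  done
}
```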
    7. The I/O performance of NFS with a null analysis amounts to several hundred kBytes/sec; I have seen 450 kB/sec when scanning raw data on an RS/6000-560, which is comparable to a local tape. With a complicated analysis there should be no significant difference whether you scan from disk, from tape, or via the network.
    8. Troubleshooting:


fopi
Tue May 28 15:33:35 CST 1996