VSAM stands for Virtual Storage Access Method. It is IBM's high-performance access method, which allows you to access files of different organizations such as sequential, indexed, relative record and linear datasets.
Features of VSAM
VSAM is one coherent file storage system used to store and retrieve data. It is not a database management system like IDMS or DB2, and it does not provide for relationships among the data. Existing databases like IMS or DB2 may, however, be implemented using VSAM.
VSAM is not a programming language, but you can access VSAM datasets through programming languages like COBOL or PL/I. It is not a communication system like VTAM or CICS. It has no equivalent for a 'PDS' type of file organization.
Advantages of VSAM
Protection of data against unauthorized access through a password facility.
Cross-system (MVS & VSE) compatibility. VSAM datasets can be exported from and imported to MVS and VSE systems.
Device independence (access via catalog). The application programmer need not be concerned with block size, volume and other control information, as access to a VSAM dataset is always through the catalog and all control information is stored in the catalog entry of the dataset.
IDCAMS commands can be included in JCL to handle VSAM datasets.
Types of VSAM Datasets
Clusters
VSAM files are often called clusters. A cluster is the set of catalog entries that represent a file. A cluster consists of one or two components. All VSAM datasets have a data component, in which the data records are placed. A KSDS has an additional index component, which contains the indexes used to access records in the data component. ESDS, RRDS and LDS datasets have only a data component and no index component.
VSAM clusters are categorized into 4 types based on the way we store and access the records:
ESDS Entry Sequenced dataset.
These are sequential datasets that can be read in the sequence in which they were created. Records can be added only to the end of the dataset.
KSDS Key Sequenced dataset.
These datasets are stored in sequence of some key field in the record. The data component and index component are separated. The keys are stored in a separate index and records are accessed through the index. Individual records can be accessed randomly on the basis of the record key. Locating the record is a two stage process.
• First, search for the key in the index
• Use the information in the index to locate the record
RRDS Relative record dataset.
These datasets associate a number with each record. There is no key field; records are accessed by their relative record number, i.e. the relative position of the record in the dataset.
LDS Linear dataset. These datasets consist of a stream of bytes that is read and written in 4K blocks and addressed by Relative Byte Address.
VSAM history
VSAM was introduced in 1973. This version had only Entry Sequenced Datasets and Key Sequenced Datasets. In 1975 Relative Record Datasets and alternate indexes for KSDS were added. In 1979 DF/EF VSAM was introduced with the Integrated Catalog Facility (ICF).
DFP/VSAM Version 1 was introduced in 1987 to run under the MVS/XA architecture. DFP/VSAM Version 2 introduced Linear Datasets (LDS).
DFP/VSAM Version 3 was introduced to run under the MVS/ESA architecture. In 1991 Version 3.3 added support for variable-length records in an RRDS.
Back to VSAM index
2. VSAM Catalogs
VSAM is totally catalog-driven. Catalogs are special-purpose files residing on DASD (Direct Access Storage Device) that serve as a central repository for information about all datasets under their control. There are two types of catalogs:
• Master catalog
• User catalog
There is only one master catalog per system. The entries in the master catalog may point to VSAM or non-VSAM datasets, user catalogs, system datasets or other objects. User catalogs contain the same type of information as the master catalog. All user catalogs must be cataloged in the master catalog. Access to a dataset can only be made through a master or user catalog; therefore all VSAM datasets have to be cataloged. Non-VSAM datasets can also be cataloged. Catalogs are protected by RACF.
Figure 2.1 VSAM Catalog
Catalogs maintain the following information:
• Name and physical location of datasets
• Password information required to access protected datasets
• Statistics about datasets, for example the number of records added, read or deleted and the number of Control Interval/Control Area splits
• Information about the dataset itself, for example ESDS, KSDS, RRDS, CI size, key length
• Location of the catalog recovery area
VSAM records
VSAM records can be fixed or variable length. Records can also be spanned.
VSAM space allocation
VSAM space allocation depends on whether the dataset is cataloged in an ICF catalog or the older VSAM-type catalog. For VSAM datasets cataloged in the newer ICF-type catalogs, dedicated space is allocated dynamically when the cluster is created with the DEFINE CLUSTER command. Each VSAM dataset cataloged in an ICF catalog has its own VTOC entry. These VSAM datasets can have 1 primary and 122 secondary allocations, unlike OS datasets, which can have only 1 primary and 15 secondary extents on a volume.
VSAM space management
VSAM maintains detailed information in its catalogs about the DASD space allocated to VSAM files. This allocation information stored in the catalog is more comprehensive and flexible than the equivalent information stored for a non-VSAM file in the VTOC.
Sub-allocation
Once space has been allocated, VSAM has complete control over subsequent allocations within that space. Within that space, VSAM can create suballocated files. Whenever a suballocated file needs to be created, extended or deleted, VSAM uses its own space management facilities.
Unique allocation
Alternatively, an entire VSAM space can be allocated to a single VSAM file. In that case allocation for the file, called a UNIQUE file, is managed by DADSM rather than by VSAM. Allocation information for unique files is maintained in two places: the VSAM catalog entry for the file and the VTOC entry for the space that contains the unique file.
The figure below shows two DASD volumes. The first volume has a VSAM dataspace containing two sub-allocated files. Notice that there is unused space within the dataspace too; however, that space is not available to non-VSAM files because it is already under VSAM's control. The second DASD volume contains two unique VSAM datasets. All of the unused space on the volume is available to both VSAM and non-VSAM datasets. Under VSE/VSAM and OS/VS VSAM most VSAM datasets are sub-allocated. Under ICF, there is no VSAM space; all VSAM files are unique.
Figure 2.2 Space Allocation
Back to VSAM index
3. Inside VSAM Datasets
Control Interval
A control interval is the unit of data VSAM transfers between virtual and disk storage. It is similar to the concept of blocking in non-VSAM files. Each control interval can contain more than one logical record.
The size of a CI must be between 512 bytes and 32K. Up to 8K it must be a multiple of 512; beyond that it must be a multiple of 2K. The CI size is specified at file creation time.
For the index component, the CI size is 512, 1K, 2K or 4K bytes.
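For example, taking the rule above with assumed sizes: 3584 bytes is a valid data CI size because it is 7 x 512 and below 8K, whereas 9216 bytes is not valid because sizes above 8K must be a multiple of 2K, so the nearest valid choices are 8192 and 10240.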
A Control Interval consists of records, free space and control information fields, as shown below.
Figure 3.1 Contents of Control Interval
In the Control Interval shown above, Rec1, Rec2 and Rec3 are records. Free space is where new records can be inserted.
Figure 3.2 Contents of Control Field
The Control Interval Descriptor Field (CIDF) contains information about the available space within the CI. A Record Descriptor Field (RDF) contains the length of a record and how many adjacent records are of the same length. For variable-length records there is one RDF for each record in the CI.
For fixed-length records there are only two RDFs per CI: one RDF specifies the length of the records and the second specifies how many records are in the CI. Each RDF is 3 bytes long.
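As a worked illustration with assumed numbers: in a 4096-byte CI holding 80-byte fixed-length records, the control information is a 4-byte CIDF plus two 3-byte RDFs, i.e. 10 bytes, leaving 4086 bytes for data, so at most 51 records fit in one CI.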
VSAM groups control intervals into contiguous, fixed-length areas of storage called Control Areas. The maximum size of a CA is 1 cylinder. You can also specify free space at the CA level. The total number of CIs per CA in a cluster is determined by VSAM.
Figure 3.3 Control Area
Spanned Records
Spanned records are records larger than the specified CI size; that is, they span more than one CI, so one spanned record may be stored in several CIs. Each CI that contains a segment of a spanned record has two RDFs: the right RDF gives the length of the segment and the left one gives the update number of the segment. Spanned records can exist only in an ESDS or KSDS.
A CI that contains a record segment of a spanned record contains no other data. Records can span Control Intervals but not Control Areas. For KSDS the entire key field of the spanned record must be in the first Control Interval.
Figure 3.4 Spanned Record
ESDS
ESDS is a sequential dataset. Records are retrieved in the order in which they are written to the dataset. Additions are made always at the end of the file. Records can be retrieved randomly by using RBA(Relative Byte Address). RBA is an indication of how far, in bytes, each record is displaced from the beginning of the file.
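For example, assuming fixed 100-byte records that all fit in the first control interval: the first record is at RBA 0, the second at RBA 100 and the tenth at RBA 900.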
KSDS
In a Key Sequenced Dataset, logical records are placed in the dataset in ascending collating sequence by a key field.
Rules for key
• Key must be unique in a record
• Key must be in same position in each record and key data must be contiguous
• When a new record is added to a dataset it is inserted in its collating sequence by key
A KSDS consists of two components: an index component and a data component.
DATA component: contains Control Areas, which in turn contain Control Intervals, as shown in Figure 3.5.
Figure 3.5 Contents of Control Area
KSDS Structure
Figure 3.6 Contents of KSDS Index
The first level of index is called a Sequence set. The Sequence set consists of Primary keys and pointers to the Control Intervals holding records with these primary keys. The Sequence set is always in sequential order of the primary keys. The Control Intervals may be in any order. VSAM uses the Sequence Set to access records in the KSDS sequentially.
The index component is a separate entity with a different CI size , a different name and can be stored on a different volume.
Control interval splits can occur in Indexes also
Figure 3.7 Contents of Sequence Set
Figure 3.8 Contents of Index Set
Figure 3.9 Inserting a new record into a KSDS
Figure 3.10 Inserting a new record into a full CI
Figure 3.11 After Control Interval Split
Figure 3.12a Effect of Control Interval split on Sequence Set
Figure 3.12b Effect of Control Interval split on Sequence Set
Back to VSAM index
4. IDCAMS COMMANDS
You can use the IDCAMS utility program:
1. To create VSAM datasets
2. To list, examine, print, tune, back up, and export/import VSAM datasets.
The IDCAMS utility can be invoked in batch mode with JCL or interactively with TSO commands. With JCL you can print or display datasets, system messages and return codes. Multiple commands can be coded per job. You can use IF-THEN-ELSE statements to execute commands selectively based on condition codes returned by previous commands.
Listed below are the IDCAMS commands to be discussed in this course
• DEFINE
• MODAL COMMANDS
IF
SET
PARM
• BUILDINDEX
• REPRO
• PRINT
• DELETE
• VERIFY
• IMPORT/EXPORT
• ALTER
• LISTCAT
Example 4.1 below is a skeleton JCL for executing IDCAMS commands. The PGM parameter specifies that the program to be executed is the IDCAMS utility program. The statements that follow SYSIN DD * are IDCAMS commands. The end of the input data is indicated by /*.
Optionally, JOBCAT and STEPCAT statements may be coded to indicate the catalog names for a job or step in which the concerned datasets are cataloged.
//jobname  JOB (parameters)
//stepname EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
[//ddname  DD DSN=datasetname,DISP=SHR|OLD]
//SYSIN    DD *
  IDCAMS commands, coded between columns 2 and 72
/*
//
Optionally:
//JOBCAT  DD DSN=catalogname,DISP=SHR
//STEPCAT DD DSN=catalogname,DISP=SHR
Example 4.1 JCL for executing IDCAMS commands
Format of IDCAMS command
verb object (parameters)
Every IDCAMS command starts with a verb, followed by an object, which takes some parameters. In Example 4.2, DEFINE is the verb and CLUSTER is the object, whose NAME parameter supplies the dataset name DA0001T.LIB.KSDS.CLUSTER.
DEFINE CLUSTER -
      (NAME(DA0001T.LIB.KSDS.CLUSTER) -
      CYLINDERS(5 1) -
      VOLUMES(BS3013) -
      INDEXED -
      )
Example 4.2 Creating a cluster
Comments:
Comments in IDCAMS can be specified in the following manner
/* comment */
or
/* -----
*/
IDCAMS return codes
The IDCAMS Commands return certain codes which have the following interpretation
Condition code:
0 : command executed with no errors
4 : warning - execution will probably be successful
8 : serious error - execution may fail
12 : serious error - execution impossible
16 : fatal error - job step terminates
The condition codes are stored in LASTCC/MAXCC. LASTCC stores the condition code for the previous command and MAXCC stores the maximum code returned by all previous commands. Both LASTCC and MAXCC contain zero by default at the start of IDCAMS execution. You can check the condition code of the previous command and direct the flow of execution or terminate the JCL.
Syntax of IF statement
IF {LASTCC | MAXCC} comparand value -
   THEN -
      command
   ELSE -
      command
Comparand(s) are : EQ/NE/GT/LT/GE/LE
A hyphen is required after THEN to indicate continuation of the command on the next line. A comment is treated as a null command. ELSE is optional. LASTCC and MAXCC values can be changed using the SET command.
Note : LASTCC and MAXCC can also be set to any value between 0 and 16, e.g.
SET LASTCC = 4
Setting MAXCC has no effect on LASTCC. Setting LASTCC changes the value of MAXCC, if LASTCC is set to a value larger than MAXCC. Setting MAXCC = 16 terminates the job
.........
REPRO INFILE(INDD) -
      OUTFILE(OUTDD)
................
IF LASTCC EQ 0 -
   THEN -
      PRINT INFILE(OUTDD)
   ELSE -
      PRINT INFILE(INDD)
IF MAXCC LT 4 -
   THEN -
      DO
         /* COMMENT */
         command
         command
      END
   ELSE -
      command
Example 4.3a JCL using MAXCC and LASTCC
DEFINE CLUSTER ....
IF LASTCC > 0 -
   THEN -
      SET MAXCC = 16
   ELSE -
      REPRO ....
Example 4.3b JCL using MAXCC and LASTCC
Defining an ESDS Cluster
DEFINE CLUSTER
Clusters are created and named with the DEFINE CLUSTER command.
The NAME parameter
This is a required positional parameter.
Format : NAME(Cluster-Name)
Cluster name :- The name to be assigned to the cluster
Example: NAME(DA0004T.LIB.KSDS.CLUSTER)
The cluster name becomes the dataset name in any JCL that references this cluster, either as input or output:
//INPUT DD DSN=DA0004T.LIB.KSDS.CLUSTER,DISP=SHR
The high-level qualifier is important because in most installations this technique ensures that VSAM datasets are cataloged in the appropriate user catalog.
Rules for Naming Cluster
Can have 1 to 44 alphanumeric characters
Can include the national characters #, @, $
Segmented into levels of eight or fewer characters, separated by periods
The first character must be an alphabetic or national character
The SPACE Allocation parameter
The space allocation parameter specifies space allocation values in the units shown below:
Format :
CYLINDERS(Pri Sec)
TRACKS(Pri Sec)
RECORDS(Pri Sec)
KILOBYTES(Pri Sec)
MEGABYTES(Pri Sec)
Primary : Number of units of primary space to allocate. This amount is allocated once when the dataset is created
Secondary : Number of units of secondary space to allocate. This amount is allocated up to 122 times as needed during the life of the dataset. VSAM calculates the control area size for you. A control area size of one cylinder usually yields the best performance; to ensure a control area size of one cylinder, allocate space in CYLINDERS.
Allocating space in RECORDS should be avoided, as this might result in an inefficient Control Area size.
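For example, with assumed values, CYLINDERS(10 2) requests a primary allocation of 10 cylinders plus secondary extents of 2 cylinders each and, as noted above, allocating in cylinders normally gives a one-cylinder control area.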
The VOLUMES parameter
The VOLUMES parameter assigns one or more storage volumes to your dataset. Multiple volumes must be of the same device type.
Format :
VOLUMES(volser) or VOLUMES(volser ........ volser)
volser : The six-character volume serial number of a volume.
Example :
VOLUMES(BS3011)
VOLUMES(BS3011 BS3040 BS3042)
You can store the data and index (in the case of KSDS clusters) on separate volumes, which may provide a performance advantage for large datasets.
The Recordsize parameter
This parameter tells VSAM what size records to expect. avg and max are the average and maximum lengths for variable-length records. If records are of fixed length, avg and max should be the same.
Format :
RECORDSIZE(avg max)
avg : Average length of records
max : Maximum length of records
e.g. :
RECORDSIZE(80 80) [Fixed Length records]
RECORDSIZE(80 120) [Variable Length records]
RECORDSIZE can be assigned at the cluster or data level.
Note :
This is an optional parameter; if omitted, the default is RECORDSIZE(4086 4086).
The SPANNED parameter
This parameter allows a large record to span more than one control interval. However, records cannot span Control Areas. The resulting free space in a spanned control interval is unusable by other records, even if they would fit logically in the unused bytes. NONSPANNED is the default; it means that records cannot span control intervals.
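As an illustrative sketch with assumed sizes: a cluster defined with CISZ(4096), RECORDSIZE(2000 10000) and SPANNED can store a 10000-byte record as segments across three 4K control intervals, whereas with NONSPANNED any record larger than one control interval could not be stored at all.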
The DATASET-TYPE parameter
This parameter specifies whether the dataset is INDEXED (KSDS), NONINDEXED (ESDS), NUMBERED (RRDS) or LINEAR (LDS).
Format : INDEXED | NONINDEXED | NUMBERED | LINEAR
INDEXED :- Specifies a KSDS and is the default
NONINDEXED :- Specifies an ESDS. No index is created and records are accessed sequentially or by relative byte address
NUMBERED :- Specifies an RRDS
LINEAR :- Specifies a LINEAR dataset
The default dataset Type is INDEXED.
//DA0001TA JOB LA2719,PCS,MSGLEVEL=(1,1),
//         MSGCLASS=A,NOTIFY=DA0001T
//* Delete/Define Cluster for ESDS VSAM Dataset
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE DA0001T.LIB.ESDS.CLUSTER
  DEFINE CLUSTER -
         (NAME(DA0001T.LIB.ESDS.CLUSTER) -
         NONINDEXED -
         RECORDSIZE(125 125) -
         RECORDS(100 10) -
         NONSPANNED -
         VOLUMES(BS3013) -
         REUSE) -
         DATA(NAME(DA0001T.LIB.ESDS.DATA))
Example 4.4 JCL for Defining an ESDS Cluster
Defining KSDS Cluster
While defining a KSDS cluster it is essential to code the DATA, INDEX and KEYS parameters.
The DATA parameter
The DATA parameter tells IDCAMS that you are going to create a separate data component. This parameter is optional for ESDS and RRDS datasets. You should code the NAME parameter of DATA for KSDS datasets, in order to operate on the data component by itself.
Format : DATA(NAME(dataname) Parameters)
dataname :- The name you choose to name the data component
The INDEX parameter
The INDEX parameter creates a separate index component
Format :
INDEX(NAME(indexname) Parameters)
indexname : The name you choose to name the index component
INDEX(NAME(DA0004T.LIB.KSDS.INDEX))
When you code the DATA and INDEX parameters, you usually code a NAME parameter for them. If you omit the NAME parameter for DATA and INDEX, VSAM appends .DATA or .INDEX as the low-level qualifier.
The KEYS parameter
This parameter defines the length and offset of the primary key in a KSDS record.
The offset is the primary key’s displacement (in bytes) from the beginning of the record.
Format :
KEYS(length offset)
length : length in bytes of the primary key
offset : Offset in bytes of the primary key within the record (0 to n)
Example :
KEYS(8 0)
VSAM records begin in position zero
Note :
Default is KEYS(64 0) [the key occupies the first 64 bytes of the record, positions 0 through 63]
//DA0001TA JOB LA2719,PCS,MSGLEVEL=(1,1),
//         MSGCLASS=A,NOTIFY=DA0001T
//* Delete/Define Cluster for KSDS VSAM Dataset
//*
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE DA0001T.LIB.KSDS.CLUSTER
  DEFINE CLUSTER -
         (NAME(DA0001T.LIB.KSDS.CLUSTER) -
         INDEXED -
         KEYS(4 0) -
         FSPC(10 20) -
         RECORDSIZE(125 125) -
         RECORDS(100 10) -
         NONSPANNED -
         VOLUMES(BS3013) -
         NOREUSE) -
         DATA(NAME(DA0001T.LIB.KSDS.DATA)) -
         INDEX(NAME(DA0001T.LIB.KSDS.INDEX))
/*
//
Example 4.5 JCL for Defining a KSDS Cluster
The FREESPACE parameter
The FREESPACE parameter, which applies to a KSDS, reserves a percentage of each control interval and control area as planned free space. This free space can be used for adding new records or for expanding existing variable-length records. FREESPACE applies only to the data component.
Format :
FREESPACE(%CI %CA)
%CI :- Percentage of control interval to leave free for expansion
%CA :- Percentage of control area to leave free for expansion
Example : FREESPACE(20 10)
Too much free space results in more I/O, especially during sequential processing. Too little results in excessive control interval and control area splits.
Note :
Default is FREESPACE(0 0)
The REUSE parameter
The REUSE parameter specifies that the cluster can be opened a second time as a reusable cluster. NOREUSE is the default, and specifies the cluster as non-reusable.
Format :
REUSE | NOREUSE
Some applications call for a temporary dataset or work file that must be created, used and deleted each time the application runs. To simplify these applications, VSAM lets you create reusable files. A reusable file is a standard VSAM KSDS, ESDS or RRDS. The only difference is that, if you open an existing reusable file for output processing, VSAM treats the file as if it were empty; any records already present in the file are ignored.
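A minimal sketch of that work-file pattern (the dataset name and sizes are assumed for illustration, and the REUSE option on REPRO is shown on the assumption that each run should reload the file from the start): define the cluster once with REUSE, then reload it on every run with REPRO instead of doing a DELETE/DEFINE each time.
DEFINE CLUSTER -
      (NAME(DA0001T.LIB.WORK.ESDS) -
      NONINDEXED -
      RECORDSIZE(80 80) -
      TRACKS(5 1) -
      VOLUMES(BS3013) -
      REUSE)
REPRO -
      INFILE(INDD) -
      OUTDATASET(DA0001T.LIB.WORK.ESDS) -
      REUSE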
The CONTROL INTERVAL SIZE parameter
This parameter specifies the Control Interval size. It is usually abbreviated CISZ.
Format :
CISZ(bytes)
Example :
CISZ(4096)
Note : If omitted VSAM calculates CISZ based on record size.
Remark : Control Interval is VSAM’s equivalent of a block and it is the unit of data that is actually transmitted when records are read or written.
Guidelines for determining the CISZ
ESDS is processed sequentially, so the CISZ should be relatively large, depending on the size of the record. For sequential processing with larger records you may choose a CISZ of 8k
For datasets processed randomly as well as sequentially (for backup at night) choose a CISZ for random processing and then allocate extra buffers for sequential processing with the AMP JCL parameter.
RRDS is usually processed randomly, so the CISZ should be relatively small, depending on the size of the record.
SHAREOPTIONS
This parameter tells VSAM whether you want to let two or more jobs process your file at the same time. It specifies how a VSAM dataset can be shared.
Format :
SHAREOPTIONS(crvalue csvalue)
crvalue : Specifies the value for cross-region sharing, i.e. different jobs running on the same system, using Global Resource Serialization (GRS), a resource control facility available only under MVS/XA and ESA
csvalue : Specifies the value for cross-system sharing, i.e. different jobs running on different systems in a non-GRS environment
Values :-
• 1 : multiple read OR single write
• 2 : multiple read AND single write
• 3 : multiple read AND multiple write
Default :- SHAREOPTIONS(1 3)
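As an illustration with assumed requirements: a file that one batch job updates while several other jobs only read it might be defined with SHAREOPTIONS(2 3); cross-region option 2 permits one writer plus any number of readers with write integrity maintained, while the readers must accept that the data they see may be momentarily out of date.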
Back to VSAM index
5. LISTCAT
LISTCAT’s basic function is to list information about VSAM and NONVSAM objects. With LISTCAT you can also view password and security information, usage statistics, space allocation information, creation and expiration dates etc.
Format 1:
LISTCAT ENTRIES(entryname) options
Options are :
• HISTORY
• VOLUME
• ALLOCATION
• ALL
ENTRIES (ENT) requires you to specify each level of qualification, either explicitly or by using an asterisk as a wildcard character.
Examples:
LISTCAT -
   ENT(DA0001T.VSAM.KSDS.CLUSTER) -
   CLUSTER -
   ALL
Example 5.1 LISTCAT
The above command will only display the base cluster
LISTCAT -
   ENT(DA0001T.VSAM.KSDS.CLUSTER) -
   DATA -
   ALL
The above command will only display the data component
LISTCAT -
   ENT(DA0001T.VSAM.KSDS.CLUSTER) -
   ALL
The above command will display all catalog information.
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
LISTCAT -
ENTRIES(DA0001T.LIB.KSDS.CLUSTER) ALL
/*
Format 2:
LISTCAT LEVEL(level) options
LEVEL, by definition, lists all lower levels. VSAM treats the specified qualifier as the high-level qualifier and lists every entry with that high-level qualifier.
Example
LISTCAT LVL(DA0001T.*.KSDS) ALL
The above will list all entries with DA0001T as the high-level qualifier, anything in the second-level qualifier and KSDS in the third-level qualifier; that is, it would list DA0001T.ABC.KSDS, DA0001T.TEST.KSDS.AIX and DA0001T.TEST.KSDS.DATA.
To execute LISTCAT from TSO prompt
LISTCAT ENTRIES (LIB.KSDS.CLUSTER) ALL
If you analyze the output of the LISTCAT command, the ALLOCATION section shows two fields, HURBA and HARBA.
High-Used-RBA (HURBA) points to the end of the data. High-Allocated-RBA (HARBA) is the highest byte that can be used.
HIGH-ALLOC-RBA indicates the Relative Byte Address (plus 1) of the last allocated data control area. This value reflects the total space allocation for the data component.
HIGH-USED-RBA indicates the Relative Byte Address (plus 1) of the last used data control area. This value reflects the portion of the space allocation that is actually filled with data records.
There are actually two HURBAs: one in the VSAM control block of the cluster and one in the catalog entry for the cluster.
You can write application programs (in COBOL, PL/I, Assembler Language, or under CICS) and use the statements provided by these languages to write and read VSAM datasets.
Figure 5.1 HURBA and HARBA
Back to VSAM index
6. Creating Alternate Indexes
An Alternate Index (AIX) provides a view of the data different from the one offered by the primary key. For example, for a KSDS Employee dataset you may have a record key on Employee-No and an Alternate Index on Employee-Name. You can then browse and even update the same KSDS in logical sequence by Employee-Name.
Alternate Indexes may be defined on one or more Alternate Keys, i.e. fields other than the primary key. Alternate Keys need not be unique. Each alternate index is itself a KSDS with a data and an index component.
Alternate Indexes greatly reduce redundancy. There is no need to keep a separate dataset for each different view, such as Employees' Social Security Numbers. Records may be accessed sequentially or randomly based on the alternate record keys.
Alternate indexes can be updated automatically when the base cluster is updated.
Alternate Indexes do not support a reusable base cluster, so NOREUSE, which is the default, should be specified.
Too many Alternate Indexes built on a KSDS may lead to performance degradation, as access by alternate key requires twice as many I/Os: VSAM first locates the primary key from the alternate index and then locates the Control Interval information from the primary (record key) index.
For an ESDS, VSAM builds the AIX by mapping the alternate key field to the record's RBA.
Steps for defining and building alternate indexes:
DEFINE AIX Command
Define the Alternate Index Cluster using the IDCAMS DEFINE AIX command.
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE AIX -
         (NAME(DA0001T.LIB.KSDS.AUTHNAME.AIX) -
         RELATE(DA0001T.LIB.KSDS.CLUSTER) -
         VOLUMES(BS3013) -
         TRACKS(10 1) -
         KEYS(25 9) -
         RECORDSIZE(70 110) -
         FREESPACE(20 10) -
         SHAREOPTIONS(1) -
         NONUNIQUEKEY -
         UPGRADE) -
         DATA(NAME(DA0001T.LIB.KSDS.AUTHNAME.DATA)) -
         INDEX(NAME(DA0001T.LIB.KSDS.AUTHNAME.INDEX))
/*
//
Example 6.1 JCL to define AIX
The path name (see the DEFINE PATH command below) is the dataset name coded in JCL (DSN=pathname).
RELATE Parameter
Format:
RELATE(base cluster name)
This parameter establishes the relationship between the base cluster and the alternate index via the use of the base cluster name. It is unique to the DEFINE AIX command, and it is required.
The RECORDSIZE Parameter
Format:
RECORDSIZE(avg max)
This parameter specifies the average and maximum length of each alternate index record. There are two types of alternate indexes.
KSDS unique alternate index: You can create a unique alternate index by specifying the UNIQUEKEY parameter. The records of unique alternate indexes are of fixed length. The length of a unique alternate index built over a KSDS is derived as follows:
Figure 6.1 Contents of KSDS unique alternate index
For example if an unique alternate index on Soc-Sec-No is built on our KSDS cluster Employee then the RECORDSIZE will be calculated as follows:-
5 bytes for housekeeping + size of the alternate key + size of the primary key
= 5 + 9 + 8 = 22
Therefore the RECORDSIZE parameter will be coded as RECORDSIZE(22 22)
KSDS non-unique alternate index: An alternate index created with the NONUNIQUEKEY parameter has variable-length records. The RECORDSIZE is calculated as follows:
Average record length = 5 bytes for housekeeping + size of the alternate key + (size of the primary key x average number of records the alternate key can point to)
Maximum record length = 5 bytes for housekeeping + size of the alternate key + (size of the primary key x maximum number of records the alternate key can point to)
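As a worked example with assumed record counts: for the 25-byte alternate key and an 8-byte primary key as in Example 6.1, if each alternate key points to 5 primary records on average and 10 at most, the average length is 5 + 25 + (8 x 5) = 70 and the maximum is 5 + 25 + (8 x 10) = 110, which is one way the RECORDSIZE(70 110) coded in Example 6.1 could have been arrived at.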
DEFINE PATH Command:
Define an Alternate Index Path using the IDCAMS DEFINE PATH command. The path forms a connection between the alternate index and the base cluster. Path name becomes a catalog entry but path does not contain any records. The path name is specified in the JCL for applications that access records via the alternate index.
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE PATH -
         (NAME(DA0001T.LIB.KSDS.AUTHNAME.PATH) -
         PATHENTRY(DA0001T.LIB.KSDS.AUTHNAME.AIX) -
         UPDATE)
/*
//
Example 6.2 JCL to define PATH for the AIX
UPDATE vs NOUPDATE
Applications may access records through the alternate index path alone, without opening the base cluster. In that case any changes made to the data will be reflected in the alternate index records if the UPDATE option is specified. If NOUPDATE is specified, the alternate index records are not automatically updated.
UPGRADE vs. NOUPGRADE
The UPDATE/NOUPDATE option of DEFINE PATH works in tandem with the UPGRADE / NOUPGRADE of the DEFINE AIX command.
UPGRADE specifies that any changes made in the base cluster records will be reflected immediately in the alternate index records if the base cluster is opened in the application. Fortunately UPGRADE and UPDATE are defaults for their respective commands.
Building Alternate Indexes
The final step in creating an alternate index is to actually build and populate it with records.
The BLDINDEX command does the following:
• The data component of the base cluster is read sequentially and pairs of key pointers are extracted. These pairs consist of the alternate key field and its corresponding primary key field. VSAM creates a temporary file with these records.
• This temporary file is sorted in ascending alternate key sequence.
• If NONUNIQUEKEY option is specified then a merge operation takes place, which will merge all records with the same alternate key into a single record.
• These records are the data component of the Alternate Index. VSAM now constructs the index component just as it does for the KSDS.
Note: The Alternate Index can be built only after the base cluster has been both defined and loaded with at least one record.
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//DD1      DD DSN=DA0001T.LIB.KSDS.CLUSTER,
//            DISP=OLD
//IDCUT1   DD UNIT=SYSDA,SPACE=(TRK,(2,1))
//IDCUT2   DD UNIT=SYSDA,SPACE=(TRK,(2,1))
//SYSIN    DD *
  BLDINDEX -
         INFILE(DD1) -
         OUTDATASET(DA0001T.LIB.KSDS.AUTHNAME.AIX) -
         INTERNALSORT
/*
//
Example 6.3 JCL to build Alternate Index
The disposition of the base cluster is DISP=OLD because BLDINDEX needs exclusive control of the base cluster. The output dataset can be the alternate index cluster or the path name.
INTERNALSORT uses virtual storage whereas EXTERNALSORT uses disk space. INTERNALSORT is the default. If you want an external sort to be performed, include the IDCUT1 and IDCUT2 DD statements in your JCL and specify EXTERNALSORT in the BLDINDEX command.
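A minimal sketch of that external-sort variant (the exact coding is an assumption based on the parameters named above; IDCUT1 and IDCUT2 are the conventional work ddnames, coded as in Example 6.3):
BLDINDEX -
      INFILE(DD1) -
      OUTDATASET(DA0001T.LIB.KSDS.AUTHNAME.AIX) -
      EXTERNALSORT -
      WORKFILES(IDCUT1 IDCUT2)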
DEFINE CLUSTER -
   (NAME(DA0001T.LIB.KSDS.CLUSTER) -
   ... )
DEFINE AIX -
   (NAME(DA0001T.LIB.KSDS.AUTHNAME.AIX) -
   RELATE(DA0001T.LIB.KSDS.CLUSTER) -
   ... )
DEFINE PATH -
   (NAME(DA0001T.LIB.KSDS.AUTHNAME.PATH) -
   PATHENTRY(DA0001T.LIB.KSDS.AUTHNAME.AIX) -
   ... )
BLDINDEX -
   INDATASET(DA0001T.LIB.KSDS.CLUSTER) -
   OUTDATASET(DA0001T.LIB.KSDS.AUTHNAME.AIX)
Example 6.4 Steps for creating and building AIX
Back to VSAM index
Alternate Indexes may be defined on one or more than one Alternate Key(s) i.e. Field(s) other than primary key. Alternate Key(s) need not be unique. Each alternate index itself is a KSDS with data and index component.
Alternate Index greatly reduces redundancy. There is no need to keep a separate dataset for different views like Employees’ Social Security No. The records may be accessed sequentially or randomly based on the alternate record keys.
They can be updated automatically when the base cluster is updated.
Alternate Indexes do not support a reusable base cluster. So NOREUSE which is the default, should be specified.
Too many Alternate Indexes built on a KSDS may lead to performance Degradation as access by alternate key requires twice as many I/O’s . VSAM first locates the primary key from the alternate index and then locates the Control Interval information from the record key index.
For ESDS, VSAM builds AIX by mapping one field to the record’s RBA.
Steps for defining and building alternate indexes:
DEFINE AIX Command
Define the Alternate Index Cluster using the IDCAMS DEFINE AIX command.
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT =*
//SYSIN DD *
DEFINE AIX -
(NAME(DA0001T.LIB.KSDS.AUTHNAME.AIX)
-
VOLUMES (BS3013) -
RELATE(DA0001T.LIB.KSDS.CLUSTER)
-
UPGRADE -
TRACKS(10 1)
-
KEYS(25 9) -
RECORDSIZE(70 110)
FREESPACE(20 10)
-
SHAREOPTIONS(1) -
NONUNIQUEKEY) -
)
DATA(NAME(DA000A1T.LIB.KSDS.AUTHNAME.DATA)) -
INDEX(NAME(DA0001T.LIB.KSDS.AUTHNAME.INDEX)
/*
//
Example 6.1 JCL to define AIX
Pathname is the dataset name in JCL (DSN=PATHNAME)
RELATE Parameter
Format:
RELATE(base cluster name)
This parameter establishes the relationship between the base cluster and the alternate index via the use of the base cluster name. It is unique to the DEFINE AIX command, and it is required.
The RECORDSIZE Parameter
Format:
RECORDSIZE(avg max)
This parameter specifies the average and maximum length of each alternate index record. There are two types of alternate indexes.
KSDS unique alternate index: You can create a unique alternate index by specifying the UNIQUEKEY parameter. The records of unique alternate indexes are of fixed length. The length of a unique alternate index built over a KSDS is derived as follows:
Figure 6.1 Contents of KSDS unique alternate index
For example if an unique alternate index on Soc-Sec-No is built on our KSDS cluster Employee then the RECORDSIZE will be calculated as follows:-
5 Bytes fro HouseKeeping + size of alternate key + Size of Primary Key that the alternate
= 5 + 9 + 8 = 22
Therefore recordsize parameter will be coded as RECORDSIZE(20 20)
KSDS non-unique alternate index: An alternate index created with a NONUNIQUEKEY parameter has variable length records. The RECORDSIZE is calculated as follows:-
Avgerage Record length = 5 bytes for House Keeping + size of the alternate key + size of the primary key x average no of records the alternate index key can point to
Maximum Record length = 5 bytes for House Keeping + size of the alternate key + size of the primary key x maximum no of records the alternate index key can point to
DEFINE PATH Command:
Define an Alternate Index Path using the IDCAMS DEFINE PATH command. The path forms a connection between the alternate index and the base cluster. Path name becomes a catalog entry but path does not contain any records. The path name is specified in the JCL for applications that access records via the alternate index.
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT =*
//SYSIN DD *
DEFINE PATH -
NAME(DA0001T.LIB.KSDS.AUTHNAME.PATH) -
PATHENTRY(DA0001T.LIB.KSDS.AUTHNAME.AIX) -
UPDATE -
)
/*
//
Example 6.2 JCL to define PATH for the AIX
UPDATE vs NOUPDATE
Records may be accessed by applications by the alternate index path alone, without opening the base cluster. In such cases any changes made to data will be reflected in the alternate index records if the UPDATE option is specified. If NOUPDATE is specified then the alternate index records will not be automatically updated.
UPGRADE vs. NOUPGRADE
The UPDATE/NOUPDATE option of DEFINE PATH works in tandem with the UPGRADE / NOUPGRADE of the DEFINE AIX command.
UPGRADE specifies that any changes made in the base cluster records will be reflected immediately in the alternate index records if the base cluster is opened in the application. Fortunately UPGRADE and UPDATE are defaults for their respective commands.
Building Alternate Indexes
The final step in creating an alternate index is to actually build and populate it with records.
The BLDINDEX command does the following:
• The data component of the base cluster is read sequentially and pairs of key pointers are extracted. These pairs consist of the alternate key field and its corresponding primary key field. VSAM creates a temporary file with these records.
• This temporary file is sorted in ascending alternate key sequence.
• If NONUNIQUEKEY option is specified then a merge operation takes place, which will merge all records with the same alternate key into a single record.
• These records are the data component of the Alternate Index. VSAM now constructs the index component just as it does for the KSDS.
Note: The Alternate Index can be built only after the base cluster has been both defined and loaded with atleast 1 record.
//STEP1 EXEC PG=IDCAMS
//SYSPRINT DD SYSOUT =*
//DD1 DD DSN=DA0001T.LIB.KSDS.CLUSTER,
// DISP=OLD
//IDCUT1 DD UNIT=SYSDA,SPACE=(TRK, (2, 1))
//IDCUT2 DD UNIT=SYSDA,SPACE=(TRK, (2, 1))
// SYSIN DD *
BLDINDEX -
INFILE(DD1) -
OUTDATASET(DA0001T.LIB.KSDS.AUTHNAME.AIX) -
INTERNALSORT
/*
//
Example 6.3 JCL to build Alternate Index
Disposition of base cluster is DISP=OLD as the BLDINDEX needs absolute control of the base cluster.Output dataset can be Alternate index cluster or pathname
The INTERNALSORT uses virtual storage whereas EXTERNAL SORT uses disk space. INTERNALSORT is the default. If you want an external sort to be performed then include IDCUT1 and IDCUT2 DD statements in your JCL and specify EXTERNALSORT in the BLDINDEX command.
DEFINE Cluster
(NAME(DA0001T.LIB.KSDS.CLUSTER)
.
)
DEFINE AIX
(NAME(DA0001T.LIB.KSDS.AUTHNAME.AIX) RELATE(DA0001T.LIB.KSDS.CLUSTER)
.
)
DEFINE PATH (NAME(DA0001T.LIB.KSDSK.AUTHNAME.PATH) PATHENTRY(DA0001T.LIB.KSDS.AUTHNAME.AIX)
.
)
BLDINDEX
INDATASET(DA0001T.LIB.KSDS.CLUSTER) OUTDATASET(DA0001T.LIB.KSDS.AUTHNAME.AIX)
.
Example 6.4 Steps for creating and building AIX
Back to VSAM index
7. Reorganizing VSAM datasets
This chapter explains the commands used to back up and restore existing datasets and to protect the integrity of the data.
REPRO
This command is used to:
• Load an empty VSAM cluster with records
• Create a backup copy of a dataset
• Merge data from two VSAM datasets
The REPRO command can also operate on non-VSAM datasets. It is an all-purpose load and backup utility command and can be used in place of IEBGENER.
With REPRO you can do the following
• Convert an ISAM dataset to VSAM format
• Copy a non-VSAM dataset to a physical sequential or partitioned dataset
• Copy records from one type of VSAM dataset to another, for example KSDS to ESDS
REPRO has following disadvantages:
• Little control over the input data
• Catalog information is not copied with the data
• A prior DELETE and redefinition of the cluster is required before reloading it, unless you have specified REUSE in the DEFINE CLUSTER command
In the case of a KSDS, the data and index components are built automatically.
REPRO Command Syntax
Format :
REPRO -
INFILE(ddname) / INDATASET(dsname) -
OUTFILE(ddname) / OUTDATASET(dsname)
Optional parameters are :
FROMKEY FROMADDRESS
FROMNUMBER SKIP
TOKEY TOADDRESS
TONUMBER COUNT
Either INFILE (naming a DD statement) or INDATASET (naming the dataset directly) must identify the input, and either OUTFILE or OUTDATASET must identify the output. In the examples below, INFILE(DD1) and OUTFILE(DD2) point to the input and output DD statements respectively.
Limiting Input and Output Records:-
While it is not possible to edit the input to REPRO, you can limit the input by providing the optional parameters.
FROMKEY and TOKEY parameters: FROMKEY specifies the key of the input record at which to begin copying. TOKEY specifies the key of the last input record to be copied.
SKIP and COUNT parameters: SKIP specifies the number of input records to skip before copying begins. COUNT specifies the number of records to copy. You can specify both, for example to skip 10 records and copy the next 10, as in the sketch below.
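A minimal sketch of that SKIP/COUNT combination (DD1 and DD2 as in Example 7.1):
REPRO -
INFILE(DD1) -
OUTFILE(DD2) -
SKIP(10) -
COUNT(10)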
//DD1 DD DSN=DA0001T.INPUT.KSDS,DISP=OLD
//DD2 DD DSN=DA0001T.OUTPUT.KSDS,DISP=OLD
//SYSIN DD *
REPRO -
INFILE(DD1) -
OUTFILE(DD2) -
FROMKEY(A001) -
TOKEY(A069)
Example 7.1 JCL for Loading Dataset:
Other parameters for filtering records:
FROMADDRESS (RBA)
TOADDRESS(RBA)
FROMNUMBER (RRN)
TONUMBER(RRN)
COUNT (NO.)
SKIP(NO)
Backing up VSAM Datasets
It is good practice to back up VSAM datasets on a regular basis.
The REPRO command is then used to rebuild and restore the VSAM cluster from the backup copy.
Backing up a VSAM dataset involves only one step
//DA0001TA JOB …
//STEP10 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//DD2 DD DSN=DA0001T.KSDS.INV.BACKUP(+1),
// DISP=(NEW,CATLG,DELETE),UNIT=TAPE,
// VOL=SER=32970,LABEL=(1,SL),
// DCB=(RECFM=FB,LRECL=80)
//SYSIN DD *
REPRO -
INDATASET(DA0001T.KSDS.INV.CLUSTER) -
OUTFILE(DD2)
/*
//
Example 7.2 Using Repro for backup
In the example above, INDATASET names the input cluster and DD2 points to the output tape dataset, which is part of a GDG and is more or less a physical sequential file. (Refer to chapter 9 for more on GDGs.)
Restoring and rebuilding the backup
A DELETE-DEFINE-REPRO sequence is required to restore the cluster in the case of a KSDS:
Delete the original cluster using IDCAMS DELETE command
Redefine the cluster using IDCAMS DEFINE CLUSTER command
Load the empty cluster with data using the IDCAMS REPRO command
When you DELETE-DEFINE-REPRO a VSAM dataset, it has the following effects on the KSDS:
• The dataset is reorganized, that is, the Control Interval and Control Area splits are eliminated
• Free space is redistributed throughout the dataset as specified in the FREESPACE parameter
• The primary index is rebuilt; note, however, that the DELETE command deletes the base cluster as well as its indexes, so the alternate indexes have to be redefined and rebuilt
ESDS or RRDS need not be reorganized because the record position is fixed permanently by sequence of entry or record number.
//DD1 DD DSN=DA0001T.LIB.KSDS.BACKUP(0),
// DISP=OLD,UNIT=TAPE,LABEL=(1,SL)
//SYSIN DD *
DELETE DA0001T.LIB.KSDS.CLUSTER
DEFINE CLUSTER (NAME(DA0001T.LIB.KSDS.CLUSTER) -
INDEXED -
KEYS(4 0) -
RECORDSIZE(80 80) -
VOLUMES(BS3013) -
) -
DATA(NAME(DA0001T.LIB.KSDS.DATA)) -
INDEX(NAME(DA0001T.LIB.KSDS.INDEX))
REPRO -
INFILE(DD1) -
OUTDATASET(DA0001T.LIB.KSDS.CLUSTER)
/*
Example 7.3 DELETE-DEFINE-REPRO
Merging datasets with REPRO
The REPRO command can also be used to merge two datasets into one. The target dataset can be a nonempty KSDS, ESDS or RRDS. If the target dataset is an ESDS, the merged records are added to the end of the existing dataset.
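A minimal sketch of such a merge, using two hypothetical KSDS clusters, DA0001T.TRANS.KSDS as the input and DA0001T.MASTER.KSDS as the nonempty target; because the target is a KSDS, the incoming records are merged into it in key sequence:
//DD1 DD DSN=DA0001T.TRANS.KSDS,DISP=SHR
//DD2 DD DSN=DA0001T.MASTER.KSDS,DISP=OLD
//SYSIN DD *
REPRO -
INFILE(DD1) -
OUTFILE(DD2)
/*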
EXPORT/IMPORT Commands
The EXPORT/IMPORT commands can be used for backup and recovery . You can export a dataset, alternate index or a catalog to a different system.
EXPORT/IMPORT has several advantages compared to REPRO:
Catalog information is exported along with data
Cluster deletion and redefinition not required during import as input dataset already contains catalog information
Easily ported on other systems as catalog information available with data
As with REPRO, KSDS datasets are reorganized; however, the three steps of the REPRO method (DELETE, DEFINE, REPRO) are replaced by a single IMPORT.
Disadvantages:
Exported data cannot be processed until Imported
Can be used only for VSAM datasets
EXPORT
FORMAT :
EXPORT entryname/password -
OUTFILE(ddname) / OUTDATASET(dsname)
Optional parameters
Example :
EXPORT DA0001T.LIB.KSDS.CLUSTER -
OUTFILE(DD2)
The output dataset from an EXPORT must always be a sequential dataset (usually on a tape), as in the JCL sketch below.
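A minimal JCL sketch of a complete EXPORT step, assuming a hypothetical portable dataset named DA0001T.LIB.KSDS.PORTABLE on the tape volume used in Example 7.2:
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//DD2 DD DSN=DA0001T.LIB.KSDS.PORTABLE,
// DISP=(NEW,CATLG,DELETE),UNIT=TAPE,
// VOL=SER=32970,LABEL=(1,SL)
//SYSIN DD *
EXPORT DA0001T.LIB.KSDS.CLUSTER -
OUTFILE(DD2) -
TEMPORARY
/*
//
TEMPORARY keeps the source cluster cataloged after the export; the default, PERMANENT, deletes it.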
IMPORT
Format :
IMPORT -
INFILE(ddname) INDATASET(dsname) -
OUTFILE(ddname) OUTDATASET(dsname) -
Optional parameters:
IMPORT INFILE (DD2) -
OUTDATASET(DA0001T.LIB.KSDS.CLUSTER)
IMPORT accepts only a dataset that was previously EXPORTed; the portable copy carries the catalog information needed to redefine the cluster.
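The matching import step, again a minimal sketch using the same hypothetical portable dataset:
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//DD2 DD DSN=DA0001T.LIB.KSDS.PORTABLE,
// DISP=OLD,UNIT=TAPE,
// VOL=SER=32970,LABEL=(1,SL)
//SYSIN DD *
IMPORT -
INFILE(DD2) -
OUTDATASET(DA0001T.LIB.KSDS.CLUSTER)
/*
//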
//DA0001TA JOB LA1279,PCS,MSGLEVEL=(1,1),
// MSGCLASS=A, NOTIFY=DA0001T
//* Input instream Data into ESDS VSAM Dataset
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//DD1 DD *
123456789123456789
AAAAAAAABBBBBBCCCC
/*
//DD2 DD DSN=DA0001T.ESDS.CLUSTER,DISP=OLD
//SYSIN DD *
REPRO -
INFILE(DD1) -
OUTFILE(DD2)
/*
//
Example 7.4 Input instream Data into ESDS
//DA0001TA JOB LA2719,PCS,MSGLEVEL= (1,1),
// MSGCLASS=A, NOTIFY=DA0001T
//* Load Data from a file into ESDS VSAM Dataset
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//DD1 DD DSN=DA0001T.ESDS.CLUSTER1,DISP=OLD
//DD2 DD DSN=DA0001T.ESDS.CLUSTER2,DISP=OLD
//SYSIN DD *
REPRO -
INFILE(DD1) -
OUTFILE(DD2)
/*
//
Example 7.5 Load Data from a file into ESDS
Back to VSAM index
8. VERIFY, PRINT, DELETE, ALTER Commands
VERIFY - preserves data integrity by checking and, if necessary, correcting the end-of-file (high-used RBA, or HURBA) information recorded in the catalog when a dataset was not closed properly.
Format :
VERIFY FILE(ddname/passwd)
or
VERIFY DATASET(entryname/passwd)
VERIFY entryname/passwd (TSO)
VERIFY DATASET(DA0001T.LIB.KSDS.CLUSTER)
Example 8.1 VERIFY
Remark :
VERIFY can be issued from TSO or within a JCL job step, as in the sketch below.
It is valid for all VSAM datasets except LDS.
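A minimal sketch of running VERIFY in batch through IDCAMS (same cluster name as Example 8.1):
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
VERIFY DATASET(DA0001T.LIB.KSDS.CLUSTER)
/*
//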
DELETE
- logically deletes the dataset (the space is released but not overwritten unless ERASE is specified)
- the catalog entry is deleted
Format :
DELETE entryname/passwd -
optional parameters
DELETE DA0001T.LIB.KSDS.CLUSTER -
ERASE
Example 8.2 Deleting a Cluster
Optional parameters are :
• AIX
• CLUSTER
• NONVSAM
• PATH
• ERASE NOERASE
• FORCE NOFORCE
• PURGE NOPURGE
• SCRATCH NOSCRATCH
//DA0001TA JOB LA2179,PCS,MSGLEVEL=(1,1),
// NOTIFY=DA0001T
//* Deletes VSAM Dataset
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DELETE DA0001T.TRAIN.ITMFOIV
/*
//
Example 8.3 Delete VSAM Dataset
PRINT
The default output destination for PRINT is SYSPRINT. Records can be printed in CHAR, HEX, or DUMP format, and the same record-limiting parameters as REPRO (FROMKEY/TOKEY, SKIP/COUNT, and so on) are available.
Format 1 :
PRINT INDATASET(entryname/passwd) -
Format 2 :
PRINT INFILE(ddname/passwd) -
Options
• CHAR DUMP HEX
• COUNT (number)
• FROMADDRESS, [TOADDRESS]
• FROMKEY, [TOKEY]
• FROMNUMBER, [TONUMBER]
• OUTFILE (ddname)
• SKIP (number)
//DA0001TA JOB LA2179,PCS,MSGLEVEL=(1,1),
// NOTIFY=DA0001T
//* Print VSAM Dataset
//PRG1 EXEC PGM=IDCAMS
//FILE1 DD DSN=DA0001T.LIB.KSDS.CLUSTER,
// DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
PRINT INFILE(FILE1) CHARACTER
/*
//
Example 8.4 Print VSAM Dataset
ALTER
Used to change certain attributes of a previously defined VSAM object
The following can be done with ALTER:
• Change names
• Add volumes/Remove volumes
• Change Keys and uniqueness
• Change record size
• Change Upgrade option
• Change % of FREESPACE etc.
Format :
ALTER entryname/passwd parameters
Options :
• ADDVOLUMES (volumes)
• AUTHORIZATION(entry string)
• BUFFERSPACE (size)
• ERASE NOERASE
• FREESPACE(ci% ca%)
• MASTERPW(password)
• NEWNAME(newname)
• READPW (password)
• SCRATCH NOSCRATCH
• SHAREOPTIONS
• (cross region cross system)
• TO(date) FOR(days)
• UPDATE NOUPDATE
• UPDATEPW(password)
• UPGRADE NOUPGRADE
The ORDERED Parameter
The ORDERED Parameter tells VSAM to assign the KEYRANGES values to the volumes, one by one, in the order in which the KEYRANGES and VOLUMES are specified.
Format :
ORDERED UNORDERED
Example :
KEYRANGES( (0001 1000) -
(1001 2000) -
(2001 3000)) -
VOLUMES (BS3013 -
BS3014 -
BS3001)
Note : When you code ORDERED, you must code the same no. of VOLUMES as KEYRANGES.
The IMBED Parameter
The IMBED Parameter directs VSAM to place the sequence set on the first track of the Data Control Area and duplicate it as many times as it will fit.
Advantage : reduces rotational delay
Format :
IMBED NOIMBED
The REPLICATE Parameter
The REPLICATE Parameter directs VSAM to duplicate each index record as many times as it will fit on its assigned track. It applies to a KSDS index component only.
Format :
REPLICATE NOREPLICATE
Example :
INDEX(NAME(DA0001T.LIB.KSDS.INDEX) -
IMBED -
REPLICATE -
)
The Password Protection Parameter
VSAM provides a hierarchy of password parameters that you can specify for a non-DFSMS-managed VSAM dataset. For DFSMS-managed datasets, however, you must use a security package like RACF.
Format :
MASTERPW(password)
Allows the highest level of access to all cluster components, including DELETE and ALTER authority
Format :
UPDATEPW(password)
Allows write authority to the cluster
Format :
READPW(password)
Allows read only access to the cluster
Note : Valid only for KSDS, ESDS, RRDS.
Passwords are initially specified in the DEFINE CLUSTER command.
Example :
MASTERPW(TRGDEPT)
At execution time, a password can be supplied through the PASSWORD clause of the COBOL SELECT statement, as in the sketch below.
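A minimal COBOL sketch of that clause; the file, key, and working-storage names are hypothetical, and the password value would be moved into WS-PASSWORD before the OPEN:
SELECT LIB-FILE ASSIGN TO LIBMAST
ORGANIZATION IS INDEXED
ACCESS MODE IS DYNAMIC
RECORD KEY IS LIB-KEY
PASSWORD IS WS-PASSWORD
FILE STATUS IS WS-STATUS.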
The AUTHORIZATION Parameter
AUTHORIZATION provides additional security for a VSAM cluster by naming an assembler user security verification routine (USVR).
Format :
AUTHORIZATION (entry-point password)
entry-point : the name of the entry point of a USVR written in assembly language
password : the password the routine is to verify
Note : Valid only for KSDS, ESDS, RRDS.
Example :
AUTH(MYRTN ‘TRGDEPT’)
ALTER -
DA0001T.LIB.KSDS.CLUSTER -
NEWNAME(A2000.MY.CLUSTER)
Example 8.5 Altering name of a Dataset
ALTER -
DA0001T.LIB.KSDS.INDEX -
FREESPACE(30 30)
Example 8.6 Altering FREESPACE of a Dataset
The following attributes are alterable only for empty clusters
• KEYS(length offset)
• RECORDSIZE(avg max)
• UNIQUEKEY NONUNIQUEKEY
The following attributes are unalterable. You have to DELETE the cluster and redefine it with new attributes.
• CISZ
• Cluster type
• IMBED/REPLICATE
• REUSE NOREUSE
Back to VSAM index
9. Generation DataSets
Although there are many different uses for sequential datasets, many sequential files have one characteristic in common: they are used in cyclical applications. For example, a sequential dataset that contains transactions posted daily against a master file is cyclical; each day's transactions, along with the processing required to post them, form one cycle. Similarly, a sequential dataset used to hold the backup copy of a master file is cyclical too; each time a new backup copy is made, a new cycle begins.
In most cyclical applications, it's a good idea to retain the versions of the files used for several cycles. That way, if something goes wrong, you can recreate the processing that occurred during previous cycles to restore the affected files to a known point, and processing can continue from there.
For this, MVS provides a facility called the generation data group (GDG). A GDG is a collection of two or more chronologically related versions of the same file. Each version of the file, or member of the GDG, is called a generation dataset. A generation dataset may reside on tape or DASD and is generally a sequential (QSAM) or direct (BDAM) file. ISAM and VSAM files can't be used in GDGs.
As each processing cycle occurs a new generation of dataset is added to the generation data group. The new version becomes the current generation; it replaces the old current generation, which becomes a previous generation.
file.c1(+1) Next Generation
file.c1(0) Current Generation
file.c1(-1) Previous Generations
file.c1(-2)
file.c1(-3)
The figure above shows the structure of a generation data group. There are 3 previous generations; note that generations are numbered relative to the current generation, file.c1(0).
Relative generation numbers are adjusted when each processing cycle completes, so that the current generation is always referred to as relative generation 0.
MVS uses the generation data group’s catalog entry to keep track of relative generation numbers. As a result, GDGs must be cataloged and each generation dataset that’s a part of the group must be cataloged too.
When you create a generation data group's catalog entry, you specify how many generations should be maintained. For example, you might specify that five generations, including the current generation, should be maintained; then during each processing cycle the new version of the file becomes the current generation.
Although MVS lets you use relative generation numbers to simplify cyclical processing, MVS uses "Absolute Generation Numbers" in the form GnnnnV00 to identify each generation dataset uniquely. In GnnnnV00, nnnn is the chronological sequence number of the generation, beginning with G0001.
V00 is a version number, which lets you maintain more than one version of a generation. Each time a new generation dataset is created, MVS adds one to the sequence number. The sequence and version numbers are stored as part of the file's dataset name, like this:
filename.GnnnnV00
(up to 35 characters for the GDG base name, plus 9 characters for the generation suffix)
//IN DD DSN=DA0002T.MASTER,DISP=SHR
//OUT DD DSN=DA0002T.MASTER.DAY(+1),
// DISP=(NEW,CATLG,DELETE),
// UNIT=3390,VOL=SER=BP0031,
// SPACE=(CYL,(10,5),RLSE),
// DCB=(PROD.GDGMOD,
// BLKSIZE=23440,LRECL=80,RECFM=FB)
Example 9.1 Using a GDG
Relative Name and Absolute Name
DA0002T.MASTER.DAY(0) ---> Relative Name
DA0002T.MASTER.DAY.G0001V00 ---> Absolute Name
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DEFINE GDG -
(NAME(DA0002T.MASTER.DAY) -
LIMIT(5) -
SCRATCH -
EMPTY)
/*
Example 9.2 Defining a GDG Index
Following code contains 1 job with 2 steps....
//DA0003TA JOB
//UPDATE EXEC PGM=PAY3200
//OLDMAST DD DSN=MMA2.PAY.MAST(0),DISP=OLD
//NEWMAST DD DSN=MMA2.PAY.MAST(+1),
// DISP=(NEW,CATLG),UNIT=3390,
// VOL=SER=BS3001,
// DCB=(LRECL=80,BLKSIZE=1600)
//PAYTRAN DD DSN=MMA2.PAY.TRAN,DISP=OLD
//PAYLIST DD SYSOUT=*
//REPORT EXEC PGM=PAY3300
//PAYMAST DD DSN=MMA2.PAY.MAST(+1),DISP=OLD
//PAYRPT DD SYSOUT=*
Example 9.3a Adding datasets to a GDG
Following code contains 2 jobs.........
//JOB1 JOB
//UPDATE EXEC PGM=PAY3200
//OLDMAST DD DSN=MMA2.PAY.MAST(0),DISP=OLD
//NEWMAST DD DSN=MMA2.PAY.MAST(+1),
// DISP=(NEW,CATLG),UNIT=3390,
// VOL=SER=BS3001,
// DCB=(LRECL=80,BLKSIZE=1600)
//PAYTRAN DD DSN=MMA2.PAY.TRAN,DISP=OLD
//PAYLIST DD SYSOUT =*
//JOB2 JOB ...........
//REPORT EXEC PGM=PAY3300
//PAYMAST DD DSN=MMA2.PAY.MAST(0),DISP=OLD
//PAYRPT DD SYSOUT=*
Example 9.3b Adding datasets to a GDG
GDGs are groups of datasets that are related to each other chronologically and functionally. Generations can be added until a specified limit is reached. The LIMIT parameter specifies the total number of generations that can exist at any one time; once the limit is reached, the oldest generation is rolled off (uncataloged, and physically deleted if SCRATCH was specified).
A GDG index has to be created with the IDCAMS command DEFINE GDG before any datasets can be made part of the group.
A model containing the DCB information of the datasets to be included in the GDG can be specified. All datasets within a GDG share the same base name; the generation number of a dataset within a GDG is assigned automatically by the OS when it is created. Datasets within a GDG can be referenced by their relative generation number, and generation 0 always references the current generation.
Creation of GDGs
Create and catalog the index
Use IDCAMS statement DEFINE GDG for creating Index
Parameters for creating index
Specification
Name of GDG
Number of generations
Limit …. maximum no of datasets in a GDG.
Action to be taken when limit is reached
• Uncataloging oldest generation once limit reached
• Uncataloging all generations when limit reached
Physical deletion of entry
Uncataloging entry without physical deletion
Defining a model for the GDG.
NAME …… refers to the name of the GDG Index
LIMIT ….. refers to the maximum no of datasets in a GDG.
NOEMPTY… uncatalog only the oldest generation when the limit is reached
EMPTY … uncatalog all generations when the limit is reached
SCRATCH …. physically delete a generation when it is uncataloged
NOSCRATCH … uncatalog a generation without physically deleting it
Modifying Features of GDG
You can modify a GDG only with the ALTER command
//STEP1 EXEC PGM=IDCAMS
//SYSIN DD *
ALTER DA0001T.ACCOUNTS.MONTHLY -
NOSCRATCH -
EMPTY
/*
//
Example 9.4 Modifying a GDG
Deleting GDG Index
Can be deleted by the DELETE parameter of IDCAMS
Will result in an error on reference to any generation datasets of the GDG
//STEP1 EXEC PGM=IDCAMS
//SYSIN DD *
DELETE DA0001T.ACCOUNTS.MONTHLY GDG
/*
//
Example 9.5 Deleting GDG Index
Adding a Dataset to a GDG
Name of the model containing the GDG DCB parameter’s is coded in the DCB parameter of the DD statement
//STEP1 EXEC PGM=GDG1
//FILE1 DD DSN=DA0001T.ACCOUNTS.MONTHLY(+1),
// DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
// SPACE=(TRK,(30,10),RLSE),
// DCB=(MODEL.DCB,
// RECFM=FB,LRECL=80,
// BLKSIZE=800)
Example 9.6 Adding a Dataset to a GDG
Deleting GDG Index and Datasets
FORCE parameter in the DELETE statement of IDCAMS can be used
Example :
//STEP1 EXEC PGM=IDCAMS
//SYSIN DD *
DELETE DA0001T.ACCOUNTS.MONTHLY -
GDG -
FORCE
/*
//
Example 9.7 Deleting GDG Index and Datasets
Back to VSAM index
10. COBOL VSAM Considerations
SELECT CLAUSE
SELECT file ASSIGN TO DDNAME / AS-DDNAME
ORGANIZATION IS SEQUENTIAL/INDEXED/RELATIVE
ACCESS MODE IS SEQUENTIAL/RANDOM/DYNAMIC
RECORD KEY IS primary Key Dataname
ALTERNATE RECORD KEY IS Alternate Key Dataname [WITH DUPLICATES]
FILE STATUS IS status-key.
Example 10.1 SELECT clause for VSAM datasets
status-key : the COBOL file status is a two-character field (PIC X(2)). VSAM also returns an extended status made up of three fields:
9(2) - Return code
9(1) - Function code
9(3) - Feedback code
FD Entry
Should have the record structure
For a KSDS, the key field in the record description must match the length and offset given in the KEYS parameter of DEFINE CLUSTER, as in the sketch below.
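A minimal sketch, assuming a cluster defined with KEYS(4 0) and RECORDSIZE(80 80) as in Example 7.3 (the COBOL names are hypothetical): LIB-KEY starts at offset 0 and is 4 bytes long, matching KEYS(4 0), and the full record is 80 bytes, matching RECORDSIZE(80 80).
SELECT LIB-FILE ASSIGN TO LIBMAST
ORGANIZATION IS INDEXED
ACCESS MODE IS DYNAMIC
RECORD KEY IS LIB-KEY
FILE STATUS IS WS-STATUS.
FD LIB-FILE.
01 LIB-RECORD.
05 LIB-KEY PIC X(4).
05 LIB-DATA PIC X(76).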
File Processing
Regular COBOL file handling commands
Alternate index processing :
In JCL there must be a DD statement for base cluster and one or more DD statement for alternate index path name.
Note: There is no COBOL standard for assigning ddnames to alternate indexes, so a quasi-standard has emerged whereby a sequence number is added as the eighth character of the base cluster ddname (LIBMAST1, LIBMAST2, and so on in the example below).
//LIBMAST DD DSN=DA0001T.LIB.KSDS.CLUSTER,
// DISP=SHR
//LIBMAST1 DD DSN=DA0001T.LIB.KSDS.NAME.PATH,
// DISP=SHR
//LIBMAST2 DD DSN=DA0001T.LIB.KSDS.DEPT.PATH,
// DISP=SHR
Example 10.2 JCL to access AIX
Remark:
No matter how many alternate indexes you specify in the program, there’s only one ASSIGN clause pointing to the ddname of the base cluster.
SELECT file ASSIGN TO LIBMAST
RECORD KEY IS ............
ALTERNATE KEY IS .........
[WITH DUPLICATES]
Example 10.3 Cobol SELECT clause for AIX
FD : the record description should contain both the primary key dataname and the alternate key dataname.
KEY of reference : READ filename KEY IS dataname, where dataname is the primary or alternate key (explained below).
Key of Reference.
The key that is currently being used to access records is called the key of reference. When the program opens the dataset, the primary key becomes, by default, the key of reference. The primary key remains the key of reference when accessing records until it is changed. To start accessing records by an alternate index key, you merely change the key of reference by using the KEY phrase as part of one of the following statements.
A random READ statement, for example
READ EMP-MAST KEY IS EMP-NAME
Example 10.4 READ
A sequential READ statement, for example
READ EMP-MAST NEXT
KEY IS EMP-NAME
Example 10.5 READ for Accessing AIX
START statement, for example
START EMP-MAST
KEY IS EQUAL TO EMP-NAME.
Example 10.6 START verb
key-1 key-2 Cause
Successful completion :
0 0 No further information.
0 2 Duplicate key detected.
0 4 Wrong fixed-length record.
0 5 Dataset created when opened. With sequential VSAM datasets, 0 is returned.
0 7 CLOSE with NO REWIND or REEL, for non-tape.
End-of-file :
1 0 No further information.
1 4 Relative record READ outside dataset boundary.
Invalid key :
2 1 Sequence error.
2 2 Duplicate key.
2 3 No record found.
2 4 Key outside boundary of dataset.
Permanent I/O error :
3 0 No further information.
3 4 Record outside dataset boundary.
3 5 OPEN and required dataset not found.
3 7 OPEN with invalid mode.
3 8 OPEN of dataset closed with LOCK.
3 9 OPEN unsuccessful because of conflicting dataset attributes.
Logic error :
4 1 OPEN of dataset already open.
4 2 CLOSE for dataset not open.
4 3 READ not executed before REWRITE.
4 4 REWRITE of different record size.
4 6 READ after EOF reached.
4 7 READ attempted for dataset not opened I-O or INPUT.
4 8 WRITE for dataset not opened OUTPUT, I-O or EXTEND.
4 9 DELETE or REWRITE for dataset not opened I-O.
Specific compiler-defined conditions :
9 0 No further information.
9 1 VSAM password failure.
9 2 Logic error.
9 3 VSAM resource not available.
9 4 VSAM sequential record not available.
9 5 VSAM invalid or incomplete dataset information.
9 6 VSAM - no DD statement.
9 7 VSAM OPEN successful. Dataset integrity verified.
VSAM I/O error processing
I/O error handling is one vital area where VSAM dataset processing differs from non-VSAM dataset processing. When processing non-VSAM datasets, most programmers code their application programs to ignore errors, because the access method would abend the program if a serious I/O error occurs. Not so when processing VSAM datasets.
The COBOL FILE STATUS Key
VSAM places program control in the hands of the programmer, not the O/S. For this reason, it is important to check the COBOL status key designated in the FILE STATUS clause after every I/O operation. For some error keys you'll want to abend the program immediately; for others you can just display the key, the record, and an informative message and continue processing. A short COBOL sketch of this kind of checking follows the status-code lists below.
For these status key values, continue processing normally :
00 successful I/O.
02 duplicate alternate key encountered (expected).
10 end of file.
For these status key values, bypass the record, display pertinent information, and continue processing :
21 Input record out of sequence.
22 duplicate primary key or unique alternate key
encountered (un-expected).
23 record (or Key) not found.
Note: You may want to have the program count the number of times these key values are returned and terminate the program if the counter reaches an unacceptable number, which would likely indicate that your input is bad.
For the following status key values, terminate the program :
24 out-of-space condition (KSDS or RRDS).
30 Nonspecific I/O problem.
34 out-of-space condition(ESDS).
49 REWRITE attempted; dataset not opened for I-O.
90 Dataset unusable or logic error.
92 logic error.
93 Resource not available.
94 current record pointer undefined.
95 Nonzero HURBA for OPEN OUTPUT.
96 No corresponding JCL DD statement.
97 If your shop has enabled the implicit VERIFY command, this means that the dataset was opened after an implicit VERIFY, and you can continue processing.
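A minimal COBOL sketch of this approach, reusing the hypothetical names from the earlier FD sketch (WS-STATUS is the FILE STATUS field, WS-SEARCH-KEY and WS-ERROR-COUNT are working-storage fields, and the PERFORMed paragraphs are placeholders):
MOVE WS-SEARCH-KEY TO LIB-KEY
READ LIB-FILE
INVALID KEY CONTINUE
END-READ
EVALUATE WS-STATUS
WHEN '00'
WHEN '02'
PERFORM 200-PROCESS-RECORD
WHEN '23'
DISPLAY 'RECORD NOT FOUND FOR KEY ' LIB-KEY
ADD 1 TO WS-ERROR-COUNT
WHEN OTHER
DISPLAY 'UNEXPECTED FILE STATUS ' WS-STATUS
PERFORM 999-TERMINATE
END-EVALUATE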
Back to VSAM index
VSAM Index
1. INTRODUCTION TO VSAM
Features of VSAM
Advantages of VSAM
Types of VSAM Datasets
VSAM history
2. VSAM Catalogs
Vsam catalog
3. Inside VSAM Datasets
Control Interval
Spanned Records
ESDS
KSDS
KSDS Structure
4. IDCAMS COMMANDS
Format of IDCAMS command
IDCAMS return codes
Defining an ESDS Cluster
5. LISTCAT
6. Creating Alternate Indexes
Building Alternate Indexes
7. Reorganizing VSAM datasets
With REPRO you can do the following
Redefine the cluster using IDCAMS DEFINE CLUSTER command
8. VERIFY, PRINT, DELETE, ALTER Commands
VERIFY
DELETE
ALTER
9. Generation DataSets
Physical deletion of entry
10. COBOL VSAM Considerations
Friday, May 8, 2009
MVS (Multiple Virtual Storage)
1. Introduction
2. Computing Environment
Key Concepts and Terminology
Command Processing
Data Processing
Multi-programming
Multi-programming Overheads
Relevance of Multi-programming
Multi-processing
Spooling
Virtual Storage
3. Typical IBM Main Frame Site
4. IBM Operating Systems
MVS Evolution
5. Operating System Considerations
a) Process Management
b) Memory Management
c) Input-Output Management
System 370 I/O Architecture
6. IBM Hardware
7. Key Terminology
• Cache Memory
• Expanded Memory
• Processor Resource / System Manager (PR/SM)
• Channels
• Channel - I/O Device Connectivity
• ESCON - Enterprise System Connection
I/O Devices
• Unit Record Devices - Each record is a single physical unit
• Magnetic Tape
• DASD - Direct Access Storage Device
• 3990 Storage Controller
Data Communication Network
• Components of data communication
• 3270 Information Display System
8. Data Communication Equipment
Often Asked Questions About IBM
9. Characteristics Features Of MVS
MVS Terminology
Address Space
MVS
Paging
Demand Paging
Swapping
Page Stealing
RSM (Real Storage Manager)
ASM (Auxiliary Storage Manager)
VSM (Virtual Storage Manager)
Virtual Storage Layout
10. MVS Functions
Data Management Overview
Types of Data
Dataset Organization
Non-VSAM datasets organization
Dataset Organization
VSAM datasets organization
Data Organization - Salient Points
Data Set Naming Rules
MVS Datasets
11. MVS Concepts
How datasets are Accessed
12. Job Management Overview
What is a Job?
Job Management
Definitions
Job Scheduling
13. Dataset Allocation And Job Step Execution
14. MVS Tools Overview
Components of Job Output
An Introduction to TSO
15. Interactive System Productivity Facility (ISPF)
Primary Options Menu
Termination Panel
Key Mapping
Browsing Datasets (Option 1)
Browse Commands
Editing Datasets (Option 2)
Standard editing commands
Edit Profiles
Profile Settings
Edit Modes
Advanced Edit Options
Shifting text source
Utilities Menu
Library Utility
Dataset Utility
New dataset allocation
Renaming Dataset
Dataset information
Allocate datasets managed by SMS
Move / Copy
Move / Copy- 2
DSLIST Utility
DSLIST Dataset Selection
DSLIST Commands
Primary Commands
Reset
1. Introduction
Before you begin to work on the "Mainframe environment", which by default means "IBM Mainframe", you need to have a basic idea of the IBM mainframe operating system. Today, it is known as MVS, which expands to Multiple Virtual Storage.
The MVS operating system has evolved over many years and has adapted to changing technology and modern-day requirements. Since the user base of MVS is very large, a change is not easy to implement. The cost of mainframes is very high, and the customer base is mostly made up of long-term customers with huge applications and large databases to support. Most of these applications are also 'mission critical' applications. It is therefore imperative that any change to MVS also be backward compatible.
MVS is designed to work with many hundreds of users working together, located in the same locality or across continents. The MVS operating system was created by IBM and is a 'proprietary' OS. It has the capacity to support a large number of peripherals like disks, tapes, printers, network devices, etc. The applications on these "legacy systems" are typically those with a huge amount of data and a large user base. Examples are the banking sector, insurance sector, newspapers, material and inventory, airlines, credit card systems, billing, accounting, shipping and others. Companies that own these mainframes are typically those that are inherently very big or have to deal with vast amounts of data that has to be processed fast.
2. Computing Environment
Key Concepts and Terminology
Command Processing
• Command Issue Mode
This is how a user (programmer / end-user) interacts with the computer, e.g. to edit a program or to execute a program
On-line Mode - Using Terminal
Batch Mode - Using Punched Cards or JCL’s
• Command Execution Mode
All computer commands can be executed in two modes
Foreground - Terminal is locked while the command is being executed
Background - Terminal is free while the command is being executed
Data Processing
How Business applications are executed
• On-line
End User performs business functions
Application programs work interactively with End User
Execution is in foreground mode
Database is immediately updated to reflect the changes
Typically used for transaction processing, queries, master updates functions
• Batch
Application programs are executed in background mode
Periodic business functions are executed automatically
“As and when” business functions are triggered by End User
Operations department is responsible for monitoring the execution
A command file is created to execute these functions
One command file may consist of multiple programs / system utilities
Typically used for bulk transaction processing, report printing, periodic processing (e.g. invoice generation, payroll calculation)
• Time Sharing
1. Resource Sharing
2. Multiple Users compete for computer resources at the same time
3. At any given point in time only one user can have control of the resources
4. What should be the basis of sharing?
• First come first served?
• Priority based?
• Who so ever can grab it - Law of Jungle?
• Equal - Democratically?
• Need based?
Usually combination of 2 and 4 is used i.e. all are equal but some are more equal!!!
• Time Slice
Each user is given control of resources for a pre-defined period - time slice
The control is passed on to next in queue user at the end of time slice (even if first user’s work is incomplete)
If the user requires I/O before the time slice is over, the control is handed over to the next user (since CPU cannot do anything until I/O is complete)
• Priority
Each user / function is assigned a priority level
The higher priority users are serviced first in a round robin fashion
Only if the higher priority users are in “wait” state for I/O completion the users in the lower priority are serviced
Time Sharing typically refers to sharing of resources in an interactive processing mode
Multi-programming
• Why Multi-programming ?
The program has CPU based and Non-CPU based instructions
CPU is kept waiting during the non-CPU based instructions execution
E.g. I/O operations (Disk, Terminal, Printer)
This results in wastage of CPU time - a precious resource
Multi-programming results in better CPU utilization
• How does it Work ?
Multiple programs are kept “ready” for execution
CPU can execute only one program at any given point in time
If the currently executing program requires I/O, it is put in a “wait” state
Another program is immediately taken for execution
On completion of I/O the program again becomes “ready” for execution
This results in an illusion that multiple programs are being executed simultaneously, hence multiprogramming.
Multi-programming Overheads
Program Queue Management
Program Status Management
Context Switching during Changeover
Multiple programs must be in main memory
Management of Common Resource Sharing (e.g. Printer)
It is critical to determine optimum level of Multi-programming to maintain certain service level.
Relevance of Multi-programming
Multi-programming is applicable even for single user system
Multi-programming is a must for multi-user system
Multi-processing
• There are multiple CPUs (processors) in one machine
• These work together under single operating system
• Each CPU executes a separate program
• O/S assigns programs to each CPU
• Essentially CPU is treated as an allocable device!!!!!
Spooling
Why Spooling?
Multiple programs may need same printer at the same time
May result in intermixing of output
Exclusive access to a program will hold other programs
Printer is much slower, results in longer “wait” state
How it is Implemented?
Output to printer is intercepted and written to a disk i.e. “spooled”
On completion of program “spooled” output is queued for Printing
This queue is processed by O/S print routine
The O/S print routine is multi-programmed along with application programs
Virtual Storage
Why Virtual Storage ?
Required to enable execution of programs which are larger than the main memory size
What is Virtual Storage ?
Technique to simulate large amount of main storage
Reality is main storage is much less
E.g. Real main storage is 16MB but virtual storage is 2GB
How Virtual Storage is Implemented ?
Program executable code is generated assuming virtual storage size
Only part of the program is loaded in main memory
Address translation mechanism is used to map virtual address to actual address
Feasible because only the instruction currently being executed and the corresponding data need to be in the main storage
Advantages of Virtual Storage
Main memory can be shared by multiple programs
Enables effective use of the limited main storage
Overheads of Virtual Storage
Address mapping
Keeping track of what is in memory and what is not
Data/Instructions need to be "brought in" main memory as and when required
“Remove” from main memory what is not currently required (to make room for instructions of other program)
Memory Management
3. Typical IBM Main Frame Site
Business Environment
Large Local or Global Operations or Both
User Community in Hundreds
Almost non-stop operations (weekly maintenance window of about 1/2 day)
Large Volumes of Data / High Volumes of Transactions
Hundreds of Applications / Mission Critical Applications
Processing Environment
On-line during prime time (might mean 24 hours for global operations)
Batch during non-prime time (wrt local time) of 12 - 15 hours
Software Environment
Variety of Databases / OLTP packages
EDI Processing
Two Tier Database Architecture - C/S and Central
Hardware Environment
Multiple Machines - Networked Together
Multiple Processors for Each Machine
Huge Number of Data Storage Devices - Disks and Tapes
Support Environment
Huge IT Departments
Application Programming Staff
Development
Maintenance / Support
DBAs
Operations Multiple Data Centers to Manage Batch Processing
System Programmers for;
- O/S
- Database Packages
- OLTP Packages
Network Support Staff
4. IBM Operating Systems
IBM Families of Operating Systems
MVS Evolution
1995 MVS/ESA 5.2.2
1993 MVS/OPEN EDITION (POSIX)
1990 SYSTEM 390
1988 MVS/ESA 16 B
1981 MVS/XA 2 GB (31-bit)
1974 OS/VS2R2(MVS) 16 MB(24-bit)
1972 OS/VS1 OS/VS2R1(SVS) 16 MB
1970 SYSTEM 370
1966 OS/MFT OS/MVT 3 MB
1966 PRIMARY CONTROL PROGRAM (PCP)
• Migrating from DOS to OS was a major change
VM is not very popular
Today most of the sites use MVS
• Major Handicaps
Limited and inefficient spooling
No Virtual Storage
• Utilities to Overcome these Handicaps
HASP - Houston Automatic Spooling Priority
- Developed unofficially (on their own initiative) by IBM employees
- Distributed free to MVT/MFT users
- Became very popular
- Eventually owned and supported by IBM
• ASP - Attached Support Processor
Developed (officially) by IBM
Intended for MVT
Several mainframes can work together under single O/S (predecessor of multi-processing?)
Provided better spooling capability
Relatively few takers
System 370
• Announced in early 70s
• Supported Virtual Storage
• New Operating Systems OS/VS were introduced
• OS/VS1 (Virtual System 1) - adopted from MFT
• OS/VS2 (Virtual System 2)
• Version SVS - Single Virtual Storage
- Adopted from MVT (1972)
• Version MVS - Multiple Virtual Storage
- Completely Rewritten (1974)
• HASP and ASP were migrated to OS/VS2 under the names JES2 and JES3
• MVS and its derivatives are the mainstay of IBM O/S now
The Von Neumann Computing Model
• Most common model for computing systems
• Proposed by John von Neumann in the 1940s
Instructions are executed one at a time sequentially
5. Operating System Considerations
a) Process Management
Problem :- According to the John von Neumann model, only one instruction gets executed at a time. What will happen if that instruction is waiting for I/O? In this case CPU time is wasted.
Solution :- Multi-programming - reclaim the CPU during I/O waits and let another program execute
b) Memory Management
Problem :- Any thing that is to be executed, must be in memory. (memory limitation)
Solution :- 1. Place task in real memory
2. Place task in virtual memory
1. Real memory implementation :
code & data are in real memory
size of code & data limited by size of installed memory
good performance, low overhead
possible wastage of memory
2. Virtual memory implementation :
based on the assumption that, for a task, not all code & data is needed in real memory all the time
implemented on a combination of real plus auxiliary storage
the operating system takes responsibility for bringing the remaining parts of the task into real memory when required.
Advantage : code and data size independent of the real memory
c) Input-Output Management
Problem :- The application should not have to worry about device characteristics. I/O devices are of the order of 100 times slower than the CPU.
Solution :- Let all I/O be handled by a specialized subsystem - the I/O Subsystem
System 370 I/O Architecture
• Channels
Provide paths between the processor & I/O devices
3090 processors can have a maximum of 128 channels
A channel itself is a computer & executes I/O instructions called channel commands
I/O devices are connected to channels through an intermediate device called “Control Unit”.
Each channel can have up to 8 control units.
• Control Unit
DASD units can be connected to a common control unit, called a "String Controller".
A String Controller can be connected to a channel directly or indirectly.
A control unit called a "Storage Control" connects string controllers to a channel.
6. IBM Hardware
How do today’s PC and medium sized IBM MF compare?
Characteristics            PC (Pentium 100)      Main-Frame (4381)
Main Memory                16-32 MB              32 MB
Individual Disk Storage    1.2 GB                946 MB
Monitor                    SVGA / Graphics       Character-based dumb terminal
Where does the power of IBM MF come from?
Multiple processors with partitioning capability
Cache memory and expandable memory
Multi-user / Multi-programming Support
Batch and on-line processing support
Local and remote terminal support
High number of devices
Strong data management capability
7. Key Terminology
• Cache Memory
High speed memory buffer (faster than main memory)
Operates between CPU and main memory
Used to store frequently accessed storage locations (instructions)
Usually available on all processors
• Expanded Memory
Supplements main memory
Not directly available to application program
Not directly accessible by CPU
Implemented using high-speed semiconductor storage
Usually available with higher-end machines
• Processor Resource / System Manager (PR/SM)
Used to control Multi-processor Configurations
Allows division of multi-processors in partitions - LPAR
Each partition functions as independent system
Enables fault tolerance implementation by designating Primary and Secondary Partitions
Secondary partition takes over automatically if primary fails
Allows reconfiguration of I/O channel to partitions
• Channels
Device Management Concept - Unique to IBM
Provides access path between CPU and I/O devices (DMA)
Up to eight control units can be connected to one channel
Up to eight I/O devices can be connected to one control unit
A channel is a small computer in itself with a set of instructions (Channel commands)
Channel controls the I/O device operations independent of CPU
Channel processing can overlap CPU processing - improved performance
• Channel - I/O Device Connectivity
Parallel architecture i.e. all bits of a byte are transmitted simultaneously
Information transfer is in unit of two bytes
Sixteen data wires and additional control wires are required
Maximum length of 120 meters (400 feet)
Data speed of 4.5 MB/sec
Use of copper results in heavy, expensive cabling
• ESCON - Enterprise System Connection
Announced in 1990
Uses fiber optic
Results in reduced size and weight
Length limit extended to approximately 42Km (26 miles)
Faster data speed (17 MB/sec)
I/O Devices
• Unit Record Devices - Each record is a single physical unit
Card Devices (now obsolete) : Readers / Punches / Reader and Punches
Printer
- Impact Printers - 600 to 2000 LPM
- Non-Impact Printers - 3800 sub-system, 20,000 LPM
Built-in control units for each device
Directly attached to channel
• Magnetic Tape
High volume storage
Sequential processing
Normally used as back-up device
Also used for physical transfer of data
4 to 8 tape drives are connected to one control unit
• DASD - Direct Access Storage Device
IBM’s official name for Disk
Non-removable - offers better reliability and are faster
Each unit is called a disk pack or Volume
Each pack has multiple surfaces
Each surface has multiple tracks
Same track no. of all surfaces together constitute a Cylinder
DASD capacity ranges from 100 MB (3330) to 8514MB (3390/9)
A group of DASDs of same type are connected together to form a String and are connected to a string controller
Multiple string controller are connected to a storage controller
Storage controller is connected to channel
• 3990 Storage Controller
Can connect 2 strings of 32 DASDs each (3390 models), a total of 64 DASDs
Consists of high speed cache storage (32MB to 1024MB)
Data is buffered using cache
Frequently accessed data is stored in Cache - improved performance
Supports more than 4 channel connection to processor
- Enables several simultaneous disk operations
Data Communication Network
Allows local and remote terminals access to the computer systems
• Components of data communication
Host Computer - System/370 processor
Communications Controller - Attached to the channel
- Devices (terminals and printers) are connected to the terminal controller (also known as cluster controller)
- Terminal controller is connected to communications controller
- Terminal Controller managing Local terminals / printers can be connected directly to the channel
Modems and telecommunication lines (telephone line, Satellite Link)
- Remote terminals / printers are connected to terminal controller (at local site)
- Terminal controller is connected to modem
- Modem is connected to telecommunications line
- At the receiving end telecommunications line is connected to modem
- Modem is connected to communication controller
• 3270 Information Display System
Sub-system of terminals, printers and controllers connected to Host computer
Locally through communications controller or directly to channel
Remotely through communications controller, modem and telecommunications line
A typical 3270 terminal controller (3274) controls up to 32 terminals / printers
Emulator programs (Shine Link, Erma Link) allow computers (typically PCs) to mimic 3270 devices
These are useful since they allow upload / download of data between MF and PC
8. Data Communication Equipment
Data communication equipment lets an installation create a data communication network that lets users at local and remote terminals access the computer system
• At the center of the network is the host system, a system/370 processor
• The control unit that attaches to the host system’s channels is called a communication controller
it manages the communication function
necessary to connect remote terminal system
via modems and telecommunication lines
• A modem is a device that translates digital signals from the computer equipment at the sending end into audio signals that are transmitted over a telecommunication line, which can be a telephone line, a satellite link or some other type of connection
• At the receiving end of the line, another modem converts those audio signals back into digital signals
Often Asked Questions About IBM
Why is grasping IBM difficult?
User interface is poor
- Non-intuitive
- Unfriendly and Formidable
Ancient terminology (e.g. Card, Punch queue) which is irrelevant now
Different terminology (e.g. DASD, DATA SET)
Too many options / parameters
Too many terms / acronyms
Variety of software results in site specific variations
Why is IBM so Popular
Sturdy and Secure HW/SW
Downward compatibility (does not make application SW obsolete)
Excellent customer support
Variety of software (Databases, OLTP packages) - IBM and Third Party
Market Leader
- First in DBMS, first in OLTP, first in PC
- First to develop a chess-playing computer that beat the world champion
The old legend : Nobody got fired for buying IBM
Future of IBM (is there any?)
Large existing application base will need support
Downsizing will need knowledge of current application/platform
Dual skills will be much in demand
Not all applications are suitable for downsizing - many will remain on MF
MF will be increasingly used as back-end server
New applications (data warehousing type) will be developed on MF
Multi-tier architecture will become common
Bottom Line : It is too soon to herald death of IBM MF
9. Characteristics Features Of MVS
1) VS : The use of virtual storage increases the number of storage locations available to hold programs and data
2) MULTIPROGRAMMING : Multiprogramming simply reclaims the CPU during idle periods to let other programs execute
3) SPOOLING : To provide shared access to printer devices, spooling is used
4) BATCH PROCESSING : When batch processing is used, work is processed in units called “Jobs”. A job may cause one or more programs to be executed in sequence. Batch jobs get collectively processed by the system
5) TIMESHARING : In this system, each user has access to the system through a terminal device. Instead of submitting jobs that are scheduled for later execution, the user enters commands that are processed immediately.
Time sharing is also called Online Processing because it lets users interact directly with the computer.
MVS Terminology
Address Space
• An address space is simply the complete range of addresses, and as a result the number of storage locations, that can be accessed by the computer.
• An address space is a group of digits that identify a physical location in main storage
• In MVS an address space has 24-bit addresses, i.e. 16MB (2^24 bytes) of addressability.
• MVS allows each programmer to use all 16MB address space, even though real storage
includes only, for example 4MB physical locations.
• In MVS, references in the program address space are not associated with a particular real storage location. They remain references to particular pieces of information, called Virtual Addresses. They become real only when assigned to a physical location.
• When the program is ready to execute the system, using a system/370 hardware feature called Dynamic Address Translation(DAT), maps the virtual addresses in the program to the real storage addresses.
• By doing this, MVS can make the program address space larger than the number of physical location available in real storage.
MVS
• It uses real storage to simulate several address spaces, each of which is independent of the others
• Auxiliary storage and real storage are used in combination to simulate several virtual storage address space
• Each batch job or TSO user is given its own address space
• Various factors, such as the speed of the processor and the amount of real storage installed, effectively limit the number of address spaces that can be simulated.
• To provide for the larger virtual storage, MVS treats DASD as an extension of real storage
• Only one address space can be in control of the CPU at any one time
Paging
• To enable the movement of the parts of a program executing in virtual storage between real storage and auxiliary storage, the MVS system breaks real storage, virtual storage & Auxiliary storage into blocks.
A block of Real Storage is a Frame
A block of Virtual Storage is a Page
A block of Auxiliary storage is a Slot
• A page, a frame and a slot are all the same size; each is 4K bytes
• An active virtual storage page resides in a real storage frame, an inactive virtual storage page resides in an auxiliary storage slot
• Moving pages between real storage frames and auxiliary storage slots is called PAGING
Demand Paging
• Assume that DAT encounters an invalid page table entry during address translation, indicating that a page is required that is not in a real storage frame. To resolve this Page Fault, the system must locate an available real storage frame to map the required page (page-in). If there is no available frame, an assigned frame must be freed. To free a frame, the system moves its contents to auxiliary storage. This movement is called a Page-Out.
• System performs page-out only when the contents of the frame have changed since the page was brought into real storage.
• Once a frame is located for the required page, the contents of the page are moved from auxiliary storage to real storage. This movement is called a Page-In.
• The process of bringing a page from auxiliary storage to real storage in response to a Page Fault is called DEMAND PAGING
• MVS tries to avoid the time consuming process of demand paging by keeping an adequate supply of available real storage frames constantly on hand. Swapping is one means of ensuring this adequate supply. Page stealing is another.
Swapping
• Swapping is the movement of an entire address space between Virtual storage & Auxiliary storage.
• It is one of the several methods MVS employs to balance system workload, as well as to ensure that an adequate supply of available real storage frames is maintained.
• Address spaces that are swapped in are active, having pages in real storage frames & pages in auxiliary storage slots.
• Address spaces that are swapped out are inactive; the address space resides on auxiliary
storage and cannot execute until it is swapped in.
Page Stealing
• If there are not enough 4K frames available, frames which have not been referenced for a long time are taken away and their contents written to auxiliary storage, freeing those 4K frames. This is known as Page Stealing.
• The paging process is managed by several components of MVS. The 3 major ones are :
Real Storage Manager (RSM)
Auxiliary Storage Manager(ASM)
Virtual Storage Manager (VSM)
RSM (Real Storage Manager)
manages real storage
directs movements of pages among real and auxiliary
builds segment & page table
ASM (Auxiliary Storage Manager)
keeps track of the contents of the page datasets and swap datasets
page datasets contain virtual pages that are not currently occupying a real storage frame (paged-out pages)
swap datasets contain the LSQA pages of swapped-out address spaces
VSM (Virtual Storage Manager)
controls allocation/deallocation of virtual storage
maintains storage use information for the System Management Facility (SMF)
Virtual Storage Layout
Each Virtual Storage Address Space consists of a System Area, a Private Area and a Common Area.
System Area
• It contains the nucleus load module, page frame table entries, data blocks for system libraries, and various other items
• Nucleus and other contents of the System Area make up the resident part of the MVS system control program
• Its contents are mapped one for one into real storage frames at initialization time.
• The size of System Area does not change once it is initialized
Common Area
• It contains parts of the system control program, control blocks, tables and data areas
• The basic parts of the Common Area are:
System Queue Area (SQA)
Pageable Link Pack Area (PLPA)
Common Service Area (CSA)
System Queue Area
contains tables and queues relating to the entire system
the contents of SQA depends on an installation’s configuration & job requirement.
It is allocated from the top of the virtual storage in 64K segments, a minimum of 3 segments
are allocated during system initialization.
Allocated SQA space is both non-swappable and non-pageable
Pageable Link Pack Area
Contains SVC routines, access methods, other system programs, and selected user programs.
It is pageable
Because the modules in PLPA are shared by all users, all program modules in PLPA must be reentrant and read-only
PLPA space is allocated in 4K block directly below SQA.
The size of PLPA is determined by the number of modules included
Once the size is set, PLPA does not expand
Common Service Area
Contains pageable system and user data areas.
It is addressable by all active virtual storage address space and shared by all swapped-in users.
Virtual storage for CSA is allocated in 4K pages directly below PLPA.
Private Area
The Private Area is made up of :
Local System Queue Area(LSQA)
Scheduler Work Area(SWA)
Subpools 229/230
System Region
User Region
The user region is the space within Private Area that is available for running the user’s program
Local System Queue Area
• LSQA contains tables and queues that are unique to a particular address space
Scheduler Work Area
• SWA contains control blocks that exist from task initiation to task termination
• The information in SWA is created when a job is interpreted and used during job initiation and execution
• It is pageable and swappable
10. MVS Functions
Data Management Overview
Anything that needs to be stored and accessed on user request is data to MVS
Types of Data
Business Data
Database
Indexed Files
Flat Files
Application Components
Source Programs
Executable Programs
Screen Definitions
Record Layout Definitions
Command File Scripts
MVS (System) Data
O/S program
User Information (ID, Password, Profile)
Access Permissions
Temporary Data
O/S Built Data (e.g. task queues, segment table, page table)
Spooled Output
Work Files for Sort
Dataset Organization
• Dataset organizations fall into two categories under MVS : VSAM and NON-VSAM
• Non-VSAM provides four basic ways of organizing data stored in datasets
Physical Sequential
Indexed Sequential
Direct
Partitioned
• VSAM provides four basic ways of organizing data stored in datasets
Entry Sequence Dataset - ESDS
Key Sequence Dataset - KSDS
Relative Record Dataset - RRDS
Linear Dataset - LDS
Non-VSAM datasets organization
Physical Sequential
• Records are stored one after another in consecutive sequence
• Can reside on almost any type of I/O device
• Appropriate when file’s records don’t have to be retrieved at random
Indexed Sequential
• Includes an index, which relates key field values to the location of their corresponding data records
Direct
• Permits random access of records
• It doesn’t use an index
• To access a record, the disk location address of that record (derived by hashing) has to be specified
Partitioned
• Consists of one or more members
• Each of these members can be processed, as if it were a separate physical sequential file.
• Names of members in a Partitioned dataset (PDS) are stored in a directory
Dataset Organization
Partitioned Data Set - Salient Features
Commonly referred to as a PDS
Also known as Library
Used to store application components
PDS is divided into one or many members
Member name can be up to 8 characters long
There is no extension for member
Each member can be processed as an individual unit
Entire PDS can be processed as one unit
Each PDS contains a directory
Directory has an entry for each member in a PDS
- PDS Examples:
PAYROLL.TEST.SOURCE, PAYROLL.PROD.SOURCE,
INV.TEST.LOADLIB
Normally consists of 3 qualifiers, called
- PROJECT
- GROUP
- TYPE
– Personal PDS start with high level qualifier as User ID
– E.g. DA00T23.NEW.SOURCE
– Member Name Examples
– PAB0017, PAB0105, PAC0021 etc.
Usually, the application component type cannot be identified from the member name; for that, naming conventions are used for the PDS name.
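For illustration, a minimal, hedged pair of JCL DD statements showing how a whole PDS and a single member might be referenced; the library, source dataset and member names reuse the examples above and are purely illustrative:
//* Reference the entire PDS (library), e.g. as a load library
//STEPLIB  DD DSN=INV.TEST.LOADLIB,DISP=SHR
//* Reference a single member of a PDS as input
//SYSIN    DD DSN=PAYROLL.TEST.SOURCE(PAB0017),DISP=SHR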
VSAM datasets organization
ESDS
• Can only reside on DASD
• Functionally equivalent to Physical Sequential File
KSDS
• Functionally equivalent to Indexed Sequential File
RRDS
• Lets you retrieve the record by specifying the location relative to the start of the file
All VSAM datasets must be cataloged
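As a hedged sketch (not part of the original notes), a VSAM KSDS is typically defined and cataloged with the IDCAMS utility in a batch job; the dataset name, key length, record size, space and volume values below are only illustrative assumptions:
//DEFKSDS  JOB LA2719,'DEFINE VSAM',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(DA0034T.TRG.CUSTMAST) -
         INDEXED -
         KEYS(9 0) -
         RECORDSIZE(80 80) -
         TRACKS(5 1) -
         VOLUMES(BS3008))
/*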
Data Organization - Salient Points
Non-VSAM Data Organization was developed in mid 1960s
VSAM - (Virtual Storage Access Method) was introduced in early 1970s
VSAM was expected to replace Non-VSAM Data Organization Functions
Today, most sites use both VSAM and Non-VSAM Data Organization
VSAM is the primary data organization for user data
VSAM is also called the "native" file management system of IBM
Most of the DBMS running under MVS use VSAM as underlying Data Organization (e.g. DB2, IDMS)
Physical Sequential Data Organization is used for “flat” files
Indexed Sequential and Direct Data Organization are not very popular now (these functions are handled better by VSAM)
Partitioned Data Sets (PDS) also used by MVS to store O/S programs
Data Set Naming Rules
Data Set Naming Rules
Allows - Alpha, Digits, National Characters @, #, $, and "."
Maximum Length 44 characters for DASD, 17 for Tape
If Length is more than 8, must be broken into qualifiers of maximum 8 characters each
Qualifiers are separated by “.”
“.” are counted in overall length
First character of the qualifier must be alpha or national character
Last character of a data set name must not be "."
First qualifier is called as high-level qualifier
High-level qualifier has special significance
E.g. Data Set name PAYROLL.P9710.TRAN
Has three qualifiers
High-level qualifier is PAYROLL
Total length is 18
Dataset Tracking
Data Set Tracking Mechanisms
Label
Catalog
Label
• Data Set Label
Each data set on a DASD volume is described by a label record, called;
File label or Data Set Control Block (DSCB)
There are several formats for DSCB
DSCB describes data set’s name, it’s DASD location and other details
• DASD Label
Each DASD is labeled; called Volume Label (VOL1 label)
The DASD label is stored on the disk at the third record of track 0 in cylinder 0
DASD label contains Volume Serial Number and address of the VTOC file
• Volume Serial Number
Each DASD is identified by a unique number, the Volume Serial Number (vol-ser)
The Vol-Ser must be specified to access a Data Set which is not cataloged
• VTOC
VTOC - Volume Table Of Contents is a special file for each DASD
VTOC contains the file labels for all data sets on the volume
MVS Datasets
Label Processing
• When a dataset is stored on disk or tape, MVS identifies it with special records called ‘labels’.
• There are 2 types of DASD labels : Volume, File Label
• All DASD volumes must contain a volume label, often called a VOL1 label. This label is always in the same place on a disk volume : the 3rd record of track zero in cylinder zero.
• Volume label has 2 important functions
It identifies the volume by providing a volume serial no. : Vol-ser. Every DASD volume must have a unique six-character vol-ser.
It contains the disk address of the VTOC.
• The VTOC (Volume Table of Contents) is a special file that contains the file labels for the datasets on the volume.
• These labels, called Data Set Control Blocks (DSCBs), have several formats called Format-1, Format-2 and so on.
Format-4 DSCB : describes the VTOC itself
Format-1 DSCB : describes a dataset by supplying the dataset name, DASD location & other characteristics. Space is allocated to a DASD file in areas called extents; each extent consists of one or more adjacent tracks. A Format-1 DSCB has room to define 3 extents for a file (1 primary, 2 secondary).
Format-3 DSCB : created if a file requires more than 3 extents; it contains room for 13 additional extents, so a file can have up to 16 extents.
Format-5 DSCB : contains information about free extents that aren't allocated to files; each can define up to 26 free extents.
Catalog
• Obviates the need to specify the Vol-Ser for the data set
• Catalog Types
Master Catalog
User Catalog
• Catalog Features
Each MVS has only one Master Catalog
Master Catalog is used by MVS for system data sets
User Catalog is used for user data sets
There can be multiple User Catalogs
Master Catalog contains one entry of each User Catalog
- VSAM data sets must be Cataloged
- Non-VSAM Data Sets may or may not be cataloged
- An Alias can be created for a Catalog
• Usually, the high-level qualifier of a data set is same as the catalog name or catalog alias name
• Multiple data sets can be cataloged in single user catalog
• An alias allows data sets with different high-level qualifiers to be cataloged in a single user catalog
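As noted above, a cataloged dataset can be located by name alone, while an uncataloged one needs unit and volume information in the JCL. A hedged illustration (the second dataset name is hypothetical; the volume serial reuses one from the panels later in these notes):
//* Cataloged dataset - the catalog supplies unit and volume
//IN1   DD DSN=PAYROLL.P9710.TRAN,DISP=SHR
//* Uncataloged dataset - unit and volume must be coded explicitly
//IN2   DD DSN=PAYROLL.P9710.OLDTRAN,DISP=SHR,UNIT=SYSDA,VOL=SER=BS3008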
Data Management
• Data Management Functions (for Non-PDS)
Allocate
Process
- Add Records
- Modify Records
- Delete Records
Deallocate (delete)
Copy
Rename
Catalog
• Additional Functions for PDS
Compress
Member Management
Create, Modify, Delete, Copy, Rename
• How Data Management is Achieved
Interactively using MVS Commands
Executing MVS Utility Programs (batch mode)
Through Application Programs
- On-line Processing
- Batch Processing
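As an example of the batch-mode approach (a hedged sketch; the job name, accounting data and dataset names are illustrative), the dummy utility IEFBR14 is often used so that allocation and deletion are performed purely by the DD statements:
//DA0034TA JOB LA2719,'ALLOC',CLASS=A,MSGCLASS=X
//* Step 1: allocate (create and catalog) a new sequential dataset
//ALLOC    EXEC PGM=IEFBR14
//NEWDS    DD DSN=DA0034T.TRG.NEWFILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(5,2)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
//* Step 2: delete (uncatalog and scratch) an existing dataset
//DELETE   EXEC PGM=IEFBR14
//OLDDS    DD DSN=DA0034T.TRG.OLDFILE,DISP=(OLD,DELETE,DELETE)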
11. MVS Concepts
How datasets are Accessed
• Generally a dataset goes through three phases when handled by a program :
• Allocation
• Processing
• Deallocation
Allocation
• The process of locating an existing dataset or space for a new dataset and preparing the system control block needed to use the dataset is called “Allocation”
• Allocation occurs at 3 levels
Unit is selected and allocated e.g. SYSALLDA-DASD, TAPE
Volume is allocated
Dataset on that volume is allocated
Processing
• Processing involves 3 steps
Opening datasets
Processing I/O
Closing datasets
Deallocation
• Each file is automatically deallocated when the job is finished with it
• While deallocating, the disposition of the dataset can be decided : whether the file should be retained or deleted
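In JCL this disposition is expressed with the DISP parameter of the DD statement; a hedged example (the dataset name is hypothetical):
//* DISP=(status,normal-disposition,abnormal-disposition)
//WORK  DD DSN=DA0034T.TRG.WORKFILE,DISP=(NEW,CATLG,DELETE),
//         UNIT=SYSDA,SPACE=(TRK,(5,2))
//* At deallocation the file is kept and cataloged if the step ends normally,
//* and deleted if the step abends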
12. Job Management Overview
What is a Job?
Simply put, a job is the execution of one or more related programs in sequence
E.g. 1
A job of creating an executable module (load module) from a source program consists of executing the compiler program and then executing the linkage editor program.
E.g. 2
A job of printing invoices may consist of execution of three programs;
• an EXTRACT program to pull out transactions from database,
• a SORT program for sorting the transactions,
• a PRINT program to print the invoices.
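A hedged JCL skeleton of such an invoice job might look like the sketch below; the program names, temporary dataset names and SORT control statement are invented for illustration:
//INVPRINT JOB LA2719,'INVOICES',CLASS=A,MSGCLASS=X
//* Step 1: extract transactions from the database into a temporary dataset
//EXTRACT  EXEC PGM=EXTRACT1
//TRANOUT  DD DSN=&&TRANS,DISP=(NEW,PASS),UNIT=SYSDA,SPACE=(TRK,(10,5))
//* Step 2: sort the extracted transactions
//SORTSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=&&TRANS,DISP=(OLD,DELETE)
//SORTOUT  DD DSN=&&SORTED,DISP=(NEW,PASS),UNIT=SYSDA,SPACE=(TRK,(10,5))
//SYSIN    DD *
  SORT FIELDS=(1,9,CH,A)
/*
//* Step 3: print the invoices
//PRINT    EXEC PGM=INVPRT
//INVIN    DD DSN=&&SORTED,DISP=(OLD,DELETE)
//INVOICES DD SYSOUT=A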
Job - Salient Points
• Executed in a background mode
• Job details are specified using some command language
Job Management Functions
• Receive the job into operating system
• Schedule the job for processing by O/S
• Execute the Job
• Process the output
Stages of Job
1. Job Preparation
• User keys-in commands using Editor
• Save as a member in PDS
2. Job Scheduling
• Initiated using TSO SUBMIT command
• Not necessarily on FIFO basis
• Prioritization is implemented using concept of class and priority code
3. Job Execution
4. End of execution (normal, erroneous)
• Intimate the user
• Job log management
• Job output management
• Printer output
• Data set output
• Erroneous Termination of job
Type of execution errors
Incorrect commands (command syntax errors)
Required resources (Data Sets, Program Library, Program Load Module) not available
Violation of access permissions for data sets, program load module etc.
Mismatch in data set status, between what the job requires and what actually exists e.g. a create is issued for a data set which already exists
Program errors
Mismatch for Data set - Between program definition
and actual characteristics
Infinite loop
Data Type mismatch - numeric variable contains non-numeric data
Any abnormal termination of a program is called an "Abend"
Job Management
Definitions
• JOB - Is the execution of one or more related programs in sequence
• JOB STEP - Each program to be executed by a Job is called a job step
• JCL (Job Control Language) - Is a set of control statements that provide the specifications necessary to process a job
• JES (Job Entry Subsystem) :
Handles job entry into the system, and also the job's output after completion
Shares the load on the operating system
Takes care of all inputs and outputs
Does basic syntax checking
Resource Initialization
Creation of address space
It is also known as Job Scheduler
Classified into
- JES2 - designed for uniprocessor environments
- JES3 - designed for multiprocessor environments (decided at the time of system initialization)
Jobs are sent to MVS depending on the class priority schemes
How Job Is Entered Into the System
• When you submit the job, JES reads the job stream (sequence of JCL commands) from a DASD file and copies it to a job queue, which is part of a special DASD file called the JES SPOOL.
Job Scheduling
How Job Is Scheduled For Execution
• MVS does not necessarily process jobs in the order in which they are submitted. Instead, JES examines the jobs in the job queue and selects the most important jobs for execution. That way JES can prioritize its work, giving preference to more important jobs.
• JES uses 2 characteristics to classify a job’s importance, both of which can be specified in the job’s JCL : Job Class and Job Priority
• If two or more jobs are waiting to execute, the JES scheduler selects the one with higher priority
• Each job class is represented by a single character, either a letter (A-Z) or a digit (0-9). Job classes are assigned based on the processing characteristics of the job.
• INITIATOR :- An initiator is a program that runs in the system region of an address space. Each initiator can handle one job at a time. It examines the JES spool, selects an appropriate job for execution, executes the job in its address space and returns to the JES spool for another job.
• The number of active initiators on a system, and as a result the number of address spaces eligible for batch job processing, determines the number of batch jobs that can be multi-programmed at once.
• Each initiator has one or more job classes associated with it. It executes jobs only from those classes.
Initiator   Eligible Job Classes
1           A
2           B, C, D, H, L, T
3           B, C, D, H, L, T
4           B, C
5           B, C
6           C
• Within a job class, initiator selects jobs for execution based on their priorities, which can range from 0 to 15
• If two or more jobs have same class & priority, they are executed in the order in which they are submitted.
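Both attributes are coded on the JOB statement; a hedged example, reusing the job name and accounting data from the editor screen later in these notes (the class and priority values are illustrative):
//* CLASS=B selects the job class; PRTY=8 sets the selection priority within that class
//DA0034TA JOB LA2719,'PARAG',CLASS=B,PRTY=8,MSGCLASS=X,NOTIFY=DA0034T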
How Job Is Executed
• Once an initiator has selected job for execution, it invokes a program called the interpreter
• The interpreter's job is to examine the job information passed to it by JES and create a series of control blocks in the SWA, a part of the address space’s private area
• Among other things, these control blocks describe all of the datasets the job needs
• Now initiator goes through 3 phases for each step of job
Allocation (required resources are allocated)
Processing (region is created & program is loaded and executed)
Deallocation (resources are released)
• This continues until there are no more job steps to process
• Then, the initiator releases the job and searches the spool again for another job from the proper class to execute
• As a user's program executes, it can retrieve data that was included as part of the job stream and stored in the JES spool
How The Job’s Output Is Processed
• Like Jobs, SYSOUT data is assigned an output class that determines how the output will be handled
Common O/P classes are;
A - Printer
B - Card Punch O/P
X - Held O/P
[Held O/P stays on the sysout queue indefinitely; Usually, O/P is held so that it can be examined from a TSO terminal]
How A Job Is Purged
• After the job’s output has been processed, the job is purged from the system, i.e. JES spool space, the job used, is freed so it can be used by other jobs and any JES control blocks associated with the job are deleted
• Once a job has been purged, JES no longer knows of its existence
13. Dataset Allocation And Job Step Execution
When the user program completes, the initiator invokes the unallocation routine to deallocate the resources used by the job step
14. MVS Tools Overview
Tools are a set of sub-systems and facilities that;
• Implement MVS functions
• Are directly used by MVS user
These are essentially Software programs; system programs
Interactive Processing Tools ( TSO - Time Sharing Option )
• Used by the terminal user to invoke MVS facilities interactively
• TSO internally treats each terminal user as a Job
• Job Stream is created when terminal user logs in
• Each terminal user is given a separate address space
ISPF - Interactive System Productivity Facility
• Runs as part of TSO
• Takes advantage of full screen (24 x 80) capability of 3270 terminals
• Panels are provided for terminal users for issuing commands
• Key Functions Implemented Using ISPF
Editor - Program Sources, Job Commands
Data Management - PDS and Physical Sequential Data Set Management
Job Processing - Initiate Job, Check job log
Miscellaneous
• PDF - Program Development Facility is Part of ISPF
Job Management Tools
• Job Control Language (JCL)
• Used to code job commands
• Job Entry System (JES)
• Manages the job before and after execution; receive, schedule, process output
• Base Control Program (BCP)
Manages the job during execution
• Simultaneous Peripheral Operations on-line (SPOOL)
Used for staging of input and output
Why and What of JCL?
JCL is the most dreaded word for a newcomer to the IBM world
Why JCL?
Since the job is executed in background, without user interaction, all information required for the execution must be supplied in advance
JCL is used to specify this information
The most common information supplied through JCL is;
- To whom the job belongs (which user id)?
- What is the program / utility that is to be executed?
- Where (in which library / PDS) to find the load module of the program or utility?
- Where (which DASD volume / catalog, what data set name) to find the input data files for the program / utility?
- Where should (which DASD volume, what data set name) the output files be created?
- The printer output should be directed to which printer?
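A hedged, annotated JCL sketch showing how this information might be coded; the program, library and input dataset names are modelled on the editor screen later in these notes, and the output dataset name is invented for illustration:
//* JOB card: job owner / accounting info, job class, message class, user to notify
//DA0034TA JOB LA2719,'PARAG',CLASS=A,MSGCLASS=X,NOTIFY=DA0034T
//* EXEC: which program to execute
//STEP1    EXEC PGM=PROG11
//* STEPLIB: the library (PDS) where the program's load module is found
//STEPLIB  DD DSN=DA0034T.TRG.LNK,DISP=SHR
//* Input data file, located through the catalog
//INVMAS   DD DSN=DA0034T.TRG.INVMAS,DISP=SHR
//* Output file to be created and cataloged
//OP1      DD DSN=DA0034T.TRG.REPORT,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(5,2))
//* Printer output, directed to output class A
//PRINT    DD SYSOUT=A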
• What is JCL?
Stands for Job Control Language
Connotation is; a set of job commands stored as a MEMBER in a PDS e.g. JCL to execute a batch program, JCL to compile and link a COBOL program, JCL to allocate a VSAM data set, JCL to SORT and MERGE two Physical Sequential Data Sets
Thus, JCL is nothing but a set of commands
- User keys-in commands using an editor
- Saves as PDS Member e.g. PAYROLL.TEST.JCL(PROG1JCL)
• What makes learning JCL so difficult
JCL is powerful and flexible, which leads to some complexity
It is non-intuitive
The user interface is formidable
The terms are ancient
Very little has changed since 1965 when JCL was first developed
• However, JCL can be understood and mastered with logical approach and open mind
• Good grasp of JCL is a must to be a versatile IBM programmer
JES - Job Entry System
• Introduction
Two versions of JES; JES2/JES3
- JES2 is primarily for single processor systems
- JES3 is for multiple processor systems
Each MVS system uses either JES2 or JES3
JES3 has additional scheduler functions compared to JES2 (e.g. schedule a job at a specific time of day, interdependent job scheduling)
MVS Tools Overview
• How Job Enters the System?
Job can enter the system from local or remote card readers (now obsolete)
By starting a cataloged JCL procedure (e.g. when user logs in, a predefined set of commands are executed as a batch job. These commands are stored as cataloged JCL procedure)
By interactive users through the SUBMIT command. Users can create a PDS member in which commands are specified. On issuing the SUBMIT command these are executed as a job.
We will focus on third approach
Input
On SUBMIT, internal reader reads the JCL and creates an input stream
JES2 reads the input stream, assigns a Job Number and places input stream in SPOOL data set (a message is sent to TSO user about the job number)
Job is put in the conversion queue
Conversion
Converter program analyzes JCL statements
Converts into converter / interpreter text
Checks for Syntax errors
- If any error, Job is queued for output processing
- If no error, Job is queued for processing
Processing
• Selection based on job class and priority
Selected job is passed to Initiator
Initiator invokes Interpreter
Interpreter builds control blocks from converter / interpreter text in a Scheduler Work Area (SWA)
- SWA is part of address space’s private area
- Control blocks describe the data sets required by the job
Initiator allocates resources required by the Job
- Initiator starts the program to be executed
- Builds the user region
- Loads the program in the user region
- Transfers control to the program
On completion of the program execution, initiator de-allocates the resources
The process of allocation / execution and de-allocation is repeated for each job step
Initiator Characteristics
Each initiator can handle one job at a time
There can be multiple initiators
Each initiator has a job class associated with it
System Operators can control the number of initiators and the class/es associated with each initiator
Input Data
Input data to the user’s program can be specified in the job
Called in-stream data or SYSIN data
SYSIN data is read and stored in JES spool
SYSIN data is treated like data coming from a card reader
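A hedged example of in-stream (SYSIN) data; the program name and the data records are purely illustrative:
//STEP1    EXEC PGM=PROG11
//* In-stream data follows the DD *; JES reads it into the spool at submit time
//SYSIN    DD *
ACCT01  1000.00
ACCT02  2500.50
/*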
Output
Management of System Messages, User Data Sets that need to be Printed / Punched
Organized by output class and device set-up requirements
User's program can produce output data that is stored in the JES spool; this is called SYSOUT data
Each SYSOUT data is assigned an output class
Output class indicates the printer selection
“Held” Output
- Special class (usually Z) is assigned to “hold” the output
- “Held” output remains in the SYSOUT indefinitely
- Usually used to verify before printing
- User can change the class and thus release the “held” output
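A hedged illustration of SYSOUT allocation and a held output class; output class letters are installation-defined, and class Z is used here only because the text above mentions it:
//* Routed to a printer via output class A
//REPORT   DD SYSOUT=A
//* Held in the output queue (class Z here) so it can be viewed from a TSO terminal
//TRACE    DD SYSOUT=Z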
Hard-Copy
Local or remote processing
Device Selection
Queue the output for print /punch
Purge
Release SPOOL and Job Queue space
Intimate TSO user about job completion
Job Output
Output is produced at each stage of job processing
Includes output produced by;
- JES
- MVS
- User’s program where SYSOUT is allocated as output device
Job output is available to user (you can see it dynamically)
It can be viewed using ISPF
Components of Job Output
Component 1- Separator Page
First and last page of job output
Inserted by JES
Helps the operator segregate the job outputs when directed to a printer
Component 2 – part I Job Log
Messages produced by JES
Also displayed on operator’s console
If the job Abends, error messages are logged in Job Log
Component 2 – part II Job Statistics
Summary information of system resources used by the job e.g.;
Number of JCL cards
Spool usage
Execution time
Component 3 - JCL Listing
• List of JCL that was processed for the job
• Should be same as what user has created
Component 4 - Message Log
• Messages regarding job execution
• Messages produced by MVS
• Includes details of
• Resource Allocation
• Program Execution
• Resource De-allocation
• Consists of Message label and message text
• A message label starting with IEF indicates an MVS message
• Installation specific messages
Component 5 - SYSOUT
• Separate sub-component for each SYSOUT allocation
• Each SYSOUT can have different characteristics e.g. class, record length etc.
VTAM – Virtual Telecommunications Access Method
• Telecommunications (TC) Access Method
• Required to support terminal devices
• Part of SNA – System Network Architecture
• Provides centralized control over all terminals attached to the system
• VTAM application programs (e.g. TSO, CICS, IMS-DC) communicate with terminal devices via VTAM
CICS – Customer Information Control System (optional component)
• Interactive applications are developed using CICS
• CICS is a VTAM application program
• Works with VTAM to support on-line functions
• CICS implements multi-programming within itself
• Multiple programs which are part of same application are executed within CICS address space
• CICS selects one program at a time for execution
• CICS itself is multi-programmed by MVS along with other programs
DB2 - DataBase 2 (optional component)
• Database Management System
• Relational Implementation
RACF - Resource Access Control Facility
• Comprehensive Security Package
• Though optional, it is used by most installations
• Users and Resources (e.g. Data Sets) are identified to RACF
• Whenever user tries to access a resource the security is checked by RACF
• RACF is a set of routines
• Invoked as and when required
SMF - System Management Facility
• Keeps track of system usage
– CPU, DASD I/O, Records Printed etc.
• Data collected when job is executed
• Stored in special data sets
• Used for billing
Language Translators / Linkage Editor / Loader
• Language Translators- Convert source to object module
• Separate for each language; the Assembler language translator is part of MVS
• Linkage Editor (part of MVS) - Converts an object module to an executable, i.e. load module
• Loader - Creates temporary load module (used during testing phase)
Utilities
• Set of general purpose programs
• Executed like a user program through JCL
• Common Utilities are :
IEBGENER
IEFBR14
SORT
IDCAMS
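For example, a hedged IEBGENER job that copies one sequential dataset to another; the dataset names are illustrative, and IEBGENER's fixed DD names are SYSUT1 (input), SYSUT2 (output), SYSPRINT and SYSIN:
//COPYJOB  JOB LA2719,'COPY',CLASS=A,MSGCLASS=X
//COPY     EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=DA0034T.TRG.INVMAS,DISP=SHR
//SYSUT2   DD DSN=DA0034T.TRG.INVMAS2,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(5,2)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
//* SYSIN DD DUMMY means "straight copy" with no control statements
//SYSIN    DD DUMMY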
An Introduction to TSO
• Time Sharing
• Resource sharing
• MVS handles each TSO user as it handles batch jobs
• The user specific batch job that starts up handles
what datasets are available
What terminal monitor program is to be used
what procedure to auto execute at logon
TSO Commands
• About 26 commands providing a variety of functions can be used.
• Allow Dataset Management functions
• Program Development functions.
• Batch job functions.
• Other functions like Help, Broadcast, Clist and Rexx.
• You can issue these at the READY prompt or via the TSO command (e.g. from ISPF option 6).
Dataset Management functions
• Allocate Datasets dynamically
• List Datasets
• Print Datasets
• Copy Datasets
• Delete Datasets
• Rename Datasets
• List Catalog Entries
• List VTOC Entries
• Use AMS Services
Program Development functions
• Create program.
• Edit program.
• Compile program.
• Linkedit a program.
• View output.
• Route output to a printer
Batch job functions
• Submit Jobs
• Monitor job
• View output
• Route output
Help
• Help on TSO commands can be obtained by typing “HELP” at the “READY” prompt.
15. Interactive System Productivity Facility (ISPF)
• Access to ISPF is gained by Keying ISPF at the READY prompt
• This is done as default in the auto executed clist at startup.
• When this is entered you get the Primary Options Menu.
Primary Options Menu
----------------------- ISPF/PDF PRIMARY OPTION MENU ------------------------
OPTION ===> pfshow USERID - DA0034T
0 ISPF PARMS - Specify terminal and user parameters TIME - 06:58
1 BROWSE - Display source data or output listings TERMINAL - 3278
2 EDIT - Create or change source data PF KEYS - 12
3 UTILITIES - Perform utility functions
4 FOREGROUND - Invoke language processors in foreground
5 BATCH - Submit job for language processing
6 COMMAND - Enter TSO Command, CLIST, or REXX exec
7 DIALOG TEST - Perform dialog testing
8 LM UTILITIES - Perform library administrator utility functions
9 IBM PRODUCTS - Additional IBM program development products
10 SCLM - Software Configuration and Library Manager
C CHANGES - Display summary of changes for this release
T TUTORIAL - Display information about ISPF/PDF
X EXIT - Terminate ISPF using log and list defaults
D DATACENTER - Perform Datacenter Defined Functions
S SDSF - Spool Display and Search Facility
U USER - Perform User Defined Functions
F1=HELP F2=SPLIT F3=END F4=RETURN F5=RFIND F6=RCHANGE
F7=UP F8=DOWN F9=SWAP F10=LEFT F11=RIGHT F12=RETRIEVE
PA/PF Key Map
PF1 ===> HELP Enter the Tutorial
PF2 ===> SPLIT Enter Split Screen Mode
PF3 ===> END Terminate the current operation
PF4 ===> RETURN Return to primary options menu
PF5 ===> RFIND Repeat find
PF6 ===> RCHANGE Repeat Change
PF7 ===> UP Move screen window up
PF8 ===> DOWN Move screen window down
PF9 ===> SWAP Activate the other logical screen in split screen mode
PF10 ===> LEFT Scroll screen left
PF11 ===> RIGHT Scroll screen right
PF12 ===> RETRIEVE Retrieve last command
PA1 ===> ATTENTION Interrupt Current operation
PA2 ==> RESHOW Redisplay the current screen
PF1 - PF12 Keys may be duplicated from PF13 to PF24 in 24 key mode.
Split Screen Mode and Tutorial (Help)
• Entered by keying “SPLIT” on the command line
• or by positioning the cursor where required and pressing PF2
• Context Sensitive help can be accessed by typing help on the command line or through the PF1 key
List and Log files
• Some ISPF commands generate output. Printed output like this is collected and stored in a special dataset called the list dataset.
• Whether the list dataset is to be retained, printed and/or deleted can be specified as a default in the setup panels.
• The ISPF operations done are recorded in a Log dataset. The disposition can be specified in the defaults panel.
User Profile
• ISPF maintains a user profile
• This profile contains default values of various entry panels.
Exiting ISPF
To terminate ISPF you can
• type =x at the command line
• or use the PF3 key to exit
If you haven’t specified default dispositions for your List and log datasets then the termination panel is displayed.
Termination Panel
------------------- SPECIFY DISPOSITION OF LOG DATA SET ---------------------
COMMAND ===>
LOG DATA SET DISPOSITION LIST DATA SET OPTIONS NOT AVAILABLE
------------------------- -----------------------------------
Process option ===>
SYSOUT class ===>
Local printer ID ===>
VALID PROCESS OPTIONS:
PD - Print data set and delete
D - Delete data set without printing
K - Keep data set (allocate same data set in next session)
KN - Keep data set and allocate new data set in next session
Press ENTER key to complete ISPF termination.
Enter END command to return to the primary option menu.
Key Mapping
Option 0.3
------------------------ PF KEY DEFINITIONS AND LABELS ------------------------
COMMAND ===>
NUMBER OF PF KEYS ===> 12 TERMINAL TYPE ===> 3278
PF1 ===> HELP
PF2 ===> SPLIT
PF3 ===> END
PF4 ===> RETURN
PF5 ===> RFIND
PF6 ===> RCHANGE
PF7 ===> UP
PF8 ===> DOWN
PF9 ===> SWAP
PF10 ===> LEFT
PF11 ===> RIGHT
PF12 ===> RETRIEVE
PF1 LABEL ===> PF2 LABEL ===> PF3 LABEL ===>
PF4 LABEL ===> PF5 LABEL ===> PF6 LABEL ===>
PF7 LABEL ===> PF8 LABEL ===> PF9 LABEL ===>
PF10 LABEL ===> PF11 LABEL ===> PF12 LABEL ===>
Browsing Datasets (Option 1)
------------------------- BROWSE - ENTRY PANEL ------------------------------
COMMAND ===>
ISPF LIBRARY:
PROJECT ===> DA0034T
GROUP ===> TRG ===> ===> ===>
TYPE ===> JCL
MEMBER ===> (Blank or pattern for member selection list)
OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>
VOLUME SERIAL ===> (If not cataloged)
DATA SET PASSWORD ===> (If password protected)
MIXED MODE ===> NO (Specify YES or NO)
FORMAT NAME ===>
Browse Commands
• Cols - for displaying Columns
• Scroll up, down, left and right with PF7, PF8, PF10 and PF11 respectively.
• Set Scroll amount to CRSR, HALF, PAGE, n lines, max, DATA
• Scroll by n lines, to top or bottom
• Define/LOCATE {line number}/label.
• FIND string {NEXT/PREV/FIRST/LAST/ALL}.
• PF5 for repeat find and use of “&”.
• Use of PF12 to recall last command.
• Terminate Browse with PF3 Key.
• FIND string {NEXT/PREV/FIRST/LAST/ALL} {CHAR/PREFIX/SUFFIX/WORD} col-1 col-2
• Column limitation search
• T ’text’ - for case insensitive search
• X ’hex-string’ for a hex search
Editing Datasets (Option 2)
• The Primary Editor entry is similar to that for Browse as regards concatenating datasets and dataset selection.
• Labels can be defined as in browse but may be entered as line commands.
• Error messages may be removed by typing RESET on the command line.
Standard editing commands
I/In Insert 1 or n lines.
D(n) Delete line or n lines.
DD Delete the block marked by the 2 DD line commands.
R(n) Repeat 1 or n lines.
RR Repeat the block marked by the 2 RR line commands.
C(n) Copy 1 or n lines.
CC Copy the block marked between the 2 CC line commands.
M(n) Move 1 or n lines.
MM Move the block marked between the 2 MM line commands.
A(n) Copy or Move lines 1 or n times after this line.
B(n) Copy or Move lines 1 or n times before this line.
Creating datasets and exiting editor
To create a new member, specify a non-existent member name in the current PDS.
You can quit the editor without saving changes by the CANCEL command.
You can update the dataset with the save command
You can exit with implicit save with the END command or PF3 key.
Edit Profiles
Edit profiles control editing options
Normally editing a new dataset uses the default profile - the dataset type
To display the edit profile type PROFILE on the command line in the editor
To remove it from the screen type RESET.
This gives you a display as follows..
EDIT ---- DA0034T.TRG.JCL(JCL1) - 01.27 ---------------------- COLUMNS 001 072
COMMAND ===> SCROLL ===> CSR
****** ***************************** TOP OF DATA ******************************
=PROF> ....STD (FIXED - 150)....RECOVERY OFF....NUMBER ON STD..................
=PROF> ....CAPS ON....HEX OFF....NULLS ON STD....TABS ON STD....SETUNDO OFF....
=PROF> ....AUTOSAVE ON....AUTONUM OFF....AUTOLIST OFF....STATS ON..............
=PROF> ....PROFILE UNLOCK....IMACRO NONE....PACK OFF....NOTE ON................
=BNDS> <
=TABS>
=COLS> ----+----1----+----2----+----3----+----4----+----5----+----6----+----7--
000100 //DA0034TA JOB LA2719,'PARAG',NOTIFY=DA0034T,
000200 // CLASS=A,MSGCLASS=X
000300 //*
000400 //COBRUN EXEC PGM=PROG11
000500 //STEPLIB DD DSN=DA0034T.TRG.LNK,DISP=SHR
000510 //*STEPLIB DD DSN=DA0034T.TRG.COBOL2,DISP=SHR
000600 //INVMAS DD DSN=DA0034T.TRG.INVMAS,DISP=SHR
000700 //OP1 DD SYSOUT=*
000710 //*OP1 DD DSN=DA0034T.TRG.EXE7,DISP=(NEW,CATLG,CATLG),
Profile Settings
• To switch to a different profile, key PROFILE profile-name on the command line
• To lock a profile, at the command line key “PROFILE LOCK”
• Any changes made to the locked profile are not saved permanently.
• Caps, number Pack and STATS modes are set each time you begin an edit session
• To define tab stops, key TABS on the command line and place '@' on the tabs line one character before where you would like a tab stop. On the command line, key TABS ON/OFF.
• If you omit the tabbing character hardware tabbing is assumed.
• Line control Commands
Nonumber/NUM OFF turns off line numbering
NUM ON turns on line numbering
AUTONUM resequences line numbers on save
RENUM resequences line numbers
NUM ON COBOL checks for valid COBOL numbering
NUM ON STD checks for standard line numbering
UNNUM removes line numbering.
Edit Modes
• STATS ON/OFF Controls dataset statistics
• AUTOLIST ON/OFF Controls Automatic listing
• NULLS ON/OFF Controls if nulls or spaces are padded.
• RECOVERY ON/OFF Recovers a dataset being edited in case of a system crash. It also permits the use of the UNDO command. This works up to the last save only.
• HEX ON/OFF Displays data in hexadecimal as well as character mode.
• CAPS ON/OFF Converts Lower case letters to upper case if set to on.
Line commands for this function are LC or UC; LCLC & UCUC are block line commands.
• PACK ON/OFF Specifies that the data is stored in compressed mode.
• AUTOSAVE ON/OFF PROMPT/NOPROMPT Auto save data when PF3 key is pressed
• IMACRO Specify initial macro to be run at startup.
Advanced Edit Options
To locate a String within another;
FIND string range NEXT/PREV/FIRST/LAST/ALL CHARS/PREFIX/SUFFIX/WORD X/NX col-1 col-2
Where:
range is denoted by 2 labels
string is the string to be found
NEXT start search at current line and locate the next occurrence of the string (default).
PREV start search at current line and locate the previous occurrence of the string.
FIRST locate the first occurrence of the string
LAST locate the last occurrence of the string
ALL Same as first but count the occurrences in the file.
CHARS any occurrence of the string
PREFIX string must be at the beginning of the word
SUFFIX string must be at the end of a word
X/NX Search only excluded/Non excluded lines
col-1 col-2 starting and ending column numbers defining the search boundaries.
To Modify/Change a string with another String;
CHANGE string1 string2 range NEXT/PREV/FIRST/LAST/ALL CHARS/PREFIX/SUFFIX/WORD X/NX col-1 col-2
String2 replaces string1
Shifting text source
Data shift commands   : < <n <<  (left)    > >n >>  (right)
Column shift commands : ( (n ((  (left)    ) )n ))  (right)
Data shifts
• does not drop blank characters
• does not combine words by dropping spaces
• does not delete spaces within apostrophes
• COPY [member] [AFTER/BEFORE label]
• MOVE [member] [AFTER/BEFORE label]
• CREATE [member] [range]
• REPLACE [member] [range]
• Edit member-name to edit recursively
Utilities Menu
Option 3
------------------------- UTILITY SELECTION MENU ----------------------------
OPTION ===>
1 LIBRARY - Compress or print data set. Print index listing.
Print, rename, delete, browse, or edit members
2 DATASET - Allocate, rename, delete, catalog, uncatalog, or
display information of an entire data set
3 MOVE/COPY - Move, copy, or promote members or data sets
4 DSLIST - Print or display (to process) list of data set names
Print or display VTOC information
5 RESET - Reset statistics for members of ISPF library
6 HARDCOPY - Initiate hardcopy output
8 OUTLIST - Display, delete, or print held job output
9 COMMANDS - Create/change an application command table
10 CONVERT - Convert old format menus/messages to new format
11 FORMAT - Format definition for formatted data Edit/Browse
12 SUPERC - Compare data sets (Standard Dialog)
13 SUPERCE - Compare data sets and Search-for strings (Extended Dialog)
14 SEARCH-FOR - Search data sets for strings of data (Standard Dialog)
Library Utility
Option 3.1
---------------------------- LIBRARY UTILITY --------------------------------
OPTION ===>
blank - Display member list B - Browse member
C - Compress data set P - Print member
X - Print index listing R - Rename member
L - Print entire data set D - Delete member
I - Data set information E - Edit member
S - Data set information (short)
ISPF LIBRARY:
PROJECT ===> DA0034T
GROUP ===> TRG ===> ===> ===>
TYPE ===> JCL
MEMBER ===> (If "P", "R", "D", "B", "E" or blank selected)
NEWNAME ===> (If "R" selected)
OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>
VOLUME SERIAL ===> (If not cataloged)
DATA SET PASSWORD ===> (If password protected)
Dataset Utility
Option 3.2
---------------------------- DATA SET UTILITY -------------------------------
OPTION ===> A
A - Allocate new data set C - Catalog data set
R - Rename entire data set U - Uncatalog data set
D - Delete entire data set S - Data set information (short)
blank - Data set information M - Enhanced data set allocation
ISPF LIBRARY:
PROJECT ===> DA0034T
GROUP ===> TRG
TYPE ===> JCL
OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>
VOLUME SERIAL ===> (If not cataloged, required for option "C")
DATA SET PASSWORD ===> (If password protected)
New dataset allocation
option 3.2.A
------------------------ ALLOCATE NEW DATA SET ------------------------------
COMMAND ===>
DATA SET NAME: DA0034T.TRG.JCLS
VOLUME SERIAL ===> BS3008 (Blank for authorized default volume)
GENERIC UNIT ===> (Generic group name or unit address)
SPACE UNITS ===> BLOCK (BLKS, TRKS, or CYLS)
PRIMARY QUANTITY ===> 26 (In above units)
SECONDARY QUANTITY ===> 12 (In above units)
DIRECTORY BLOCKS ===> 0 (Zero for sequential data set)
RECORD FORMAT ===> FB
RECORD LENGTH ===> 150
BLOCK SIZE ===> 1500
EXPIRATION DATE ===> (YY/MM/DD, YYYY/MM/DD
YY.DDD, YYYY.DDD in Julian form DDDD for retention period
in days or blank)
( * Only one of these fields may be specified)
Renaming Dataset
Option 3.2.R
---------------------------- RENAME DATA SET --------------------------------
COMMAND ===>
DATA SET NAME: DA0034T.TRG.JCL
VOLUME: BS3008
ENTER NEW NAME BELOW: (The data set will be recataloged.)
ISPF LIBRARY:
PROJECT ===> DA0034T
GROUP ===> TRG
TYPE ===> JCL
OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>
Dataset information
Option 3.2.S
-------------------------- DATA SET INFORMATION -----------------------------
COMMAND ===>
DATA SET NAME: DA0034T.TRG.JCL
GENERAL DATA: CURRENT ALLOCATION:
Management class: MCSTANDS Allocated blocks: 26
Storage class: SCNORM Allocated extents: 1
Volume: BS3008 Maximum dir. blocks: 1
Device type: 3390
Data class:
Organization: PO CURRENT UTILIZATION:
Record format: FB Used blocks: 11
Record length: 150 Used extents: 1
Block size: 1500 Used dir. blocks: 1
1st extent blocks: 26 Number of members: 5
Secondary blocks: 12
Data set name type: PDS
Creation date: 1996/08/08
Expiration date: ***NONE***
Allocate datasets managed by SMS
------------------------ ALLOCATE NEW DATA SET ------------------------------
COMMAND ===>
DATA SET NAME: DA0034T.TRG.JCL
MANAGEMENT CLASS ===> MCSTANDS (Blank for default management class)
STORAGE CLASS ===> SCNORM (Blank for default storage class)
VOLUME SERIAL ===> (Blank for authorized default volume)
DATA CLASS ===> (Blank for default data class)
SPACE UNITS ===> BLOCK (BLKS, TRKS, CYLS, KB, MB or BYTES)
PRIMARY QUANTITY ===> 26 (In above units)
SECONDARY QUANTITY ===> 12 (In above units)
DIRECTORY BLOCKS ===> 1 (Zero for sequential data set) *
RECORD FORMAT ===> FB
RECORD LENGTH ===> 150
BLOCK SIZE ===> 1500
DATA SET NAME TYPE ===> PDS (LIBRARY, PDS, or blank) *
EXPIRATION DATE ===> (YY/MM/DD, YYYY/MM/DD
YY.DDD, YYYY.DDD in Julian form
DDDD for retention period in days
or blank)
(* Specifying LIBRARY may override zero directory block)
Move / Copy
Option 3.3
--------------------------- MOVE/COPY UTILITY -------------------------------
OPTION ===>
C - Copy data set or member(s) CP - Copy and print
M - Move data set or member(s) MP - Move and print
L - Copy and LMF lock member(s) LP - Copy, LMF lock, and print
P - LMF Promote data set or member(s) PP - LMF Promote and print
SPECIFY "FROM" DATA SET BELOW, THEN PRESS ENTER KEY
FROM ISPF LIBRARY: ------ Options C, CP, L, and LP only -------
PROJECT ===> DA0034T
GROUP ===> TRG ===> ===> ===>
TYPE ===> JCL
MEMBER ===> (Blank or pattern for member selection list,
'*' for all members)
FROM OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>
VOLUME SERIAL ===> (If not cataloged)
DATA SET PASSWORD ===> (If password protected)
Move / Copy- 2
COPY --- FROM DA0034T.TRG.JCL -------------------------------------------------
COMMAND ===>
SPECIFY "TO" DATA SET BELOW.
TO ISPF LIBRARY:
PROJECT ===> DA0034T
GROUP ===> TRG
TYPE ===> JCL
TO OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>
VOLUME SERIAL ===> (If not cataloged)
DATA SET PASSWORD ===> (If password protected)
"TO" DATA SET OPTIONS:
IF PARTITIONED, REPLACE LIKE-NAMED MEMBERS ===> YES (YES or NO)
IF SEQUENTIAL, "TO" DATA SET DISPOSITION ===> OLD (OLD or MOD)
SPECIFY PACK OPTION FOR "TO" DATA SET ===> (YES, NO or blank)
DSLIST Utility
Option 3.4
--------------------------- DATA SET LIST UTILITY -----------------------------
OPTION ===>
blank - Display data set list * P - Print data set list
V - Display VTOC information only PV - Print VTOC information only
Enter one or both of the parameters below:
DSNAME LEVEL ===> DA0034T.TRG.*
VOLUME ===>
INITIAL DISPLAY VIEW ===> VOLUME (VOLUME,SPACE,ATTRIB,TOTAL)
CONFIRM DELETE REQUEST ===> YES (YES or NO)
* The following line commands will be available when the list is displayed:
B - Browse data set C - Catalog data set F - Free unused space
E - Edit data set U - Uncatalog data set = - Repeat last command
D - Delete data set P - Print data set
R - Rename data set X - Print index listing
I - Data set information M - Display member list
S - Information (short) Z - Compress data set TSO cmd, CLIST or REXX exec
DSLIST Dataset Selection
DSLIST - DATA SETS BEGINNING WITH DA0034T.TRG.* ----------------- ROW 1 OF 23
COMMAND ===> SCROLL ===> PAGE
COMMAND NAME MESSAGE VOLUME
-------------------------------------------------------------------------------
DA0034T.TRG.ACCOUNT MIGRAT
DA0034T.TRG.BADCOBOL MIGRAT
DA0034T.TRG.COBOL MIGRAT
DA0034T.TRG.COBOL1 MIGRAT
m DA0034T.TRG.JCL BS3008
DA0034T.TRG.LNK MIGRAT
DSLIST Commands
B - Browse a dataset
C - Catalog a dataset
D - Delete a dataset
E - Edit a dataset
F - Free unused space in a dataset
I - Display information for a dataset
M - Display a member list
P - Print a dataset
R - Rename a dataset
S - Display a shortened version of dataset information
U - Uncatalog a dataset
X - Print an index listing for a dataset
Z - Compress a dataset
= - Repeat the last command
Primary Commands
LOCATE To locate a dataset
TSO SUBMIT To submit jobs or execute CLISTs from the command line
SHOWCMD ON/OFF To show the expanded form of the command
CONFIRM ON/OFF Same as Confirm delete request Yes/NO on the delete panel
SORT Sorts the dataset list based on the displayed fields
FIND Finds occurrences of a string within the list of datasets
SAVE dataset-name Saves the current dataset list into the dataset name specified
SELECT pattern [linecommand] Selects the datasets matching the pattern to be acted upon
by the specified line command
Reset
Option 3.5
-------------------------- RESET ISPF STATISTICS ----------------------------
OPTION ===>
R - Reset (create/update) ISPF statistics
D - Delete ISPF statistics
NEW USERID ===> (If userid is to be changed)
NEW VERSION NUMBER ===> (If version number is to be changed)
RESET MOD LEVEL ===> YES (YES or NO)
RESET SEQ NUMBERS ===> YES (YES or NO)
ISPF LIBRARY:
PROJECT ===> DA0034T
GROUP ===> TRG
TYPE ===> JCL
MEMBER ===> (Blank or pattern for member selection
list, '*' for all members)
OTHER PARTITIONED DATA SET:
DATA SET NAME ===>
VOLUME SERIAL ===> (If not cataloged)
2. Computing Environment
Key Concepts and Terminology
Command Processing
Data Processing
Multi-programming
Multi-programming Overheads
Relevance of Multi-programming
Multi-processing
Spooling
Virtual Storage
3. Typical IBM Main Frame Site
4. IBM Operating Systems
MVS Evolution
5. Operating System Considerations
a) Process Management
b) Memory Management
c) Input-Output Management
System 370 I/O Architecture
6. IBM Hardware
7. Key Terminology
• Cache Memory
• Expanded Memory
• Processor Resource / System Manager (PR/SM)
• Channels
• Channel - I/O Device Connectivity
• ESCON - Enterprise System Connection
I/O Devices
• Unit Record Devices - Each record is a single physical unit
• Magnetic Tape
• DASD - Direct Access Storage Device
• 3990 Storage Controller
Data Communication Network
• Components of data communication
• 3270 Information Display System
8. Data Communication Equipment
Often Asked Questions About IBM
9. Characteristic Features Of MVS
MVS Terminology
Address Space
MVS
Paging
Demand Paging
Swapping
Page Stealing
RSM (Real Storage Manager)
ASM (Auxiliary Storage Manager)
VSM (Virtual Storage Manager)
Virtual Storage Layout
10. MVS Functions
Data Management Overview
Types of Data
Dataset Organization
Non-VSAM datasets organization
Dataset Organization
VSAM datasets organization
Data Organization - Salient Points
Data Set Naming Rules
MVS Datasets
11. MVS Concepts
How datasets are Accessed
12. Job Management Overview
What is a Job?
Job Management
Definitions
Job Scheduling
13. Dataset Allocation And Job Step Execution
14. MVS Tools Overview
Components of Job Output
An Introduction to TSO
15. Interactive System Productivity Facility (ISPF)
Primary Options Menu
Termination Panel
Key Mapping
Browsing Datasets (Option 1)
Browse Commands
Editing Datasets (Option 2)
Standard editing commands
Edit Profiles
Profile Settings
Edit Modes
Advanced Edit Options
Shifting text source
Utilities Menu
Library Utility
Dataset Utility
New dataset allocation
Renaming Dataset
Dataset information
Allocate datasets managed by SMS
Move / Copy
Move / Copy- 2
DSLIST Utility
DSLIST Dataset Selection
DSLIST Commands
Primary Commands
Reset
1. Introduction
Before you begin to work on the "Mainframe environment", which by default means "IBM Mainframe", you need to have a basic idea of the IBM mainframe operating system. Today, it is known as MVS, which expands to Multiple Virtual Storage.
The MVS operating system has evolved over many years and has adapted to changing technology and modern-day requirements. Since the user base of MVS is very large, a change is not easy to implement. The cost of Mainframes is very high and the customer base is mostly made up of long-term customers with huge applications and large databases to support. Most of these applications are also 'Mission Critical' applications. It is therefore imperative that any change to MVS also be backward compatible.
MVS is designed to work with many hundreds of users working together, located in the same locality or across continents. The MVS operating system was created by IBM and is said to be a 'proprietary' OS. It has the capacity to support a large number of peripherals like disks, tapes, printers, network devices etc. The applications on these "Legacy systems" are typically those where there is a huge amount of data and a large user base. Examples are the Banking sector, Insurance sector, Newspapers, Material & Inventory, Airlines, Credit Card systems, Billing, Accounting, Shipping and others. Companies that own these mainframes are typically those that are very big inherently or have to deal with vast amounts of data which has to be processed fast.
2. Computing Environment
Key Concepts and Terminology
Command Processing
• Command Issue Mode
This is how a user (programmer / end-user) interacts with the computer, e.g. to edit a program, to execute a program
On-line Mode - Using Terminal
Batch Mode - Using Punched Cards or JCLs
• Command Execution Mode
All computer commands can be executed in two modes
Foreground - Terminal is locked while the command is being executed
Background - Terminal is free while the command is being executed
Data Processing
How Business applications are executed
• On-line
End User performs business functions
Application programs work interactively with End User
Execution is in foreground mode
Database is immediately updated to reflect the changes
Typically used for transaction processing, queries, master updates functions
• Batch
Application programs are executed in background mode
Periodic business functions are executed automatically
“As and when” business functions are triggered by End User
Operations department is responsible for monitoring the execution
A command file is created to execute these functions
One command file may consist of multiple programs / system utilities
Typically used for bulk transaction processing, report printing, periodic processing (e.g. invoice generation, payroll calculation)
• Time Sharing
1. Resource Sharing
2. Multiple Users compete for computer resources at the same time
3. At any given point in time only one user can have control of the resources
4. What should be the basis of sharing?
• First come first served?
• Priority based?
• Who so ever can grab it - Law of Jungle?
• Equal - Democratically?
• Need based?
Usually a combination of priority-based and equal sharing is used, i.e. all are equal but some are more equal!!!
• Time Slice
Each user is given control of resources for a pre-defined period - time slice
The control is passed on to the next user in the queue at the end of the time slice (even if the first user's work is incomplete)
If the user requires I/O before the time slice is over, the control is handed over to the next user (since CPU cannot do anything until I/O is complete)
• Priority
Each user / function is assigned a priority level
The higher priority users are serviced first in a round robin fashion
Only when the higher priority users are in a "wait" state for I/O completion are the lower priority users serviced
Time Sharing typically refers to sharing of resources in an interactive processing mode
Multi-programming
• Why Multi-programming ?
The program has CPU based and Non-CPU based instructions
CPU is kept waiting during the non-CPU based instructions execution
E.g. I/O operations (Disk, Terminal, Printer)
This results in wastage of CPU time - a precious resource
Multi-programming results in better CPU utilization
• How does it Work ?
Multiple programs are kept “ready” for execution
CPU can execute only one program at any given point in time
If the currently executing program requires I/O, it is put in a “wait” state
Another program is immediately taken for execution
On completion of I/O the program again becomes “ready” for execution
This results in an illusion that multiple programs are being executed simultaneously, hence multiprogramming.
Multi-programming Overheads
Program Queue Management
Program Status Management
Context Switching during Changeover
Multiple programs must be in main memory
Management of Common Resource Sharing (e.g. Printer)
It is critical to determine the optimum level of Multi-programming to maintain a certain service level.
Relevance of Multi-programming
Multi-programming is applicable even for single user system
Multi-programming is a must for multi-user system
Multi-processing
• There are multiple CPUs (processors) in one machine
• These work together under single operating system
• Each CPU executes a separate program
• O/S assigns programs to each CPU
• Essentially CPU is treated as an allocable device!!!!!
Spooling
Why Spooling?
Multiple programs may need same printer at the same time
May result in intermixing of output
Exclusive access to a program will hold other programs
Printer is much slower, results in longer “wait” state
How it is Implemented?
Output to printer is intercepted and written to a disk i.e. “spooled”
On completion of program “spooled” output is queued for Printing
This queue is processed by O/S print routine
The O/S print routine is multi-programmed along with application programs
Virtual Storage
Why Virtual Storage ?
Required to enable execution of programs which are larger than the main memory size
What is Virtual Storage ?
Technique to simulate large amount of main storage
In reality, main storage is much smaller
E.g. Real main storage is 16MB but virtual storage is 2GB
How Virtual Storage is Implemented ?
Program executable code is generated assuming virtual storage size
Only part of the program is loaded in main memory
Address translation mechanism is used to map virtual address to actual address
Feasible because only the instruction currently being executed and the corresponding data need to be in the main storage
Advantages of Virtual Storage
Main memory can be shared by multiple programs
Enables effective use of the limited main storage
Overheads of Virtual Storage
Address mapping
Keeping track of what is in memory and what is not
Data/Instructions need to be "brought into" main memory as and when required
“Remove” from main memory what is not currently required (to make room for instructions of other program)
Memory Management
3. Typical IBM Main Frame Site
Business Environment
Large Local or Global Operations or Both
User Community in Hundreds
Almost non-stop operations (weekly maintenance window of about 1/2 day)
Large Volumes of Data / High Volumes of Transactions
Hundreds of Applications / Mission Critical Applications
Processing Environment
On-line during prime time (might mean 24 hours for global operations)
Batch during non-prime time (wrt local time) of 12 - 15 hours
Software Environment
Variety of Databases / OLTP packages
EDI Processing
Two Tier Database Architecture - C/S and Central
Hardware Environment
Multiple Machines - Networked Together
Multiple Processors for Each Machine
Huge Number of Data Storage Devices - Disks and Tapes
Support Environment
Huge IT Departments
Application Programming Staff
Development
Maintenance / Support
DBAs
Operations
- Multiple Data Centers to Manage Batch Processing
System Programmers for;
- O/S
- Database Packages
- OLTP Packages
Network Support Staff
4. IBM Operating Systems
IBM Families of Operating Systems
MVS Evolution
1995 MVS/ESA 5.2.2
1993 MVS/OPEN EDITION (POSIX)
1990 SYSTEM 390
1988 MVS/ESA 16 B
1981 MVS/XA 2 GB (31-bit)
1974 OS/VS2R2(MVS) 16 MB(24-bit)
1972 OS/VS1 OS/VS2R1(SVS) 16 MB
1970 SYSTEM 370
1966 OS/MFT OS/MVT 3 MB
1966 PRIMARY CONTROL PROGRAM (PCP)
• Migrating from DOS to OS was a major change
VM is not very popular
Today most of the sites use MVS
• Major Handicaps
Limited and inefficient spooling
No Virtual Storage
• Utilities to Overcome these Handicaps
HASP - Houston Automatic Spooling Priority
- Developed unofficially (self initiative) by IBM employees
- Distributed free to MVT/MFT users
- Became very popular
- Eventually owned and supported by IBM
• ASP - Attached Support Processor
Developed (officially) by IBM
Intended for MVT
Several mainframes can work together under single O/S (predecessor of multi-processing?)
Provided better spooling capability
Relatively fewer takers
System 370
• Announced in early 70s
• Supported Virtual Storage
• New Operating Systems OS/VS were introduced
• OS/VS1 (Virtual System 1) - adopted from MFT
• OS/VS2 (Virtual System 2)
• Version SVS - Single Virtual Storage
- Adopted from MVT (1972)
• Version MVS - Multiple Virtual Storage
- Completely Rewritten (1974)
• HASP and ASP were migrated to OS/VS2 under the names JES2 and JES3
• MVS and its derivatives are the mainstay of IBM O/S now
The Von Neumann Computing Model
• Most common model for computing systems
• Proposed by John von Neumann in the 1940s
Instructions are executed one at a time sequentially
5. Operating System Considerations
a) Process Management
Problem :- According to the John von Neumann model, only one instruction gets executed at a time. What happens if that instruction is waiting for I/O? In that case CPU time is wasted.
Solution :- Multi-programming : while the current program waits for I/O, the CPU is given to another program that is ready to execute.
b) Memory Management
Problem :- Any thing that is to be executed, must be in memory. (memory limitation)
Solution :- 1. Place task in real memory
2. Place task in virtual memory
1. Real memory implementation :
code & data are in real memory
size of code & data limited by size of installed memory
good performance, low overhead
possible wastage of memory
2. Virtual memory implementation :
based on the assumption that, for a task, not all code & data are needed in real memory all the time
implemented on a combination of real plus auxiliary storage
the operating system takes responsibility for bringing the rest of the task into real memory when required.
Advantage : code and data size independent of the real memory
c) Input-Output Management
Problem :- The application should not have to worry about device characteristics. I/O devices are orders of magnitude slower than the CPU.
Solution :- Let all I/O be handled by a specialized subsystem - the I/O Subsystem
System 370 I/O Architecture
• Channels
Provide paths between the processor & I/O devices
3090 processors can have a maximum of 128 channels
A channel itself is a computer & executes I/O instructions called channel commands
I/O devices are connected to channels through an intermediate device called “Control Unit”.
Each channel can have up to 8 control units.
• Control Unit
Several DASD units can be connected to a common control unit, called a "String Controller".
String Controller can be connected to a channel directly or indirectly
A control unit called “Storage Control” connects string controllers to a channel.
6. IBM Hardware
How do today’s PC and medium sized IBM MF compare?
Characteristic             PC (Pentium 100)     Main-Frame (4381)
Main Memory                16-32 MB             32 MB
Individual Disk Storage    1.2 GB               946 MB
Monitor                    SVGA / Graphics      Character based dumb terminal
Where does the power of IBM MF come from?
Multiple processors with partitioning capability
Cache memory and expandable memory
Multi-user / Multi-programming Support
Batch and on-line processing support
Local and remote terminal support
High number of devices
Strong data management capability
7. Key Terminology
• Cache Memory
High speed memory buffer (faster than main memory)
Operates between CPU and main memory
Used to store frequently accessed storage locations (instructions)
Usually available on all processors
• Expanded Memory
Supplements main memory
Not directly available to application program
Not directly accessible by CPU
Implemented using high speed disk
Usually available with higher-end machines
• Processor Resource / System Manager (PR/SM)
Used to control Multi-processor Configurations
Allows division of multi-processors in partitions - LPAR
Each partition functions as independent system
Enables fault tolerance implementation by designating Primary and Secondary Partitions
Secondary partition takes over automatically if primary fails
Allows reconfiguration of I/O channel to partitions
• Channels
Device Management Concept - Unique to IBM
Provides access path between CPU and I/O devices (DMA)
Up to eight control units can be connected to one channel
Up to eight I/O devices can be connected to one control unit
A channel is a small computer in itself with a set of instructions (Channel commands)
Channel controls the I/O device operations independent of CPU
Channel processing can overlap CPU processing - improved performance
• Channel - I/O Device Connectivity
Parallel architecture i.e. all bits of a byte are transmitted simultaneously
Information transfer is in unit of two bytes
Sixteen data wires and additional control wires are required
Maximum length of 120 meters (400 feet)
Data speed of 4.5 MB/s
Use of copper results in heavy, expensive cabling
• ESCON - Enterprise System Connection
Announced in 1990
Uses fiber optic
Results in reduced size and weight
Length limit extended to approximately 42Km (26 miles)
Faster data speed (17 MB/s)
I/O Devices
• Unit Record Devices - Each record is a single physical unit
Card Devices (now obsolete) : Readers / Punches / Reader and Punches
Printer
- Impact Printers - 600 to 2000 LPM
- Non-Impact Printers - 3800 sub-system, 20,000 LPM
Built-in control units for each device
Directly attached to channel
• Magnetic Tape
High volume storage
Sequential processing
Normally used as back-up device
Also used for physical transfer of data
4 to 8 tape drives are connected to one control unit
• DASD - Direct Access Storage Device
IBM’s official name for Disk
Non-removable - offers better reliability and are faster
Each unit is called as disk pack or Volume
Each pack has multiple surfaces
Each surface has multiple tracks
Same track no. of all surfaces together constitute a Cylinder
DASD capacity ranges from 100 MB (3330) to 8514MB (3390/9)
A group of DASDs of same type are connected together to form a String and are connected to a string controller
Multiple string controller are connected to a storage controller
Storage controller is connected to channel
• 3990 Storage Controller
Can connect 2 strings of 32 DASDs each (3390 models), i.e. 64 DASDs in total
Consists of high speed cache storage (32MB to 1024MB)
Data is buffered using cache
Frequently accessed data is stored in Cache - improved performance
Supports more than 4 channel connection to processor
- Enables several simultaneous disk operations
Data Communication Network
Allows local and remote terminals access to the computer systems
• Components of data communication
Host Computer - System/370 processor
Communications Controller - Attached to the channel
- Devices (terminals and printers) are connected to the terminal controller (also known as cluster controller)
- Terminal controller is connected to communications controller
- Terminal Controller managing Local terminals / printers can be connected directly to the channel
Modems and telecommunication lines (telephone line, Satellite Link)
- Remote terminals / printers are connected to terminal controller (at local site)
- Terminal controller is connected to modem
- Modem is connected to telecommunications line
- At the receiving end telecommunications line is connected to modem
- Modem is connected to communication controller
• 3270 Information Display System
Sub-system of terminals, printers and controllers connected to Host computer
Locally through communications controller or directly to channel
Remotely through communications controller, modem and telecommunications line
A typical 3270 terminal controller (3274) controls up to 32 terminals / printers
Emulator programs (Shine Link, Erma Link) allow computers (typically PCs) to mimic 3270 devices
These are useful since they allow upload / download of data between MF and PC
8. Data Communication Equipment
Data Communication equipment lets an installation create a data communication network that lets users at local and remote terminals access the computer system
• At the center of the network is the host system, a system/370 processor
• The control unit that attaches to the host system’s channels is called a communication controller
it manages the communication function
necessary to connect remote terminal system
via modems and telecommunication lines
• A modem is a device that translates digital signals from the computer equipment at the sending end into audio signals that are transmitted over a telecommunication line, which can be a telephone line, a satellite link or some other type of connection
• At the receiving end of the line, another modem converts those audio signals back into digital signals
Often Asked Questions About IBM
Why is grasping IBM difficult?
User interface is poor
- Non-intuitive
- Unfriendly and Formidable
Ancient terminology (e.g. Card, Punch queue) which is irrelevant now
Different terminology (e.g. DASD, DATA SET)
Too many options / parameters
Too many terms / acronyms
Variety of software results in site specific variations
Why is IBM so Popular
Sturdy and Secure HW/SW
Downward compatibility (does not make application SW obsolete)
Excellent customer support
Variety of software (Databases, OLTP packages) - IBM and Third Party
Market Leader
- First in DBMS, First in OLTP, first in PC!!!!
- First to develop a chess playing computer that beat world champion
The old legend : Nobody got fired for buying IBM
Future of IBM (is there any?)
Large existing application base will need support
Downsizing will need knowledge of current application/platform
Dual skills will be much in demand
Not all applications are suitable for downsizing - many will remain on MF
MF will be increasingly used as back-end server
New applications (data warehousing type) will be developed on MF
Multi-tier architecture will become common
Bottom Line : It is too soon to herald death of IBM MF
9. Characteristic Features Of MVS
1) VS : The use of virtual storage increases the number of storage locations available to hold programs and data
2) MULTIPROGRAMMING : Multiprogramming simply reclaims the CPU during idle periods to let other programs execute
3) SPOOLING : To provide shared access to printer devices, spooling is used
4) BATCH PROCESSING : When batch processing is used, work is processed in units called “Jobs”. A job may cause one or more programs to be executed in sequence. Batch jobs get collectively processed by the system
5) TIMESHARING : In this system, each user has access to the system through a terminal device. Instead of submitting jobs that are scheduled for later execution, the user enters commands that are processed immediately.
Time sharing is also called as Online Processing because it lets users interact directly with the computer.
MVS Terminology
Address Space
• An address space is simply the complete range of addresses and, as a result, the number of storage locations that can be accessed by the computer.
• An address space is a group of digits that identify a physical location in main storage
• In MVS an address space has 24-bit positions, i.e. 16MB addressability.
• MVS allows each programmer to use all of the 16MB address space, even though real storage may include only, for example, 4MB of physical locations.
• In MVS, references in the program address space are not associated with a particular real storage location. They remain reference to a particular piece of information called Virtual Addresses. They become real only when assigned to a physical location.
• When the program is ready to execute the system, using a system/370 hardware feature called Dynamic Address Translation(DAT), maps the virtual addresses in the program to the real storage addresses.
• By doing this, MVS can make the program address space larger than the number of physical location available in real storage.
MVS
• It uses real storage to simulate several address spaces, each of which is independent of the others
• Auxiliary storage and real storage are used in combination to simulate several virtual storage address space
• Each batch job or TSO user is given its own address space
• Various factors such as the speed of the processor and the amount of real storage installed effectively limit the number of address spaces that can be simulated.
• To provide for the larger virtual storage, MVS treats DASD as an extension of real storage
• Only one address space can be in control of CPU
Paging
• To enable the movement of the parts of a program executing in virtual storage between real storage and auxiliary storage, the MVS system breaks real storage, virtual storage & Auxiliary storage into blocks.
A block of Real Storage is a Frame
A block of Virtual Storage is a Page
A block of Auxiliary storage is a Slot
• A page, a frame and a slot are all the same size; each is 4K bytes
• An active virtual storage page resides in a real storage frame, an inactive virtual storage page resides in an auxiliary storage slot
• Moving pages between real storage frames and auxiliary storage slots is called PAGING
Demand Paging
• Assume that DAT encounters an invalid page table entry during address translation, indicating that a page is required that is not in a real storage frame. To resolve this Page Fault, the system must locate an available real storage frame to map the required page (page-in). If there is no available frame, an assigned frame must be freed. To free a frame, the system moves its contents to auxiliary storage. This movement is called a Page-Out.
• System performs page-out only when the contents of the frame have changed since the page was brought into real storage.
• Once a frame is located for the required page, the contents of the page are moved from auxiliary storage to real storage. This movement is called as Page-In.
• The process of bringing a page from auxiliary storage to real storage in response to a Page Fault is called DEMAND PAGING
• MVS tries to avoid the time consuming process of demand paging by keeping an adequate supply of available real storage frames constantly on hand. Swapping is one means of ensuring this adequate supply. Page stealing is another.
Swapping
• Swapping is the movement of an entire address space between Virtual storage & Auxiliary storage.
• It is one of the several methods MVS employs to balance system workload, as well as to ensure that an adequate supply of available real storage frames is maintained.
• Address spaces that are swapped in are active, having pages in real storage frames & pages in auxiliary storage slots.
• Address spaces that are swapped out are inactive; the address space resides on auxiliary storage and cannot execute until it is swapped in.
Page Stealing
• If there are not enough 4K frames available, then frames which have not been referenced for a long time are thrown out and written to auxiliary storage, so those 4K frames become free. This is known as Page Stealing.
• The paging process is managed by several components of MVS. The 3 major ones are :
Real Storage Manager (RSM)
Auxiliary Storage Manager(ASM)
Virtual Storage Manager (VSM)
RSM (Real Storage Manager)
manages real storage
directs movement of pages between real and auxiliary storage
builds segment & page tables
ASM (Auxiliary Storage Manager)
keeps track of the contents of the page datasets and swap datasets
page datasets contain virtual pages that are not currently occupying a real storage frame.
Swap datasets contain the LSQA pages of swapped-out address spaces.
VSM (Virtual Storage Manager)
controls allocation/deallocation of virtual storage
maintains storage use information for the System Management Facility (SMF)
Virtual Storage Layout
Each Virtual Storage Address Space consists of a System Area, a Private Area and a Common Area.
System Area
• It contains the nucleus load module, page frame table entries, data blocks for system libraries and many other things
• Nucleus and other contents of the System Area make up the resident part of the MVS system control program
• Its contents are mapped one for one into real storage frames at initialization time.
• The size of System Area does not change once it is initialized
Common Area
• It contains parts of the system control program, control blocks, tables and data areas
• The basic parts of the Common Area are:
System Queue Area (SQA)
Pageable Link Pack Area (PLPA)
Common Service Area (CSA)
System Queue Area
contains tables and queues relating to the entire system
the contents of SQA depend on an installation's configuration & job requirements.
It is allocated from the top of virtual storage in 64K segments; a minimum of 3 segments are allocated during system initialization.
Allocated SQA space is both non-swappable and non-pageable
Pageable Link Pack Area
Contains SVC routines, access methods, other system programs, and selected user programs.
It is pageable
Because the modules in PLPA are shared by all users, all program modules in PLPA must be reentrant and read-only
PLPA space is allocated in 4K block directly below SQA.
The size of PLPA is determined by the number of modules included
Once the size is set, PLPA does not expand
Common Service Area
Contains pageable system and user data areas.
It is addressable by all active virtual storage address space and shared by all swapped-in users.
Virtual storage for CSA is allocated in 4K pages directly below PLPA.
Private Area
The Private Area is made up of :
Local System Queue Area(LSQA)
Scheduler Work Area(SWA)
Subpools 229/230
System Region
User Region
The user region is the space within Private Area that is available for running the user’s program
Local System Queue Area
• LSQA contains tables and queues that are unique to a particular address space
Scheduler Work Area
• SWA contains control blocks that exist from task initiation to task termination
• The information in SWA is created when a job is interpreted and used during job initiation and execution
• It is pageable and swappable
10. MVS Functions
Data Management Overview
Anything that needs to be stored and accessed on user request is data to MVS
Types of Data
Business Data
Database
Indexed Files
Flat Files
Application Components
Source Programs
Executable Programs
Screen Definitions
Record Layout Definitions
Command File Scripts
MVS (System) Data
O/S program
User Information (ID, Password, Profile)
Access Permissions
Temporary Data
O/S Built Data (e.g. task queues, segment table, page table)
Spooled Output
Work Files for Sort
Dataset Organization
• Dataset organization fall into two categories under MVS : VSAM and NON-VSAM
• Non-VSAM provides four basic ways of organizing data stored in datasets
Physical Sequential
Indexed Sequential
Direct
Partitioned
• VSAM provides four basic ways of organizing data stored in datasets
Entry Sequence Dataset - ESDS
Key Sequence Dataset - KSDS
Relative Record Dataset - RRDS
Linear Dataset - LDS
Non-VSAM datasets organization
Physical Sequential
• Records are stored one after another in consecutive sequence
• Can reside on just about any type of I/O device
• Appropriate when file’s records don’t have to be retrieved at random
Indexed Sequential
• Includes an index, which relates key field values to the location of their corresponding data records
Direct
• Permits random access of records
• It doesn’t use an index
• To access a record, the disk location address of that record (derived by hashing) must be specified
Partitioned
• Consists of one or more members
• Each of these members can be processed, as if it were a separate physical sequential file.
• Names of members in a Partitioned dataset (PDS) are stored in a directory
Dataset Organization
Partitioned Data Set - Salient Features
Commonly referred as PDS
Also known as Library
Used to store application components
PDS is divided into one or many members
Member name can be up to 8 characters long
There is no extension for member
Each member can be processed as an individual unit
Entire PDS can be processed as one unit
Each PDS contains a directory
Directory has an entry for each member in a PDS
- PDS Examples:
PAYROLL.TEST.SOURCE, PAYROLL.PROD.SOURCE,
INV.TEST.LOADLIB
Normally consists of 3 qualifiers called as
- PROJECT
- GROUP
- TYPE
– Personal PDSs have the User ID as the high-level qualifier
– E.g. DA00T23.NEW.SOURCE
– Member Name Examples
– PAB0017, PAB0105, PAC0021 etc.
Usually, the application component type cannot be identified from the member name; for that, naming conventions are used for PDS names.
VSAM datasets organization
ESDS
• Can only reside on DASD
• Functionally equivalent to Physical Sequential File
KSDS
• Functionally equivalent to Indexed Sequential File
RRDS
• Lets you retrieve the record by specifying the location relative to the start of the file
All VSAM datasets must be cataloged
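As a rough sketch only (the cluster name, key length, record size and space values here are invented for illustration), a KSDS of the kind described above is usually defined by running IDCAMS in a batch job:
//DEFKSDS  JOB (ACCT),'DEFINE KSDS',CLASS=A,MSGCLASS=X
//STEP01   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(DA0034T.TRG.CUSTKSDS) -
         INDEXED                             -
         KEYS(6 0)                           -
         RECORDSIZE(80 80)                   -
         TRACKS(5 1))
/*
INDEXED gives a KSDS; NONINDEXED or NUMBERED would give an ESDS or RRDS respectively.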
Data Organization - Salient Points
Non-VSAM Data Organization was developed in mid 1960s
VSAM - (Virtual Storage Access Method) was introduced in early 1970s
VSAM was expected to replace Non-VSAM Data Organization Functions
Today, most sites use both VSAM and Non-VSAM Data Organization
VSAM is the primary data organization for user data
VSAM is also called as “native” file management system of IBM
Most of the DBMS running under MVS use VSAM as underlying Data Organization (e.g. DB2, IDMS)
Physical Sequential Data Organization is used for “flat” files
Index Sequential and Direct Data Organization are not very popular now (these functions are handled better by VSAM)
Partitioned Data Sets (PDS) also used by MVS to store O/S programs
Data Set Naming Rules
Data Set Naming Rules
Allows - Alpha, Digits, National Characters @, #, $ and "."
Maximum Length 44 characters for DASD, 17 for Tape
If Length is more than 8, must be broken into qualifiers of maximum 8 characters each
Qualifiers are separated by “.”
“.” are counted in overall length
First character of the qualifier must be alpha or national character
Last character of data set must not be “.”
First qualifier is called as high-level qualifier
High-level qualifier has special significance
E.g. Data Set name PAYROLL.P9710.TRAN
Has three qualifiers
High-level qualifier is PAYROLL
Total length is 18
Dataset Tracking
Data Set Tracking Mechanisms
Label
Catalog
Label
• Data Set Label
First record of each data set is a label record called as;
File label or Data Set Control Block (DSCB)
There are several formats for DSCB
DSCB describes data set’s name, it’s DASD location and other details
• DASD Label
Each DASD is labeled; called Volume Label (VOL1 label)
DASD label is stored on a disk at third record of track 0 in cylinder 0
DASD label contains Volume Serial Number and address of the VTOC file
• Volume Serial Number
Each DASD is identified by a unique number, Volume Serial Number vol-ser
Vol Ser must be specified for accessing the Data Set (which is not cataloged)
• VTOC
VTOC - Volume Table Of Contents is a special file for each DASD
VTOC contains the file labels for all data sets on the volume
MVS Datasets
Label Processing
• When a dataset is stored on disk or tape, MVS identifies it with special records called ‘labels’.
• There are 2 types of DASD labels : Volume, File Label
• All DASD volumes must contain a volume label, often called a VOL1 label. This label is always in the same place on a disk volume : the 3rd record of track zero in cylinder zero.
• Volume label has 2 important functions
It identifies the volume by providing a volume serial no. : Vol-ser. Every DASD volume must have a unique six-character vol-ser.
It contains the disk address of the VTOC.
• The VTOC (Volume Table of Contents) is a special file that contains the file labels for the datasets on the volume.
• These labels, called Data Set Control Blocks (DSCBs), have several formats called Format-1, Format-2 and so on.
Format-4-dscb : describes VTOC itself
Format-1-dscb : describes a dataset by supplying dataset name, DASD location & other characteristics [space is allocated to DASD file in area called extents. Each extent consists of one or more adjacent tracks]
[has room to define 3 extents for a file (1 primary, 2 secondary)]
Format-3-dscb : if file requires more than 3 extents, this dscb is created
It contains room for 13 additional secondary extents [As a result file can contain up to 16 extents]
Format-5-dscb : contain information about free extents that aren’t allocated to files
each can define up to 26 free extents
Catalog
• Obviates the need of specifying Vol Ser for the data set
• Catalog Types
Master Catalog
User Catalog
• Catalog Features
Each MVS has only one Master Catalog
Master Catalog is used by MVS for system data sets
User Catalog is used for user data sets
There can be multiple User Catalogs
Master Catalog contains one entry of each User Catalog
- VSAM data sets must be Cataloged
- Non-VSAM Data Sets may or may not be cataloged
- An Alias can be created for a Catalog
• Usually, the high-level qualifier of a data set is same as the catalog name or catalog alias name
• Multiple data sets can be cataloged in single user catalog
• An Alias allows data sets with different high-level qualifiers to be cataloged in a single user catalog
Data Management
• Data Management Functions (for Non-PDS)
Allocate
Process
- Add Records
- Modify Records
- Delete Records
Deallocate (delete)
Copy
Rename
Catalog
• Additional Functions for PDS
Compress
Member Management
Create, Modify, Delete, Copy, Rename
• How Data Management is Achieved
Interactively using MVS Commands
Executing MVS Utility Programs (batch mode)
Through Application Programs
- On-line Processing
- Batch Processing
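As an example of the batch route, deallocation (deletion) of a cataloged data set can be done with a one-step job; a minimal sketch (the data set name is hypothetical) relies on IEFBR14 and the DISP parameter:
//DA0034TD JOB (ACCT),'DELETE DATASET',CLASS=A,MSGCLASS=X
//* DISP=(OLD,DELETE,DELETE) asks MVS to delete the data set whether
//* the (do-nothing) step ends normally or abnormally
//DELSTEP  EXEC PGM=IEFBR14
//OLDDS    DD DSN=DA0034T.TRG.OLDFILE,DISP=(OLD,DELETE,DELETE)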
11. MVS Concepts
How datasets are Accessed
• Generally a dataset goes through three phases when handled by a program :
• Allocation
• Processing
• Deallocation
Allocation
• The process of locating an existing dataset or space for a new dataset and preparing the system control block needed to use the dataset is called “Allocation”
• Allocation occurs at 3 levels
Unit is selected and allocated e.g. SYSALLDA-DASD, TAPE
Volume is allocated
Dataset on that volume is allocated
Processing
• Processing involves 3 steps
Opening datasets
Processing I/O
Closing datasets
Deallocation
• Each file is automatically deallocated when the job is finished with it
• While deallocating, the disposition of the dataset can be decided, i.e. whether the file should be retained or deleted
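In JCL these phases are controlled mainly by the DD statement and its DISP parameter. A sketch (the DD name is illustrative; the data set name reuses an example from this document):
//* DISP=(NEW,CATLG,DELETE) means: allocate a new data set, catalog it
//* if the step ends normally, delete it if the step abends
//TRANOUT  DD DSN=PAYROLL.P9710.TRAN,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSALLDA,
//            SPACE=(TRK,(5,2)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
Deallocation happens automatically at the end of the step, and the second and third DISP subparameters decide what then happens to the data set.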
12. Job Management Overview
What is a Job?
Simply put, a job is the execution of one or more related programs in sequence
E.g. 1
A job of creating an executable module (load module) from a source program consists of executing the Compiler program and then executing the Linker program.
E.g. 2
A job of printing invoices may consist of execution of three programs;
• an EXTRACT program to pull out transactions from database,
• a SORT program for sorting the transactions,
• a PRINT program to print the invoices.
Job - Salient Points
• Executed in a background mode
• Job details are specified using some command language
Job Management Functions
• Receive the job into operating system
• Schedule the job for processing by O/S
• Execute the Job
• Process the output
Stages of Job
1. Job Preparation
• User keys-in commands using Editor
• Save as a member in PDS
2. Job Scheduling
• Initiated using TSO SUBMIT command
• Not necessarily on FIFO basis
• Prioritization is implemented using concept of class and priority code
3. Job Execution
4. End of execution (normal, erroneous)
• Intimate the user
• Job log management
• Job output management
• Printer output
• Data set output
• Erroneous Termination of job
Type of execution errors
Incorrect commands (command syntax errors)
Required resources (Data Sets, Program Library, Program Load Module) not available
Violation of access permissions for data sets, program load module etc.
Mismatch in data set status between what the job requires and what actually exists, e.g. a create is issued for a data set which already exists
Program errors
Mismatch for a Data set - between the program definition and the actual characteristics
Infinite loop
Data Type mismatch - numeric variable contains non-numeric data
Any abnormal termination of a program is called an "Abend"
Job Management
Definitions
• JOB - Is the execution of one or more related programs in sequence
• JOB STEP - Each program to be executed by a Job is called a job step
• JCL (Job Control Language) - Is a set of control statements that provide the specifications necessary to process a job
• JES (Job Entry Subsystem) :
Meant for job entry into system, also for job returning after completion
Shares the load on the operating system
Takes care of all inputs and outputs
Does basic syntax checking
Resource Initialization
Creation of address space
It is also known as Job Scheduler
Classified into
- JES2 - designed for uniprocessor environments
- JES3 - designed for multiprocessing environments (decided at the time of system initialization)
Jobs are sent to MVS depending on class and priority schemes
How Job Is Entered Into the System
• When you submit the job, JES reads the job stream (sequence of JCL commands) from a DASD file and copies it to a job queue, which is part of a special DASD file called the JES SPOOL.
Job Scheduling
How Job Is Scheduled For Execution
• MVS does not necessarily process jobs in the order in which they are submitted. Instead, JES examines the jobs in the job queue and selects the most important jobs for execution. That way JES can prioritize its work, giving preference to more important jobs.
• JES uses 2 characteristics to classify a job’s importance, both of which can be specified in the job’s JCL : Job Class and Job Priority
• If two or more jobs are waiting to execute, the JES scheduler selects the one with higher priority
• Each job class is represented by a single character, either a letter (A-Z) or a digit (0-9). Job classes are assigned based on the processing characteristics of the job.
• INITIATOR :- An initiator is a program that runs in the system region of an address space. Each initiator can handle one job at a time. It examines the JES spool, selects an appropriate job for execution, executes the job in its address space and returns to the JES spool for another job.
• The number of active initiators on a system, and as a result the number of address spaces eligible for batch job processing, determines the number of batch jobs that can be multi-programmed at once.
• Each initiator has one or more job classes associated with it. It executes jobs only from those classes.
Initiator Eligible Job Classes
1 A
2 B,C,D,H,L,T
3 B,C,D,H,L,T
4 B,C
5 B,C
6 C
• Within a job class, the initiator selects jobs for execution based on their priorities, which can range from 0 to 15
• If two or more jobs have the same class & priority, they are executed in the order in which they were submitted.
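Both characteristics are requested on the JOB statement itself. A hedged sketch (job name, accounting field and the actual class/priority values are illustrative, and many installations ignore or override PRTY):
//DA0034TP JOB (ACCT),'NIGHTLY BATCH',
//             CLASS=B,             selects an eligible initiator
//             PRTY=8,              priority within the class (0-15)
//             MSGCLASS=X           output class for system messages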
How Job Is Executed
• Once an initiator has selected job for execution, it invokes a program called the interpreter
• The interpreter's job is to examine the job information passed to it by JES and create a series of control blocks in the SWA, a part of the address space’s private area
• Among other things, these control blocks describe all of the datasets the job needs
• Now initiator goes through 3 phases for each step of job
Allocation (required resources are allocated)
Processing (region is created & program is loaded and executed)
Deallocation (resources are released)
• This continues until there are no more job steps to process
• Then, the initiator releases the job and searches the spool again for another job from the proper class to execute
• As a user's program executes, it can retrieve data that was included as part of the job stream and stored in the JES spool
How The Job’s Output Is Processed
• Like Jobs, SYSOUT data is assigned an output class that determines how the output will be handled
Common O/P classes are;
A - Printer
B - Card Punch O/P
X - Held O/P
[Held O/P stays on the sysout queue indefinitely; Usually, O/P is held so that it can be examined from a TSO terminal]
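In the JCL, the output class for each print file is set with the SYSOUT parameter of the DD statement; a small sketch (the DD names are illustrative) using the classes listed above:
//INVOICES DD SYSOUT=A        routed to a printer
//DEBUGOUT DD SYSOUT=X        held output - examine from TSO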
How A Job Is Purged
• After the job’s output has been processed, the job is purged from the system, i.e. JES spool space, the job used, is freed so it can be used by other jobs and any JES control blocks associated with the job are deleted
• Once a job has been purged, JES no longer knows of its existence
13. Dataset Allocation And Job Step Execution
When the user program completes, the initiator invokes Unallocation routine to Deallocate the resources used by the job step
14. MVS Tools Overview
Tools are a set of sub-systems and facilities that;
• Implement MVS functions
• Are directly used by MVS user
These are essentially Software programs; system programs
Interactive Processing Tools ( TSO - Time Sharing Option )
• Used by the terminal user to invoke MVS facilities interactively
• TSO internally treats each terminal user as a Job
• Job Stream is created when terminal user logs in
• Each terminal user is given a separate address space
ISPF - Interactive System Productivity Facility
• Runs as part of TSO
• Takes advantage of full screen (24 x 80) capability of 3270 terminals
• Panels are provided for terminal users for issuing commands
• Key Functions Implemented Using ISPF
Editor - Program Sources, Job Commands
Data Management - PDS and Physical Sequential Data Set Management
Job Processing - Initiate Job, Check job log
Miscellaneous
• PDF - Program Development Facility is Part of ISPF
Job Management Tools
• Job Control Language (JCL)
• Used to code job commands
• Job Entry System (JES)
• Manages the job before and after execution; receive, schedule, process output
• Base Control Program (BCP)
Manages the job during execution
• Simultaneous Peripheral Operations on-line (SPOOL)
Used for staging of input and output
Why and What of JCL?
JCL is the most dreaded word for a newcomer to the IBM world
Why JCL?
Since the job is executed in background, without user interaction, all information required for the execution must be supplied in advance
JCL is used to specify this information
The most common information supplied through JCL is;
- To whom the job belongs (which user id)?
- What is the program / utility that is to be executed?
- Where (in which library / PDS) to find the load module of the program or utility?
- Where (which DASD volume / catalog, what data set name) to find the input data files for the program / utility?
- Where should (which DASD volume, what data set name) the output files be created?
- The printer output should be directed to which printer?
• What is JCL?
Stands for Job Control Language
Connotation is: a set of job commands stored as a MEMBER in a PDS e.g. JCL to execute a batch program, JCL to compile and link a COBOL program, JCL to allocate a VSAM data set, JCL to SORT and MERGE two Physical Sequential Data Sets
Thus, JCL is nothing but a set of commands
- User keys-in commands using an editor
- Saves as PDS Member e.g. PAYROLL.TEST.JCL(PROG1JCL)
• What makes learning JCL so difficult
JCL is powerful and flexible, which leads to some complexity
It is non-intuitive
The user interface is formidable
The terms are ancient
Very little has changed since 1965 when JCL was first developed
• However, JCL can be understood and mastered with logical approach and open mind
• Good grasp of JCL is a must to be a versatile IBM programmer
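To make the points above concrete, here is a minimal sketch of a JCL member that answers the questions listed earlier (the program name INV0001 and the accounting field are hypothetical; the data set names reuse examples that appear elsewhere in this document):
//DA0034TJ JOB (ACCT),'PRINT INVOICES',CLASS=A,MSGCLASS=X
//* JOB card : whose job it is and how it is classed
//* EXEC     : which program (load module) to execute
//* STEPLIB  : which library/PDS holds the load module
//* TRANIN   : where the input data set is found (via the catalog)
//* INVOICE  : which output class (printer) the report goes to
//PRINT    EXEC PGM=INV0001
//STEPLIB  DD DSN=INV.TEST.LOADLIB,DISP=SHR
//TRANIN   DD DSN=PAYROLL.P9710.TRAN,DISP=SHR
//INVOICE  DD SYSOUT=A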
JES - Job Entry System
• Introduction
Two versions of JES; JES2/JES3
- JES2 is primarily for single processor systems
- JES3 is for multiple processor systems
Each MVS system uses either JES2 or JES3
JES3 has additional scheduler functions than JES2 (e.g. schedule job at specific time of the day, interdependent job scheduling )
MVS Tools Overview
• How Job Enters the System?
Jobs can enter the system from local or remote card readers (now obsolete)
By starting a cataloged JCL procedure (e.g. when a user logs in, a predefined set of commands is executed as a batch job; these commands are stored as a cataloged JCL procedure)
By interactive users through the SUBMIT command. Users can create a PDS member in which commands are specified. On issuing the SUBMIT command these are executed as a job.
We will focus on third approach
Input
On SUBMIT, internal reader reads the JCL and creates an input stream
JES2 reads the input stream, assigns a Job Number and places input stream in SPOOL data set (a message is sent to TSO user about the job number)
Job is put in the conversion queue
Conversion
Converter program analyzes JCL statements
Converts into converter / interpreter text
Checks for Syntax errors
- If any error, Job is queued for output processing
- If no error, Job is queued for processing
Processing
• Selection based on job class and priority
Selected job is passed to Initiator
Initiator invokes Interpreter
Interpreter builds control blocks from converter / interpreter text in a Scheduler Work Area (SWA)
- SWA is part of address space’s private area
- Control blocks describe the data sets required by the job
Initiator allocates resources required by the Job
- Initiator starts the program to be executed
- Builds the user region
- Loads the program in the user region
- Transfers control to the program
On completion of the program execution, initiator de-allocates the resources
The process of allocation / execution and de-allocation is repeated for each job step
Initiator Characteristics
Each initiator can handle one job at a time
There can be multiple initiators
Each initiator has a job class associated with it
System Operators can control the number of initiators and the class/es associated with each initiator
Input Data
Input data to the user’s program can be specified in the job
Called as in-stream data or SYSIN data
SYSIN data is read and stored in JES spool
SYSIN data is treated like a data coming from card reader
Output
Management of System Messages, User Data Sets that need to be Printed / Punched
Organized by output class and device set-up requirements
User's program can produce output data that is stored in the JES spool; this is called SYSOUT data
Each SYSOUT data is assigned an output class
Output class indicates the printer selection
“Held” Output
- Special class (usually Z) is assigned to “hold” the output
- “Held” output remains in the SYSOUT indefinitely
- Usually used to verify before printing
- User can change the class and thus release the “held” output
Hard-Copy
Local or remote processing
Device Selection
Queue the output for print /punch
Purge
Release SPOOL and Job Queue space
Intimate TSO user about job completion
Job Output
Output is produced at each stage of job processing
Includes output produced by;
- JES
- MVS
- User’s program where SYSOUT is allocated as output device
Job output is available to user (you can see it dynamically)
It can be viewed using ISPF
Components of Job Output
Component 1- Separator Page
First and last page of job output
Inserted by JES
Helps the operator segregate the job outputs when directed to a printer
Component 2 – part I Job Log
Messages produced by JES
Also displayed on operator’s console
If the job Abends, error messages are logged in Job Log
Component 2 – part II Job Statistics
Summary information of system resources used by the job e.g.;
Number of JCL cards
Spool usage
Execution time
Component 3 - JCL Listing
• List of JCL that was processed for the job
• Should be same as what user has created
Component 4 - Message Log
• Messages regarding job execution
• Messages produced by MVS
• Includes details of
• Resource Allocation
• Program Execution
• Resource De-allocation
• Consists of Message label and message text
• Message label starting with IEF indicates a MVS message
• Installation specific messages
Component 5 - SYSOUT
• Separate sub-component for each SYSOUT allocation
• Each SYSOUT can have different characteristics e.g. class, record length etc.
VTAM – Virtual Telecommunications Access Method
• Telecommunications (TC) Access Method
• Required to support terminal devices
• Part of SNA – System Network Architecture
• Provides centralized control over all terminals attached to the system
• VTAM Application programs (e.g. TSO, CICS, IMS-DC) communicate with terminal devices via VTAM
CICS – Customer Information Control System (optional component)
• Interactive applications are developed using CICS
• CICS is a VTAM application program
• Works with VTAM to support on-line functions
• CICS implements multi-programming within itself
• Multiple programs which are part of same application are executed within CICS address space
• CICS selects one program at a time for execution
• CICS itself is multi-programmed by MVS along with other programs
DB2 - DataBase 2 (optional component)
• Database Management System
• Relational Implementation
RACF - Resource Access Control Facility
• Comprehensive Security Package
• Though optional, it is used by most installations
• Users and resources (e.g. data sets) are identified to RACF
• Whenever a user tries to access a resource, RACF checks the authorization
• RACF is a set of routines
• Invoked as and when required
SMF - System Management Facility
• Keeps track of system usage
– CPU, DASD I/O, Records Printed etc.
• Data collected when job is executed
• Stored in special data sets
• Used for billing
Language Translators / Linkage Editor / Loader
• Language Translators - Convert source code into an object module
• Separate translator for each language; the Assembler Language Translator is part of MVS
• Linkage Editor (part of MVS) - Converts object modules into an executable, i.e. a load module
• Loader - Creates temporary load module (used during testing phase)
Utilities
• Set of general purpose programs
• Executed like a user program through JCL
• Common Utilities are :
IEBGENER
IEFBR14
SORT
IDCAMS
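As a hedged sketch, an IEBGENER step that copies one sequential dataset to another might look like this; the output dataset name and the SYSDA unit name are illustrative:
//COPYSTEP EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//* SYSIN is DUMMY, so IEBGENER does a straight copy
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=DA0034T.TRG.INVMAS,DISP=SHR
//SYSUT2   DD DSN=DA0034T.TRG.INVMAS2,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(5,5)),
//            RECFM=FB,LRECL=150
IEFBR14 itself does nothing; it is commonly run just so that datasets are allocated or deleted through the DISP parameters on its DD statements.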
An Introduction to TSO
• Time Sharing
• Resource sharing
• MVS handles each TSO user as it handles batch jobs
• The user-specific job (logon procedure) that starts the session determines
- which datasets are allocated
- which terminal monitor program is used
- which procedure (e.g. a CLIST) is executed automatically at logon
TSO Commands
• About 26 commands providing a variety of functions can be used.
• Allow Dataset Management functions
• Program Development functions.
• Batch job functions.
• Other functions like Help, Broadcast, Clist and Rexx.
• You can issue these at the READY prompt or with the TSO command (e.g. from ISPF option 6).
Dataset Management functions
• Allocate Datasets dynamically
• List Datasets
• Print Datasets
• Copy Datasets
• Delete Datasets
• Rename Datasets
• List Catalog Entries
• List VTOC Entries
• Use AMS Services
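For example, several of these functions map to TSO commands keyed at the READY prompt; the dataset names below are illustrative, and the attributes echo the sample allocation used later in this document:
ALLOC DA('DA0034T.TRG.TEMP') NEW SPACE(5,5) TRACKS UNIT(SYSDA) +
      LRECL(150) BLKSIZE(1500) RECFM(F B)
LISTDS 'DA0034T.TRG.JCL' MEMBERS
LISTCAT LEVEL(DA0034T.TRG)
RENAME 'DA0034T.TRG.TEMP' 'DA0034T.TRG.TEMP2'
DELETE 'DA0034T.TRG.TEMP2'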
Program Development functions
• Create program.
• Edit program.
• Compile program.
• Linkedit a program.
• View output.
• Route output to a printer
Batch job functions
• Submit Jobs
• Monitor job
• View output
• Route output
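A hedged sketch of the corresponding TSO commands; the member and job names follow the samples used elsewhere in this document:
SUBMIT 'DA0034T.TRG.JCL(JCL1)'   - submits the job contained in member JCL1
STATUS DA0034TA                  - displays the status of job DA0034TA
OUTPUT DA0034TA                  - processes the held SYSOUT of the job
CANCEL DA0034TA PURGE            - cancels the job and purges its output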
Help
• Help on TSO commands can be obtained by typing “HELP” at the “READY” prompt.
15. Interactive System Productivity Facility (ISPF)
• Access to ISPF is gained by Keying ISPF at the READY prompt
• This is usually done by default in the CLIST that is executed automatically at logon.
• When this is entered you get the Primary Options Menu.
Primary Options Menu
----------------------- ISPF/PDF PRIMARY OPTION MENU ------------------------
OPTION ===> pfshow USERID - DA0034T
0 ISPF PARMS - Specify terminal and user parameters TIME - 06:58
1 BROWSE - Display source data or output listings TERMINAL - 3278
2 EDIT - Create or change source data PF KEYS - 12
3 UTILITIES - Perform utility functions
4 FOREGROUND - Invoke language processors in foreground
5 BATCH - Submit job for language processing
6 COMMAND - Enter TSO Command, CLIST, or REXX exec
7 DIALOG TEST - Perform dialog testing
8 LM UTILITIES - Perform library administrator utility functions
9 IBM PRODUCTS - Additional IBM program development products
10 SCLM - Software Configuration and Library Manager
C CHANGES - Display summary of changes for this release
T TUTORIAL - Display information about ISPF/PDF
X EXIT - Terminate ISPF using log and list defaults
D DATACENTER - Perform Datacenter Defined Functions
S SDSF - Spool Display and Search Facility
U USER - Perform User Defined Functions
F1=HELP F2=SPLIT F3=END F4=RETURN F5=RFIND F6=RCHANGE
F7=UP F8=DOWN F9=SWAP F10=LEFT F11=RIGHT F12=RETRIEVE
PA/PF Key Map
PF1 ===> HELP Enter the Tutorial
PF2 ===> SPLIT Enter Split Screen Mode
PF3 ===> END Terminate the current operation
PF4 ===> RETURN Return to primary options menu
PF5 ===> RFIND Repeat find
PF6 ===> RCHANGE Repeat Change
PF7 ===> UP Move screen window up
PF8 ===> DOWN Move screen window down
PF9 ===> SWAP Activate the other logical screen in split screen mode
PF10 ===> LEFT Scroll screen left
PF11 ===> RIGHT Scroll screen right
PF12 ===> RETRIEVE Retrieve last command
PA1 ===> ATTENTION Interrupt Current operation
PA2 ===> RESHOW Redisplay the current screen
PF1 - PF12 Keys may be duplicated from PF13 to PF24 in 24 key mode.
Split Screen Mode and Tutorial (Help)
• Entered by keying “SPLIT” on the command line
• or by positioning the cursor where required and pressing PF2
• Context Sensitive help can be accessed by typing help on the command line or through the PF1 key
List and Log files
• Some ISPF commands generate output. Printed output like this is collected and stored in a special dataset called the list dataset.
• Whether the list dataset is to be retained, printed and/or deleted can be specified as a default in the setup panels.
• The ISPF operations done are recorded in a Log dataset. The disposition can be specified in the defaults panel.
User Profile
• ISPF maintains a user profile
• This profile contains default values of various entry panels.
Exiting ISPF
To terminate ISPF you can
• type =x at the command line
• or use the PF3 key to exit
If you haven’t specified default dispositions for your List and log datasets then the termination panel is displayed.
Termination Panel
------------------- SPECIFY DISPOSITION OF LOG DATA SET ---------------------
COMMAND ===>
LOG DATA SET DISPOSITION LIST DATA SET OPTIONS NOT AVAILABLE
------------------------- -----------------------------------
Process option ===>
SYSOUT class ===>
Local printer ID ===>
VALID PROCESS OPTIONS:
PD - Print data set and delete
D - Delete data set without printing
K - Keep data set (allocate same data set in next session)
KN - Keep data set and allocate new data set in next session
Press ENTER key to complete ISPF termination.
Enter END command to return to the primary option menu.
Key Mapping
Option 0.3
------------------------ PF KEY DEFINITIONS AND LABELS ------------------------
COMMAND ===>
NUMBER OF PF KEYS ===> 12 TERMINAL TYPE ===> 3278
PF1 ===> HELP
PF2 ===> SPLIT
PF3 ===> END
PF4 ===> RETURN
PF5 ===> RFIND
PF6 ===> RCHANGE
PF7 ===> UP
PF8 ===> DOWN
PF9 ===> SWAP
PF10 ===> LEFT
PF11 ===> RIGHT
PF12 ===> RETRIEVE
PF1 LABEL ===> PF2 LABEL ===> PF3 LABEL ===>
PF4 LABEL ===> PF5 LABEL ===> PF6 LABEL ===>
PF7 LABEL ===> PF8 LABEL ===> PF9 LABEL ===>
PF10 LABEL ===> PF11 LABEL ===> PF12 LABEL ===>
Browsing Datasets (Option 1)
------------------------- BROWSE - ENTRY PANEL ------------------------------
COMMAND ===>
ISPF LIBRARY:
PROJECT ===> DA0034T
GROUP ===> TRG ===> ===> ===>
TYPE ===> JCL
MEMBER ===> (Blank or pattern for member selection list)
OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>
VOLUME SERIAL ===> (If not cataloged)
DATA SET PASSWORD ===> (If password protected)
MIXED MODE ===> NO (Specify YES or NO)
FORMAT NAME ===>
Browse Commands
• Cols - for displaying Columns
• Scroll up, down, left and right with PF7, PF8, PF10 and PF11 respectively.
• Set the scroll amount to CSR, HALF, PAGE, n lines, MAX or DATA
• Scroll by n lines, to top or bottom
• LOCATE {line number | label} - position the display at a line number or a defined label.
• FIND string {NEXT/PREV/FIRST/LAST/ALL}.
• PF5 (RFIND) repeats the previous FIND; prefixing a command with “&” keeps it on the command line.
• Use of PF12 to recall last command.
• Terminate Browse with PF3 Key.
• FIND string {NEXT/PREV/FIRST/LAST/ALL} {CHAR/PREFIX/SUFFIX/WORD} col-1 col-2
• Column-limited search (col-1 col-2)
• T'text' - for a case-insensitive search
• X'hex-string' - for a hexadecimal search
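A few hedged examples of Browse FIND commands; the search strings are illustrative:
FIND 'DA0034T' FIRST          first occurrence of the string
F JOB WORD ALL 1 72           count whole-word occurrences within columns 1 to 72
F T'invmas' PREV              case-insensitive search towards the top of the data
F X'C1C2' LAST                last occurrence of the hexadecimal value C1C2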
Editing Datasets (Option 2)
• The Edit entry panel is similar to the Browse entry panel as regards dataset concatenation and selection.
• Labels can be defined as in Browse, but in Edit they are entered in the line command area (e.g. .A typed over the sequence number).
• Error messages may be removed by typing RESET on the command line.
Standard editing commands
I/In Insert 1 or n lines.
D(n) Delete line or n lines.
DD Delete the block marked by the 2 DD line commands.
R(n) Repeat 1 or n lines.
RR Repeat the block marked by the 2 RR line commands.
C(n) Copy 1 or n lines.
CC Copy the block marked between the 2 CC line commands.
M(n) Move 1 or n lines.
MM Move the block marked between the 2 MM line commands.
A(n) Copy or Move lines 1 or n times after this line.
B(n) Copy or Move lines 1 or n times before this line.
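For instance, line commands are typed over the sequence numbers at the left of the data; a hedged sketch using lines from the sample member shown later:
dd0100 //DA0034TA JOB LA2719,'PARAG',NOTIFY=DA0034T,
000200 // CLASS=A,MSGCLASS=X
dd0300 //*
r30600 //INVMAS DD DSN=DA0034T.TRG.INVMAS,DISP=SHR
Here the pair of DD commands deletes the block of lines 000100 to 000300, and R3 repeats line 000600 three times when Enter is pressed.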
Creating datasets and exiting editor
To create a new member, specify a non-existent member name in the current PDS.
You can quit the editor without saving changes by the CANCEL command.
You can update the dataset with the save command
You can exit with implicit save with the END command or PF3 key.
Edit Profiles
Edit profiles control editing options
Normally, editing a new dataset uses a default profile named after the dataset type (the last qualifier)
To display the edit profile type PROFILE on the command line in the editor
To remove it from the screen type RESET.
This gives you a display as follows:
EDIT ---- DA0034T.TRG.JCL(JCL1) - 01.27 ---------------------- COLUMNS 001 072
COMMAND ===> SCROLL ===> CSR
****** ***************************** TOP OF DATA ******************************
=PROF> ....STD (FIXED - 150)....RECOVERY OFF....NUMBER ON STD..................
=PROF> ....CAPS ON....HEX OFF....NULLS ON STD....TABS ON STD....SETUNDO OFF....
=PROF> ....AUTOSAVE ON....AUTONUM OFF....AUTOLIST OFF....STATS ON..............
=PROF> ....PROFILE UNLOCK....IMACRO NONE....PACK OFF....NOTE ON................
=BNDS> <
=TABS>
=COLS> ----+----1----+----2----+----3----+----4----+----5----+----6----+----7--
000100 //DA0034TA JOB LA2719,'PARAG',NOTIFY=DA0034T,
000200 // CLASS=A,MSGCLASS=X
000300 //*
000400 //COBRUN EXEC PGM=PROG11
000500 //STEPLIB DD DSN=DA0034T.TRG.LNK,DISP=SHR
000510 //*STEPLIB DD DSN=DA0034T.TRG.COBOL2,DISP=SHR
000600 //INVMAS DD DSN=DA0034T.TRG.INVMAS,DISP=SHR
000700 //OP1 DD SYSOUT=*
000710 //*OP1 DD DSN=DA0034T.TRG.EXE7,DISP=(NEW,CATLG,CATLG),
Profile Settings
• To switch to a different profile, key PROFILE profile-name on the command line
• To lock a profile, at the command line key “PROFILE LOCK”
• Any changes made to the locked profile are not saved permanently.
• CAPS, NUMBER, PACK and STATS modes are set each time you begin an edit session
• To define tab stops, key TABS on the command line and place the tab character (e.g. ‘@’) on the =TABS> line one character before where you would like a tab stop; then key TABS ON or TABS OFF on the command line
• If you omit the tabbing character, hardware tabbing is assumed.
• Line control Commands
NONUMBER/NUM OFF turns off line numbering
NUM ON turns on line numbering
AUTONUM resequences line numbers on save
RENUM resequences line numbers
NUM ON COBOL checks for valid COBOL numbering
NUM ON STD checks for standard line numbering
UNNUM removes the line numbers from the data.
Edit Modes
• STATS ON/OFF Controls dataset statistics
• AUTOLIST ON/OFF Controls Automatic listing
• NULLS ON/OFF Controls whether lines are padded with nulls or spaces.
• RECOVERY ON/OFF Recovers a dataset being edited in case of a system crash; it also permits
the use of the UNDO command, which works back to the last save only.
• HEX ON/OFF Displays data in hexadecimal as well as character form.
• CAPS ON/OFF Converts lower case letters to upper case if set to ON.
Line commands for this function are LC and UC; LCLC and UCUC are the block forms.
• PACK ON/OFF Specifies that the data is stored in compressed form.
• AUTOSAVE ON/OFF PROMPT/NOPROMPT Automatically saves the data when the PF3 (END) key is pressed
• IMACRO Specifies an initial macro to be run when the edit session starts.
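These modes are switched with primary commands keyed on the command line, for example:
HEX ON
CAPS OFF
NULLS ON
RECOVERY ON
AUTOSAVE ON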
Advanced Edit Options
To locate a String within another;
FIND string range NEXT/PREV/FIRST/LAST/ALL CHARS/PREFIX/SUFFIX/WORD X/NX col-1 col-2
Where:
range is denoted by 2 labels
string is the string to be found
NEXT start the search at the current line and locate the next occurrence of the string (default).
PREV start the search at the current line and locate the previous occurrence of the string.
FIRST locate the first occurrence of the string
LAST locate the last occurrence of the string
ALL same as FIRST, but also counts all occurrences in the file.
CHARS any occurrence of the string
PREFIX string must be at the beginning of the word
SUFFIX string must be at the end of a word
X/NX Search only excluded/Non excluded lines
col-1 col-2 starting and ending column numbers defining the search boundaries.
To Modify/Change a string with another String;
CHANGE string1 string2 range NEXT/PREV/FIRST/LAST/ALL CHARS/PREFIX/SUFFIX/WORD X/NX col-1 col-2
String2 replaces string1
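Hedged examples of CHANGE and FIND with qualifiers; the strings and labels are illustrative:
C PROG11 PROG12 ALL               change every occurrence in the member
C 'DISP=SHR' 'DISP=OLD' .A .B NX  change only non-excluded lines between labels .A and .B
F NOTIFY WORD FIRST 1 72          find the first whole-word occurrence within columns 1 to 72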
Shifting text source
Data shift line commands: < and > (shift data left or right)
Column shift line commands: ( and ) (shift columns left or right)
Data shifts
• does not drop blank characters
• does not combine words by dropping spaces
• does not delete spaces within apostrophes
• COPY [member] [AFTER/BEFORE label]
• MOVE [member] [AFTER/BEFORE label]
• CREATE [member] [range]
• REPLACE [member] [range]
• Edit member-name to edit recursively
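For example (member names and labels are illustrative):
COPY JCL2 AFTER .A       copy member JCL2 into the current member after label .A
CREATE JCL3 .B .C        create member JCL3 from the lines between labels .B and .C
REPLACE JCL3 .B .C       replace member JCL3 with those lines
EDIT JCL2                edit member JCL2 recursively from within the current session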
Utilities Menu
Option 3
------------------------- UTILITY SELECTION MENU ----------------------------
OPTION ===>
1 LIBRARY - Compress or print data set. Print index listing.
Print, rename, delete, browse, or edit members
2 DATASET - Allocate, rename, delete, catalog, uncatalog, or
display information of an entire data set
3 MOVE/COPY - Move, copy, or promote members or data sets
4 DSLIST - Print or display (to process) list of data set names
Print or display VTOC information
5 RESET - Reset statistics for members of ISPF library
6 HARDCOPY - Initiate hardcopy output
8 OUTLIST - Display, delete, or print held job output
9 COMMANDS - Create/change an application command table
10 CONVERT - Convert old format menus/messages to new format
11 FORMAT - Format definition for formatted data Edit/Browse
12 SUPERC - Compare data sets (Standard Dialog)
13 SUPERCE - Compare data sets and Search-for strings (Extended Dialog)
14 SEARCH-FOR - Search data sets for strings of data (Standard Dialog)
Library Utility
Option 3.1
---------------------------- LIBRARY UTILITY --------------------------------
OPTION ===>
blank - Display member list B - Browse member
C - Compress data set P - Print member
X - Print index listing R - Rename member
L - Print entire data set D - Delete member
I - Data set information E - Edit member
S - Data set information (short)
ISPF LIBRARY:
PROJECT ===> DA0034T
GROUP ===> TRG ===> ===> ===>
TYPE ===> JCL
MEMBER ===> (If "P", "R", "D", "B", "E" or blank selected)
NEWNAME ===> (If "R" selected)
OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>
VOLUME SERIAL ===> (If not cataloged)
DATA SET PASSWORD ===> (If password protected)
Dataset Utility
Option 3.2
---------------------------- DATA SET UTILITY -------------------------------
OPTION ===> A
A - Allocate new data set C - Catalog data set
R - Rename entire data set U - Uncatalog data set
D - Delete entire data set S - Data set information (short)
blank - Data set information M - Enhanced data set allocation
ISPF LIBRARY:
PROJECT ===> DA0034T
GROUP ===> TRG
TYPE ===> JCL
OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>
VOLUME SERIAL ===> (If not cataloged, required for option "C")
DATA SET PASSWORD ===> (If password protected)
New dataset allocation
option 3.2.A
------------------------ ALLOCATE NEW DATA SET ------------------------------
COMMAND ===>
DATA SET NAME: DA0034T.TRG.JCLS
VOLUME SERIAL ===> BS3008 (Blank for authorized default volume)
GENERIC UNIT ===> (Generic group name or unit address)
SPACE UNITS ===> BLOCK (BLKS, TRKS, or CYLS)
PRIMARY QUANTITY ===> 26 (In above units)
SECONDARY QUANTITY ===> 12 (In above units)
DIRECTORY BLOCKS ===> 0 (Zero for sequential data set)
RECORD FORMAT ===> FB
RECORD LENGTH ===> 150
BLOCK SIZE ===> 1500
EXPIRATION DATE ===> (YY/MM/DD, YYYY/MM/DD
YY.DDD, YYYY.DDD in Julian form DDDD for retention period
in days or blank)
( * Only one of these fields may be specified)
Renaming Dataset
Option 3.2.R
---------------------------- RENAME DATA SET --------------------------------
COMMAND ===>
DATA SET NAME: DA0034T.TRG.JCL
VOLUME: BS3008
ENTER NEW NAME BELOW: (The data set will be recataloged.)
ISPF LIBRARY:
PROJECT ===> DA0034T
GROUP ===> TRG
TYPE ===> JCL
OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>
Dataset information
Option 3.2.s
-------------------------- DATA SET INFORMATION -----------------------------
COMMAND ===>
DATA SET NAME: DA0034T.TRG.JCL
GENERAL DATA: CURRENT ALLOCATION:
Management class: MCSTANDS Allocated blocks: 26
Storage class: SCNORM Allocated extents: 1
Volume: BS3008 Maximum dir. blocks: 1
Device type: 3390
Data class:
Organization: PO CURRENT UTILIZATION:
Record format: FB Used blocks: 11
Record length: 150 Used extents: 1
Block size: 1500 Used dir. blocks: 1
1st extent blocks: 26 Number of members: 5
Secondary blocks: 12
Data set name type: PDS
Creation date: 1996/08/08
Expiration date: ***NONE***
Allocate datasets managed by SMS
------------------------ ALLOCATE NEW DATA SET ------------------------------
COMMAND ===>
DATA SET NAME: DA0034T.TRG.JCL
MANAGEMENT CLASS ===> MCSTANDS (Blank for default management class)
STORAGE CLASS ===> SCNORM (Blank for default storage class)
VOLUME SERIAL ===> (Blank for authorized default volume)
DATA CLASS ===> (Blank for default data class)
SPACE UNITS ===> BLOCK (BLKS, TRKS, CYLS, KB, MB or BYTES)
PRIMARY QUANTITY ===> 26 (In above units)
SECONDARY QUANTITY ===> 12 (In above units)
DIRECTORY BLOCKS ===> 1 (Zero for sequential data set) *
RECORD FORMAT ===> FB
RECORD LENGTH ===> 150
BLOCK SIZE ===> 1500
DATA SET NAME TYPE ===> PDS (LIBRARY, PDS, or blank) *
EXPIRATION DATE ===> (YY/MM/DD, YYYY/MM/DD
YY.DDD, YYYY.DDD in Julian form
DDDD for retention period in days
or blank)
(* Specifying LIBRARY may override zero directory block)
Move / Copy
Option 3.3
--------------------------- MOVE/COPY UTILITY -------------------------------
OPTION ===>
C - Copy data set or member(s) CP - Copy and print
M - Move data set or member(s) MP - Move and print
L - Copy and LMF lock member(s) LP - Copy, LMF lock, and print
P - LMF Promote data set or member(s) PP - LMF Promote and print
SPECIFY "FROM" DATA SET BELOW, THEN PRESS ENTER KEY
FROM ISPF LIBRARY: ------ Options C, CP, L, and LP only -------
PROJECT ===> DA0034T
GROUP ===> TRG ===> ===> ===>
TYPE ===> JCL
MEMBER ===> (Blank or pattern for member selection list,
'*' for all members)
FROM OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>
VOLUME SERIAL ===> (If not cataloged)
DATA SET PASSWORD ===> (If password protected)
Move / Copy- 2
COPY --- FROM DA0034T.TRG.JCL -------------------------------------------------
COMMAND ===>
SPECIFY "TO" DATA SET BELOW.
TO ISPF LIBRARY:
PROJECT ===> DA0034T
GROUP ===> TRG
TYPE ===> JCL
TO OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>
VOLUME SERIAL ===> (If not cataloged)
DATA SET PASSWORD ===> (If password protected)
"TO" DATA SET OPTIONS:
IF PARTITIONED, REPLACE LIKE-NAMED MEMBERS ===> YES (YES or NO)
IF SEQUENTIAL, "TO" DATA SET DISPOSITION ===> OLD (OLD or MOD)
SPECIFY PACK OPTION FOR "TO" DATA SET ===> (YES, NO or blank)
DSLIST Utility
Option 3.4
--------------------------- DATA SET LIST UTILITY -----------------------------
OPTION ===>
blank - Display data set list * P - Print data set list
V - Display VTOC information only PV - Print VTOC information only
Enter one or both of the parameters below:
DSNAME LEVEL ===> DA0034T.TRG.*
VOLUME ===>
INITIAL DISPLAY VIEW ===> VOLUME (VOLUME,SPACE,ATTRIB,TOTAL)
CONFIRM DELETE REQUEST ===> YES (YES or NO)
* The following line commands will be available when the list is displayed:
B - Browse data set C - Catalog data set F - Free unused space
E - Edit data set U - Uncatalog data set = - Repeat last command
D - Delete data set P - Print data set
R - Rename data set X - Print index listing
I - Data set information M - Display member list
S - Information (short) Z - Compress data set TSO cmd, CLIST or REXX exec
DSLIST Dataset Selection
DSLIST - DATA SETS BEGINNING WITH DA0034T.TRG.* ----------------- ROW 1 OF 23
COMMAND ===> SCROLL ===> PAGE
COMMAND NAME MESSAGE VOLUME
-------------------------------------------------------------------------------
DA0034T.TRG.ACCOUNT MIGRAT
DA0034T.TRG.BADCOBOL MIGRAT
DA0034T.TRG.COBOL MIGRAT
DA0034T.TRG.COBOL1 MIGRAT
m DA0034T.TRG.JCL BS3008
DA0034T.TRG.LNK MIGRAT
DSLIST Commands
C - Catalog a dataset
D - Delete a dataset
E - Edit a dataset
F - Free unused dataspace in a dataset
I - Display information for a dataset
M - Display a memberlist
P - Print a dataset
R - Rename a dataset
S - Display a shortened version of dataset information
U - Uncatalog a dataset
X - Print an index listing of a dataset
Z - Compress a dataset
= - Repeat the last command
Primary Commands
LOCATE To locate a dataset
TSO command To execute TSO commands (e.g. TSO SUBMIT), CLISTs or REXX execs from the command line
SHOWCMD ON/OFF To show the expanded form of the command
CONFIRM ON/OFF Same as Confirm delete request Yes/NO on the delete panel
SORT Sorts the dataset list based on the displayed list fields
FIND Finds an occurrence of a string within the list of dataset names
SAVE dataset-name Saves the current dataset list into the dataset specified
SELECT pattern [line-command] Selects the datasets matching the pattern and applies the
line command to them
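A hedged sketch of some of these primary commands keyed on the DSLIST command line; the saved list name is illustrative:
LOCATE DA0034T.TRG.JCL     scroll the list to that dataset name
FIND COBOL                 find the next dataset name containing the string COBOL
SAVE MYLIST                save the current dataset list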
Reset
Option 3.5
-------------------------- RESET ISPF STATISTICS ----------------------------
OPTION ===>
R - Reset (create/update) ISPF statistics
D - Delete ISPF statistics
NEW USERID ===> (If userid is to be changed)
NEW VERSION NUMBER ===> (If version number is to be changed)
RESET MOD LEVEL ===> YES (YES or NO)
RESET SEQ NUMBERS ===> YES (YES or NO)
ISPF LIBRARY:
PROJECT ===> DA0034T
GROUP ===> TRG
TYPE ===> JCL
MEMBER ===> (Blank or pattern for member selection
list, '*' for all members)
OTHER PARTITIONED DATA SET:
DATA SET NAME ===>
VOLUME SERIAL ===> (If not cataloged)