
Status Not under consideration
Categories Other
Created by Guest
Created on Nov 20, 2012

Shared Frame Write For CICS Data Table

This RFE requests an enhancement to the EXEC CICS WRITE command to support a SHARED FRAME option for writes to a CICS data table.



A SHARED FRAME WRITE would mean that when a program issues a WRITE to a CICS data table with the proposed SHARED FRAME option, the underlying storage frames for the data being written to the data space are shared between the data space and the CICS region rather than being copied. This should reduce both the wall time and the CPU time of the write. The SHARED FRAME option should probably be used only when concurrent access to the data being written to the data table is not required.
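To make the proposal concrete, here is a rough sketch of what the option might look like from a COBOL program (the SHAREDFRAME keyword and the working-storage names are purely illustrative; no such option exists in any CICS release today):

      * Hypothetical syntax: SHAREDFRAME would ask CICS to map the
      * data space pages onto the frames already backing WS-RECORD
      * instead of copying the record into newly allocated frames.
           EXEC CICS WRITE
                FILE('BASF906O')
                FROM(WS-RECORD)
                LENGTH(WS-REC-LEN)
                RIDFLD(WS-REC-KEY)
                SHAREDFRAME
           END-EXEC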



A performance investigation of the current EXEC CICS WRITE to a data table produced the following observations.



The following results are from performance tests of a program that issued EXEC CICS WRITEs of 32,000-byte records, for varying total amounts of data, to a CICS data table named BASF906O.
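A minimal COBOL sketch of the kind of test driver described (the record layout, key handling, and the write count for the 2 MB case are illustrative, and the timing instrumentation is omitted):

       IDENTIFICATION DIVISION.
       PROGRAM-ID. SHRWTST.
      * Illustrative test driver: write N records of 32,000 bytes
      * to the BASF906O data table with EXEC CICS WRITE.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-RECORD.
           05  WS-REC-KEY     PIC 9(9).
           05  WS-REC-DATA    PIC X(31991).
       01  WS-REC-LEN         PIC S9(4) COMP VALUE 32000.
       01  WS-WRITE-COUNT     PIC 9(9)  VALUE 63.
       01  WS-IX              PIC 9(9)  VALUE 0.
       01  WS-RESP            PIC S9(8) COMP VALUE 0.
       PROCEDURE DIVISION.
           PERFORM VARYING WS-IX FROM 1 BY 1
                   UNTIL WS-IX > WS-WRITE-COUNT
               MOVE WS-IX TO WS-REC-KEY
               EXEC CICS WRITE
                    FILE('BASF906O')
                    FROM(WS-RECORD)
                    LENGTH(WS-REC-LEN)
                    RIDFLD(WS-REC-KEY)
                    RESP(WS-RESP)
               END-EXEC
               IF WS-RESP NOT = DFHRESP(NORMAL)
                   EXEC CICS ABEND ABCODE('SHRW') END-EXEC
               END-IF
           END-PERFORM
           EXEC CICS RETURN END-EXEC.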



Here are the results:



2 MB test case

NUMBER OF 32000 BYTE WRITES WAS 000000063

Total Wall Time (seconds) for BASF9060 Writes - 0.001302

Total Wall Time (seconds) for Task - 0.002036

Total CPU Time (seconds) for Task - 0.001828



20 MB test case

NUMBER OF 32000 BYTE WRITES WAS 000000625

Total Wall Time (seconds) for BASF9060 Writes - 0.012551

Total Wall Time (seconds) for Task - 0.013288

Total CPU Time (seconds) for Task - 0.012823



200 MB test case

NUMBER OF 32000 BYTE WRITES WAS 000006250

Total Wall Time (seconds) for BASF9060 Writes - 0.258004

Total Wall Time (seconds) for Task - 0.317859

Total CPU Time (seconds) for Task - 0.142325



You can see that the wall time and CPU time go up almost in proportion to the amount of data written to the BASF906O data table across the different test cases. This performance data potentially suggests that a significant amount of the wall and CPU time is being spent allocating physical storage frames and moving the data from one physical storage frame to another. With the shared frame write approach, there would be no need to allocate new physical storage frames and move the data between frames. Instead, only the virtual page tables for the data space would be adjusted so that the CICS region and the data space share the same physical storage frames for the data.
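To put rough numbers on it (assuming the standard 4 KB z/OS page size): the 200 MB test case pushes 6,250 x 32,000 bytes = 200,000,000 bytes through the copy path, whereas a shared frame write for the same data would only need page table adjustments for roughly 200,000,000 / 4,096 ≈ 48,800 frames, with no movement of the data itself.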

Idea priority Medium
  • Guest | Oct 5, 2015

    Due to processing by IBM, this request was reassigned to have the following updated attributes:
    Brand - Servers and Systems Software
    Product family - Transaction Processing
    Product - CICS Transaction Server

    For record keeping, the previous attributes were:
    Brand - WebSphere
    Product family - Transaction Processing
    Product - CICS Transaction Server

  • Guest | Dec 5, 2012

    Dear IBM,

    Thank you for your responses and information. I appreciate it.

    Thanks,
    Tim

  • Guest | Dec 4, 2012

    We are aware of the overhead of copying data, and there may be ways to address this moving forward. However, we have no plans to change data tables in the way suggested.

  • Guest | Dec 4, 2012

    CICS TS 5.1 will support EXEC CICS LINK from a COBOL 31-bit program to an AMODE 64 non-LE assembler program. A native call is not supported. In the future, when LE-supported languages are supported, they will still not support 31-bit to 64-bit via CALL, because LE will not be supporting mixed AMODE.

    The FILEA assembler sample has been converted to run AMODE 64, giving an example of the relative addressing and the grande form of assembler instructions required. However, it does not use GETMAIN64.

  • Guest | Nov 27, 2012

    I did have one other follow-up to the GETMAIN64/FREEMAIN64 suggestion for caching large amounts of data. I would assume that with this approach we would still be duplicating the data in physical storage? In other words, if I have 4K of data at virtual address 3C450010 that is backed by a physical storage frame at 239E0020, and I do this GETMAIN64, get back an 8-byte pointer, and move the 4K of data to the storage it addresses, the data will be moved to another physical storage frame? So I will have the same data duplicated at physical address 239E0020 and in another physical storage frame?

    If so, this is not really getting at the crux of this RFE. This RFE asks for the ability to avoid the wall time and CPU time of moving data from one storage frame to another when you really just want to cache the data and come back to it in another CICS task. Specifically for the CICS data table use case, it asks whether just the virtual page tables for the data space can be updated to reuse the physical storage frames referenced in the write, so that new physical storage frames do not need to be allocated and populated when the data is cached. If we were to go with the GETMAIN64/FREEMAIN64 approach, we would still want something like this so that the overhead of duplicating and populating physical storage frames does not occur. If I am not being clear, please let me know.

    Thanks,
    Tim

  • Guest | Nov 26, 2012

    Hello,

    Yes, this is a possibility. It would require more development time on our part to write assembler code to put the storage above the bar. The advantage of using the extended addressability of data spaces through a CICS data table is that we are already using a data table in our application to store "cookies" for this session data, so it is not much more development effort to leverage that data table approach to store the raw session data.

    Would you have any assembler sample code for using this GETMAIN64 and FREEMAIN64 approach? And I assume a COBOL/LE 31-bit program could call this 64-bit assembler module?

    Thanks,
    Tim

  • Guest | Nov 26, 2012

    A more immediate solution might be to use the new functionality in CICS TS 5.1, which will GA on December 14th, 2012. The release includes new functionality to run non-LE assembler programs AMODE 64 and a new CICS API, GETMAIN64 and FREEMAIN64, to acquire 64-bit storage. This functionality is aimed at precisely this use case: being able to write assembler routines to cache large amounts of data above the bar. Is this a possibility for you?