 Post subject: HOWTO: GT.M/OpenVMS Production Instance - 3: GDE and MUPIP
PostPosted: Sun Aug 21, 2011 8:26 pm 

Joined: Mon Nov 01, 2010 1:39 pm
Posts: 51
Real Name: John Willis
Began Programming in MUMPS: 01 Apr 2010
HOWTO: GT.M Production Instance on OpenVMS/Alpha
Part 3: Defining the Global Directory and Creating the Data File

In this installment, we will use the GT.M Global Directory Editor (GDE) and the MUMPS Peripheral Interchange Program (MUPIP) to define the global directory and database file for the instance.

For this installment, you will need to be logged into the <instance-user> account created previously. This is crucial.

Historical Note
The MUPIP program's name has very deep roots. A Peripheral Interchange Program (PIP) first appeared on Digital Equipment Corporation's PDP-6 series of computers in the mid-1960s, and later made it into TOPS-10 on the PDP-10, RSTS/E on the PDP-11, and eventually into Gary Kildall's CP/M operating system, which is widely credited as an early and important foundation of the personal computer revolution. How it got into GT.M is a bit of trivia with which I am as yet unacquainted, but perhaps someone here can shed a little light on the subject. - JPW

So, without further ado, here are the commands used to set up your global directory:

Code:
$ RUN GTM$DIST:GDE
GDE> CHANGE /SEGMENT $DEFAULT /FILE=<data-device>:[<instance-user>.g]MUMPS.DAT /ALLOC=200000 /BLOCK_SIZE=4096 /LOCK_SPACE=1000 /EXTENSION_COUNT=0
GDE> CHANGE /REGION $DEFAULT /RECORD_SIZE=4080 /KEY_SIZE=255
GDE> EXIT


The above commands bear further explanation.

The first line is the DCL command which will launch the GT.M Global Directory Editor, and should be familiar to anyone who has a passing familiarity with OpenVMS and DCL.

The second line sets the characteristics of the $DEFAULT database segment. The /FILE switch tells GDE to use <data-device>:[<instance-user>.g]MUMPS.DAT to store the data for the segment. The /ALLOC and /BLOCK_SIZE switches instruct GDE to allocate 200,000 blocks of 4,096 bytes each to the segment. The /LOCK_SPACE switch instructs GDE to reserve 1,000 pages for the lock table, which helps keep heavy LOCK usage from exhausting lock space. The /EXTENSION_COUNT=0 switch disables GT.M's ability to automatically expand the database when storage grows short. Although you can set EXTENSION_COUNT to a rather arbitrary number of blocks, I do not recommend this practice: filling up your data drive at the OpenVMS level can be far more catastrophic than filling up your database file, which simply halts further writes to the database. A better solution is to employ a script that monitors database usage and notifies you when a certain threshold is reached.

The third line sets the characteristics of the $DEFAULT region: /RECORD_SIZE=4080 caps the length of a single record (global node) at 4,080 bytes, and /KEY_SIZE=255 caps the length of a key (global reference) at 255 bytes. The final EXIT verifies the configuration and writes the new global directory to disk; QUIT would abandon the changes.
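
As a starting point for such a monitor, here is a minimal M sketch. It leans on the GT.M $VIEW keywords "TOTALBLOCKS" and "FREEBLOCKS" (the same information the bundled ^%FREECNT utility reports); the routine name DBCHECK, the hard-coded region name, and the 90% threshold are my own illustrative choices, so treat this as a sketch rather than a finished tool.

Code:
DBCHECK ; sketch: report block usage for region $DEFAULT
        NEW TOTAL,FREE,USED
        SET TOTAL=$VIEW("TOTALBLOCKS","$DEFAULT") ; total blocks in the region's database file
        SET FREE=$VIEW("FREEBLOCKS","$DEFAULT")   ; blocks not yet in use
        SET USED=(TOTAL-FREE)/TOTAL*100           ; percent used; M evaluates strictly left to right
        WRITE "Region $DEFAULT is ",$JUSTIFY(USED,0,1),"% full",!
        IF USED>90 WRITE "WARNING: fewer than 10% of blocks remain free",!
        QUIT

Run periodically from a batch job, with the WRITEs swapped for a MAIL or OPCOM notification, this becomes the threshold monitor described above.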

It is worth noting that you can calculate your database size by multiplying /BLOCK_SIZE by /ALLOC. In this case, 200,000 blocks * 4,096 bytes per block = 819,200,000 bytes, which works out to exactly 781.25MB.
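
If you want to check the arithmetic without reaching for a calculator, DCL can do it for you (DCL integer arithmetic is 32-bit signed, which is ample here):

Code:
$ WRITE SYS$OUTPUT 200000*4096
819200000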

Next, we will set up journaling using the MUPIP program.

Journaling and MUPIP

The following commands will enable journaling to <journal-device>:[<instance-user>.j]<instance-user>.MJL:

Code:
$ RUN GTM$DIST:MUPIP
MUPIP> CREATE
Database file for region $DEFAULT created.
$ RUN GTM$DIST:MUPIP
MUPIP> SET /REGION $DEFAULT /JOURNAL=(ENABLE,ON,BEFORE,FILENAME=<journal-device>:[<instance-user>.j]<instance-user>.MJL)
%GTM-I-JNLCREATE, Journal file <journal-device>:[<instance-user>.j]<instance-user>.MJL created for region $DEFAULT
 with BEFORE_IMAGES
%GTM-I-JNLSTATE, Journaling state for region $DEFAULT is now ON


CREATE tells MUPIP to create the .DAT file as specified by the global directory.

The command containing SET /REGION tells MUPIP to enable journaling for region $DEFAULT. ENABLE tells MUPIP that the specified region is ready to be journaled. ON tells MUPIP to create a new journal file (as specified by FILENAME) and begin using the newly-created file to record future journal entries. BEFORE instructs GT.M's journaling system to archive data blocks prior to modifying them, and enables the use of the rollback recovery facility on the specified region.
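
Should you ever need that recovery facility, it too is driven through MUPIP JOURNAL. The following is a hedged sketch of a backward recovery against this instance's journal file; qualifier spellings vary between GT.M releases, so verify them against the GT.M Administration and Operations Guide for your version before relying on them:

Code:
$ RUN GTM$DIST:MUPIP
MUPIP> JOURNAL /RECOVER /BACKWARD <journal-device>:[<instance-user>.j]<instance-user>.MJL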

Now that the database is being journaled, we need only set the ownership and permissions on MUMPS.DAT and MUMPS.GLD to prevent unauthorized access. This procedure is detailed in the DCL example below:

Code:
$ SET FILE/OWNER=<instance-user> <data-device>:[<instance-user>.g]MUMPS.GLD
$ SET FILE/OWNER=<instance-user> <data-device>:[<instance-user>.g]MUMPS.DAT
$ SET SECURITY /PROTECTION=(S:RWED,O:RWE,G:RWE,W:"") <data-device>:[<instance-user>.g]MUMPS.GLD
$ SET SECURITY /PROTECTION=(S:RWED,O:RWE,G:RWE,W:"") <data-device>:[<instance-user>.g]MUMPS.DAT
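
To confirm the new ownership and protection took effect, standard DCL will display both (the output format varies a little by OpenVMS version):

Code:
$ DIRECTORY /SECURITY <data-device>:[<instance-user>.g]MUMPS.GLD
$ DIRECTORY /SECURITY <data-device>:[<instance-user>.g]MUMPS.DAT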


The instance is now created with journaling and security protections in place. You can now install any local MUMPS applications' routines into <home-device>:[<instance-user>.r].

I am considering writing another installment covering the configuration of GT.M replication, and possibly the configuration of WorldVistA, depending on feedback received here. I hope you find this guide useful, and wish you all the best!


 Post subject: Re: HOWTO: GT.M/OpenVMS Production Instance - 3: GDE and MUP
PostPosted: Sun Oct 09, 2011 10:40 am 

Joined: Sun Oct 09, 2011 9:03 am
Posts: 3
Real Name: plewin
Began Programming in MUMPS: 01 Jan 1980
I am very much interested in global replication. I used to do this in DSM-11.

An advantage of the concept of 'replication' is that reads (i.e. views) from a local instance are more common than writes (i.e. updates to the database). Since only writes need to be synchronized to replicated globals, remote systems do not have to be burdened as much. Here are my questions.

Will GT.M support a large number of remotely located replicated globals, even on hundreds of remote instances of GT.M? DEC DSM-11 used DDP as its disk/global update protocol. What does GT.M use? Can this traffic be encrypted, and how secure is it?

Will Intersystems' Cache provide such replication as well? I think Intersystems' Cache uses the proprietary ECP for its shadowing to a remote disk subsystem, but it is not clear whether Cache provides replication as well as shadowing. Is there a difference? Is replication needed as something beyond shadowing? Do these systems use TCP/IP? Do they use HTTP, or is the replication subsystem its own proprietary protocol? Can one have both GT.M and Intersystems Cache instances participate in such replication? The idea is for globals on disks in, say, Wyoming and Maryland to be synchronized in real time.

Is synchronization better done using iSCSI or a SAN? Is there inexpensive software that will keep disks synchronized (replicated) in real time across TCP/IP locations around the country, for Windows and/or Linux or mixed systems? My thinking is that if disk-block-level synchronization could be done at the OS level, then it would work whether the bits belonged to globals or to Oracle relational databases participating in the replication network. So many questions. A response from those who know would be most appreciated. Links to references would be great.



 Post subject: Re: HOWTO: GT.M/OpenVMS Production Instance - 3: GDE and MUP
PostPosted: Sun Oct 09, 2011 10:51 am 

Joined: Sun Oct 09, 2011 9:03 am
Posts: 3
Real Name: plewin
Began Programming in MUMPS: 01 Jan 1980
Can one have replication in a mixed GT.M and Intersystems Cache environment, say 10 GT.M and 10 Intersystems Cache instances across the U.S. with synchronized/replicated globals?

As an alternative, is there free or low-cost real-time synchronization software at the disk-block level, so that changes to a disk block at location A would automatically update the corresponding disk block at location B? Is that a better methodology, since a system running, say, Oracle, Intersystems Cache, and GT.M at location A would be guaranteed to be kept in sync with a similar system at location B? Something like iSCSI or SAN functionality, but free or very low cost. Any links would be most appreciated.


 Post subject: Re: HOWTO: GT.M/OpenVMS Production Instance - 3: GDE and MUP
PostPosted: Sun Oct 09, 2011 11:02 am 

Joined: Sun Oct 09, 2011 9:03 am
Posts: 3
Real Name: plewin
Began Programming in MUMPS: 01 Jan 1980
I also wanted to mention that DEC DSM-11 used DDP for such global updates, to keep remotely located globals in sync. I believe Intersystems used ECP for their shadowing capability. Is shadowing the same as replication? Does Intersystems have a global replication subsystem that would allow a global on 100 disks distributed across the U.S. to be kept in synchronization (i.e. replicated)?

An advantage of any synchronization/replication scheme is that reads (views) are more common than disk writes, and only the disk-write traffic needs to be sent to each system. Can the traffic be sufficiently encrypted (SSL or RSA)?

I wish there were secure and inexpensive software available for Windows and Linux that did this at the disk-block level, in real time, because the bits do not know whether they belong to a GT.M global, an Intersystems global, or an Oracle relational data structure. Ideally, a subsystem that achieved real-time synchronization/replication (perhaps with a list of 'folders' to participate) across a mixed-OS network of Windows and Linux machines would be the ideal solution. If it were free or low cost, and used an efficient protocol like the old DEC DDP or Intersystems ECP, it would be perfect! Is there such a real-time disk synchronization/replication subsystem? Links would be most appreciated. Thanks ... I just discovered this forum. Are there other forums where I should post these questions?

