Make Your z/OS Deployments Failproof With UrbanCode Agent Pools

Strongback Consulting

If you’re using UrbanCode Deploy (UCD) in a large z/OS mainframe environment and want to take advantage of agent pools, there are some “gotchas” you need to be aware of.

What are agent pools?

An agent pool is a group of UrbanCode agent started tasks (BUZAGNT) running on more than one z/OS LPAR. These agents respond to deployment requests in an environment. If one agent is down during a deployment request (for example, during the IPL of one LPAR), another agent will take over. Agent pools have been used on the distributed side for many years, but using them on the z/OS side requires some additional configuration. Also, after having dealt with a few customer environments and a few upgrades, I’ve added some additional instructions to make upgrades go more smoothly.

Symbolic Links to the Installation Directory

Create a symbolic link for the UCD agent directory so that it can withstand upgrades from version to version. The default installation will typically direct you to install into a version-specific directory. That is fine, but it’s better to create the directory and the symbolic link first. For example:

ln -s /usr/lpp/ucd/current /usr/lpp/ucd/v7r10
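
A minimal sketch of the full sequence, using the example paths above (adjust the directory and version names to your own standards): create the real, version-neutral directory, create the version-named link that points to it, and verify the link before running the installer.

mkdir -p /usr/lpp/ucd/current                  # the real, version-neutral installation directory
ln -s /usr/lpp/ucd/current /usr/lpp/ucd/v7r10  # version-named link you can point the installer at
ls -l /usr/lpp/ucd                             # verify: v7r10 -> /usr/lpp/ucd/current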

Create common work and deploy directories for agent pools

Agent pools will really only work correctly if you have shared DASD across your LPARs. You will need common work and deploy directories. If you don’t do this, you will only be able to roll back a deployment using the agent that the component version was originally deployed to.

mkdir -p /shared/ucd/var
mkdir /shared/ucd/var/deploy
chmod -R 744 /shared/ucd/var

When you do a deployment, look at the logs and you’ll see that the deploy data sets step creates a backup file before deploying the data sets from the component version. This backup file is stored in the agent’s var/deploy directory. If you’ve used the defaults, that directory is specific to that agent only. If another agent in the agent pool picks up a request to execute a rollback, it will not find the backup in its own var/deploy directory and the rollback will fail. This is why you need a common directory. Once you’ve created the directories above, edit the BUZ_DEPLOY_BASE configuration property for all agents in the agent pool from the UCD web administration interface and set every agent to this directory (/shared/ucd/var/deploy).
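
Before repointing BUZ_DEPLOY_BASE, a quick sanity check (using the shared paths assumed above) is to confirm from each LPAR in the pool that the same shared filesystem is mounted and that the agent’s user ID can write to it:

df /shared/ucd/var/deploy             # should report the same shared filesystem on every LPAR
touch /shared/ucd/var/deploy/.rwtest  # run under the agent's user ID to confirm write access
rm /shared/ucd/var/deploy/.rwtest     # clean up the test file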

Update USS configuration files if needed

If you installed the agent before setting up the symbolic links, you’ll need to change the configuration settings manually (see the sketch after this list). This includes modifying the following files:

  • <agent home>/agent/bin/agent
  • <agent home>/agent/bin/buztool.sh
  • <agent home>/agent/bin/classpath.conf
  • <agent home>/agent/bin/configure-agent
  • <agent home>/agent/bin/init/agent
  • <agent home>/agent/bin/test-create-version.sh
  • <agent home>/agent/bin/worker-args.conf
  • <agent home>/agent/conf/agent/installed.properties
  • <agent home>/agent/conf/toolkit/ISPF.conf
  • <agent home>/agent/conf/toolkit/ISPXENV
  • The BUZAGNT started task JCL
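
The sketch below shows one way to make those path changes in bulk. It assumes the installation now lives at /usr/lpp/ucd/current (as in the symbolic-link layout above) while its files still reference the original version-specific path; the old path /usr/lpp/ucd/v7r10 is illustrative, so verify each file before and after the change.

cd /usr/lpp/ucd/current/agent
for f in bin/agent bin/buztool.sh bin/classpath.conf bin/configure-agent \
         bin/init/agent bin/test-create-version.sh bin/worker-args.conf \
         conf/agent/installed.properties conf/toolkit/ISPF.conf conf/toolkit/ISPXENV
do
  # sed -i is not available in z/OS USS, so write to a temp file and
  # copy it back over the original to preserve permissions and ownership
  sed 's|/usr/lpp/ucd/v7r10|/usr/lpp/ucd/current|g' "$f" > "$f.tmp" &&
    cat "$f.tmp" > "$f" && rm "$f.tmp"
done
# The BUZAGNT started task JCL lives in a PROCLIB data set and must be updated separately.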

Final thoughts

Keep in mind that adding any agent, even one that sits in an agent pool only for failover or workload balancing, still incurs the cost of that agent and therefore consumes a license for it.

Agents in an agent pool need shared DASD between them to facilitate deployments. They also need access to each other’s job monitor (JMON) to handle any post-deployment processing, such as DB2 BINDs and CICS NEWCOPYs.

This post touches on using agent pools for robustness, but there are other factors to consider when hardening a deployment environment. Contact us if you are planning an infrastructure using UCD on the mainframe.

