5 Oracle Fleet Patching and Provisioning Overview

Oracle Fleet Patching and Provisioning is a software lifecycle management method for provisioning and maintaining Oracle homes.

Oracle Fleet Patching and Provisioning (Oracle FPP) enables mass deployment and maintenance of standard operating environments for databases, clusters, and user-defined software types. With Oracle Fleet Patching and Provisioning, you can also install clusters and provision, patch, scale, and upgrade Oracle Grid Infrastructure and Oracle Database 11g release 2 (11.2), and later. Additionally, you can provision applications and middleware.

Note:

Starting with Oracle Grid Infrastructure 19c, the feature formerly known as Rapid Home Provisioning (RHP) is now Oracle Fleet Patching and Provisioning (Oracle FPP).

Fleet Patching and Provisioning Architecture

The Fleet Patching and Provisioning architecture consists of a Fleet Patching and Provisioning Server and any number of Fleet Patching and Provisioning Clients and targets.

Oracle recommends deploying the Fleet Patching and Provisioning Server in a multi-node cluster so that it is highly available.

The Fleet Patching and Provisioning Server cluster is a repository for all data, of which there are primarily three types:

  • Gold images
  • Working copies and clients
  • Metadata related to users, roles, permissions, and identities

The Fleet Patching and Provisioning Server acts as a central server for provisioning Oracle Database homes, Oracle Grid Infrastructure homes, and other application software homes, making them available to the cluster hosting the Fleet Patching and Provisioning Server and to the Fleet Patching and Provisioning Client clusters, their targets, and non-client targets.

Users operate on the Fleet Patching and Provisioning Server or Fleet Patching and Provisioning Client to request deployment of Oracle homes or to query gold images. When a user makes a request for an Oracle home, specifying a gold image, the Fleet Patching and Provisioning Client communicates with the Fleet Patching and Provisioning Server to pass on the request. The Fleet Patching and Provisioning Server processes the request by taking appropriate action to instantiate a copy of the gold image, and to make it available to the Fleet Patching and Provisioning Client cluster using available technologies such as Oracle Automatic Storage Management Cluster File System (Oracle ACFS) and local file systems.
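
For example, before requesting a home, a user can list the gold images known to the Fleet Patching and Provisioning Server; a minimal sketch (the image name shown is illustrative):

$ rhpctl query image
$ rhpctl query image -image db19_2024Q1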

Oracle Fleet Patching and Provisioning Server

The Oracle Fleet Patching and Provisioning Server (FPPS) is a highly available software provisioning system that uses Oracle Automatic Storage Management (Oracle ASM), Oracle Automatic Storage Management Cluster File System (Oracle ACFS), Grid Naming Service (GNS), and other components.

The Oracle Fleet Patching and Provisioning Server primarily acts as a central server for provisioning Oracle homes, making them available to Oracle Fleet Patching and Provisioning Client and targets.

Features of the Oracle Fleet Patching and Provisioning Server:

  • Efficiently stores gold images and image series for the managed homes, including separate binaries, and metadata related to users, roles, and permissions.

  • Stores working copies and Oracle Fleet Patching and Provisioning Client information.

  • Provides a list of available homes to clients upon request.

  • Enables you to patch a software home once and then deploy it to any Oracle Fleet Patching and Provisioning Client or any other target, instead of patching every site.

  • Provides the ability to report on existing deployments.

  • Deploys homes on physical servers and virtual machines.

  • Notifies subscribers of changes to image series.

  • Maintains an audit log of all RHPCTL commands run.

Oracle Fleet Patching and Provisioning Targets

Computers of which Oracle Fleet Patching and Provisioning is aware are known as targets.

Oracle Fleet Patching and Provisioning Servers can create new targets, and can also install and configure Oracle Grid Infrastructure on targets with only an operating system installed. Subsequently, Oracle Fleet Patching and Provisioning Server can provision database and other software on those targets, perform maintenance, scale the target cluster, in addition to many other operations. All Oracle Fleet Patching and Provisioning commands are run on the Oracle Fleet Patching and Provisioning Server. Targets running the Oracle Fleet Patching and Provisioning Client in Oracle Clusterware 12c release 2 (12.2), and later, may also run many of the Oracle Fleet Patching and Provisioning commands to request new software from the Oracle Fleet Patching and Provisioning Server and initiate maintenance themselves, among other tasks.

Note:

The Oracle Fleet Patching and Provisioning Server communicates with Oracle Grid Infrastructure Clusters at version 12.2.0.1 and later through an Oracle Fleet Patching and Provisioning Client that can be configured and started up on the target cluster. The Oracle Fleet Patching and Provisioning Client is not supported for targets at Oracle Grid Infrastructure version 12.1 and earlier, for any version of Oracle Restart, or for standalone database targets (database homes without an Oracle Grid Infrastructure home or Oracle Restart home).

Oracle Fleet Patching and Provisioning Clients

The Oracle Fleet Patching and Provisioning Client is part of the Oracle Grid Infrastructure. Users operate on an Oracle Fleet Patching and Provisioning Client to perform tasks such as requesting deployment of Oracle homes and listing available gold images.

When a user requests an Oracle home specifying a gold image, the Oracle Fleet Patching and Provisioning Client communicates with the Oracle Fleet Patching and Provisioning Server to pass on the request. The Oracle Fleet Patching and Provisioning Server processes the request by instantiating a working copy of the gold image and making it available to the Oracle Fleet Patching and Provisioning Client using Oracle ACFS or a different local file system.

The difference between an Oracle FPP Target and an Oracle FPP Client is that the Oracle FPP Client has Oracle Grid Infrastructure installed and the additional rhpclient component enabled. The rhpclient component allows the Oracle FPP Client to initiate tasks itself, whereas on an Oracle FPP Target only the Oracle FPP Server can initiate operations. All remote targets managed by an Oracle FPP Server are known as Oracle FPP Targets.

The Oracle Fleet Patching and Provisioning Client:

  • Can use Oracle ACFS to store working copies of gold images which can be rapidly provisioned as local homes; new homes can be quickly created or undone using Oracle ACFS snapshots.

    Note:

    Oracle supports using other local file systems besides Oracle ACFS.

  • Provides a list of available homes from the Oracle Fleet Patching and Provisioning Server.

  • Has full functionality in Oracle Clusterware 12c release 2 (12.2) and can communicate with Oracle Fleet Patching and Provisioning Servers from Oracle Clusterware 12c release 2 (12.2), or later.
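
For illustration, a typical flow for creating a client pairs an rhpctl command on the server with srvctl commands on the client cluster; a sketch, assuming the rhpctl add client and srvctl add rhpclient verbs in your release (the cluster name and paths are examples):

On the Fleet Patching and Provisioning Server:
$ rhpctl add client -client CLUST1 -toclientdata /tmp
On the new client cluster:
# Grid_home/bin/srvctl add rhpclient -clientdata /tmp/CLUST1.xml
$ Grid_home/bin/srvctl start rhpclient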

Authentication Options for Oracle Fleet Patching and Provisioning Operations

Some RHPCTL commands show authentication choices as an optional parameter.

Specifying an authentication option is not required when running an RHPCTL command on an Oracle Fleet Patching and Provisioning Client, nor when running an RHPCTL command on the Oracle Fleet Patching and Provisioning Server and operating on an Oracle Fleet Patching and Provisioning Client, because the server and client establish a trusted relationship when the client is created, and authentication is handled internally each time a transaction takes place. (The only condition for server/client communication under which an authentication option must be specified is when the server is provisioning a new Oracle Grid Infrastructure deployment—in this case, the client does not yet exist.)

To operate on a target that is not an Oracle Fleet Patching and Provisioning Client, you must provide the Oracle Fleet Patching and Provisioning Server with information allowing it to authenticate with the target. The options are as follows:
  • Provide the root password (on stdin) for the target

  • Provide the sudo user name, sudo binary path, and the password (on stdin) for the target (see the example following this list)

  • Provide a password (either root or sudouser) non-interactively from a local encrypted store (using the -cred authentication parameter)

  • Provide a path to the identity file stored on the Oracle Fleet Patching and Provisioning Server for SSH key-based passwordless authentication (using the -auth sshkey option)
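
For example, a sketch of the sudo option, adapted from the sshkey example shown later in this section (node, user, and path names are illustrative; the sudo user's password is prompted on stdin):

$ rhpctl add workingcopy -workingcopy db_wc1 -image db_gold
  -targetnode nonRHPClient4004.example.com -path /u01/app/oracle/12.1/rhp/dbhome_1
  -oraclebase /u01/app/oracle -sudouser opc -sudopath /usr/bin/sudo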

Passwordless Authentication Details

The Oracle Fleet Patching and Provisioning Server can authenticate to targets over SSH using a key pair. To enable this option, you must establish user equivalence between the crsusr user on the Oracle Fleet Patching and Provisioning Server and root or a sudouser on the target.

Note:

The steps to create that equivalence are platform-dependent and so are not shown in detail here. For Linux, see the ssh-keygen command (run on the target) and the ssh-copy-id command (run on the Oracle Fleet Patching and Provisioning Server).

For example, assuming that you have established user equivalency between crsusr on the Oracle Fleet Patching and Provisioning Server and root on the target node, nonRHPClient4004.example.com, and saved the key information on the Oracle Fleet Patching and Provisioning Server at /home/oracle/rhp/ssh-key/key, then the following command provisions a copy of the specified gold image to the target node with passwordless authentication:
$ rhpctl add workingcopy -workingcopy db12102_160607wc1 -image db12102_160607
  -targetnode nonRHPClient4004.example.com -path /u01/app/oracle/12.1/rhp/dbhome_1
  -oraclebase /u01/app/oracle -auth sshkey -arg1 user:root
  -arg2 identity_file:/home/oracle/rhp/ssh-key/key
For equivalency between crsusr on the Oracle Fleet Patching and Provisioning Server and a privileged user (other than root) on the target, the -auth portion of the command would be similar to the following:
-auth sshkey -arg1 user:ssh_user -arg2 identity_file:path_to_identity_file_on_RHPS
 -arg3 sudo_location:path_to_sudo_binary_on_target

Oracle Fleet Patching and Provisioning Roles

An administrator assigns roles to Oracle Fleet Patching and Provisioning users with access-level permissions defined for each role.

Users on Oracle Fleet Patching and Provisioning Clients are also assigned specific roles. Oracle Fleet Patching and Provisioning includes basic built-in and composite built-in roles.

Basic Built-In Roles

The basic built-in roles and their functions are:

  • GH_ROLE_ADMIN: An administrative role for everything related to roles. Users assigned this role are able to run rhpctl verb role commands.

  • GH_SITE_ADMIN: An administrative role for everything related to Oracle Fleet Patching and Provisioning Clients. Users assigned this role are able to run rhpctl verb client commands.

  • GH_SERIES_ADMIN: An administrative role for everything related to image series. Users assigned this role are able to run rhpctl verb series commands.

  • GH_SERIES_CONTRIB: Users assigned this role can add images to a series using the rhpctl insertimage series command, or delete images from a series using the rhpctl deleteimage series command.

  • GH_WC_ADMIN: An administrative role for everything related to working copies of gold images. Users assigned this role are able to run rhpctl verb workingcopy commands.

  • GH_WC_OPER: A role that enables users to create a working copy of a gold image for themselves or others using the rhpctl add workingcopy command with the -user option (when creating for others). Users assigned this role do not have administrative privileges and can only administer the working copies of gold images that they create.

  • GH_WC_USER: A role that enables users to create a working copy of a gold image using the rhpctl add workingcopy command. Users assigned this role do not have administrative privileges and can only delete working copies that they create.

  • GH_IMG_ADMIN: An administrative role for everything related to images. Users assigned this role are able to run rhpctl verb image commands.

  • GH_IMG_USER: A role that enables users to create an image using the rhpctl add | import image commands. Users assigned this role do not have administrative privileges and can only delete images that they create.

  • GH_IMG_TESTABLE: A role that enables users to add a working copy of an image that is in the TESTABLE state. Users assigned this role must also be assigned either the GH_WC_ADMIN role or the GH_WC_USER role to add a working copy.

  • GH_IMG_RESTRICT: A role that enables users to add a working copy from an image that is in the RESTRICTED state. Users assigned this role must also be assigned either the GH_WC_ADMIN role or the GH_WC_USER role to add a working copy.

  • GH_IMG_PUBLISH: Users assigned this role can promote an image to another state or retract an image from the PUBLISHED state to either the TESTABLE or RESTRICTED state.

  • GH_IMG_VISIBILITY: Users assigned this role can modify access to promoted or published images using the rhpctl allow | disallow image commands.

  • GH_AUTHENTICATED_USER: Users assigned to this role can execute any operation in an Oracle Fleet Patching and Provisioning Client.

  • GH_CLIENT_ACCESS: Any user created automatically inherits this role. The GH_CLIENT_ACCESS role includes the GH_AUTHENTICATED_USER built-in role.
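
As an illustration, an administrator might combine basic built-in roles into a custom role and grant it to a client user; a sketch, assuming the rhpctl add role and rhpctl grant role verbs in your release (the role, user, and cluster names are examples):

$ rhpctl add role -role APP_PROVISIONER -hasRoles GH_WC_USER,GH_IMG_USER
$ rhpctl grant role -role APP_PROVISIONER -user u1 -client CL1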

Composite Built-In Roles

The composite built-in roles and their functions are:

  • GH_SA: The Oracle Grid Infrastructure user on an Oracle Fleet Patching and Provisioning Server automatically inherits this role.

    The GH_SA role includes the following basic built-in roles: GH_ROLE_ADMIN, GH_SITE_ADMIN, GH_SERIES_ADMIN, GH_SERIES_CONTRIB, GH_WC_ADMIN, GH_IMG_ADMIN, GH_IMG_TESTABLE, GH_IMG_RESTRICT, GH_IMG_PUBLISH, and GH_IMG_VISIBILITY.

  • GH_CA: The Oracle Grid Infrastructure user on an Oracle Fleet Patching and Provisioning Client automatically inherits this role.

    The GH_CA role includes the following basic built-in roles: GH_SERIES_ADMIN, GH_SERIES_CONTRIB, GH_WC_ADMIN, GH_IMG_ADMIN, GH_IMG_TESTABLE, GH_IMG_RESTRICT, GH_IMG_PUBLISH, and GH_IMG_VISIBILITY.

  • GH_OPER: This role includes the following built-in roles: GH_WC_OPER, GH_SERIES_ADMIN, GH_IMG_TESTABLE, GH_IMG_RESTRICT, and GH_IMG_USER. Users assigned this role can delete only images that they have created.

Consider a gold image called G1 that is available on the Oracle Fleet Patching and Provisioning Server.

Further consider that a user, U1, on an Oracle Fleet Patching and Provisioning Client, Cl1, has the GH_WC_USER role. If U1 requests to provision an Oracle home based on the gold image G1, then U1 can do so, because of the permissions granted by the GH_WC_USER role. If U1 requests to delete G1, however, then that request would be denied because the GH_WC_USER role does not have the necessary permissions.

The Oracle Fleet Patching and Provisioning Server can associate user-role mappings to the Oracle Fleet Patching and Provisioning Client. After the Oracle Fleet Patching and Provisioning Server delegates user-role mappings, the Oracle Fleet Patching and Provisioning Client can then modify user-role mappings on the Oracle Fleet Patching and Provisioning Server for all users that belong to the Oracle Fleet Patching and Provisioning Client. This is implied by the fact that only the Oracle Fleet Patching and Provisioning Server qualifies user IDs from an Oracle Fleet Patching and Provisioning Client site with the client cluster name of that site. Thus, the Oracle Fleet Patching and Provisioning Client CL1 will not be able to update user mappings of a user on CL2, where CL2 is the cluster name of a different Oracle Fleet Patching and Provisioning Client.

Oracle Fleet Patching and Provisioning Images

You can easily copy an image of an Oracle home to a new host on a new file system to serve as an active usable Oracle home.

By default, when you create a gold image using either rhpctl import image or rhpctl add image, the image is ready to provision new homes, called working copies. However, under certain conditions, you may want to restrict access to images and require someone to test or validate the image before making it available for general use.

You can also group a set of related gold images on the Oracle Fleet Patching and Provisioning Server into a gold image series, for example, images of the same release version, gold images published by a particular user, or images for a particular department within an organization.

Gold Image Distribution Among Oracle Fleet Patching and Provisioning Servers

Oracle Fleet Patching and Provisioning can automatically share and synchronize gold images between Oracle Fleet Patching and Provisioning Servers.

In the Oracle Fleet Patching and Provisioning architecture, one Oracle Fleet Patching and Provisioning Server manages a set of Oracle Fleet Patching and Provisioning Clients and targets within a given data center or network segment of a data center. If you have more than one data center or a segmented data center, then you must have more than one Oracle Fleet Patching and Provisioning Server to facilitate large-scale standardization across multiple estates.

Oracle Fleet Patching and Provisioning Servers retain the ability to create and manage gold images private to their scope, so local customizations are seamlessly supported.

You must first establish a peer relationship between two Oracle Fleet Patching and Provisioning Servers. Registration uses the names of the Oracle Fleet Patching and Provisioning Server clusters. The names of the two clusters can be the same but there is one naming restriction: an Oracle Fleet Patching and Provisioning Server, such as FPPS_1, cannot register a peer Oracle Fleet Patching and Provisioning Server if that peer has the same name as an Oracle Fleet Patching and Provisioning Client or target within the management domain of FPPS_1.

The following steps show how you can establish a peer relationship between two Oracle Fleet Patching and Provisioning Servers. Note that super user or root credentials are not required in this process.

  1. On the first Oracle Fleet Patching and Provisioning Server (FPPS_1), create a file containing the server configuration information.
    $ rhpctl export server -serverdata file_path
  2. Copy the server configuration file created on FPPS_1 to a second Oracle Fleet Patching and Provisioning Server (FPPS_2).
  3. On the second Oracle Fleet Patching and Provisioning Server (FPPS_2), complete the registration of FPPS_2.
    $ rhpctl register server -server FPPS_1_cluster_name 
        -serverdata server_cfg_file_copied_from_FPPS_1 
  4. On FPPS_2, create a file containing the server configuration information.
    $ rhpctl export server -serverdata file_path
  5. Copy the server configuration file created on FPPS_2 to FPPS_1.
  6. On the first Oracle Fleet Patching and Provisioning Server (FPPS_1), complete the registration of FPPS_1.
    $ rhpctl register server -server FPPS_2_cluster_name 
        -serverdata server_cfg_file_copied_from_FPPS_2 
After you register an Oracle Fleet Patching and Provisioning Server as a peer, the following command displays the peer (or peers) of the server:
$ rhpctl query peerserver
You can inspect the images on a peer Oracle Fleet Patching and Provisioning Server, as follows:
$ rhpctl query image -server server_cluster_name

The preceding command displays all images on a specific peer Oracle Fleet Patching and Provisioning Server. Additionally, you can specify a peer server along with the -image image_name parameter to display details of a specific image on a specific peer server.

An Oracle Fleet Patching and Provisioning Server can have multiple peers. However, Oracle does not support chained relationships between peers: if FPPS_1 is a peer of FPPS_2, and FPPS_2 is also a peer of FPPS_3, then no relationship is established or implied between FPPS_1 and FPPS_3 (although you can make them peers if you want).

Retrieve a copy or copies of gold images from a peer Oracle Fleet Patching and Provisioning Server, as follows:
$ rhpctl instantiate image -server server_cluster_name
Running the rhpctl instantiate image command activates an auto-update mechanism. From that point on, when you create gold images on a peer Oracle Fleet Patching and Provisioning Server, such as FPPS_2, they are candidates for being automatically copied to the Oracle Fleet Patching and Provisioning Server that performed the instantiate operation, such as FPPS_1. Whether a new gold image is automatically copied depends on the relevance of the image to any instantiate parameters that you may include in the command:
  • -all: Creates an automatic push for all gold images created on FPPS_2 to FPPS_1

  • -image image_name: Creates an automatic push for all new descendant gold images of the named image created on FPPS_2 to FPPS_1. A descendant of the named image is an image that is created on FPPS_2 using the rhpctl add image command.

  • -series series_name: Creates an automatic push for all gold images added to the named series on FPPS_2 to FPPS_1

  • -imagetype image_type: Creates an automatic push for all gold images created of the named image type on FPPS_2 to FPPS_1

To stop receiving updates that were established by the rhpctl instantiate image command, run rhpctl uninstantiate image and specify the peer Oracle Fleet Patching and Provisioning Server and one of the following: all, image name, image series name, or image type.

End the peer relationship, as follows, on any one of the Oracle Fleet Patching and Provisioning Servers:
$ rhpctl unregister server -server server_cluster_name

Oracle Fleet Patching and Provisioning Server Auditing

The Oracle Fleet Patching and Provisioning Server records the execution of all Oracle Fleet Patching and Provisioning operations, and also records whether those operations succeeded or failed.

An audit mechanism enables administrators to query the audit log in a variety of dimensions, and also to manage its contents and size.
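
For example, a minimal sketch of audit-log management, assuming the audit verbs and filtering options available in your release (the record limit shown is arbitrary):

$ rhpctl query audit -first 100
$ rhpctl modify audit -maxrecord 2000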

Oracle Fleet Patching and Provisioning Notifications

The Oracle Fleet Patching and Provisioning Server is the central repository for the software homes available to the data center. Therefore, it is essential for administrators throughout the data center to be aware of changes to the inventory that may impact their areas of responsibility.

You can create subscriptions to image series events. Oracle Fleet Patching and Provisioning notifies a subscribed role or number of users by email of any changes to the images available in the series, including addition or removal of an image. Each series may have a unique group of subscribers.

Also, when a working copy of a gold image is added to or deleted from a target, the owner of the working copy and any additional users can be notified by email. If you want to enable notifications for additional Oracle Fleet Patching and Provisioning events, you can create a user-defined action.
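
For example, a sketch of subscribing and unsubscribing a user to events on an image series, assuming the subscribe series verb in your release (the series and user names are illustrative):

$ rhpctl subscribe series -series DB19_PATCHES -user u1
$ rhpctl unsubscribe series -series DB19_PATCHES -user u1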

Fleet Patching and Provisioning Implementation

Implementing Fleet Patching and Provisioning involves creating a Fleet Patching and Provisioning Server, adding gold images to the server, and creating working copies of gold images to provision software.

After you install and configure Oracle Clusterware, you can configure and start using Fleet Patching and Provisioning. You must create a Fleet Patching and Provisioning Server where you create and store gold images of database and other software homes.

Server Configuration Checklist for Oracle Fleet Patching and Provisioning

Use this checklist to check minimum server configuration requirements for Oracle Fleet Patching and Provisioning (Oracle FPP).

Table 5-1 Server Configuration Checklist for Oracle Fleet Patching and Provisioning

  • Oracle Grid Infrastructure installation: Install Oracle Grid Infrastructure on a new cluster on which you want to configure Oracle FPP.

  • Operating system kernel version: Install or upgrade the operating system kernel to a version for which an Oracle ACFS kernel module is already built.

  • Grid Infrastructure Management Repository configuration: Make sure that the Grid Infrastructure Management Repository (GIMR) is configured and running on your cluster. If the GIMR was not configured as part of the Oracle Grid Infrastructure installation, then add a new GIMR to your cluster as described in Oracle Grid Infrastructure Installation and Upgrade Guide.

  • Oracle FPP server storage: Allocate a minimum of 100 GB of additional disk space to the Oracle Automatic Storage Management (Oracle ASM) disk group that is used by the Oracle FPP Server.

  • Oracle FPP server network: Create one Grid Naming Service Virtual IP Address (GNS VIP) without zone delegation.

  • Firewall: Make sure that the ports used by the Oracle FPP Server and Client are not filtered by firewalls. Refer to Table 2-2, Fleet Patching and Provisioning Communication Ports.

Creating a Fleet Patching and Provisioning Server

The Fleet Patching and Provisioning Server uses a repository that you create in an Oracle ACFS file system in which you store all the software homes that you want to make available to clients and targets.

Note:

When you install Oracle Grid Infrastructure, the Oracle Fleet Patching and Provisioning Server is configured, by default, in the local mode to support the local switch home capability. If you must configure the general Oracle Fleet Patching and Provisioning Server product, then you must remove the current local-mode Oracle Fleet Patching and Provisioning Server.
  1. Use the Oracle ASM configuration assistant (ASMCA) to create an Oracle ASM disk group on the Fleet Patching and Provisioning Server to store software.
    $ Grid_home/bin/asmca

    Because this disk group is used to store software, Oracle recommends a minimum of 100 GB for this disk group.

    Note:

    You must set Oracle ASM Dynamic Volume Manager (Oracle ADVM) compatibility settings for this disk group to 19.0.

  2. Provide a mount path that exists on all nodes of the cluster. The Fleet Patching and Provisioning Server uses this path to mount gold images.
    $ mkdir -p storage_path/images
  3. Check if Grid Infrastructure Management Repository (GIMR) is configured on your cluster.
    $ srvctl status mgmtdb 
    Database is enabled
    Instance -MGMTDB is running on node myhost01
  4. If GIMR is not configured on your cluster, then as the grid user, add a GIMR to your cluster.
    1. For Oracle Database 19c Release Update (19.6) or earlier releases:
      $ $ORACLE_HOME/bin/mgmtca createGIMRContainer [-storageDiskGroup disk_group_name]
    2. For Oracle Database 19c Release Update (19.7) or later releases:
      $ $ORACLE_HOME/bin/mgmtca createGIMRContainer [-storageDiskLocation disk_location]
  5. As the root user, add the Grid Naming Service Virtual IP Address (GNS VIP) without zone delegation.
    # srvctl add gns -vip myhost-gnsvip3
    # srvctl start gns
    # srvctl status gns
    GNS is running on node myhost01.
    GNS is enabled on node myhost01. 
  6. Remove any existing local automaton from your cluster.
    # srvctl stop rhpserver
    # srvctl remove rhpserver
  7. Create the Fleet Patching and Provisioning Server resource.
    # Grid_home/bin/srvctl add rhpserver -storage storage_path
        -diskgroup disk_group_name
  8. Start the Fleet Patching and Provisioning Server.
    $ Grid_home/bin/srvctl start rhpserver

After you start the Fleet Patching and Provisioning Server, use the Fleet Patching and Provisioning Control (RHPCTL) utility to further manage Fleet Patching and Provisioning.

Adding Gold Images to the Fleet Patching and Provisioning Server

Use RHPCTL to add gold images for later provisioning of software.

The Fleet Patching and Provisioning Server stores and serves gold images of software homes. These images must be instantiated on the Fleet Patching and Provisioning Server.

Note:

Images are read-only, and you cannot run programs from them or use them directly as software homes. To create a usable software home from an image, you must create a working copy of the gold image.

You can import software to the Fleet Patching and Provisioning Server using any one of the following methods:

  • You can import an image from an installed home on the Fleet Patching and Provisioning Server using the following command:

    rhpctl import image -image image_name -path path_to_installed_home
      [-imagetype ORACLEDBSOFTWARE | ORACLEGISOFTWARE | ORACLEGGSOFTWARE | SOFTWARE]
    
  • You can import an image from an installed home on a Fleet Patching and Provisioning Client, using the following command run from the Fleet Patching and Provisioning Client:

    rhpctl import image -image image_name -path path_to_installed_home
    
  • You can create an image from an existing working copy using the following command:

    rhpctl add image -image image_name -workingcopy working_copy_name
    

Use the first two commands in the preceding list to seed the image repository, and to add additional images over time. Use the third command on the Fleet Patching and Provisioning Server as part of the workflow for creating a gold image that includes patches applied to a pre-existing gold image.

The preceding three commands also create an Oracle ACFS file system in the Fleet Patching and Provisioning root directory, similar to the following:

/u01/rhp/images/images/RDBMS_121020617524

Image State

An image state is a way to restrict provisioning of an image to users with specified roles.

You can set the state of an image to TESTABLE or RESTRICTED so that only users with the GH_IMG_TESTABLE or GH_IMG_RESTRICT roles can provision working copies from this image. Once the image has been tested or validated, you can change the state and make the image available for general use by running the rhpctl promote image -image image_name -state PUBLISHED command. The default image state is PUBLISHED when you add a new gold image, but you can optionally specify a different state with the rhpctl add image and rhpctl import image commands.
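
For example, you might import an image in the TESTABLE state and publish it only after validation; a sketch (the image name and path are illustrative, and the -state parameter is as described above):

$ rhpctl import image -image db19_tst -path /u01/app/oracle/product/19.0.0/dbhome_1 -state TESTABLE
$ rhpctl promote image -image db19_tst -state PUBLISHED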

Image Series

An image series is a convenient way to group different gold images into a logical sequence.

Fleet Patching and Provisioning treats each image as an independent entity with respect to other images. No relationship is assumed between images, even if they follow some specific nomenclature. The image administrator may choose to name images in a logical manner that makes sense to the user community, but this does not create any management grouping within the Fleet Patching and Provisioning framework.

Use the rhpctl add series command to create an image series and associate one or more images with the series. The list of images in an image series is an ordered list. Use the rhpctl insertimage series and rhpctl deleteimage series commands to add and delete images in an image series. You can also change the order of images in a series using these commands.

The insertimage and deleteimage commands do not instantiate or delete actual gold images but only change the list. Also, an image can belong to more than one series (or no series at all).
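
For example, a sketch of building an ordered series (the series and image names are illustrative; the -before option for ordering is assumed from the command reference for your release):

$ rhpctl add series -series DB19_SERIES
$ rhpctl insertimage series -series DB19_SERIES -image db19_2024Q1
$ rhpctl insertimage series -series DB19_SERIES -image db19_2024Q2 -before db19_2024Q1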

Image Type

When you add or import a gold image, you must specify an image type.

Oracle Clusterware provides the following built-in base image types:
  • ORACLEDBSOFTWARE
  • ORACLEGISOFTWARE
  • ORACLEGGSOFTWARE
  • SOFTWARE

Every gold image must have an image type, and you can create your own image types. A new image type must be based on one of the built-in types. The image type directs Fleet Patching and Provisioning to apply its capabilities for managing Oracle Grid Infrastructure and Oracle Database homes. Fleet Patching and Provisioning also uses image type to organize the custom workflow support framework.

Creating a Custom Image Type

Use the rhpctl add imagetype command to create custom image types.

For example, to create an image type called DBTEST, which is based on the ORACLEDBSOFTWARE image type:

$ rhpctl add imagetype -imagetype DBTEST -basetype ORACLEDBSOFTWARE

Note:

When you create an image type that is based on an existing image type, the new image type does not inherit any user actions (for custom workflow support) from the base type.

Provisioning Copies of Gold Images

Use RHPCTL to provision copies of gold images to Fleet Patching and Provisioning Servers, Clients, and targets.

After you create and import a gold image, you can provision software by adding a copy of the gold image (called a working copy) on the Fleet Patching and Provisioning Server, on a Fleet Patching and Provisioning Client, or a target. You can run the software provisioning command on either the Server or a Client.

  • To create a working copy on the Fleet Patching and Provisioning Server:
    $ rhpctl add workingcopy -workingcopy working_copy_name -image image_name
    
  • To create a working copy in a local file system on a Fleet Patching and Provisioning Client:
    $ rhpctl add workingcopy -workingcopy working_copy_name -image image_name
       -storagetype LOCAL -path path_to_software_home
  • To create a working copy on a Fleet Patching and Provisioning Client from the Fleet Patching and Provisioning Server:
    $ rhpctl add workingcopy -workingcopy working_copy_name -image image_name
       -client client_cluster_name

Note:

  • The directory you specify in the -path parameter must be empty.

  • You can re-run the provisioning command in case of an interruption or failure due to system or user errors. After you fix the reported errors, re-run the command and it will resume from the point of failure.
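
After provisioning, you can confirm the result by listing the working copies known to the Fleet Patching and Provisioning Server, for example:

$ rhpctl query workingcopy
$ rhpctl query workingcopy -workingcopy working_copy_name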

User Group Management in Fleet Patching and Provisioning

When you create a working copy of a gold image as part of a move or upgrade operation, Fleet Patching and Provisioning configures the operating system groups in the new working copy to match those of the source software home (either the unmanaged or the managed home from which you move or upgrade).

When you create a gold image of SOFTWARE image type, any user groups in the source are not inherited and images of this type never contain user group information. When you provision a working copy from a SOFTWARE gold image using the rhpctl add workingcopy command, you can, optionally, configure user groups in the working copy using the -groups parameter.
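
For example, a hedged sketch of provisioning from a SOFTWARE image with explicit groups (the names and paths are illustrative, and the exact -groups value syntax follows the rhpctl add workingcopy reference for your release):

$ rhpctl add workingcopy -workingcopy app_wc1 -image app_gold_image
  -groups OSDBA=dba,OSOPER=oper -storagetype LOCAL -path /u01/app/oracle/product/app_home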

The rhpctl move database, rhpctl move gihome, rhpctl upgrade database, and rhpctl upgrade gihome commands all require you to specify a source home (either an unmanaged home or a managed home (working copy) that you provisioned using Fleet Patching and Provisioning), and a destination home (which must be a working copy).

When you have provisioned the destination home using the rhpctl add workingcopy command, prior to performing a move or upgrade operation, you must ensure that the groups configured in the source home match those in the destination home. Fleet Patching and Provisioning configures the groups as part of the add operation.

When you create a gold image of either the ORACLEGISOFTWARE or the ORACLEDBSOFTWARE image type from a source software home (using the rhpctl import image command) or from a working copy (using the rhpctl add image command), the gold image inherits the Oracle user groups that were configured in the source. You cannot override this feature.

You can define user groups for ORACLEGISOFTWARE and ORACLEDBSOFTWARE working copies using the rhpctl add workingcopy command, depending on the image type and user group, as discussed in the subsequent sections.

This section describes how Fleet Patching and Provisioning manages user group configuration, and how the -groups command-line option of rhpctl add workingcopy functions.

ORACLEGISOFTWARE (Oracle Grid Infrastructure 11g release 2 (11.2), and 12c release 1 (12.1) and release 2 (12.2))

When you provision an Oracle Grid Infrastructure working copy of a gold image, the groups are set in the working copy according to the type of provisioning (whether regular provisioning or software only, and with or without the -local parameter), and whether you specify the -groups parameter with rhpctl add workingcopy. You can define OSDBA and OSASM user groups in Oracle Grid Infrastructure software with either the -softwareonly command parameter or by using a response file with the rhpctl add workingcopy command.

If you are provisioning only the Oracle Grid Infrastructure software using the -softwareonly command parameter, then you cannot use the -groups parameter, and Fleet Patching and Provisioning obtains OSDBA and OSASM user group information from the active Grid home.

If you use the -local command parameter (which is only valid when you use the -softwareonly command parameter) with rhpctl add workingcopy, then Fleet Patching and Provisioning takes the values of the groups from the command line (using the -groups parameter) or uses the default values, which Fleet Patching and Provisioning obtains from the osdbagrp binary of the gold image.

If none of the preceding applies, then Fleet Patching and Provisioning uses the installer default user group.

If you are provisioning and configuring a working copy using information from a response file, then Fleet Patching and Provisioning:
  1. Uses the value of the user group from the command line, if provided, for OSDBA or OSASM, or both.

  2. If you provide no value on the command line, then Fleet Patching and Provisioning retrieves the user group information defined in the response file.

If you are defining the OSOPER Oracle group, then, again, you can either use the -softwareonly command parameter or use a response file with the rhpctl add workingcopy command.

If you use the -softwareonly command parameter, then you can provide the value on the command line (using the -groups parameter) or leave the user group undefined.

If you are provisioning and configuring a working copy of a gold image using information from a response file, then you can provide the value on the command line, use the information contained in the response file, or leave the OSOPER Oracle group undefined.

ORACLEDBSOFTWARE (Oracle Database 11g release 2 (11.2), and 12c release 1 (12.1) and release 2 (12.2))

If you are provisioning a working copy of Oracle Database software and you want to define Oracle groups, then use the -groups command parameter with the rhpctl add workingcopy command. Oracle groups available in the various Oracle Database releases are as follows:

  • Oracle Database 11g release 2 (11.2)

    • OSDBA
    • OSOPER
  • Oracle Database 12c release 1 (12.1)

    • OSDBA
    • OSOPER
    • OSBACKUP
    • OSDG
    • OSKM
  • Oracle Database 12c release 2 (12.2)

    • OSDBA
    • OSOPER
    • OSBACKUP
    • OSDG
    • OSKM
    • OSRAC

Regardless of which of the preceding groups you are defining (except for OSOPER), Fleet Patching and Provisioning takes the values of the groups from the command line (using the -groups parameter) or uses the default values, which Fleet Patching and Provisioning obtains from the osdbagrp binary of the gold image.

If any group picked up from the osdbagrp binary is not in the list of groups to which the database user belongs (as reported by the id command), then Fleet Patching and Provisioning uses the installer default user group. (The database user in this context is the user running the rhpctl add workingcopy command.)

Storage Options for Provisioned Software

Choose one of two storage options where Fleet Patching and Provisioning stores working copies of gold images.

When you provision software using the rhpctl add workingcopy command, you can choose from two storage options where Fleet Patching and Provisioning places that software:

  • In an Oracle ACFS shared file system managed by Fleet Patching and Provisioning (for database homes only)

  • In a local file system not managed by Fleet Patching and Provisioning

Using the rhpctl add workingcopy command with the -storagetype and -path parameters, you can choose where you store provisioned working copies. The applicability of the parameters depends on whether the node (or nodes) to which you are provisioning the working copy is a Fleet Patching and Provisioning Server, a Fleet Patching and Provisioning Client, or a non-Fleet Patching and Provisioning client. You can choose from the following values for the -storagetype parameter:

  • RHP_MANAGED: Choosing this value, which is available for Fleet Patching and Provisioning Servers and Fleet Patching and Provisioning Clients, stores working copies in an Oracle ACFS shared file system. The -path parameter is not used with this option because Fleet Patching and Provisioning manages the storage location.

    Notes:

    • You cannot store Oracle Grid Infrastructure homes in RHP_MANAGED storage.

    • Oracle recommends using the RHP_MANAGED storage type, which is available on Fleet Patching and Provisioning Servers, and on Clients configured with an Oracle ASM disk group.

    • If you provision working copies on a Fleet Patching and Provisioning Server, then you do not need to specify the -storagetype option because it will default to RHP_MANAGED.

    • If you choose to provision working copies on a Fleet Patching and Provisioning Client, and you do not specify the -path parameter, then the storage type defaults to RHP_MANAGED only if there is an Oracle ASM disk group on the client. Otherwise the command will fail. If you specify a location on the client for the -path parameter, then the storage type defaults to LOCAL with or without an Oracle ASM disk group.

  • LOCAL: Choosing this value stores working copies in a local file system that is not managed by Fleet Patching and Provisioning. You must specify a path to the file system on the Fleet Patching and Provisioning Server, Fleet Patching and Provisioning Client, or non-Fleet Patching and Provisioning client, or to the Oracle ASM disk group on the Fleet Patching and Provisioning Client.

In cases where you specify the -path parameter, if the file system is shared among all of the nodes in the cluster, then the working copy is created on this shared storage. If the file system is not shared, then the working copy is created in the location of the given path on every node in the cluster.

Note:

The directory you specify in the -path parameter must be empty.

Provisioning for a Different User

If you want a user other than the user running the command to own the provisioned software, then use the -user parameter of the rhpctl add workingcopy command.

When the provisioning is completed, all files and directories of the provisioned software are owned by the user you specified. Permissions on files on the remotely provisioned software are the same as the permissions that existed on the gold image from where you provisioned the application software.
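
A minimal sketch (the working copy, image, user, and path are illustrative):

$ rhpctl add workingcopy -workingcopy db19_wc1 -image db19_gold
  -user oracle2 -storagetype LOCAL -path /u01/app/oracle2/product/19.0.0/dbhome_1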

Propagating Images Between Fleet Patching and Provisioning Servers

With automatic image propagation, you can set up automated copies of software images across different peer Fleet Patching and Provisioning Servers. Gold images that you register at one site are copied to peer Fleet Patching and Provisioning Servers.

In a peer-to-peer relationship between two Fleet Patching and Provisioning Servers, one server is the source for software images and the other is the destination for software images.

The criteria for copying software images between servers can be based on certain policies, such as software image type, image series, or all software images. When you register an image that meets the criteria for software image propagation, a copy of this image propagates to all registered peer servers.

The following example procedure establishes a relationship between two Fleet Patching and Provisioning Server sites, RHPS-A and RHPS-B, where RHPS-A is the source and RHPS-B is the destination for software images.
  1. Run the following command on RHPS-B:
    $ rhpctl export server -serverdata file_path

    The preceding command creates a file named RHPS-B.xml in the directory path you specify for the -serverdata parameter.

  2. Run the following command on RHPS-A to register RHPS-B as a peer of the current Fleet Patching and Provisioning Server:
    $ rhpctl register server -server RHPS-B -serverdata /tmp/RHPS-B.xml

    The preceding command assumes that the RHPS-B.xml file created in the previous step has been copied to the server on which you run the command (in this case, to /tmp/RHPS-B.xml).

    The cluster in which you run the preceding command is the source site, and the server that you specify on the command line is the destination site to which software images are copied.

    Use the rhpctl unregister server command to remove the peer-to-peer relationship.

  3. Run the following command on RHPS-B to propagate software images from RHPS-A to RHPS-B:
    $ rhpctl instantiate image -server RHPS-A  -all
    The preceding command propagates all images from RHPS-A to RHPS-B. If an image already exists on RHPS-B, then it is not propagated again.

    Note:

    Propagation of images is based on the image name. RHPCTL does not compare the contents of the images themselves to determine whether they differ. If an image that you want to propagate has the same name as an image that already exists on the destination server, then RHPCTL assumes that the images are identical and does not propagate the image.

    Use the rhpctl uninstantiate image command to cancel the propagation of a particular image. Any propagation already in progress is not affected by this command, and will continue until complete. The -all parameter removes any values for the -imagetype and -series parameters.

Oracle Grid Infrastructure Management

The Oracle Fleet Patching and Provisioning Server provides an efficient and secure platform for the distribution of Oracle Grid Infrastructure homes to targets and Oracle Fleet Patching and Provisioning Clients.

About Deploying Oracle Grid Infrastructure Using Oracle Fleet Patching and Provisioning

You can use Oracle Fleet Patching and Provisioning to provision and maintain your Oracle Grid Infrastructure homes.

Oracle Fleet Patching and Provisioning enables mass deployment and maintenance of standard operating environments for databases, clusters, and user-defined software types.

Oracle FPP enables you to install clusters, and provision, patch, scale, and upgrade Oracle Grid Infrastructure, Oracle Restart, and Oracle Database homes. The supported releases are 11.2.0.4, 12.1, 12.2, 18c, and later releases. You can also provision applications and middleware using Oracle Fleet Patching and Provisioning.

Oracle Fleet Patching and Provisioning is a service in Oracle Grid Infrastructure that you can use in either of the following modes:

  • Central Oracle Fleet Patching and Provisioning Server

    The Oracle Fleet Patching and Provisioning Server stores and manages standardized images, called gold images. Gold images can be deployed to any number of nodes across the data center. You can create new clusters and databases on the deployed homes and can use them to patch, upgrade, and scale existing installations.

    The Oracle Fleet Patching and Provisioning Server can manage the following types of installations:
    • Software homes on the cluster hosting the Oracle Fleet Patching and Provisioning Server itself.
    • Installations running Oracle Grid Infrastructure 11g Release 2 (11.2.0.4) and later releases.
    • Oracle Fleet Patching and Provisioning Clients running Oracle Grid Infrastructure 12c Release 2 (12.2) and later releases.
    • Installations running without Oracle Grid Infrastructure.

    The Oracle Fleet Patching and Provisioning Server can provision new installations, and manage existing installations, without requiring any changes to the existing installations. The Oracle Fleet Patching and Provisioning Server can automatically share gold images among peer servers to support enterprises with geographically distributed data centers.

  • Oracle Fleet Patching and Provisioning Client

    The Oracle Fleet Patching and Provisioning Client can be managed from the Oracle Fleet Patching and Provisioning Server, or directly by executing commands on the client itself. The Oracle Fleet Patching and Provisioning Client is a service built into the Oracle Grid Infrastructure and is available in Oracle Grid Infrastructure 12c Release 2 (12.2) and later releases. The Oracle Fleet Patching and Provisioning Client can retrieve gold images from the Oracle Fleet Patching and Provisioning Server, upload new images based on the policy, and apply maintenance operations to itself.

Provisioning Oracle Grid Infrastructure Software

Fleet Patching and Provisioning has several methods to provision and, optionally, configure Oracle Grid Infrastructure and Oracle Restart grid infrastructure homes.

Fleet Patching and Provisioning can provision and configure Oracle Grid Infrastructure on one or more nodes that do not currently have a Grid home, and then configure Oracle Grid Infrastructure to form a single-node or multi-node Oracle Grid Infrastructure installation.

Use the rhpctl add workingcopy command to install and configure Oracle Grid Infrastructure, and to enable simple and repeatable creation of standardized deployments.

The Fleet Patching and Provisioning Server can also provision an Oracle Grid Infrastructure home to a node or cluster that is currently running Oracle Grid Infrastructure. The currently running Grid home can be a home that Fleet Patching and Provisioning did not provision (an unmanaged home) or a home that Fleet Patching and Provisioning did provision (a managed home).

You can also provision an Oracle Restart grid infrastructure to a node in the cluster.

In either case, use the -softwareonly parameter of the rhpctl add workingcopy command. This provisions but does not activate the new Grid home, so that when you are ready to switch to that new home, you can do so with a single command.
  • To inform Fleet Patching and Provisioning of the nodes on which to install Oracle Grid Infrastructure, and how to configure it, you provide directions in a response file, as in the following example:
    $ rhpctl add workingcopy -workingcopy GI_HOME_11204_WCPY -image GI_HOME_11204 -responsefile /u01/app/rhpinfo/GI_11204_install.txt
      {authentication_option}
    The preceding command provisions the GI_HOME_11204_WCPY working copy based on the GI_HOME_11204 gold image to a target specified in the GI_11204_install.txt response file. In addition to identifying the target nodes, the response file specifies information about the Oracle Grid Infrastructure configuration, such as Oracle ASM and GNS parameters.

    Note:

    The oracle.install.crs.rootconfig.executeRootScript=xxx response file parameter is overridden and always set to false for Fleet Patching and Provisioning, regardless of what you specify in the response file.
  • To provision an Oracle Grid Infrastructure home to a node or cluster that is currently running Oracle Grid Infrastructure:
    $ rhpctl add workingcopy -workingcopy GI_HOME_12201_PATCHED_WCPY -image GI_HOME_12201_PSU1 -client CLUST_002 -softwareonly

    The preceding command provisions a new working copy based on the GI_HOME_12201_PSU1 gold image to the Fleet Patching and Provisioning Client named CLUST_002, which is running Oracle Grid Infrastructure 12c release 2 (12.2). When you provision to a target that is not running Oracle Grid Infrastructure 12c release 2 (12.2) (such as a target running Oracle Grid Infrastructure 12c release 1 (12.1) or Oracle Grid Infrastructure 11g release 2 (11.2)), use the -targetnode parameter instead of -client, as in the sketch following this list.

  • Specify a target node on which you want to provision an Oracle Restart grid infrastructure, as follows:
    $ rhpctl add workingcopy -workingcopy SIHA_GI -image goldimage -targetnode remote_node_name -responsefile Oracle_Restart_response_file {authentication_option}

Patching Oracle Grid Infrastructure

Fleet Patching and Provisioning provides three methods to patch Oracle Grid Infrastructure software homes: rolling, non-rolling, and in batches.

Patching Oracle Grid Infrastructure software involves moving the Grid home to a patched version of the current Grid home. When the patching operation is initiated by a Fleet Patching and Provisioning Server or Client, the patched version must be a working copy of a gold image. The working copy to which you are moving the Grid home can be at a lower patch level than the current home. This facilitates rollback if any problems occur after moving to the higher-level patched home.

You can also perform this operation using the independent automaton in an environment where no Fleet Patching and Provisioning Server is present. In this case, the source and destination homes are not working copies of gold images, but are two installed homes that you deployed with some method other than using Fleet Patching and Provisioning.

Patching Oracle Grid Infrastructure Using the Rolling Method

The rolling method for patching Oracle Grid Infrastructure is the default method.

You use the rhpctl move gihome command (an atomic operation), which returns after the Oracle Grid Infrastructure stack on each node has been restarted on the new home. Nodes are restarted sequentially, so that only one node at a time will be offline, while all other nodes in the cluster remain online.
  • Move the Oracle Grid Infrastructure home to a working copy of the same release level, as follows:
    $ rhpctl move gihome -sourcewc Grid_home_1 -destwc Grid_home_2

    The preceding command moves the running Oracle Grid Infrastructure home from the current managed home (the sourcewc) to the patched home (destwc) on the specified client cluster. The patched home must already be provisioned on the client.

  • If the move operation fails at some point before completing, then you can rerun the command and the operation resumes where it left off. This enables you to fix whatever problem caused the failure and continue processing from the point of failure. Alternatively, you can undo the partially completed operation and return the configuration to its initial state, as follows:
    $ rhpctl move gihome -destwc destination_workingcopy_name -revert [authentication_option]

    You cannot use the -revert parameter with an unmanaged home.

Notes:

  • You cannot move the Grid home to a home that Fleet Patching and Provisioning does not manage. Therefore, rollback (to the original home) applies only to moves between two working copies. This restriction does not apply when using the independent automaton since it operates on unmanaged homes only.

  • You can delete the source working copy at any time after moving a Grid home. Once you delete the working copy, however, you cannot perform a rollback. Also, use the rhpctl delete workingcopy command (as opposed to rm, for example) to remove the source working copy to keep the Fleet Patching and Provisioning inventory correct.

  • If you use the -abort parameter to terminate the patching operation, then Fleet Patching and Provisioning does not clean up or undo any of the patching steps. The cluster, databases, or both may be in an inconsistent state because not all nodes are patched.
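
The -abort form, like -revert, is issued against only the destination working copy (see the in-flight error options later in this chapter). As a sketch, reusing Grid_home_2 from the earlier example:

$ rhpctl move gihome -destwc Grid_home_2 -abort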

Patching Oracle Grid Infrastructure Using the Non-Rolling Method

You can use the -nonrolling parameter with the rhpctl move gihome command, which restarts the Oracle Grid Infrastructure stack on all nodes in parallel.

As with the rolling method, this is an atomic command which returns after all nodes are online.
  • Use the following command to patch Oracle Grid Infrastructure in a non-rolling fashion:
    $ rhpctl move gihome -sourcewc Grid_home_1 -destwc Grid_home_2 -nonrolling
Patching Oracle Grid Infrastructure Using Batches

The third patching method is to sequentially process batches of nodes, with a number of nodes in each batch being restarted in parallel.

This method maximizes service availability during the patching process. When you patch Oracle Grid Infrastructure 12c release 2 (12.2.x) software homes, you can define the batches on the command line or choose to have Fleet Patching and Provisioning generate the list of batches based on its analysis of the database services running in the cluster.
There are two methods for defining batches:

User-Defined Batches

When you use this method of patching, the first time you run the rhpctl move gihome command, you must specify the source home, the destination home, the batches, and other options, as needed. The command terminates after the first batch is processed.

To patch Oracle Grid Infrastructure using batches that you define:

  1. Define a list of batches on the command line and begin the patching process, as in the following example:

    $ rhpctl move gihome -sourcewc wc1 -destwc wc2 -batches "(n1),(n2,n3),(n4)"

    The preceding command example initiates the move operation, and terminates and reports success after the Oracle Grid Infrastructure stack restarts on the first batch. Oracle Grid Infrastructure restarts the batches in the order you specified in the -batches parameter.

    In the command example, node n1 forms the first batch, nodes n2 and n3 form the second batch, and node n4 forms the last batch. The command defines the source working copy as wc1 and the patched (destination) working copy as wc2.

    Note:

    You can specify batches such that singleton services (policy-managed singleton services or administrator-managed services running on one instance) are relocated between batches and non-singleton services remain partially available during the patching process.

  2. Process the next batch by running the rhpctl move gihome command again, as follows:

    $ rhpctl move gihome -destwc wc2 -continue

    The preceding command example restarts the Oracle Grid Infrastructure stack on the second batch (nodes n2 and n3). The command terminates by reporting that the second batch was successfully patched.

  3. Repeat the previous step until you have processed the last batch of nodes. If you attempt to run the command with the -continue parameter after the last batch has been processed, then the command returns an error.

    If the rhpctl move gihome command fails at any time during the above sequence, then, after determining and fixing the cause of the failure, rerun the command with the -continue option to attempt to patch the failed batch. If you want to skip the failed batch and continue with the next batch, use the -continue and -skip parameters. If you attempt to skip over the last batch, then the move operation is terminated.

    Alternatively, you can reissue the command using the -revert parameter to undo the changes that have been made and return the configuration to its initial state.

    You can use the -abort parameter instead of the -continue parameter at any point in the preceding procedure to terminate the patching process and leave the cluster in its current state. (Command sketches for these recovery variants follow the notes below.)

    Notes:

    • Policy-managed services hosted on a server pool with one active server, and administrator-managed services with one preferred instance and no available instances cannot be relocated and will go OFFLINE while instances are being restarted.

    • If a move operation is in progress, then you cannot initiate another move operation from the same source home or to the same destination home.

    • After the move operation has ended, services may be running on nodes different from the ones they were running on before the move and you will have to manually relocate them back to the original instances, if necessary.

    • If you use the -abort parameter to terminate the patching operation, then Fleet Patching and Provisioning does not clean up or undo any of the patching steps. The cluster, databases, or both may be in an inconsistent state because all nodes are not patched.

    • Depending on the start dependencies, services that were offline before the move began could come online during the move.
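
As a sketch, the recovery variants described in the preceding steps take only the destination working copy; reusing wc2 from this example:

$ rhpctl move gihome -destwc wc2 -continue -skip
$ rhpctl move gihome -destwc wc2 -revert
$ rhpctl move gihome -destwc wc2 -abort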

Fleet Patching and Provisioning-Defined Batches

Using Fleet Patching and Provisioning to define and patch batches of nodes means that you need only run one command, as shown in the following command example, where the source working copy is wc1 and the destination working copy is wc2:

$ rhpctl move gihome -sourcewc wc1 -destwc wc2 -smartmove -saf Z+ [-eval]

There is no need for you to do anything else unless you used the -separate parameter with the command. In that case, the move operation waits for user intervention to proceed to the next batch, and then you will have to run the command with the -continue parameter after each batch completes.

If the move operation fails at some point before completing, then you can either rerun the operation by running the command again, or you can undo the partially completed operation, as follows:
$ rhpctl move gihome -destwc destination_workingcopy_name -revert [authentication_option]

You cannot use the -revert parameter with an unmanaged home.

The parameters used in the preceding example are as follows:

  • -smartmove: This parameter restarts the Oracle Grid Infrastructure stack on disjoint sets of nodes, so that singleton resources are relocated before the stack stops on their hosting nodes.

    Note:

    If the server pool to which a resource belongs contains only one active server, then that resource will go offline as relocation cannot take place.

    The -smartmove parameter:

    • Creates a map of services and nodes on which they are running.

    • Creates batches of nodes. The first batch will contain only the Hub node, if the configuration is an Oracle Flex Cluster. For additional batches, a node can be merged into a batch if:

      • The availability of any non-singleton service, running on this node, does not go below the specified service availability factor (or the default of 50%).

      • There is a singleton service running on this node and the batch does not contain any of the relocation target nodes for the service.

    • Restarts the Oracle Grid Infrastructure stack batch by batch.

  • Service availability factor (-saf Z+): You can specify a positive number, as a percentage, that indicates the minimum percentage of a database service's instances that must remain running. For example:

    • If you specify -saf 50 for a service running on two instances, then only one instance can go offline at a time.

    • If you specify -saf 50 for a service running on three instances, then only one instance can go offline at a time.

    • If you specify -saf 75 for a service running on two instances, then an error occurs because the target can never be met.

    • The service availability factor is applicable for services running on at least two instances. As such, the service availability factor can be 0% to indicate a non-rolling move, but not 100%. The default is 50%.

    • If you specify a service availability factor for singleton services, then the parameter will be ignored because the availability of such services is 100% and the services will be relocated.

  • -eval: You can optionally use this parameter to view the auto-generated batches. This parameter also shows the sequence of the move operation without actually patching the software.
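
For example, a dry run of an automatically batched move, reusing the working copies wc1 and wc2 from the preceding example with a service availability factor of 75 percent:

$ rhpctl move gihome -sourcewc wc1 -destwc wc2 -smartmove -saf 75 -eval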

Combined Oracle Grid Infrastructure and Oracle Database Patching

When you patch an Oracle Grid Infrastructure deployment, Fleet Patching and Provisioning enables you to simultaneously patch the Oracle Database homes on the cluster, so you can patch both types of software homes in a single maintenance operation.

Note:

You cannot patch Oracle Grid Infrastructure and Oracle Database in combination with the independent automaton.

The following optional parameters of the rhpctl move gihome command are relevant to the combined Oracle Grid Infrastructure and Oracle Database patching use case:

  • -auto: Automatically patch databases along with patching Oracle Grid Infrastructure

  • -dbhomes mapping_of_Oracle_homes: Mapping of source and destination working copies in the following format:
    sourcewc1=destwc1,...,source_oracle_home_path=destwcN
  • -dblist db_name_list: Patch only the specified databases

  • -excludedblist db_name_list: Patch all databases except the specified databases

  • -nodatapatch: Indicates that datapatch is not to be run for databases being moved

As an example, assume that a Fleet Patching and Provisioning Server with Oracle Grid Infrastructure 12c release 2 (12.2) has provisioned the following working copies on an Oracle Grid Infrastructure 12c release 1 (12.1.0.2) target cluster which includes the node test_749:

  • GI121WC1: The active Grid home on the Oracle Grid Infrastructure 12c release 1 (12.1.0.2) cluster

  • GI121WC2: A software-only Grid home on the Oracle Grid Infrastructure 12c release 1 (12.1.0.2) cluster

  • DB121WC1: An Oracle RAC 12c release 1 (12.1.0.2.0) database home running database instances

  • DB121025WC1: An Oracle RAC 12c release 1 (12.1.0.2.5) database home with no database instances (this is the patched home)

  • DB112WC1: An Oracle RAC 11g release 2 (11.2.0.4.0) database home running database instances

  • DB112045WC1: An Oracle RAC 11g release 2 (11.2.0.4.5) database home with no database instances (this is the patched home)

Further assume that you want to simultaneously move:

  • Oracle Grid Infrastructure from working copy GI121WC1 to working copy GI121WC2

  • Oracle RAC Database db1 from working copy DB121WC1 to working copy DB121025WC1

  • Oracle RAC Database db2 from working copy DB112WC1 to working copy DB112045WC1

The following single command accomplishes the moves:

$ rhpctl move gihome -sourcewc GI121WC1 -destwc GI121WC2 -auto
  -dbhomes DB121WC1=DB121025WC1,DB112WC1=DB112045WC1 -targetnode test_749 {authentication_option}

Notes:

  • If you have an existing Oracle home that is not currently a working copy, then specify the Oracle home path instead of the working copy name for the source home. In the preceding example, if the Oracle home path for an existing 12.1.0.2 home is /u01/app/prod/12.1.0.2/dbhome1, then replace DB121WC1=DB121025WC1 with /u01/app/prod/12.1.0.2/dbhome1=DB121025WC1.

  • If the move operation fails at some point before completing, then you can either resolve the cause of the failure and resume the operation by rerunning the command, or you can undo the partially completed operation by issuing the following command, which restores the configuration to its initial state:
    $ rhpctl move gihome -destwc GI121WC2 -revert {authentication_option}

In the preceding command example, the Oracle Grid Infrastructure 12c release 1 (12.1.0.2) Grid home moves from working copy GI121WC1 to working copy GI121WC2, databases running on working copy DB121WC1 move to working copy DB121025WC1, and databases running on working copy DB112WC1 move to working copy DB112045WC1.

For each node in the client cluster, RHPCTL:

  1. Runs any configured pre-operation user actions for moving the Oracle Grid Infrastructure (move gihome).

  2. Runs any configured pre-operation user actions for moving the database working copies (move database).

  3. Stops services running on the node, and applies drain and disconnect options.

  4. Performs the relevant patching operations for Oracle Clusterware and Oracle Database.

  5. Runs any configured post-operation user actions for moving the database working copies (move database).

  6. Runs any configured post-operation user actions for moving the Oracle Grid Infrastructure working copy (move gihome).

Zero-Downtime Oracle Grid Infrastructure Patching

Use Fleet Patching and Provisioning to patch Oracle Grid Infrastructure without bringing down Oracle RAC database instances.

Traditional methods of patching Oracle Grid Infrastructure require that you bring down all Oracle RAC database instances on the node where you are patching the Oracle Grid Infrastructure home. Zero-downtime patching addresses this in the Oracle Grid Infrastructure layer, whereby database instances can continue to run while the Grid Infrastructure home is being patched.

To enable zero-downtime Oracle Grid Infrastructure patching, use the rhpctl move gihome command in a manner similar to the following:

$ rhpctl move gihome -tgip -sourcewc source_workingcopy_name -destwc destination_workingcopy_name
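
For example, assuming hypothetical working copies GI19_RU8_WCPY (the active home) and GI19_RU9_WCPY (the patched home):

$ rhpctl move gihome -tgip -sourcewc GI19_RU8_WCPY -destwc GI19_RU9_WCPY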

Patching System Software Binaries

When using Zero Downtime Patching, only the binaries in the Oracle Grid Infrastructure user space are patched. Additional Oracle Grid Infrastructure OS system software, such as kernel modules and system commands (including ACFS, AFD, OLFS, and OKA), is not updated. These components continue to run the version in place before the patch was applied. After patching, the OPatch inventory displays the new patch number; however, the running OS system software does not contain these changes. Only the OS system software that is available in the Grid Infrastructure home has been patched.

To determine the OS system software that is available in the Grid Infrastructure home, you can run the crsctl query driver activeversion -all command. To determine what OS system software is running on the system, use crsctl query driver softwareversion -all.
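
For example, run both queries on a cluster node and compare the results; a mismatch indicates system software that is patched on disk but not yet running:

$ crsctl query driver activeversion -all
$ crsctl query driver softwareversion -all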

To update the Grid Infrastructure OS system software on a single node, you must completely stop the Grid Infrastructure software. To stop the Grid Infrastructure software, you must stop the Oracle RAC databases on the single node. After stopping the Oracle RAC databases, run root.sh -updateosfiles to update all the Grid Infrastructure OS system software on the single node.
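
A minimal sketch of that single-node sequence follows; the database name mydb, node name node1, and the Grid_home placeholder are hypothetical, and each command must be run as the appropriate operating system user:

$ srvctl stop instance -db mydb -node node1
# crsctl stop crs
# Grid_home/root.sh -updateosfiles
# crsctl start crs
$ srvctl start instance -db mydb -node node1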

Patching Oracle Grid Infrastructure Using Local-Mode Configuration

When you install Oracle Grid Infrastructure or when you upgrade an older version to this current version, the Fleet Patching and Provisioning Server is configured automatically in local mode.

Note:

You must enable and start the Fleet Patching and Provisioning Server using the following commands before you can use the local-mode patching operation:
$ srvctl enable rhpserver
$ srvctl start rhpserver
To switch the Fleet Patching and Provisioning Server from local mode to the regular, central mode (to manage remote targets), you must delete the current Fleet Patching and Provisioning Server in local mode, as follows:
$ srvctl stop rhpserver
$ srvctl remove rhpserver

Proceed with the steps described in "Creating a Fleet Patching and Provisioning Server" to create the central-mode Fleet Patching and Provisioning Server.

  • The independent automaton for patching Oracle Grid Infrastructure performs all of the steps necessary to switch from one home to another. Because the automaton is not aware of gold images, moving the Oracle Grid Infrastructure home requires two home paths, as follows:
    $ rhpctl move gihome -sourcehome Oracle_home_path -destinationhome Oracle_home_path
Use the following rhpctl move gihome command parameters for the patching operation:
  • -node: If the home you are moving is an Oracle Grid Infrastructure home installed on more than one node, then the default operation is a rolling update on all nodes. To apply a patch to just one node, specify the name of that node with this parameter.

  • -nonrolling: If the home you are moving is an Oracle Grid Infrastructure home installed on more than one node, then the default operation is a rolling update on all nodes. To patch all nodes in a nonrolling manner, use this parameter instead of the -node parameter.

  • -ignorewcpatches: By default, Fleet Patching and Provisioning will not perform the move operation if the destination home is missing any patches present in the source home. You can override this functionality by using this parameter, for example, to move back to a previous source home if you must undo an update.
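
Putting these together, a local-mode move between two installed homes on a single node might look like the following sketch; the paths and node name are hypothetical:

$ rhpctl move gihome -sourcehome /u01/app/19.0.0/grid -destinationhome /u01/app/19.8.0/grid -node node1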

Error Prevention and Automated Recovery Options

Fleet Patching and Provisioning has error prevention and automated recovery options to assist you during maintenance operations.

During maintenance operations, errors must be avoided whenever possible and, when they occur, you must have automated recovery paths to avoid service disruption.

Error Prevention

Many RHPCTL commands include the -eval parameter, which you can use to evaluate the current configuration without making any changes, to determine whether the command can run successfully and how it will affect the configuration. Commands that you run using the -eval parameter run as many prerequisite checks as possible without changing the configuration. If errors are encountered, then RHPCTL reports them in the command output. After you correct any errors, you can run the command again using -eval to validate the corrections. Running a command successfully with -eval provides a high degree of confidence that running the actual command will succeed.

You can test commands with the -eval parameter outside of any maintenance window, so the full window is available for the maintenance procedure, itself.
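
For example, to evaluate a database move without changing the configuration (the working copy names are hypothetical):

$ rhpctl move database -sourcewc wc_db_src -destwc wc_db_patched -eval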

Automated Recovery Options

During maintenance operations, errors can occur either in-flight (for example, partway through either an rhpctl move database or rhpctl move gihome command) or after a successful operation (for example, after an rhpctl move database command, you encounter performance or behavior issues).

In-Flight Errors

Should in-flight errors occur during move operations:
  • Correct any errors that RHPCTL reports and rerun the command, which will resume running at the point of failure.

    If rerunning the command succeeds and the move operation has a post-operation user action associated with it, then the user action is run. If there is a pre-operation user action, however, then RHPCTL does not rerun the command.

  • Run a new move command, specifying only the destination from the failed move (working copy or unmanaged home), an authentication option, if required, and use the -revert parameter. This will restore the configuration to its initial state.

    No user actions associated with the operation are run.

  • Run a new move command, specifying only the destination from the failed move (working copy or unmanaged home), an authentication option if required, and the -abort parameter. This leaves the configuration in its current state. Manual intervention is required at this point to place the configuration in a final state.

    No user actions associated with the operation are run.

Post-Update Issues

Even after a successful move operation to a new database or Oracle Grid Infrastructure home, you may still need to undo the change and roll back to the prior home. You can do this by rerunning the command with the source and destination homes reversed. This is, effectively, a fresh move operation performed without reference to the previous move operation.
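
For example, if a database was moved from a working copy wc_orig to wc_patched (hypothetical names) and must be rolled back, reverse the two homes in a fresh move:

$ rhpctl move database -sourcewc wc_patched -destwc wc_orig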

Note:

For the independent automatons, the source and destination homes are always unmanaged homes (those homes not provisioned by Fleet Patching and Provisioning). When the move operation is run on a Fleet Patching and Provisioning Server or Fleet Patching and Provisioning Client, the destination home must be a managed home that was provisioned by Fleet Patching and Provisioning.

Upgrading Oracle Grid Infrastructure

If you are using Fleet Patching and Provisioning, then you can use a single command to upgrade an Oracle Grid Infrastructure home.

Fleet Patching and Provisioning supports upgrades to Oracle Grid Infrastructure 12c release 1 (12.1.0.2) from 11g release 2 (11.2.0.3 and 11.2.0.4). Upgrading to Oracle Grid Infrastructure 12c release 2 (12.2.0.1) is supported from 11g release 2 (11.2.0.3 and 11.2.0.4) and 12c release 1 (12.1.0.2). The destination for the upgrade can be a working copy of a gold image already provisioned or you can choose to create the working copy as part of this operation.

As an example, assume that a target cluster is running Oracle Grid Infrastructure on an Oracle Grid Infrastructure home that was provisioned by Fleet Patching and Provisioning. This Oracle Grid Infrastructure home is 11g release 2 (11.2.0.4) and the working copy is named accordingly.

After provisioning a working copy version of Oracle Grid Infrastructure 12c release 2 (12.2.0.1) (named GIOH12201 in this example), you can upgrade to that working copy with this single command:

$ rhpctl upgrade gihome -sourcewc GIOH11204 -destwc GIOH12201

Fleet Patching and Provisioning identifies the cluster to upgrade based on the name of the source working copy. If the target cluster is running on an unmanaged Oracle Grid Infrastructure home, then you specify the path of the source home rather than a source working copy name, and you must also specify the target cluster.
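
A sketch of the unmanaged case follows; the source home path and node name are hypothetical, so confirm the exact parameters for your release:

$ rhpctl upgrade gihome -sourcehome /u01/app/11.2.0/grid -targetnode cluster_node1 -destwc GIOH12201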

Note:

You can delete the source working copy at any time after completing an upgrade. Once you delete the working copy, however, you cannot perform a rollback. Also, use the rhpctl delete workingcopy command (as opposed to rm, for example) to remove the source working copy to keep the Fleet Patching and Provisioning inventory correct.

Oracle Database Software Management

The Oracle Fleet Patching and Provisioning Server provides an efficient and secure platform for the distribution of Oracle Database Homes to targets and Oracle Fleet Patching and Provisioning Clients.

Also, Oracle Fleet Patching and Provisioning Clients have the ability to fetch database homes from the Oracle Fleet Patching and Provisioning Server.

Oracle Database homes are distributed in the form of working copies of gold images. Database instances (one or more) can then be created on the working copy.

Oracle Fleet Patching and Provisioning also has commands for managing existing databases, such as switching to a patched home or upgrading to a new database version. These are both single commands which orchestrate the numerous steps involved. Reverting to the original home is just as simple.

Provisioning Oracle Database Homes

Use the rhpctl add workingcopy command to provision a working copy of a database home on a Fleet Patching and Provisioning Server, Client, or target.

  • Run the rhpctl add workingcopy command on a Fleet Patching and Provisioning Server, similar to the following example:
    $ rhpctl add workingcopy -image db12c -path /u01/app/dbusr/product/12.2.0/db12201
      -client client_007 -oraclebase /u01/app/dbusr/ -workingcopy wc_db122_1

    The preceding command example creates a working copy named wc_db122_1 on all nodes of the Fleet Patching and Provisioning Client cluster named client_007. The gold image db12c is the source of the working copy. The directory path locations that you specify in the command must be empty.

Creating an Oracle Database

Create an Oracle Database on a working copy.

The Fleet Patching and Provisioning Server can add a database on a working copy that is on the Fleet Patching and Provisioning Server, itself, a Fleet Patching and Provisioning Client, or a non-Fleet Patching and Provisioning Client target. A Fleet Patching and Provisioning Client can create a database on a working copy that is running on the Fleet Patching and Provisioning Client, itself.

  • After you create a working copy of a gold image and provision that working copy to a target, you can create an Oracle Database on the working copy using the rhpctl add database command, similar to the following command example, which creates an Oracle Real Application Clusters (Oracle RAC) database called db12201 on a working copy called wc_db122_1:
    $ rhpctl add database -workingcopy wc_db122_1 -dbname db12201 -node client_007_node1,client_007_node2 -dbtype RAC -datafileDestination DATA007_DG

The preceding example creates an administrator-managed Oracle RAC database on two nodes in a client cluster. The data file destination is an Oracle ASM disk group that was created prior to running the command. Additionally, you can create Oracle RAC One Node and non-cluster databases.

Note:

When you create a database using Fleet Patching and Provisioning, the feature uses random passwords for both the SYS and SYSTEM schemas in the database and you cannot retrieve these passwords. A user with the DBA or operator role must connect to the database, locally, on the node where it is running and reset the passwords to these two accounts.

Patching Oracle Database

To patch an Oracle database, you move the database home to a new home, which includes the patches you want to implement.

Use the rhpctl move database command to move one or more database homes to a working copy of the same database release level. The databases may be running on a working copy, or on an Oracle Database home that is not managed by Fleet Patching and Provisioning.
When the move operation is initiated by a Fleet Patching and Provisioning Server or Client, the version moved to must be a working copy of a gold image. You can also perform this operation using the independent automaton in an environment where no Fleet Patching and Provisioning Server is present. In this case, the source and destination homes are not working copies of gold images, but are two installed homes that you deployed with some method other than using Fleet Patching and Provisioning.
The working copy to which you are moving the database can be at a lower patch level than the current database home. This facilitates rollback in the event that you encounter any problems after moving to the higher level patched home.
The working copy to which you are moving the database home can be at the same patch level as the original working copy. This is useful if you are moving a database home from one storage location to another, or if you wish to convert an unmanaged home to a managed home while staying at the same patch level.
Fleet Patching and Provisioning applies all patches out-of-place, minimizing the downtime necessary for maintenance. Fleet Patching and Provisioning also preserves the current configuration, enabling the rollback capability previously described. By default, Fleet Patching and Provisioning applies patches in a rolling manner, which reduces, and in many cases eliminates, service downtime. Use the -nonrolling option to perform patching in non-rolling mode. The database is then completely stopped on the old ORACLE_HOME, and then restarted to make it run from the newly patched ORACLE_HOME.
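
For example, a default rolling move of the databases on a managed home to a patched working copy might look like the following; both working copy names are hypothetical:

$ rhpctl move database -sourcewc wc_db_ru7 -destwc wc_db_ru8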

Note:

Part of the patching process includes applying Datapatch. When you move to an Oracle Database 12c release 1 (12.1) or later home, Fleet Patching and Provisioning completes this step for you. When you move to a home at a release earlier than Oracle Database 12c release 1 (12.1), however, you must run Datapatch manually. Fleet Patching and Provisioning is Oracle Data Guard-aware, and will not apply Datapatch to Oracle Data Guard standbys.

Workflow for Database Patching

Assume that a database named myorcldb is running on a working copy that was created from an Oracle Database 12c release 2 (12.2) gold image named DB122. The typical workflow for patching an Oracle Database home is as follows:
  1. Create a working copy of the gold image containing the Oracle Database that you want to patch, in this case DB122.
  2. Apply the patch to the working copy you created.
  3. Test and validate the patched working copy.
  4. Use the rhpctl add image command to create a gold image (for example, DB122_PATCH) from the patched working copy.

    Note:

    The working copy you specify in the preceding command must be hosted on the Fleet Patching and Provisioning Server in Fleet Patching and Provisioning-managed storage.
  5. Delete the patched working copy using the rhpctl delete workingcopy command.

    Note:

    Do not remove the working copy directly, using the rm command or some other method, because doing so does not update the Fleet Patching and Provisioning inventory information.
  6. Create a working copy from the patched gold image, (DB122_PATCH).
  7. Move myorcldb to the working copy you just created.
  8. When you are confident that you will not need to roll back to the working copy on which the database was running at the beginning of the procedure, delete that working copy using the rhpctl delete workingcopy command.
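
The workflow might translate into commands such as the following sketch; the working copy names, paths, and Oracle base location are hypothetical:

$ rhpctl add workingcopy -workingcopy wc_db122_patch -image DB122 -path /u01/app/oracle/product/12.2.0/dbpatch -oraclebase /u01/app/oracle

After applying and validating the patch (steps 2 and 3):

$ rhpctl add image -image DB122_PATCH -workingcopy wc_db122_patch
$ rhpctl delete workingcopy -workingcopy wc_db122_patch
$ rhpctl add workingcopy -workingcopy wc_db122_prod -image DB122_PATCH -path /u01/app/oracle/product/12.2.0/dbprod -oraclebase /u01/app/oracle
$ rhpctl move database -sourcewc wc_db122_orig -destwc wc_db122_prod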

Patching Oracle Database in a Data Guard Environment

Oracle Fleet Patching and Provisioning Server checks if the role of the database allows the execution of datapatch and acts accordingly. For example, if the database role is primary, Oracle FPP runs datapatch at the end of the moving process, and does not run datapatch if the database role is physical standby.

However, Oracle FPP is not aware of the standby topology and does not check for the patching level of the working copy on the standby locations. You must ensure that the standby database is always moved to a patched working copy before moving the primary database. Also, if you move the standby database to a patched working copy and a switchover or a failover occurs before moving the primary database to the patched working copy, it is possible that datapatch has not been executed on the primary database. This is because both sites have been moved to the patched working copy when the role was physical standby. In this case, you must run datapatch manually on the primary database.
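
If you must run datapatch manually on the new primary in that situation, the invocation might look like the following, run from the patched home after setting the environment for the primary database (the path is hypothetical):

$ /u01/app/oracle/product/19.0.0/dbhome_2/OPatch/datapatch -verbose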

Patching Oracle Database Using Batches

During database patching, Fleet Patching and Provisioning can sequentially process batches of nodes, with a number of nodes in each batch being restarted in parallel. This method maximizes service availability during the patching process. You can define the batches on the command line or choose to have Fleet Patching and Provisioning generate the list of batches based on its analysis of the database services running in the cluster.

Adaptive Oracle RAC-Rolling Patching for OJVM Deployments

In a clustered environment, the default approach for applying database maintenance with Fleet Patching and Provisioning is Oracle RAC rolling. However, non-rolling may be required if the new (patched) database home contains OJVM patches. In this case, Fleet Patching and Provisioning determines whether the rolling approach is possible, and rolls when applicable. (See MOS Note 2217053.1 for details.)

Patching Oracle Database with the Independent Automaton

The independent local-mode automaton updates Oracle Database homes, including Oracle Database single-instance databases in a cluster or standalone (with no Oracle Grid Infrastructure), an Oracle RAC database, or an Oracle RAC One Node database.

  • The independent automaton for Oracle Database patching performs all of the steps necessary to switch from one home to another. Because the automaton is not aware of gold images, moving the database requires two home paths, as follows:
    $ rhpctl move database -sourcehome Oracle_home_path -desthome destination_oracle_home_path
Use the following rhpctl move database command parameters for any of the patching scenarios:
  • -dbname: If the database home is hosting more than one database, you can move specific databases by specifying a comma-delimited list with this parameter. Databases not specified are not moved. If you do not use this parameter, then RHPCTL moves all databases.

    Note:

    If you are moving a non-clustered (single-instance) database, then, for the value of the -dbname parameter, you must specify the SID of the database instead of the database name.
  • -ignorewcpatches: By default, Oracle Fleet Patching and Provisioning will not perform the move operation if the destination home is missing any patches present in the source home. You can override this functionality by using this parameter, for example, to move back to a previous source home if you must undo an update.

The following parameters apply only to clustered environments:
  • -node: If the home you are moving is a database home installed on more than one node, then the default operation is a rolling update on all nodes. To apply a patch to just one node, specify the name of that node with this parameter.

  • -nonrolling: If the home you are moving is a database home installed on more than one node, then the default operation is a rolling update on all nodes. To patch all nodes in a nonrolling manner, use this parameter instead of the -node parameter.

  • -disconnect and -noreplay: Applies to single-instance Oracle Databases, Oracle RAC, and Oracle RAC One Node databases. Use the -disconnect parameter to disconnect all sessions before stopping or relocating services. If you choose to use -disconnect, then you can also use the -noreplay parameter to disable session replay during disconnection.

  • -drain_timeout: Applies to single-instance Oracle Databases, Oracle RAC, and Oracle RAC One Node databases. Use this parameter to specify the time, in seconds, allowed for session draining to complete on each node. Accepted values are an empty string (""), 0, or any positive integer. The default value is an empty string, which means that this parameter is not set; this preserves the traditional behavior of earlier releases. If it is set to 0, then the stop option is applied immediately.

    The draining period is intended for planned maintenance operations. During the draining period, on each node in succession, all current client requests are processed, but new requests are not accepted.

  • -stopoption: Applies to single-instance Oracle Databases, Oracle RAC, and Oracle RAC One Node databases. Specify a stop option for the database. Stop options include: ABORT, IMMEDIATE, NORMAL, TRANSACTIONAL, and TRANSACTIONAL_LOCAL.
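
A sketch combining these session-handling parameters in an independent-automaton move; all paths and names are hypothetical:

$ rhpctl move database -sourcehome /u01/app/oracle/product/19.0.0/dbhome_1 -desthome /u01/app/oracle/product/19.8.0/dbhome_1 -dbname orcl -disconnect -noreplay -drain_timeout 120 -stopoption IMMEDIATE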

Note:

The rhpctl move database command is Oracle Data Guard-aware, and will not run Datapatch if the database is an Oracle Data Guard standby.

Patching Oracle Exadata Software

In addition to Oracle Grid Infrastructure and Oracle Database homes, Oracle Fleet Patching and Provisioning supports patching the Oracle Exadata components: database nodes, storage cells, and InfiniBand switches.

The first time you patch an Oracle Exadata system using Oracle Fleet Patching and Provisioning, run the rhpctl add workingcopy command, which stores the Oracle Exadata system information (list of nodes and the images with which they were last patched) on the Oracle Fleet Patching and Provisioning Server, before patching the desired Oracle Exadata nodes.
For subsequent patching, run the rhpctl update workingcopy command. After patching, Oracle Fleet Patching and Provisioning updates the images of the nodes.
When you run the rhpctl query workingcopy command for a working copy based on the EXAPATCHSOFTWARE image type, the command returns a list of nodes and their images.
To use Oracle Fleet Patching and Provisioning to patch Oracle Exadata:
  1. Import an image of the EXAPATCHSOFTWARE image type, using the rhpctl import image command on an Oracle Fleet Patching and Provisioning Server, similar to the following:
    $ rhpctl import image -image EXA1 -imagetype EXAPATCHSOFTWARE -path /tmp/ExadataPatchBundle
      -version 12.1.2.2.3.160720

    Note:

    • You can only run the rhpctl import image command on an Oracle Fleet Patching and Provisioning Server.

    • You must rename the patch zip files to include the words storage, database, and iso, so that Oracle Fleet Patching and Provisioning can distinguish among them.

  2. If this is the first time you are using Oracle Fleet Patching and Provisioning to patch Oracle Exadata, then use the rhpctl add workingcopy command, as follows:
    $ rhpctl add workingcopy -image image_name -root {[-dbnodes dbnode_list]
      [-cells cell_list] [-ibswitches ibswitch_list]} [-fromnode node_name]
      [-unkey] [-smtpfrom "address"] [-smtpto "addresses"] [-precheckonly]
      [-modifyatprereq] [-resetforce] [-force]

    The preceding command stores the list of nodes (database nodes, cells, and InfiniBand switches) and the version of each node in Oracle Fleet Patching and Provisioning on the working copy, in addition to the type of node and the type of image.

  3. For subsequent patching operations, after you create a gold image with updated database, storage cell, and switch files, run the rhpctl update workingcopy command to patch one, two, or all three Oracle Exadata components, as follows:
    $ rhpctl update workingcopy -image image_name -root {[-dbnodes dbnode_list] [-cells cell_list]
      [-ibswitches ibswitch_list]} [-fromnode node_name] [-unkey] [-smtpfrom "address"]
      [-smtpto "addresses"] [-precheckonly] [-modifyatprereq] [-resetforce] [-force]

    Note:

    The name of the working copy remains the same throughout the life cycle of patching the given Oracle Exadata target.

    You can choose to patch only the database nodes, cells, or InfiniBand switches or any combination of the three. Patching occurs in the following order: InfiniBand switches, cells, database nodes.

  4. Display the list of nodes and their images, as follows:
    $ rhpctl query workingcopy -workingcopy working_copy_name
  5. Delete the working copy, as follows:
    $ rhpctl delete workingcopy -workingcopy working_copy_name

Upgrading Oracle Database Software

Fleet Patching and Provisioning provides two options for upgrading Oracle Database. Both options are performed with a single command.

The rhpctl upgrade database command performs a traditional upgrade incurring downtime. The rhpctl zdtupgrade database command performs an Oracle RAC or Oracle RAC One Node upgrade with minimal or no downtime.

You can use Fleet Patching and Provisioning to provision, scale, and patch Oracle Database 11g release 2 (11.2.0.4) and later releases. You can also upgrade Oracle Databases from 11g release 2 (11.2.0.4), 12c release 1 (12.1.0.2), 12c release 2 (12.2), and 18c to Oracle Database 19c. Refer to Oracle Database Upgrade Guide for information about Oracle Database direct upgrade paths.

Note:

The version of Oracle Grid Infrastructure on which the pre-upgrade database is running must be the same version or higher than the version of the database to which you are upgrading.
The destination for the upgrade can be a working copy already provisioned, or you can choose to create a working copy of a gold image as part of this operation.
The pre-upgrade database can be running on a working copy (a managed home that was provisioned by Fleet Patching and Provisioning) or on an unmanaged home. In the first case, you can roll back the upgrade process with a single RHPCTL command.
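
For example, a traditional upgrade of a database running on a managed home might be submitted as follows; the database name and working copy names are hypothetical, so confirm the parameters for your release:

$ rhpctl upgrade database -dbname mydb -sourcewc db112_wcpy -destwc db19_wcpy {authentication_option}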

Note:

You can delete the source working copy at any time after completing an upgrade. Once you delete the working copy, however, you cannot perform a rollback. Also, use the rhpctl delete workingcopy command (as opposed to rm, for example) to remove the source working copy to keep the Fleet Patching and Provisioning inventory correct.

Zero-Downtime Upgrade

Using Fleet Patching and Provisioning, which automates and orchestrates database upgrades, you can upgrade an Oracle RAC or Oracle RAC One Node database with no disruption in service.

The zero-downtime upgrade process is resumable, restartable, and recoverable should any errors interrupt the process. You can fix the issue then re-run the command, and Fleet Patching and Provisioning continues from the error point. Oracle also provides hooks at the beginning and end of the zero-downtime upgrade process, allowing call outs to user-defined scripts, so you can customize the process.

Note:

You can use zero-downtime patching only for out-of-place patching of Oracle Grid Infrastructure 19c Release Update (RU) 19.8 or later releases with Oracle RAC or Oracle RAC One Node databases of 19c or later releases. If your Oracle RAC or Oracle RAC One Node database release is older than 19c, then the database instances stop during zero-downtime patching.

You can use the zero-downtime upgrade process to upgrade databases that meet the following criteria:
  • Database upgrade targets: Oracle RAC and Oracle RAC One Node, with the following upgrade paths:
    • 11.2.0.4 to 12.1.0.2
    • 11.2.0.4 to 12.2
    • 11.2.0.4 to 18c
    • 12.1.0.2 to 12.2
    • 12.1.0.2 to 18c
    • 12.1.0.2 to 19c
    • 12.2 to 18c
    • 12.2 to 19c
    • 18c to 19c
  • Fleet Patching and Provisioning management: The source database home can either be unmanaged (not provisioned by Fleet Patching and Provisioning service) or managed (provisioned by Fleet Patching and Provisioning service)

  • Database state: The source database must be in archive log mode

Upgrading Container Databases

You can use Fleet Patching and Provisioning to upgrade CDBs but Fleet Patching and Provisioning does not support converting a non-CDB to a CDB during upgrade. To prepare for a zero-downtime upgrade, you complete configuration steps and validation checks. When you run a zero-downtime upgrade using Fleet Patching and Provisioning, you can stop the upgrade and resume it, if necessary. You can recover from any upgrade errors, and you can restart the upgrade. You also have the ability to insert calls to your own scripts during the upgrade, so you can customize your upgrade procedure.

Zero-Downtime Upgrade Environment Prerequisites

  • Server environment: Oracle Grid Infrastructure 18c with Fleet Patching and Provisioning

  • Database hosts: Databases hosted on one of the following platforms:
    • Oracle Grid Infrastructure 18c Fleet Patching and Provisioning Client

    • Oracle Grid Infrastructure 18c Fleet Patching and Provisioning Server

    • Oracle Grid Infrastructure 12c (12.2.0.1) Fleet Patching and Provisioning Client

    • Oracle Grid Infrastructure 12c (12.1.0.2) target cluster

  • Database-specific prerequisites for the environment: During an upgrade, Fleet Patching and Provisioning manages replication to a local data file to preserve transactions applied to the new database when it is ready. There are two possibilities for the local data file:
    • Snap clone, which is available if the database data files and redo and archive redo logs are on Oracle ACFS file systems
    • Full copy, for all other cases
  • Fleet Patching and Provisioning requires either Oracle GoldenGate or Oracle Data Guard during a zero-downtime database upgrade. As part of the upgrade procedure, Fleet Patching and Provisioning configures and manages the Oracle GoldenGate deployment.

Running a Zero-Downtime Upgrade Using Oracle GoldenGate for Replication

Run a zero-downtime upgrade using Oracle GoldenGate for replication.

  1. Prepare the Fleet Patching and Provisioning Server.

    Create gold images of the Oracle GoldenGate software in the image library of the Fleet Patching and Provisioning Server.

    Note:

    You can download the Oracle GoldenGate software for your platform from Oracle eDelivery. The Oracle GoldenGate 12.3 installable kit contains the required software for both Oracle Database 11g and Oracle Database 12c databases.

    If you download the Oracle GoldenGate software, then extract the software home and perform a software only installation on the Fleet Patching and Provisioning Server.

    Create gold images of the Oracle GoldenGate software for both databases, as follows:
    $ rhpctl import image -image 112ggimage -path path -imagetype ORACLEGGSOFTWARE
    $ rhpctl import image -image 12ggimage -path path -imagetype ORACLEGGSOFTWARE
    In both of the preceding commands, path refers to the location of the Oracle GoldenGate software home on the Fleet Patching and Provisioning Server for each release of the database.
  2. Prepare the target database.

    Provision working copies of the Oracle GoldenGate software to the cluster hosting the database, as follows:
    $ rhpctl add workingcopy -workingcopy GG_Wcopy_11g -image 112ggimage -user
      user_name -node 12102_cluster_node -path path {-root | -sudouser user_name
      -sudopath sudo_bin_path}
    $ rhpctl add workingcopy -workingcopy GG_Wcopy_12c -image 12ggimage -user
      user_name -node 12102_cluster_node -path path {-root | -sudouser user_name
      -sudopath sudo_bin_path}
    If the database is hosted on the Fleet Patching and Provisioning Server, itself, then neither the -targetnode nor -client parameters are required.

    Note:

    Working copy names must be unique; therefore, you must use a different working copy name on each subsequent target. You can create unique working copy names by including the name of the target or client cluster in the working copy name.
  3. Provision a working copy of the Oracle Database 12c software home to the target cluster.

Note:

You can do this preparation ahead of the maintenance window without disrupting any operations taking place on the target.
  • You can run the upgrade command on the Fleet Patching and Provisioning Server to upgrade a database hosted on the server itself, on an Oracle Database 12c release 1 (12.1.0.2) target cluster, or on a Fleet Patching and Provisioning Client running 12c release 2 (12.2.0.1) or 18c. You can also run the command on a Fleet Patching and Provisioning Client 18c to upgrade a database hosted on the client itself.
    Use the upgrade command similar to the following:
    $ rhpctl zdtupgrade database -dbname sierra -destwc DB_Wcopy_121 -ggsrcwc
      GG_Wcopy_11g -ggdstwc GG_Wcopy_12c -targetnode 12102_cluster_node -root

    In the preceding command, 12102_cluster_node refers to the Oracle Grid Infrastructure 12c release 1 (12.1.0.2) cluster hosting the database you want to upgrade.

Running a Zero-Downtime Upgrade Using Oracle Data Guard for Replication

Run a zero-downtime upgrade using Oracle Data Guard for replication.

You can run the zero-downtime upgrade command using Oracle Data Guard’s transient logical standby (TLS) feature. All of the steps involved are orchestrated by the zero-downtime upgrade command.
After you provision the destination database home, the following prerequisites must be met:
  • Data Guard Broker is not enabled

  • Flash recovery area (FRA) is configured

  • The following example of a zero-downtime upgrade using Oracle Data Guard upgrades an Oracle Database 11g release 2 (11.2.0.4), sierra, running on the target cluster, which includes a node, targetclust003, to an Oracle Database 12c release 1 (12.1.0.2) (the destination working copy, which was provisioned from a Gold Image stored on the Fleet Patching and Provisioning Server named rhps.example.com):
    $ rhpctl zdtupgrade database -dbname sierra -destwc WC121DB4344 -clonedatadg DBDATA -targetnode node90743 -root
    
    Enter user "root" password:
    node90753.example.com: starting zero downtime upgrade operation ...
    node90753.example.com: verifying patches applied to Oracle homes ...
    node90753.example.com: verifying if database "sierra" can be upgraded with zero downtime ...
    node90743: 15:09:10.459: Verifying whether database "sierra" can be cloned ...
    node90743: 15:09:10.462: Verifying that database "sierra" is a primary database ...
    node90743: 15:09:14.672: Verifying that connections can be created to database "sierra" ...
    < ... >
    node90743: 15:14:58.015: Starting redo apply ...
    node90743: 15:15:07.133: Configuring primary database "sierra" ...
    ####################################################################
    node90753.example.com: retrieving information about database "xmvotkvd" ...
    node90753.example.com: creating services for snapshot database "xmvotkvd" ...
    ####################################################################
    node90743: 15:15:33.640: Macro step 1: Getting information and validating setup ...
    < ... >
    node90743: 15:16:02.844: Macro step 2: Backing up user environment in case upgrade is aborted  ...
    node90743: 15:16:02.848: Stopping media recovery for database "xmvotkvd" ...
    node90743: 15:16:05.858: Creating initial restore point "NZDRU_0000_0001" ...
    < ... >
    node90743: 15:16:17.611: Macro step 3: Creating transient logical standby from existing physical standby ...
    node90743: 15:16:18.719: Stopping instance "xmvotkvd2" of database "xmvotkvd" ...
    node90743: 15:16:43.187: Verifying that database "sierra" is a primary database ...
    < ... >
    node90743: 15:19:27.158: Macro step 4: Upgrading transient logical standby database ...
    node90743: 15:20:27.272: Disabling service "sierrasvc" of database "xmvotkvd" ...
    node90743: 16:36:54.684: Macro step 5: Validating upgraded transient logical standby database ...
    node90743: 16:37:09.576: Creating checkpoint "NZDRU_0301" for database "xmvotkvd" during stage "3" and task "1" ...
    node90743: 16:37:09.579: Stopping media recovery for database "xmvotkvd" ...
    node90743: 16:37:10.792: Creating restore point "NZDRU_0301" for database "xmvotkvd" ...
    node90743: 16:37:11.998: Macro step 6: Switching role of transient logical standby database ...
    node90743: 16:37:12.002: Verifying that database "sierra" is a primary database ...
    < ... >
    node90743: 16:39:07.425: Macro step 7: Flashback former primary database to pre-upgrade restore point and convert to physical standby ...
    node90743: 16:39:08.833: Stopping instance "sierra2" of database "sierra" ...
    < ... >
    node90743: 16:41:17.138: Macro step 8: Recovering former primary database ...
    node90743: 16:41:19.045: Verifying that database "sierra" is mounted ...
    < ... >
    node90743: 17:20:21.378: Macro step 9: Switching back ...
    < ... >
    ####################################################################
    node90753.example.com: deleting snapshot database "xmvotkvd" ...

Customizing Zero-Downtime Upgrades

You can customize zero-downtime upgrades using the user-action framework of Fleet Patching and Provisioning.

To use the user-action framework, you can provide a separate script for any or all of the points listed in the overall process.

Table 5-2 Zero-Downtime Upgrade Plugins

Plugin Type                        Pre or Post   Plugin runs...
ZDTUPGRADE_DATABASE                Pre           Before Fleet Patching and Provisioning starts the zero-downtime upgrade.
                                   Post          After Fleet Patching and Provisioning completes the zero-downtime upgrade.
ZDTUPGRADE_DATABASE_SNAPDB         Pre           Before creating the snapshot or full-clone database.
                                   Post          After starting the snapshot or full-clone database (but before switching over).
ZDTUPGRADE_DATABASE_DBUA           Pre           Before running DBUA (after switching over).
                                   Post          After DBUA completes.
ZDTUPGRADE_DATABASE_SWITCHBACK     Pre           Before switching users back to the upgraded source database.
                                   Post          After switching users back to the upgraded source database (before deleting the snapshot or full-clone database).

  • To register a plugin to be run during a zero-downtime upgrade, run the following command:
    $ rhpctl add useraction -useraction user_action_name -actionscript script_name
      {-pre | -post} -optype {ZDTUPGRADE_DATABASE | ZDTUPGRADE_DATABASE_SNAPDB |
       ZDTUPGRADE_DATABASE_DBUA | ZDTUPGRADE_DATABASE_SWITCHBACK}

    You can specify run-time input to the plugins using the -useractiondata option of the rhpctl zdtupgrade database command.
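
    For example, to register a hypothetical script that runs before DBUA starts:

    $ rhpctl add useraction -useraction zdt_pre_dbua -actionscript /u01/scripts/pre_dbua.sh -pre -optype ZDTUPGRADE_DATABASE_DBUA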

Persistent Home Path During Patching

Oracle recommends out-of-place patching when applying updates.

Out-of-place patching involves deploying the patched environment in a new directory path and then switching the software home to the new path. This approach allows for non-disruptive software distribution because the existing home remains active while the new home is provisioned, and also facilitates rollback because the unpatched software home is available should any issues arise after the switch. Additionally, out-of-place patching for databases enables you to choose to move a subset of instances to the new home if more than one instance is running on the home, whereas with in-place patching, you must patch all instances at the same time.

A potential impediment to traditional out-of-place patching is that the software home path changes. While Oracle Fleet Patching and Provisioning manages this internally and transparently for Oracle Database and Oracle Grid Infrastructure software, some users have developed scripts which depend on the path. To address this, Oracle Fleet Patching and Provisioning uses a file system feature that enables separation of gold image software from the site-specific configuration changes, so the software home path is persistent throughout updates.

This feature is available with Oracle Database 12c release 2 (12.2) and Oracle Grid Infrastructure 12c release 2 (12.2) working copies provisioned in local storage. Also, if you provision an Oracle Database 12c release 2 (12.2) or an Oracle Grid Infrastructure 12c release 2 (12.2) home without using this feature, then, during a patching operation using either the rhpctl move database or rhpctl move gihome command, you can convert to this configuration and take advantage of the feature.

Note:

You can patch Oracle Grid Infrastructure on an Oracle Fleet Patching and Provisioning Client only when its home is based on a persistent home path provisioned from an Oracle Fleet Patching and Provisioning Server.

Managing Fleet Patching and Provisioning Clients

Management tasks for Fleet Patching and Provisioning Clients include creation, enabling and disabling, creating users and assigning roles to those users, and managing passwords.

Using SRVCTL and RHPCTL, you can perform all management tasks for a Fleet Patching and Provisioning Client.

Creating a Fleet Patching and Provisioning Client

Users operate on a Fleet Patching and Provisioning Client to perform tasks such as requesting deployment of Oracle homes and querying gold images.

To create a Fleet Patching and Provisioning Client:

  1. If there is no highly available VIP (HAVIP) on the Fleet Patching and Provisioning Server, then, as the root user, create an HAVIP, as follows:
    # srvctl add havip -id id -address {host_name | ip_address}
    

    You can specify either a host name or IPv4 or IPv6 IP address. The IP address that you specify for HAVIP or the address that is resolved from the specified host name must not be in use when you run this command.

    Note:

    The highly available VIP must be in the same subnet as the default network configured in the Fleet Patching and Provisioning Server cluster. You can obtain the subnet by running the following command:

    $ srvctl config network -netnum 1
  2. On the Fleet Patching and Provisioning Server as the Grid home owner, create the client data file, as follows:
    $ rhpctl add client -client client_cluster_name -toclientdata path
    

    RHPCTL creates the client data file in the directory path you specify after the -toclientdata flag. The name of the client data file is client_cluster_name.xml.

    Note:

    The client_cluster_name must be unique and it must match the cluster name of the client cluster where you run step 4.

  3. Copy the client data file that you created in the previous step to a directory on the client cluster where the Grid home owner has read/write permissions on the Fleet Patching and Provisioning Client.
  4. Create the Fleet Patching and Provisioning Client by running the following command as root on the client cluster:
    # srvctl add rhpclient -clientdata path_to_client_data
       [-diskgroup disk_group_name -storage base_path]

    If you want to provision working copies to Oracle ACFS storage on this cluster, and you have already created a disk group for this purpose, then specify this disk group in the preceding command. In this case, also specify a storage path which will be used as a base path for all mount points when creating Oracle ACFS file systems for storing working copies.

    Note:

    Once you configure a disk group on a Fleet Patching and Provisioning Client, you cannot remove it from or change it in the Fleet Patching and Provisioning Client configuration. The only way you can do either (change or remove) is to completely remove the Fleet Patching and Provisioning Client using the srvctl remove rhpclient command, and then add it back with a different disk group, if necessary. Before you remove a Fleet Patching and Provisioning Client, ensure that you remove all registered users from this cluster and all working copies provisioned on this cluster.

  5. Start the Fleet Patching and Provisioning Client, as follows:
    $ srvctl start rhpclient
  6. Check the status of the Fleet Patching and Provisioning Client, as follows:
    $ srvctl status rhpclient
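The following is a minimal end-to-end sketch of this procedure, assuming a hypothetical client cluster named clust02 and an unused HAVIP address of 192.0.2.100 (all names and addresses are illustrative):

    # srvctl add havip -id rhpvip -address 192.0.2.100
    $ rhpctl add client -client clust02 -toclientdata /tmp
    (copy /tmp/clust02.xml to the client cluster)
    # srvctl add rhpclient -clientdata /tmp/clust02.xml
    $ srvctl start rhpclient
    $ srvctl status rhpclient

The first two commands run on the Fleet Patching and Provisioning Server; the remaining commands run on the client cluster.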

Enabling and Disabling Fleet Patching and Provisioning Clients

On the Fleet Patching and Provisioning Server, you can enable or disable a Fleet Patching and Provisioning Client.

Fleet Patching and Provisioning Clients communicate with the Fleet Patching and Provisioning Server for all actions. You cannot run any RHPCTL commands without a connection to a Fleet Patching and Provisioning Server.

To enable or disable a Fleet Patching and Provisioning Client, run the following command from the Fleet Patching and Provisioning Server cluster:

$ rhpctl modify client -client client_name -enabled TRUE | FALSE

To enable a Fleet Patching and Provisioning Client, specify -enabled TRUE. Conversely, specify -enabled FALSE to disable the client. When you disable a Fleet Patching and Provisioning Client cluster, all RHPCTL commands from that client cluster will be rejected by the Fleet Patching and Provisioning Server, unless and until you re-enable the client.
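For example, to disable a hypothetical client cluster named clust02:

$ rhpctl modify client -client clust02 -enabled FALSE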

Note:

Disabling a Fleet Patching and Provisioning Client cluster does not disable any existing working copies on the client cluster. The working copies will continue to function and any databases in those working copies will continue to run.

Deleting a Fleet Patching and Provisioning Client

Use the following procedure to delete a Fleet Patching and Provisioning Client.

  1. Before deleting the Fleet Patching and Provisioning Client, you must first delete the working copies and users on the Fleet Patching and Provisioning Server, as follows:
    1. Query the list of working copies that have been provisioned on the Fleet Patching and Provisioning Client cluster.

      Run the following command:

      $ rhpctl query workingcopy -client client_name
    2. Delete each of the working copies listed in the output of the preceding command.

      Run the following command for each working copy and specify the name of the working copy you want to delete:

      $ rhpctl delete workingcopy -workingcopy working_copy_name
    3. Query the list of users from the Fleet Patching and Provisioning Client cluster.

      Run the following command:

      $ rhpctl query user -client client_name
    4. Delete the users listed in the output of the preceding command, as follows:

      Run the following command and specify the name of the user you want to delete and the name of the client:

      $ rhpctl delete user -user user_name -client client_name
  2. On the Fleet Patching and Provisioning Client cluster, delete the client, as follows:
    1. Stop the Fleet Patching and Provisioning Client daemon.

      Run the following command:

      $ srvctl stop rhpclient
    2. Delete the Fleet Patching and Provisioning Client configuration.

      Run the following command:

      $ srvctl remove rhpclient
  3. Delete the client site configuration on the Fleet Patching and Provisioning Server cluster.

    Run the following command and specify the name of the client:

    $ rhpctl delete client -client client_name

Creating Users and Assigning Roles for Fleet Patching and Provisioning Client Cluster Users

Oracle Fleet Patching and Provisioning (Oracle FPP) enables you to create users and assign roles to them when you create an Oracle FPP client.

When you create a Fleet Patching and Provisioning Client with the rhpctl add client command, you can use the -maproles parameter to create users and assign roles to them. You can associate multiple users with roles, or you can assign a single user multiple roles with this command.

After the client has been created, you can add and remove roles for users using the rhpctl grant role and rhpctl revoke role commands, respectively.
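For example, a minimal sketch that maps operating system users to a role at client creation time and grants an additional role afterward (the cluster name, user names, and the GH_SA role shown here are illustrative; see the RHPCTL command reference for the exact role names and -maproles syntax):

$ rhpctl add client -client clust03 -toclientdata /tmp -maproles "GH_SA=scott+mary"
$ rhpctl grant role -role GH_SA -user jim -client clust03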

Managing the Fleet Patching and Provisioning Client Password

The Oracle Fleet Patching and Provisioning (Oracle FPP) Client uses a password stored internally to authenticate itself with the RHP server.

You cannot query the Oracle FPP Client password. If you are required to reset this password for any reason, then you can do so on the Fleet Patching and Provisioning Server cluster, as follows:

  1. Run the following command on the Fleet Patching and Provisioning Server cluster to generate a new password and store it in the client credential:
    $ rhpctl modify client -client client_name -password
    
  2. Run the following command on the Fleet Patching and Provisioning Server cluster to generate a credential file:
    $ rhpctl export client -client client_name -clientdata file_path
    

    For example, to generate a credential file for a Fleet Patching and Provisioning Client named mjk9394:

    $ rhpctl export client -client mjk9394 -clientdata /tmp/mjk9394.xml
    
  3. Continuing with the preceding example, transport the generated credential file securely to the Fleet Patching and Provisioning Client cluster and then run the following command on any node in the Fleet Patching and Provisioning Client cluster:
    $ srvctl modify rhpclient -clientdata path_to_mjk9394.xml
    
  4. Restart the Fleet Patching and Provisioning Client daemon by running the following commands on the Fleet Patching and Provisioning Client cluster:
    $ srvctl stop rhpclient
    $ srvctl start rhpclient

User-Defined Actions

You can create actions for various Oracle Fleet Patching and Provisioning operations, such as import image, add and delete working copy, and add, delete, move, and upgrade a software home.

You can define different actions for each operation, which can be further differentiated by the type of image to which the operation applies. User-defined actions can run before or after a given operation, and they run on the deployment on which the operation runs, whether that is an Oracle Fleet Patching and Provisioning Server, an Oracle Fleet Patching and Provisioning Client (12c release 2 (12.2) or later), or a target that is not running an Oracle Fleet Patching and Provisioning Client.

User-defined actions are shell scripts which are stored on the Oracle Fleet Patching and Provisioning Server. When a script runs, it is given relevant information about the operation on the command line. Also, you can associate a file with the script. The Oracle Fleet Patching and Provisioning Server will copy that file to the same location on the Client or target where the script is run.

For example, suppose you want to create user-defined actions that run after a database upgrade, and you want to define different actions for Oracle Database 11g and 12c. This requires defining new image types, as in the following example procedure.

  1. Create a new image type, (DB11IMAGE, for example), based on the ORACLEDBSOFTWARE image type, as follows:

    $ rhpctl add imagetype -imagetype DB11IMAGE -basetype ORACLEDBSOFTWARE

    When you add or import an Oracle Database 11g gold image, you specify the image type as DB11IMAGE.

  2. Define a user action and associate it with the DB11IMAGE image type and the upgrade operation (see the example following this procedure). You can have different actions that run before or after the upgrade.

  3. To define an action for Oracle Database 12c, create another image type (DB12IMAGE, for example) based on the ORACLEDBSOFTWARE image type, as in the preceding steps, and associate the user action with the DB12IMAGE image type.

    Note:

    If you define user actions for the base type of a user-defined image type (in this case the base type is ORACLEDBSOFTWARE), then Oracle Fleet Patching and Provisioning performs those actions before the actions for the user-defined image type.
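    For example, step 2 could be implemented with a hypothetical post-upgrade script, registered and associated as follows (the user action name and script path are illustrative):

    $ rhpctl add useraction -useraction db11post -actionscript /u01/scripts/db11_post_upgrade.sh
      -post -optype UPGRADE_DATABASE
    $ rhpctl modify imagetype -imagetype DB11IMAGE -useractions db11post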

You can modify the image type of an image using the rhpctl modify image command. Additionally, you can modify, add, and delete other actions. The following two tables, Table 5-3 and Table 5-4, list the operations you can customize and the parameters you can use to define those operations, respectively.

Table 5-3 Oracle Fleet Patching and Provisioning User-Defined Operations

Operation Parameter List
IMPORT_IMAGE

RHP_OPTYPE, RHP_PHASE, RHP_PATH, RHP_PATHOWNER, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_IMAGE, RHP_IMAGETYPE, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

ADD_WORKINGCOPY

RHP_OPTYPE, RHP_PHASE, RHP_WORKINGCOPY, RHP_PATH, RHP_STORAGETYPE, RHP_USER, RHP_NODES, RHP_ORACLEBASE, RHP_DBNAME, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_IMAGE, RHP_IMAGETYPE, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

ADD_DATABASE

RHP_OPTYPE, RHP_PHASE, RHP_WORKINGCOPY, RHP_CLIENT, RHP_DBNAME, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_IMAGE, RHP_IMAGETYPE, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

DELETE_WORKINGCOPY

RHP_OPTYPE, RHP_PHASE, RHP_WORKINGCOPY, RHP_CLIENT, RHP_PATH, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_IMAGE, RHP_IMAGETYPE, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

DELETE_DATABASE

RHP_OPTYPE, RHP_PHASE, RHP_WORKINGCOPY, RHP_CLIENT, RHP_DBNAME, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_IMAGE, RHP_IMAGETYPE, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

MOVE_GIHOME

RHP_OPTYPE, RHP_PHASE, RHP_SOURCEWC, RHP_SOURCEPATH, RHP_DESTINATIONWC, RHP_DESTINATIONPATH, RHP_IMAGE, RHP_IMAGETYPE, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

MOVE_DATABASE

This user action is run for each database involved in a patching operation.

If the run scope is set to ALLNODES, then the script is run for each database on every cluster node.

If the run scope is set to ONENODE, then the script is run for each database on the node on which the patch was applied to the database.

RHP_OPTYPE, RHP_PHASE, RHP_SOURCEWC, RHP_SOURCEPATH, RHP_DESTINATIONWC, RHP_DESTINATIONPATH, RHP_DBNAME, RHP_IMAGE, RHP_IMAGETYPE, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_VERSION, RHP_CLI, RHP_DATAPATCH, RHP_USERACTIONDATA

UPGRADE_GIHOME

RHP_OPTYPE, RHP_PHASE, RHP_SOURCEWC, RHP_SOURCEPATH, RHP_DESTINATIONWC, RHP_DESTINATIONPATH, RHP_IMAGE, RHP_IMAGETYPE, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

UPGRADE_DATABASE

RHP_OPTYPE, RHP_PHASE, RHP_SOURCEWC, RHP_SOURCEPATH, RHP_DESTINATIONWC, RHP_DESTINATIONPATH, RHP_DBNAME, RHP_IMAGE, RHP_IMAGETYPE, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

ADDNODE_DATABASE

RHP_OPTYPE, RHP_PHASE, RHP_WORKINGCOPY, RHP_CLIENT, RHP_DBNAME, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_IMAGE, RHP_IMAGETYPE, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

DELETENODE_DATABASE

RHP_OPTYPE, RHP_PHASE, RHP_WORKINGCOPY, RHP_CLIENT, RHP_DBNAME, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_IMAGE, RHP_IMAGETYPE, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

ADDNODE_GIHOME

RHP_OPTYPE, RHP_PHASE, RHP_WORKINGCOPY, RHP_CLIENT, RHP_PATH, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_IMAGE, RHP_IMAGETYPE, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

DELETENODE_GIHOME

RHP_OPTYPE, RHP_PHASE, RHP_WORKINGCOPY, RHP_CLIENT, RHP_PATH, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_IMAGE, RHP_IMAGETYPE, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

ADDNODE_WORKINGCOPY

RHP_OPTYPE, RHP_PHASE, RHP_WORKINGCOPY, RHP_CLIENT, RHP_PATH, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_IMAGE, RHP_IMAGETYPE, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

ZDTUPGRADE_DATABASE

RHP_OPTYPE, RHP_PHASE, RHP_SOURCEWC, RHP_SOURCEPATH, RHP_DESTINATIONWC, RHP_DESTINATIONPATH, RHP_SRCGGWC, RHP_SRCGGPATH, RHP_DSTGGWC, RHP_DSTGGPATH, RHP_DBNAME, RHP_IMAGE, RHP_IMAGETYPE, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

ZDTUPGRADE_DATABASE_SNAPDB

RHP_OPTYPE, RHP_PHASE, RHP_SOURCEWC, RHP_SOURCEPATH, RHP_DESTINATIONWC, RHP_DESTINATIONPATH, RHP_SRCGGWC, RHP_SRCGGPATH, RHP_DSTGGWC, RHP_DSTGGPATH, RHP_DBNAME, RHP_IMAGE, RHP_IMAGETYPE, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

ZDTUPGRADE_DATABASE_DBUA

RHP_OPTYPE, RHP_PHASE, RHP_SOURCEWC, RHP_SOURCEPATH, RHP_DESTINATIONWC, RHP_DESTINATIONPATH, RHP_SRCGGWC, RHP_SRCGGPATH, RHP_DSTGGWC, RHP_DSTGGPATH, RHP_DBNAME, RHP_IMAGE, RHP_IMAGETYPE, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

ZDTUPGRADE_DATABASE_SWITCHBACK

RHP_OPTYPE, RHP_PHASE, RHP_SOURCEWC, RHP_SOURCEPATH, RHP_DESTINATIONWC, RHP_DESTINATIONPATH, RHP_SRCGGWC, RHP_SRCGGPATH, RHP_DSTGGWC, RHP_DSTGGPATH, RHP_DBNAME, RHP_IMAGE, RHP_IMAGETYPE, RHP_PROGRESSLISTENERHOST, RHP_PROGRESSLISTENERPORT, RHP_VERSION, RHP_CLI, RHP_USERACTIONDATA

Table 5-4 User-Defined Operations Parameters

Parameter Description
RHP_OPTYPE

The operation type for which the user action is being executed, as listed in the previous table.

RHP_PHASE

Indicates whether the user action is executed before or after the operation; the value is either PRE or POST.

RHP_SOURCEWC

The source working copy name for a patch or upgrade operation.

RHP_SOURCEPATH

The path of the source working copy home.

RHP_DESTINATIONWC

The destination working copy name for a patch or upgrade operation.

RHP_DESTINATIONPATH

The path of the destination working copy home.

RHP_SRCGGWC

The name of the version of the Oracle GoldenGate working copy from which you want to upgrade.

RHP_SRCGGPATH

The absolute path of the version of the Oracle GoldenGate software home from which you want to upgrade.

RHP_DSTGGWC

The name of the version of the Oracle GoldenGate working copy to which you want to upgrade.

RHP_DSTGGPATH

The absolute path of the version of the Oracle GoldenGate software home to which you want to upgrade.

RHP_PATH

This is the path to the location of the software home. This parameter represents the path on the local node from where the RHPCTL command is being run for an IMPORT_IMAGE operation. For all other operations, this path is present on the site where the operation is taking place.

RHP_PATHOWNER

The owner of the path for the gold image that is being imported.

RHP_PROGRESSLISTENERHOST

The host on which the progress listener is listening. You can use this parameter, together with a progress listener port, to create a TCP connection to print output to the console on which the RHPCTL command is being run.

RHP_PROGRESSLISTENERPORT

The port on which the progress listener host is listening. You can use this parameter, together with a progress listener host name, to create a TCP connection to print output to the console on which the RHPCTL command is being run.

RHP_IMAGE

The image associated with the operation. In the case of a move operation, it will reflect the name of the destination image.

RHP_IMAGETYPE

The image type of the image associated with the operation. In the case of a move operation, it will reflect the name of the destination image.

RHP_VERSION

The version of the Oracle Grid Infrastructure software running on the Oracle Fleet Patching and Provisioning Server.

RHP_CLI

The exact command that was run to invoke the operation.

RHP_STORAGETYPE

The type of storage for the home (either LOCAL or RHP_MANAGED).

RHP_USER

The user for whom the operation is being performed.

RHP_NODES

The nodes on which a database will be created.

RHP_ORACLEBASE

The Oracle base location for the provisioned home.

RHP_DBNAME

The name of the database to be created.

RHP_CLIENT

The name of the client cluster.

RHP_DATAPATCH

This parameter is set to TRUE at the conclusion of the user action on the node where the SQL patch will be run after the move database operation is complete.

RHP_USERACTIONDATA

This parameter is present in all of the operations and is used to pass user-defined items to the user action as an argument during runtime.

Example of User-Defined Action

Suppose there is an image type, APACHESW, to use for provisioning and managing Apache deployments, and a gold image of Apache named apacheinstall. The following example shows how to create a user action that runs before provisioning any working copy of the Apache gold image.

The following is a sample user action script named addapache_useraction.sh:

$ cat /scratch/apacheadmin/addapache_useraction.sh
#!/bin/sh

#refer to arguments using argument names 
touch /tmp/SAMPLEOUT.txt;
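#each argument is passed in the form NAME=VALUE, so exporting it
#defines the corresponding RHP_* environment variable used below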
for i in "$@"
do
    export $i
done

echo "OPTYPE = $RHP_OPTYPE" >> /tmp/SAMPLEOUT.txt;
echo "PHASE = $RHP_PHASE" >> /tmp/SAMPLEOUT.txt;
echo "WORKINGCOPY = $RHP_WORKINGCOPY" >> /tmp/SAMPLEOUT.txt;
echo "PATH = $RHP_PATH" >> /tmp/SAMPLEOUT.txt;
echo "STORAGETYPE = $RHP_STORAGETYPE" >> /tmp/SAMPLEOUT.txt;
echo "USER = $RHP_USER" >> /tmp/SAMPLEOUT.txt;
echo "NODES = $RHP_NODES" >> /tmp/SAMPLEOUT.txt;
echo "ORACLEBASE = $RHP_ORACLEBASE" >> /tmp/SAMPLEOUT.txt;
echo "DBNAME = $RHP_DBNAME" >> /tmp/SAMPLEOUT.txt;
echo "PROGRESSLISTENERHOST = $RHP_PROGRESSLISTENERHOST" >> /tmp/SAMPLEOUT.txt;
echo "PROGRESSLISTENERPORT = $RHP_PROGRESSLISTENERPORT" >> /tmp/SAMPLEOUT.txt;
echo "IMAGE = $RHP_IMAGE" >> /tmp/SAMPLEOUT.txt;
echo "IMAGETYPE = $RHP_IMAGETYPE" >> /tmp/SAMPLEOUT.txt;
echo "RHPVERSION = $RHP_VERSION" >> /tmp/SAMPLEOUT.txt;
echo "CLI = $RHP_CLI" >> /tmp/SAMPLEOUT.txt;
echo "USERACTIONDATA = $RHP_USERACTIONDATA" >> /tmp/SAMPLEOUT.txt;
$

The script is registered to run at the start of rhpctl add workingcopy commands. The add working copy operation aborts if the script fails.

The following command creates a user action called addapachepre:

$ rhpctl add useraction -optype ADD_WORKINGCOPY -pre -onerror ABORT -useraction
  addapachepre -actionscript /scratch/apacheadmin/addapache_useraction.sh
  -runscope ONENODE

The following command registers the user action for the APACHESW image type:

$ rhpctl modify imagetype -imagetype APACHESW -useractions addapachepre

The registered user action is invoked automatically at the start of commands that deploy a working copy of any image of the APACHESW type, such as the following:

$ rhpctl add workingcopy -workingcopy apachecopy001 -image apacheinstall 
  -path /scratch/apacheadmin/apacheinstallloc -sudouser apacheadmin -sudopath
  /usr/local/bin/sudo -node targetnode003 -user apacheadmin -useractiondata "sample"

The sample script creates the /tmp/SAMPLEOUT.txt output file. Based on the example command, the output file contains:

$ cat /tmp/SAMPLEOUT.txt
OPTYPE = ADD_WORKINGCOPY
PHASE = PRE
WORKINGCOPY = apachecopy001
PATH = /scratch/apacheadmin/apacheinstallloc
STORAGETYPE =
USER = apacheadmin
NODES = targetnode003
ORACLEBASE =
DBNAME =
PROGRESSLISTENERHOST = mds11042003.my.example.com
PROGRESSLISTENERPORT = 58068
IMAGE = apacheinstall
IMAGETYPE = APACHESW
RHPVERSION = 12.2.0.1.0
CLI = rhpctl__add__workingcopy__-image__apacheinstall__-path__/scratch/apacheadmin
 /apacheinstallloc__-node__targetnode003__-useractiondata__sample__
 -sudopath__/usr/local/bin/sudo__-workingcopy__apachecopy001__-user__apacheadmin__
 -sudouser__apacheadmin__
USERACTIONDATA = sample
$

Notes:

  • In the preceding output example, empty values terminate with an equals sign (=).

  • The spaces in the command-line value of the RHP_CLI parameter are replaced by two underscore characters (__) to differentiate this from other parameters.

Job Scheduler for Operations

The Oracle Fleet Patching and Provisioning job scheduler provides a mechanism to submit an operation at a scheduled time instead of running the command immediately, to query the metadata of the job, and then to delete the job from the repository.

The Oracle Fleet Patching and Provisioning job scheduler includes the following features:
  • Enables you to schedule a command to run at a specific point in time by providing the time value

  • Performs the job and stores the metadata for the job, along with the current status of the job

  • Stores the logs for each of the jobs that have run or are running

  • Enables you to query job details (for all jobs or for specific jobs, based on the user roles)

  • Deletes jobs

  • Authorizes the running, querying, and deleting of jobs, based on role-based access for users

Use the -schedule timer_value command parameter with any of the following RHPCTL commands to schedule certain Oracle Fleet Patching and Provisioning operations:
  • rhpctl add workingcopy

  • rhpctl import image

  • rhpctl delete image

  • rhpctl add database

  • rhpctl move gihome

  • rhpctl move database

  • rhpctl upgrade database

  • rhpctl addnode database

  • rhpctl deletenode database

  • rhpctl delete workingcopy

For example:
$ rhpctl add workingcopy -workingcopy 18_3 -image 18_3_Base -oraclebase /u01/app/oracle -schedule 2016-12-21T19:13:17+05

All scheduled commands run according to the time zone of the server, as expressed in the ISO 8601 time value, and RHPCTL displays command results in that same time zone.

Command Results

RHPCTL stores any command that is run from the command queue on the Oracle Fleet Patching and Provisioning Server. When you query a command result by specifying the command identifier, RHPCTL returns the path to the job output file, along with the results.

Job Operation

When you run an RHPCTL command with the -schedule parameter, the operation creates a job with a unique job ID that you can query to obtain the status of the job.
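For example, assuming a scheduled command returned a job with ID 42 (the job ID here is illustrative), you could check its status and remove it after completion with commands similar to the following:

$ rhpctl query job -jobid 42
$ rhpctl delete job -jobid 42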

Job Status

At any point in time, a job can be in any of the following states:
EXECUTED | TIMER_RUNNING | EXECUTING | UNKNOWN | TERMINATED
  • EXECUTED: The job is complete.

  • TIMER_RUNNING: The timer for the job is still running.

  • EXECUTING: The timer for the job has expired and the job is running.

  • UNKNOWN: There is an unexpected failure due to issues such as a target going down, nodes going down, or any resource failures.

  • TERMINATED: There is an abrupt failure or the operation has stopped.

Oracle Restart Patching and Upgrading

You can use Oracle Fleet Patching and Provisioning to patch and upgrade Oracle Restart using gold images.

You can move a single-node Oracle Restart deployment to an Oracle home that you provision from a gold image that includes any patches. Oracle Fleet Patching and Provisioning ensures that configuration files, such as listener.ora, are copied to the new Oracle home.

You can also use Oracle Fleet Patching and Provisioning to upgrade Oracle Restart using gold images: upgrade the Oracle Restart environment by moving it to an Oracle home that you provision from a gold image of the later version. Oracle Fleet Patching and Provisioning updates the configuration and inventory settings.

Use an RHPCTL command similar to the following to patch Oracle Restart:
rhpctl move gihome -sourcewc Oracle_Restart_home_1 -destwc Oracle_Restart_home_2 
-targetnode Oracle_Restart_node {superuser credentials}
Use an RHPCTL command similar to the following to upgrade Oracle Restart:
rhpctl upgrade gihome -sourcewc source_Oracle_Restart_home -destwc higher_version_Oracle_Restart_home 
-targetnode Oracle_Restart_node {superuser credentials}
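
For example, a minimal sketch of an Oracle Restart patching move, with illustrative working copy, node, and sudo credential values:

rhpctl move gihome -sourcewc restart19c -destwc restart19c_psu -targetnode node7
-sudouser oracle -sudopath /usr/local/bin/sudo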

Oracle Fleet Patching and Provisioning Use Cases

Review these topics for step-by-step procedures to provision, patch, and upgrade your software using Oracle Fleet Patching and Provisioning.

Oracle Fleet Patching and Provisioning is a software lifecycle management solution and helps standardize patching, provisioning, and upgrade of your standard operating environment.

Creating an Oracle Grid Infrastructure 19c Deployment

Provision Oracle Grid Infrastructure software on two nodes that do not currently have a Grid home, and then configure Oracle Grid Infrastructure to form a multi-node Oracle Grid Infrastructure installation.

Before You Begin

Provide configuration details for storage, network, users and groups, and node information for installing Oracle Grid Infrastructure in a response file. You can store the response file in any location on the Fleet Patching and Provisioning Server.

You can provision an Oracle Standalone Cluster, Oracle Application Clusters, Oracle Domain Services Cluster, or Oracle Member Clusters. Ensure that the response file has the required cluster configuration details.

Ensure that you have storage, network, and operating system requirements configured as stated in the Oracle Grid Infrastructure Installation Guide.

Procedure

  • From the Fleet Patching and Provisioning Server, run the command:
    $ rhpctl add workingcopy -workingcopy GI19c -image GI_HOME_19c -responsefile /u01/app/rhpinfo/GI_19c_install.rsp {authentication_option}

    GI19c is the working copy based on the image GI_HOME_19c.

    /u01/app/rhpinfo/GI_19c_install.rsp is the response file location.

    The Cluster Verification Utility checks the preinstallation configuration against the requirements, and then Fleet Patching and Provisioning configures Oracle Grid Infrastructure.

Oracle Grid Infrastructure 19c is provisioned according to the settings in the specified response file.

During provisioning, if an error occurs, the procedure stops and allows you to fix the error. After fixing the error, you can resume the provisioning operation from where it last stopped.


Provisioning an Oracle Database Home and Creating a Database

This procedure provisions Oracle Database 19c software and creates Oracle Database instances.

Procedure

  1. From the Fleet Patching and Provisioning Server, provision the Oracle Database home software:
    $ rhpctl add workingcopy -image db19c -path /u01/app/dbusr/product/19.0.0/db19c
      -client client_001 -oraclebase /u01/app/dbusr/ -workingcopy db19wc 
    The command provisions the working copy db19wc to the specified path on the cluster client_001, from the image db19c.
  2. Create the database instance:
    $ rhpctl add database -workingcopy db19wc -dbname db -dbtype RAC
    The command creates an Oracle RAC database db. You can use the add database command repeatedly to create more instances on the working copy.


Provisioning a Pluggable Database

You can provision a pluggable database (PDB) on an existing container database (CDB) running in a provisioned database working copy.

After you create a working copy of a gold image, provision that working copy to a target, and create a database as a multitenant CDB, you can add a PDB to the CDB using the rhpctl addpdb database command.
  • The following command example creates a PDB called pdb19c on a CDB called raccdb19c, which is on a working copy called wc_db19c:
    $ rhpctl addpdb database -workingcopy wc_db19c -cdbname raccdb19c -pdbname pdb19c
  • Use the rhpctl deletepdb database command to delete a PDB from an existing CDB on a working copy.
    The following command example deletes a PDB called pdb19c on a CDB called raccdb19c, which is on a working copy called wc_db19c:
    $ rhpctl deletepdb database -workingcopy wc_db19c -cdbname raccdb19c -pdbname pdb19c

Upgrading to Oracle Grid Infrastructure 19c

This procedure uses Fleet Patching and Provisioning to upgrade your Oracle Grid Infrastructure cluster from 18c to 19c.

Before You Begin

To upgrade to Oracle Grid Infrastructure 19c, your source must be Oracle Grid Infrastructure 11g release 2 (11.2.0.3 or 11.2.0.4), Oracle Grid Infrastructure 12c release 1 (12.1.0.2), Oracle Grid Infrastructure 12c release 2 (12.2.0.1), or Oracle Grid Infrastructure 18c.

Ensure that groups configured in the source home match those in the destination home.

Ensure that you have an image GI_HOME_19c of the Oracle Grid Infrastructure 19c software to provision your working copy.

GI18c is the active Grid Infrastructure home on the cluster being upgraded. It is a working copy because, in this example, Fleet Patching and Provisioning provisioned the cluster. Fleet Patching and Provisioning can also upgrade clusters whose Grid Infrastructure homes are unmanaged, that is, homes that Fleet Patching and Provisioning did not provision.

Procedure

  1. Provision a working copy of the Oracle Grid Infrastructure 19c software:
    $ rhpctl add workingcopy -workingcopy GI19c -image GI_HOME_19c {authentication_option}

    GI19c is the working copy based on the image GI_HOME_19c.

  2. Upgrade your target cluster to the GI19c working copy:
    $ rhpctl upgrade gihome -sourcewc GI18c -destwc GI19c
    Fleet Patching and Provisioning identifies the cluster to upgrade based on the name of the source working copy, and upgrades it to the working copy GI19c.

Patching Oracle Grid Infrastructure Without Changing the Grid Home Path

This procedure explains how to patch Oracle Grid Infrastructure without changing the Grid home path.

Before You Begin

  • Ensure that the gold image containing the Grid home is imported and exists on the Fleet Patching and Provisioning Server.

  • Ensure that the directory you provide in the -path option is not an existing directory.

  • The source Grid home must be a managed home (provisioned by Fleet Patching and Provisioning). It does not need to be an Oracle Layered File System (OLFS)-compliant home.

  • The Grid home must be Oracle Grid Infrastructure 19.1 or later.

Procedure for Patching

  • To move a non-OLFS-compliant Grid home to an OLFS Grid home, from the Fleet Patching and Provisioning Server, run two commands similar to the following:
    $ rhpctl add workingcopy -workingcopy GI_HOME_19c_PSU1 -aupath
      /u01/app/orabase/product/19.1.0/aupath -image image_name
      -client client_name -softwareonly
    $ rhpctl move gihome -sourcewc GI_HOME_19c -destwc GI_HOME_19c_PSU1 -agpath
      /u01/app/orabase/product/19.1.0/agpath -path
      /u01/app/orabase/product/19.1.0/grid
  • To move an OLFS-compliant Grid home to a patched version, run two commands similar to the following:
    $ rhpctl add workingcopy -workingcopy gihome19c_psu1_lpm -aupath
      /u01/app/orabase/product/19.1.0/aupath -image image_name -client
      client_name -softwareonly
    $ rhpctl move gihome -sourcewc gihome19c_lpm -destwc gihome19c_psu1_lpm

Patching Oracle Grid Infrastructure and Oracle Databases Simultaneously

This procedure patches Oracle Grid Infrastructure and Oracle Databases on the cluster to the latest patch level without cluster downtime.

Before You Begin

In this procedure, Oracle Grid Infrastructure 19c is running on the target cluster. Working copy GI_HOME_19c_WCPY is the active Grid home on this cluster. Working copy DB_HOME_19c_WCPY runs an Oracle RAC 19c Database with running database instance db1, and working copy DB_HOME_18c_WCPY runs an Oracle RAC 18c Database with running database instance db2.

Ensure that you have images GI_HOME_19c_PSU1, DB_HOME_19c_PSU1, and DB_HOME_18c_PSU5, with the required patches for Oracle Grid Infrastructure and Oracle RAC Database, on the Fleet Patching and Provisioning Server.

Ensure that the groups configured in the source home match those in the destination home.

Procedure

  1. Prepare target Oracle homes as follows:
    1. Provision software-only Grid home on the cluster to be patched:
      $ rhpctl add workingcopy -workingcopy GI_HOME_19c_PATCHED_WCPY 
        -image GI_HOME_19c_PSU1 -client CLUSTER_005 -softwareonly
    2. Provision each release Database home, without database instances, to be patched:
      $ rhpctl add workingcopy -workingcopy DB_HOME_19c_PATCHED_WCPY 
        -image DB_HOME_19c_PSU1
      $ rhpctl add workingcopy -workingcopy DB_HOME_18c_PATCHED_WCPY 
        -image DB_HOME_18c_PSU5
  2. Patch Oracle Grid Infrastructure and all Oracle RAC Databases on node1 as follows:
    $ rhpctl move gihome -sourcewc GI_HOME_19c_WCPY -destwc GI_HOME_19c_PATCHED_WCPY -auto
      -dbhomes DB_HOME_18c_WCPY=DB_HOME_18c_PATCHED_WCPY,DB_HOME_19c_WCPY=DB_HOME_19c_PATCHED_WCPY  -targetnode node1 {authentication_option}

    When you run the command, you move your active Oracle Grid Infrastructure from working copy GI_HOME_19c_WCPY to GI_HOME_19c_PATCHED_WCPY, Oracle RAC Database db1 from DB_HOME_19c_WCPY to DB_HOME_19c_PATCHED_WCPY, and Oracle RAC Database db2 from DB_HOME_18c_WCPY to DB_HOME_18c_PATCHED_WCPY.

Patching Oracle Database 19c Without Downtime

This procedure explains how to patch Oracle Database 19c to the latest patch level without bringing down the database.

Before You Begin

You have an Oracle Database db19c that you want to patch to the latest patch level.

Ensure that the working copy db19c_psu based on the image DB19c_PSU contains the latest patches and is available.

Procedure

From the Fleet Patching and Provisioning Server, run one of the following commands as per your source and destination database:

  1. To patch an Oracle Database home that is managed by Fleet Patching and Provisioning, where working copies exist for both the source and the patched homes, run:
    rhpctl move database -sourcewc db19c -patchedwc db19c_psu

    db19c is the source working copy of the database being patched.

    db19c_psu is the working copy of the Oracle Database software with patches applied, based on the image DB19c_PSU.

  2. To patch an unmanaged Oracle Database home (the source working copy does not exist because the Oracle home is not managed by Fleet Patching and Provisioning), run:
    rhpctl move database -sourcehome /u01/app/orabase/product/19.0.0/dbhome_1
     -patchedwc db19c_psu -targetnode node1

    targetnode specifies the node on which the database to be patched is running, because the source Oracle Database is on a 19c cluster.

    /u01/app/orabase/product/19.0.0/dbhome_1 is the path of the database home being patched.

    db19c_psu is the working copy of the Oracle Database software with patches applied, based on the image DB19c_PSU.

    Use the saved gold image for standardized patching of all your databases of release 19c to the same patch level.
  3. If, for some reason, you want to roll back the patches applied to a managed Oracle Database home, run:
    rhpctl move database -sourcewc db19c_psu 
    -patchedwc db19c -ignorewcpatches

    db19c is the working copy of the unpatched database to which you want to roll back.

    db19c_psu is the working copy of the Oracle Database software with patches applied, based on the image DB19c_PSU.

For all Oracle Databases, you can also specify these additional options with the move database command:

  • -keepplacement: For admin-managed Oracle RAC Databases (not Oracle RAC One Node Database), Fleet Patching and Provisioning retains the services on the same nodes after the move.

  • -disconnect: Disconnects all sessions before stopping or relocating services.

  • -drain_timeout: Specify the time, in seconds, allowed for resource draining to be completed for planned maintenance operations. During the draining period, all current client requests are processed, but new requests are not accepted. This option is available only with Oracle Database 12c release 2 (12.2) or later.

  • -stopoption: Specifies the stop mode to use when stopping the database.

  • -nodatapatch: Ensures datapatch is not run for databases you are moving.
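For example, a hypothetical patching move that disconnects sessions and allows up to five minutes for draining (the working copy names repeat the earlier example; the timeout value is illustrative):

rhpctl move database -sourcewc db19c -patchedwc db19c_psu -disconnect -drain_timeout 300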


Upgrading to Oracle Database 19c

This procedure describes how to upgrade an Oracle database from Oracle Database 18c to 19c with a single command, using Fleet Patching and Provisioning, both for managed and unmanaged Oracle homes.

Before you Begin

  • To upgrade to Oracle Database 19c, your source database must be Oracle Database 11g release 2 (11.2.0.3 or 11.2.0.4), Oracle Database 12c release 1 (12.1.0.2), Oracle Database 12c release 2 (12.2.0.1), or Oracle Database 18c.

  • Oracle Grid Infrastructure on which the pre-upgrade database is running must be of the same release or later than the database release to which you are upgrading.

  • The source Oracle home to be upgraded can be a managed working copy, that is, an Oracle home provisioned using Fleet Patching and Provisioning, or an unmanaged home, that is, an Oracle home not provisioned using Fleet Patching and Provisioning. If you are upgrading an unmanaged Oracle home, provide the complete path of the database for upgrade.

Procedure to Upgrade Oracle Database using Fleet Patching and Provisioning

  1. From the Fleet Patching and Provisioning Server, run one of the following commands as per your source and destination database:
    1. To upgrade an Oracle home that is managed by Fleet Patching and Provisioning, where working copies exist for both the source and the destination homes, run:
      $ rhpctl upgrade database -dbname test_database -sourcewc db18c -destwc db19c
        {authentication_option}

      test_database is the name of the database being upgraded.

      db18c is the source working copy of the pre-upgrade database.

      db19c is the working copy of the upgraded Oracle Database software.

    2. To upgrade an unmanaged Oracle home (source working copy does not exist because the Oracle home is not managed by Fleet Patching and Provisioning), run:
      $ rhpctl upgrade database -dbname test_database -sourcehome
        /u01/app/orabase/product/18.0.0/dbhome_1 -destwc db19c
        -targetnode node1 {authentication_option}

      /u01/app/orabase/product/18.0.0/dbhome_1 is the path of the database being upgraded.

      db19c is the working copy of the upgraded Oracle Database software.

      targetnode specifies the node on which the database to be upgraded is running, because the source Oracle Database is on an 18c cluster.

The upgraded database is now managed by Fleet Patching and Provisioning. You can then use Fleet Patching and Provisioning to ensure that your database stays patched to the latest level.

Note:

During upgrade, if an error occurs, the procedure stops and allows you to fix the error. After fixing the error, you can resume the upgrade operation from where it last stopped.


Adding a Node to a Cluster and Scaling an Oracle RAC Database to the Node

You can add a node to your two-node cluster by using Fleet Patching and Provisioning to add the node, and then extend an Oracle RAC database to the new node.

Before You Begin

In this procedure, Oracle Grid Infrastructure 19c is running on the cluster. Working copy GI_HOME_19c_WCPY is the active Grid home on this cluster.

The Oracle RAC database home runs on the working copy DB_HOME_19c_WCPY.

Ensure that you have storage, network, and operating system requirements configured for the new node as stated in Oracle Grid Infrastructure Installation Guide.

Procedure

  1. From the Fleet Patching and Provisioning Server, run the following command to add a node to the existing Oracle Grid Infrastructure working copy:
    rhpctl addnode gihome -workingcopy GI_HOME_19c_WCPY -newnodes n3:n3-vip {authentication_option}
    The command extends the cluster by adding node n3.
  2. Add instances to the administrator-managed Oracle RAC database on the new node:
    rhpctl addnode database -workingcopy DB_HOME_19c_WCPY -dbname db321 -node n3 {authentication_option}
    The command extends the database home to node n3 and adds an instance of database db321 on this node.

Adding Gold Images for Fleet Patching and Provisioning

Create gold images of software homes and store them on the Fleet Patching and Provisioning Server, to use later to provision Oracle homes.

Before You Begin

The Oracle home to be used for creating the gold image can be on the Fleet Patching and Provisioning Server, or Fleet Patching and Provisioning Client, or any target machine that the Fleet Patching and Provisioning Server can communicate with.

Procedure

Create gold images of Oracle homes in any of the following ways and store them on the Fleet Patching and Provisioning Server:

  1. Import an image from an installed Oracle home on the Fleet Patching and Provisioning Server:
    rhpctl import image -image db19c -path /share/software/19c/dbhome -imagetype ORACLEDBSOFTWARE 

    A gold image of the Oracle Database 19c software, with image type ORACLEDBSOFTWARE, is created and stored on the Fleet Patching and Provisioning Server.

    You can also create gold images of Oracle Grid Infrastructure, Oracle GoldenGate, or any other software by specifying -imagetype as ORACLEGISOFTWARE, ORACLEGGSOFTWARE, or SOFTWARE, respectively.

  2. Import an image from an installed Oracle home on a Fleet Patching and Provisioning Client by running the following command from the Fleet Patching and Provisioning Client:
    rhpctl import image -image db19c -path /u01/app/dbusr/product/19.0.0/

    The command creates and adds the image db19c based on the local Oracle home installed in the specified path.

Note:

You cannot directly use images as software homes. Use images to create working copies of software homes.
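In either case, you can confirm that an image was imported by querying it, similar to the following (using the image name from the preceding examples):

rhpctl query image -image db19c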

User Actions for Common Fleet Patching and Provisioning Tasks

You can use Fleet Patching and Provisioning user actions to perform many tasks, such as installing and configuring any type of software and running scripts.

Deploying a Web Server

The following procedure demonstrates automated deployment of Apache Web Server using Fleet Patching and Provisioning:

  1. Create a script to install Apache Web server, as follows:
    1. On the Fleet Patching and Provisioning Server, download and extract the Apache Web server installation kit.
    2. Create the script to install, configure, and start the Apache Web server.
  2. Register the script as a user action with Fleet Patching and Provisioning by running the following command on the Fleet Patching and Provisioning Server:
    rhpctl add useraction -useraction apachestart 
    -actionscript /user1/useractions/apacheinstall.sh 
    -post -optype ADD_WORKINGCOPY -onerror ABORT

    The preceding command adds the apachestart user action for the action script stored in the specified directory. As per the specified properties, the user action runs after the ADD_WORKINGCOPY operation and aborts if there is any error.

  3. Create an image type and associate the user action with the image type, as follows:
    rhpctl add imagetype -imagetype apachetype -basetype SOFTWARE 
    -useractions "apachestart"

    The preceding command creates a new image type called apachetype, a derivative of the basic image type, SOFTWARE, with an associated user action apachestart.

  4. Create a gold image of the image type, as follows:
    rhpctl import image -image apacheinstall -path /user1/apache2219_kit/ 
    -imagetype apachetype

    The preceding command creates a gold image, apacheinstall, with the script for Apache Web server installation, in the specified path, based on the imagetype you created earlier.

    To view the properties of this image, run the rhpctl query image -image apacheinstall command.

  5. Deploy a working copy of the gold image on the target, as follows:
    rhpctl add workingcopy -workingcopy apachecopy -image apacheinstall 
    -path /user1/apacheinstallloc -sudouser user1 
    -sudopath /usr/local/bin/sudo -node node1 -user user1 
    -useractiondata "/user1/apachehome:1080:2.2.19"

    Fleet Patching and Provisioning provisions the software to the target and runs the apachestart script specified in the user action. You can provide the Apache Web server configuration details such as port number with the useractiondata option. If the target is a Fleet Patching and Provisioning Client, then you need not specify sudo credentials.

Registering Multiple Scripts Using a Single User Action

Run multiple scripts as part of a user action plug-in by registering a wrapper script and bundled custom scripts. The wrapper script extracts the bundled scripts, which are copied under the directory of the wrapper script, and then runs those extracted scripts as necessary, similar to the following procedure:

  1. The following command creates a user action called ohadd_ua, and associates a wrapper script, wc_add.sh, with a zip file containing other scripts:
    rhpctl add useraction -useraction ohadd_ua -actionscript
    /scratch/crsusr/wc_add.sh -actionfile /scratch/crsusr/pack.zip -pre -runscope
    ALLNODES -optype ADD_WORKINGCOPY

    The wrapper script, wc_add.sh, extracts the pack.zip file into the script path, a temporary path to which the user action scripts are copied. The wrapper script can invoke any scripts contained in the file.

  2. The following command creates an image type, sw_ua, for the ohadd_ua user action:
    rhpctl add imagetype -imagetype sw_ua -useractions ohadd_ua -basetype SOFTWARE
  3. The following command creates an image called swimgua from the software specified in the path:
    rhpctl import image -image swimgua -path /tmp/custom_sw -imagetype sw_ua
  4. The following command adds a working copy called wcua and runs the wc_add.sh script:
    rhpctl add workingcopy -workingcopy wcua -image swimgua -client targetcluster