PRVE-00004 to PRVE-10269
- PRVE-00004: The given operator is not supported.
-
Cause: This is an internal error.
- PRVE-00005: The string does not represent a numeric value.
-
Cause: This is an internal error.
- PRVE-00008: Could not find command name in description of executable
-
Cause: This is an internal error.
- PRVE-00009: Failed to resolve variable "{0}"
-
Cause: This is an internal error.
- PRVE-00010: Properly formatted RESULT tags are not found in the command output: "{0}".
-
Cause: This is an internal error.
- PRVE-00011: Properly formatted COLLECTED tags are not found in the command output.
-
Cause: This is an internal error.
- PRVE-00012: Execution of verification executable failed.
-
Cause: An error was encountered while executing the verification executable.
- PRVE-00013: Execution of analyzer failed.
-
Cause: An error was encountered while executing the analyzer.
- PRVE-00016: Unable to write data to file "{0}".
-
Cause: The specified path is not writable.
- PRVE-00018: Encountered a NULL executable argument.
-
Cause: This is an internal error.
- PRVE-00021: HugePages feature is not enabled on nodes "{0}"
-
Cause: Available memory is greater than 4 GB, but the OS HugePages feature is not enabled.
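As a quick manual check, the kernel's HugePages configuration can be read from /proc/meminfo; a minimal sketch (the page count shown is illustrative and depends on SGA sizing):

```sh
# A nonzero HugePages_Total indicates the HugePages feature is enabled.
grep -i '^HugePages' /proc/meminfo
# To reserve huge pages (value illustrative; size to fit the SGA):
# sysctl -w vm.nr_hugepages=1024
```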
- PRVE-00022: Could not get available memory on node "{0}"
-
Cause: An error occurred accessing /proc/meminfo to determine available system memory.
- PRVE-00023: HugePages feature is not supported on node "{0}"
-
Cause: The HugePages feature of the Linux operating system was not supported on the indicated node.
- PRVE-00024: Transparent HugePages feature is enabled on node "{0}"
-
Cause: The Transparent HugePages feature was found to be always enabled on the indicated node.
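On most Linux distributions the active Transparent HugePages mode can be inspected through sysfs; a minimal sketch (the sysfs path can differ on some older kernels, e.g. redhat_transparent_hugepage):

```sh
# The bracketed value is the active mode; '[always]' triggers this finding.
cat /sys/kernel/mm/transparent_hugepage/enabled
# typical output: [always] madvise never
```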
- PRVE-00027: Hardware clock synchronization could not be determined on node "{0}"
-
Cause: The hwclock command is used in the shutdown script, but it is not possible to establish that the --systohc option is enabled.
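The check looks for an hwclock invocation with --systohc in the halt script; a hedged way to inspect this by hand:

```sh
# Show hwclock usage in the shutdown script referenced by this check.
grep -n 'hwclock' /etc/rc.d/init.d/halt
# An entry such as '/sbin/hwclock --systohc' indicates the hardware
# clock is synchronized from the system clock at shutdown.
```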
- PRVE-00028: Hardware clock synchronization could not be established as halt script does not exist on node "{0}"
-
Cause: The shutdown or halt script /etc/rc.d/init.d/halt does not exist.
- PRVE-00029: Hardware clock synchronization check could not run on node "{0}"
-
Cause: The shutdown or halt script may not be accessible or a command may have failed.
- PRVE-00033: Core files are not enabled on node "{0}"
-
Cause: The core file setting currently prevents the creation of a core file for process aborts and exceptions.
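The core file limit for the current shell can be inspected with ulimit; a minimal sketch:

```sh
ulimit -c            # '0' means core files are disabled
ulimit -c unlimited  # enable core files for the current session only
# Persistent settings belong in /etc/security/limits.conf (the 'core' item).
```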
- PRVE-00034: Error encountered while trying to obtain the core file setting on node "{0}"
-
Cause: An error occurred while attempting to determine the core file setting.
- PRVE-00038: The SSH LoginGraceTime setting on node "{0}" may result in users being disconnected before login is completed
-
Cause: The LoginGraceTime timeout value is too low, causing users to be disconnected before their logins complete.
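The effective setting lives in the sshd configuration; a minimal sketch, assuming a standard OpenSSH layout:

```sh
grep -i '^LoginGraceTime' /etc/ssh/sshd_config
# The OpenSSH default is 120 seconds, e.g.:
#   LoginGraceTime 120
```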
- PRVE-00039: Error encountered while trying to obtain the LoginGraceTime setting on node "{0}"
-
Cause: An error occurred while attempting to obtain the LoginGraceTime setting.
- PRVE-00042: Maximum locked memory "{0}" limit when automatic memory management is enabled is less than the recommended value in the file "{1}" [Expected = "{2}", Retrieved="{3}"] on node "{4}"
-
Cause: The value of maximum locked memory is less than the recommended value for automatic memory management.
- PRVE-00043: Error encountered when checking maximum locked memory limit on node "{0}"
-
Cause: An error was encountered when retrieving the value of maximum locked memory.
- PRVE-00044: No entry in configuration file when checking locked memory limit on node "{0}"
-
Cause: No entry for maximum locked memory limit was found in /etc/security/limits.conf.
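The memlock entries can be inspected directly; a minimal sketch, where the user name 'oracle' and the values (in KB) are illustrative:

```sh
grep -E '^[^#]*(soft|hard)[[:space:]]+memlock' /etc/security/limits.conf
# Typical entries:
#   oracle soft memlock 3145728
#   oracle hard memlock 3145728
```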
- PRVE-00047: Error when checking IP port availability
-
Cause: Command failed when checking port availability.
- PRVE-00048: Check for IP port "{0}" availability failed
-
Cause: The indicated IP port was not available.
- PRVE-00052: The syslog.conf log file sync setting on node "{0}" may result in users being disconnected before logins are completed
-
Cause: Not all log file specifications in /etc/syslog.conf are prefixed by the '-' character, which causes log messages to be written to disk synchronously before control is released. This may cause users to be disconnected before their logins complete.
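Synchronous log targets are the rules whose file path is not prefixed with '-'; a rough sketch that lists them:

```sh
# Print non-comment rules whose target is a plain file path
# (asynchronous targets start with '-/', so they are excluded here).
awk '!/^#/ && $2 ~ /^\// { print }' /etc/syslog.conf
```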
- PRVE-00053: Error encountered while trying to obtain the syslog.conf log file sync settings on node "{0}"
-
Cause: An error occurred while attempting to read the log file sync settings specified in file: /etc/syslog.conf.
- PRVE-00054: File '/etc/syslog.conf' not found on node "{0}"
-
Cause: An error occurred while attempting to read the log file sync settings specified in file: '/etc/syslog.conf'. The file '/etc/syslog.conf' was not found on the system.
- PRVE-00055: Cannot read the file '/etc/syslog.conf' on node "{0}"
-
Cause: The user did not have permissions to read the '/etc/syslog.conf' file on the system.
- PRVE-00059: no default entry or entry specific to user "{0}" was found in the configuration file "{1}" when checking the maximum locked memory "{2}" limit on node "{3}"
-
Cause: There was no default or user-specific entry for maximum locked memory limit found in the indicated configuration file.
- PRVE-00060: Cannot read the shutdown script file "{0}"
-
Cause: The current user did not have read access to the indicated file.
- PRVE-00067: Maximum locked memory "{0}" limit setting is less than the recommended value in the file "{1}" [Expected = "{2}", Actual="{3}"] on node "{4}".
-
Cause: A check of maximum locked memory settings determined that the value of maximum locked memory was less than the recommended value of 3 GB in the indicated file for the current user.
- PRVE-00068: Maximum locked memory "{0}" limit setting is less than the recommended value in the file "{1}" [Expected = "{2}", Actual="{3}"] when huge pages are enabled on node "{4}".
-
Cause: A check of maximum locked memory settings determined that the value of maximum locked memory specified for the current user in the indicated file was less than the recommended value of maximum locked memory when huge pages are enabled on the indicated node.
- PRVE-00075: The verification for device special file '/dev/ofsctl' failed; the file is not present on node "{0}".
-
Cause: The device special file '/dev/ofsctl' was expected to be present on the indicated node because the ACFS drivers are installed, but the file was missing.
- PRVE-00079: The UDEV rule for "ofsctl" was not found in the rule file '55-usm.rules' on node "{0}".
-
Cause: The ACFS verification found that the UDEV rule specification 'KERNEL=="ofsctl"' was not found in the rule file '55-usm.rules'.
- PRVE-00080: failed to execute the command 'osdbagrp -a' successfully on node "{0}"
-
Cause: The ACFS verification encountered a problem in attempting to execute the command '$CRS_HOME/bin/osdbagrp -a' to obtain the ASM admin group name.
- PRVE-00082: The device special file attributes did not meet the expected requirements on node "{0}".\n[permissions: Expected="{1}"; Found="{2}"] [owner: Expected="{3}"; Found="{4}"] [group: Expected="{5}"; Found="{6}"]
-
Cause: The file attributes for the device special file '/dev/ofsctl' did not match the expected values.
- PRVE-00083: The UDEV rule specified in the '55-usm.rules' file does not meet the expected requirements on node "{0}".\n[group: Expected="{1}"; Found="{2}"] [mode: Expected="{3}"; Found="{4}"]
-
Cause: The ACFS verification encountered the problem that the UDEV rule defined in the rule file for "ofsctl" did not match the expected values.
- PRVE-00084: Current '/dev/shm/' mount options do not contain one or more required options. [ Found: "{0}"; Missing: "{1}" ].
-
Cause: Required '/dev/shm' mount options were missing.
- PRVE-00085: Configured '/dev/shm/' mount options do not contain one or more required options. [ Found: "{0}"; Missing: "{1}" ].
-
Cause: Required '/dev/shm' mount options were missing.
- PRVE-00086: Current '/dev/shm/' mount options include one or more invalid options. [ Found: "{0}"; Invalid: "{1}" ].
-
Cause: One or more invalid '/dev/shm' mount options were found.
- PRVE-00087: Configured '/dev/shm/' mount options include one or more invalid options. [ Found: "{0}"; Invalid: "{1}" ].
-
Cause: One or more invalid '/dev/shm' mount options were found.
- PRVE-00088: '/dev/shm/' mount options did not satisfy the requirements on node "{0}".
-
Cause: A '/dev/shm/' mount options mismatch was found. Possible reasons: 1) one or more required mount options were missing from the current and configured mount options; 2) one or more invalid mount options were present in the current and configured mount options.
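The current and configured mount options can be compared side by side; a minimal sketch (the fstab line shown is illustrative only):

```sh
findmnt -o TARGET,OPTIONS /dev/shm   # current mount options
grep '/dev/shm' /etc/fstab           # configured options, e.g.:
#   tmpfs /dev/shm tmpfs rw,exec,size=4g 0 0
```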
- PRVE-00093: The 'DefaultTasksMax' parameter is set to an incorrect value in file '/etc/systemd/system.conf' on node "{0}". [ Found: "{1}"; Expected: "{2}" ].
-
Cause: The DefaultTasksMax parameter was set to an incorrect value in /etc/systemd/system.conf file on the specified node.
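A hedged way to inspect and apply the setting (the expected value is supplied by the check itself):

```sh
grep -i '^DefaultTasksMax' /etc/systemd/system.conf
# After editing the value, re-execute the systemd manager so it takes effect:
systemctl daemon-reexec
```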
- PRVE-00234: Error encountered while trying to obtain the hangcheck_timer setting on node "{0}"
-
Cause: An error occurred while attempting to determine the hangcheck_timer setting.
- PRVE-00243: CSS diagwait is not set to recommended value of 13 on node "{0}"
-
Cause: CSS diagwait does not meet the recommendation.
- PRVE-00244: Error encountered while trying to obtain the CSS diagwait setting on node "{0}"
-
Cause: An error occurred while attempting to determine the CSS diagwait setting.
- PRVE-00253: CSS misscount is not set to the recommended value on node "{0}"
-
Cause: CSS misscount does not meet the requirement.
- PRVE-00254: Error encountered while trying to obtain the CSS misscount setting on node "{0}"
-
Cause: An error occurred while attempting to determine the CSS misscount setting.
- PRVE-00263: CSS reboottime is not set to recommended value of 3 seconds on node "{0}"
-
Cause: CSS reboottime does not meet the requirement.
- PRVE-00264: Error encountered while trying to obtain the CSS reboottime setting on node "{0}"
-
Cause: An error occurred while attempting to determine the CSS reboottime setting.
- PRVE-00273: The value of network parameter "{0}" for interface "{4}" is not configured to the expected value on node "{1}". [Expected="{2}"; Found="{3}"]
-
Cause: The indicated parameter of the indicated interface on the indicated node was not configured to the expected value.
- PRVE-00274: Error encountered while trying to obtain the network parameter setting on node "{0}"
-
Cause: An error occurred while attempting to retrieve network parameter setting.
- PRVE-00284: Error encountered while trying to obtain the virtual memory parameter setting on node "{0}"
-
Cause: An error occurred while attempting to retrieve virtual memory parameter setting.
- PRVE-00294: Error trying to obtain the MTU setting on node "{0}"
-
Cause: An error occurred while attempting to retrieve MTU setting.
- PRVE-00296: Error retrieving cluster interfaces on node "{0}"
-
Cause: Cluster interfaces could not be retrieved on the specified node using 'oifcfg getif'.
- PRVE-00304: Error while checking flow control settings in the E1000 on node "{0}"
-
Cause: An error occurred while attempting to retrieve E1000 flow control settings.
- PRVE-00314: Error while checking default gateway subnet on node "{0}"
-
Cause: An error occurred while attempting to retrieve subnet of default gateway.
- PRVE-00315: Error while checking VIP subnet on node "{0}"
-
Cause: An error occurred while attempting to retrieve subnet of VIP.
- PRVE-00324: Error while checking VIP restart configuration on node "{0}"
-
Cause: An error occurred while attempting to retrieve VIP restart configuration.
- PRVE-00334: Error while checking TCP packet re-transmit rate on node "{0}"
-
Cause: An error occurred while attempting to retrieve TCP packet re-transmit rate.
- PRVE-00343: Network packet reassembly occurring on node "{1}".
-
Cause: A possible cause is a difference in MTU size across the network.
- PRVE-00344: Error while checking network packet reassembly on node "{0}"
-
Cause: An error occurred while attempting to check network packet reassembly.
- PRVE-00354: Error encountered while trying to obtain the CSS disktimeout setting
-
Cause: An error occurred while attempting to determine the CSS disktimeout setting.
- PRVE-00384: Error encountered while trying to obtain the hangcheck reboot setting on node "{0}"
-
Cause: An error occurred while attempting to determine the hangcheck reboot setting.
- PRVE-00394: Error encountered while trying to obtain the hangcheck tick setting on node "{0}"
-
Cause: An error occurred while attempting to determine the hangcheck tick setting.
- PRVE-00404: Error encountered while trying to obtain the hangcheck margin setting on node "{0}"
-
Cause: An error occurred while attempting to determine the hangcheck margin setting.
- PRVE-00414: Error encountered while trying to obtain the listener name on node "{0}"
-
Cause: An error occurred while attempting to determine the listener name.
- PRVE-00420: /dev/shm is not found mounted on node "{0}"
-
Cause: The /dev/shm RAM file system was not found mounted on the indicated node. During database installation it is recommended to have /dev/shm mounted as a RAM file system.
- PRVE-00421: No entry exists in /etc/fstab for mounting /dev/shm
-
Cause: The file /etc/fstab did not have an entry specifying /dev/shm and its size to be mounted.
- PRVE-00422: The size of in-memory file system mounted at /dev/shm is "{0}" megabytes, which does not match the size of "{1}" megabytes configured in /etc/fstab
-
Cause: The size of the mounted RAM file system did not equal the value configured for system startup.
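The two sizes being compared can be reproduced by hand; a minimal sketch:

```sh
df -m /dev/shm              # mounted size, in megabytes
grep '/dev/shm' /etc/fstab  # configured size, e.g. 'size=4096m' in the options
```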
- PRVE-00423: The file /etc/fstab does not exist on node "{0}"
-
Cause: The file /etc/fstab, which should exist on the node, was not found.
- PRVE-00426: The size of in-memory file system mounted as /dev/shm is "{0}" megabytes which is less than the required size of "{1}" megabytes on node "{2}"
-
Cause: The in-memory file system was found mounted with a smaller size than the required size on the identified node.
- PRVE-00427: Failed to retrieve the size of in-memory file system mounted as /dev/shm on node "{0}"
-
Cause: An attempt to retrieve the in-memory file system size failed on the identified node.
- PRVE-00428: No entry exists in /proc/mounts for temporary file system /dev/shm.
-
Cause: A CVU pre-install check for Linux containers failed because it determined that the file /proc/mounts did not have an entry for the temporary file system /dev/shm.
- PRVE-00438: Oracle Solaris Support Repository Updates version "{0}" is older than minimum supported version "{1}" on node "{2}".
-
Cause: A CVU pre-install check found that the indicated version of Oracle Solaris Support Repository Updates was older than the minimum supported version as indicated on the identified node.
- PRVE-00439: Command "{0}" issued on node "{1}" to retrieve the operating system version failed with error "{2}".
-
Cause: A CVU check for minimum SRU version could not complete because an error occurred while attempting to determine the current operating system version on the indicated node using the indicated command.
- PRVE-00453: Reverse path filter parameter "rp_filter" for private interconnect network interfaces "{0}" is not set to 0 or 2 on node "{1}".
-
Cause: Reverse path filter parameter 'rp_filter' was not set to 0 or 2 for identified private interconnect network interfaces on specified node.
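The per-interface value can be checked and persisted with sysctl; a minimal sketch where 'eth1' stands in for a private interconnect interface:

```sh
sysctl net.ipv4.conf.eth1.rp_filter      # current value; 0 or 2 is expected
# Persist across reboots via /etc/sysctl.conf, e.g.:
#   net.ipv4.conf.eth1.rp_filter = 2
```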
- PRVE-00454: Error encountered while trying to retrieve the value of "rp_filter" parameter of "{0}" network interfaces on node "{1}"
-
Cause: An error occurred while attempting to retrieve reverse path filter parameter 'rp_filter' value on specified node.
- PRVE-00456: Reverse path filter parameter "rp_filter" for private interconnect network interfaces "{0}" is not configured to the value of 0 or 2 in file /etc/sysctl.conf on node "{1}".
-
Cause: Reverse path filter parameter 'rp_filter' was not configured to the value 0 or 2 for identified private interconnect network interfaces on the specified node.
- PRVE-00463: Network bonding feature is enabled on node "{0}" with bonding mode "{1}" which is not an allowed value of 0 "balance-rr" or 1 "active-backup" for private interconnect network interfaces "{2}"
-
Cause: The bonding mode specified for the network interfaces used as private cluster interconnect was not an allowed value.
- PRVE-00464: Network bonding feature is enabled on nodes "{0}" with a bonding mode that conflicts with cluster private interconnect usage.
-
Cause: An incorrect bonding mode was used for the network bonds in which private interconnect network interfaces participate on the indicated nodes.
- PRVE-00465: Network bonding mode used on interfaces classified for private cluster interconnect on nodes "{0}" is inconsistent.\nNetwork bonding details are as follows:\n{1}
-
Cause: An inconsistent network bonding mode was used for the network bonds in which cluster interconnect interfaces participate on the indicated nodes.
- PRVE-00466: Private interconnect network interface list for current network configuration was not available
-
Cause: An attempt to retrieve cluster private network classifications failed.
- PRVE-00468: Different MTU values "{0}" are configured for the network interfaces "{1}" participating in network bonding with mode "{2}" on node "{3}".
-
Cause: The Cluster Verification Utility (CVU) determined that the indicated network interfaces were not configured with the same maximum transmission units (MTU) value on the indicated node.
- PRVE-00473: Berkeley Packet Filter device "{0}" is created with major number "{1}" which is already in use by devices "{2}" on node "{3}".
-
Cause: The indicated Berkeley Packet Filter device was found to be using a major number which is also in use by the indicated devices.
- PRVE-00474: Berkeley Packet Filter devices do not exist under directory /dev on nodes "{0}".
-
Cause: Berkeley Packet Filter devices /dev/bpf* were not found under the /dev directory on the indicated nodes.
- PRVE-00475: Berkeley Packet Filter devices "{0}" are using same major number "{1}" and minor number "{2}" on node "{3}"
-
Cause: The indicated devices were found using the same major and minor numbers on the identified node.
- PRVE-00476: Failed to list the devices under directory /dev on node "{0}"
-
Cause: An attempt to read the attributes of all the devices under the /dev directory failed on the identified node.
- PRVE-02503: FAST_START_MTTR_TARGET should be 0 when _FAST_START_INSTANCE_RECOVERY_TARGET > 0 on RAC.
-
Cause: FAST_START_MTTR_TARGET was greater than 0 while _FAST_START_INSTANCE_RECOVERY_TARGET was greater than 0.
- PRVE-02504: Error while checking FAST_START_MTTR_TARGET
-
Cause: An error occurred while attempting to retrieve FAST_START_MTTR_TARGET.
- PRVE-02734: Error while checking _FAST_START_INSTANCE_RECOVERY_TARGET
-
Cause: An error occurred while attempting to retrieve _FAST_START_INSTANCE_RECOVERY_TARGET.
- PRVE-02873: Core files older than "{2}" days found in the core files destination "{5}" on node "{0}". [Expected = "{4}" ; Found = "{3}"]
-
Cause: Too many old core files found in the database core files destination.
- PRVE-02874: An error occurred while checking core files destination on node "{0}".
-
Cause: The check to verify the existence of old core files failed.
- PRVE-02883: ORA-00600 errors found in the alert log in alert log destination "{4}" on node "{0}".
-
Cause: ORA-00600 errors were found in the alert log.
- PRVE-02884: An error occurred while checking for ORA-00600 errors in the alert log.
-
Cause: The check to verify the existence of ORA-00600 errors in the alert log failed.
- PRVE-02893: Alert log files greater than "{2}" found in the alert log destination "{5}" on node "{0}". [Expected = "{4}" ; Found = "{3}"]
-
Cause: Alert log files greater than the indicated size found in the alert log destination.
- PRVE-02894: Error while checking the size of alert log file
-
Cause: The check to verify presence of large alert logs in the alert log destination failed.
- PRVE-02913: Trace files older than "{2}" days found in the background dump destination "{5}" on node "{0}". [Expected = "{4}" ; Found = "{3}"]
-
Cause: Too many old trace files found in the background dump destination.
- PRVE-02914: Error while checking trace files in background dump destination
-
Cause: The check to verify the existence of old trace files failed.
- PRVE-02923: ORA-07445 errors found in the alert log in alert log destination "{4}" on node "{0}".
-
Cause: ORA-07445 errors were found in the alert log.
- PRVE-02924: An error occurred while checking for ORA-07445 errors in the alert log.
-
Cause: The check to verify the existence of ORA-07445 errors in the alert log failed.
- PRVE-03073: Disks "{1}" are not part of any disk group.
-
Cause: The indicated disks were found not to be part of any disk group.
- PRVE-03142: One or more ASM disk rebalance operations found in WAIT status
-
Cause: A query on V$ASM_OPERATION showed one or more ASM disk rebalance operations in WAIT status.
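The underlying query can be reproduced from an ASM instance; a minimal sketch, assuming SYSASM access on the local node:

```sh
sqlplus -s / as sysasm <<'SQL'
SELECT group_number, operation, state
FROM   v$asm_operation
WHERE  state = 'WAIT';
SQL
```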
- PRVE-03143: Error occurred while checking ASM disk rebalance operations in WAIT status
-
Cause: An ASM query to obtain details of the ASM disk rebalance operations failed. Accompanying error messages provide detailed failure information.
- PRVE-03149: ASM disk group files "{2}" are incorrectly owned by users "{3}" respectively.
-
Cause: A query showed that the indicated ASM disk group files were not owned by the Grid user.
- PRVE-03150: error occurred while checking for the correctness of ASM disk group files ownership
-
Cause: An ASM query failed unexpectedly.
- PRVE-03155: ASM discovery string is set to the value "{1}" that matches TTY devices.
-
Cause: The ASM discovery string parameter ASM_DISKSTRING was set to a value that matches TTY devices.
- PRVE-03156: error occurred while checking for the selectivity of ASM discovery string
-
Cause: An ASM query failed unexpectedly.
- PRVE-03163: Exadata cell nodes "{2}" contain more than one ASM failure group.
-
Cause: A query showed that the indicated Exadata cell nodes contain more than one ASM failure group.
- PRVE-03164: error occurred while checking the Exadata cell nodes for multiple ASM failure groups
-
Cause: An ASM query failed unexpectedly.
- PRVE-03170: ASM spare parameters "{2}" are set to values different from their default values.
-
Cause: A query showed that values of the indicated ASM spare parameters had been changed.
- PRVE-03171: An error occurred while checking ASM spare parameters.
-
Cause: An ASM query to obtain details of the spare parameters before upgrade failed unexpectedly. Accompanying error messages provide detailed failure information.
- PRVE-03175: ASM compatibility for ASM disk group "{1}" is set to "{2}", which is less than the minimum supported value "{3}".
-
Cause: A query showed that the ASM disk group attribute "compatible.asm" for the indicated disk group was set to a value less than the minimum supported value.
- PRVE-03176: An error occurred while checking ASM disk group compatibility attribute.
-
Cause: An ASM query to obtain details of the ASM compatibility disk group attribute failed. Accompanying error messages provide detailed failure information.
- PRVE-03180: RDBMS compatibility for ASM disk group "{1}" is set to "{2}", which is less than the minimum supported value "{3}".
-
Cause: A query showed that the ASM disk group attribute "compatible.rdbms" for the indicated disk group was set to a value less than the minimum supported value.
- PRVE-03181: An error occurred while checking ASM disk group RDBMS compatibility attribute.
-
Cause: An ASM query to obtain details of the RDBMS compatibility disk group attribute failed. Accompanying error messages provide detailed failure information.
- PRVE-03185: One or more ASM disk rebalance operations found in WAIT status
-
Cause: A query on V$ASM_OPERATION showed one or more ASM disk rebalance operations in WAIT status.
- PRVE-03186: Error occurred while checking ASM disk rebalance operations in WAIT status
-
Cause: An ASM query to obtain details of the ASM disk rebalance operations failed. Accompanying error messages provide detailed failure information.
- PRVE-03191: Free space on one or more ASM diskgroups is below the recommended value of {3}.
-
Cause: A query on V$ASM_DISKGROUP showed that free space on one or more ASM disk groups is below the indicated value.
- PRVE-03192: Error occurred while checking ASM disk group free space.
-
Cause: An ASM query to obtain details of the ASM disk group failed. Accompanying error messages provide detailed failure information.
- PRVE-03202: User "{0}" does not have the operating system privilege "{1}" on node "{2}"
-
Cause: A Direct Access (DAX) device '/dev/dax' was mounted on the indicated node, but the Oracle user did not have the required operating system privilege to access this device.
- PRVE-03206: The disks in the ASM disk group "{1}" have different sizes.
-
Cause: The Cluster Verification Utility (CVU) found that disk size was not consistent across the disks in the indicated ASM disk group.
- PRVE-03207: Error occurred while checking ASM disk size consistency.
-
Cause: An ASM query to obtain details of the ASM disk group failed. Accompanying error messages provide detailed failure information.
- PRVE-10073: Required /boot data is not available on node "{0}"
-
Cause: The file '/boot/symvers-<kernel_release>.gz', required for proper driver installation, was not found on the indicated node.
- PRVE-10077: NOZEROCONF parameter was not specified or was not set to 'yes' in file "/etc/sysconfig/network" on node "{0}"
-
Cause: During NOZEROCONF check, it was determined that NOZEROCONF parameter was not specified or was not set to 'yes' in /etc/sysconfig/network file.
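A quick manual check of the parameter; a minimal sketch:

```sh
grep -i '^NOZEROCONF' /etc/sysconfig/network
# expected: NOZEROCONF=yes
```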
- PRVE-10078: LINKLOCAL_INTERFACES network parameter was defined in the file "/etc/sysconfig/network/config" on node "{0}".
-
Cause: During LINKLOCAL_INTERFACES parameter check, it was determined that the LINKLOCAL_INTERFACES network parameter was defined in the /etc/sysconfig/network/config file.
- PRVE-10079: Parameter "{0}" value in the file "{1}" cannot be verified on node "{2}".
-
Cause: An error occurred while verifying the indicated parameter value in the indicated file on the indicated node. The accompanying messages provide detailed failure information.
- PRVE-10083: Java Virtual Machine is not installed properly
-
Cause: There were not enough JAVA objects in the DBA_OBJECTS table.
- PRVE-10084: Error while checking JAVAVM installation in the database
-
Cause: An error occurred while performing the check.
- PRVE-10094: Error while checking Time Zone file
-
Cause: An error occurred while performing the check.
- PRVE-10104: Error while checking standby databases
-
Cause: An error occurred while checking standby databases.
- PRVE-10113: "multi-user-server" service is "{0}" on node "{1}"
-
Cause: The 'svcs svc:/milestone/multi-user-server' command reported that the multi-user-server was not online on the specified node.
- PRVE-10114: "multi-user" service is "{0}" on node "{1}"
-
Cause: The 'svcs svc:/milestone/multi-user' command reported that the multi-user was not online on the specified node.
- PRVE-10115: Error while checking multiuser service
-
Cause: An error occurred while checking the multi-user service.
- PRVE-10123: Selected "{0}" group "{1}" is not the same as the currently configured group "{2}" for existing Oracle Clusterware home "{3}"
-
Cause: An attempt to upgrade the database was rejected because the selected group was not the group configured for the existing Oracle Clusterware installation.
- PRVE-10124: Current selection of the "{0}" group could not be retrieved
-
Cause: The indicated group was not selected or was not set to any valid operating system group name.
- PRVE-10125: Error while checking privileged groups consistency. \nError: {0}
-
Cause: An error occurred while checking privileged groups consistency.
- PRVE-10126: Configured "{0}" group for Oracle Clusterware home "{1}" could not be retrieved
-
Cause: The indicated group could not be retrieved using the 'osdbagrp' utility from the identified Oracle Clusterware home.
- PRVE-10128: Selected "{0}" group "{1}" is not the same as the currently configured group "{2}" for existing database home "{3}"
-
Cause: An attempt to upgrade the database was rejected because the selected group was not the group configured for the existing database installation.
- PRVE-10129: Configured "{0}" group for database home "{1}" could not be retrieved
-
Cause: The indicated group could not be retrieved using the 'osdbagrp' utility from the identified database home.
- PRVE-10138: FILESYSTEMIO_OPTIONS is not set to the recommended value of setall
-
Cause: An attempt to match the value of parameter FILESYSTEMIO_OPTIONS with the recommended value failed.
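The parameter can be inspected and corrected from SQL*Plus; a minimal sketch (the ALTER SYSTEM line is commented out because it requires an instance restart to take effect):

```sh
sqlplus -s / as sysdba <<'SQL'
SHOW PARAMETER filesystemio_options
-- ALTER SYSTEM SET filesystemio_options = setall SCOPE = SPFILE;
SQL
```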
- PRVE-10139: Error while checking FILESYSTEMIO_OPTIONS
-
Cause: An attempt to check the value of the parameter FILESYSTEMIO_OPTIONS failed because the database was not configured correctly.
- PRVE-10150: The current IP hostmodel configuration for both IPV4 and IPV6 protocols does not match the required configuration on node "{0}". [Expected = "{1}" ; Found = "{2}"]
-
Cause: The IP hostmodel configuration on the indicated node for the specified protocols was 'strong' and should have been 'weak'.
- PRVE-10151: The current IP hostmodel configuration for "{0}" protocol does not match the required configuration on node "{1}". [Expected = "{2}" ; Found = "{3}"]
-
Cause: The IP hostmodel configuration on the indicated node for the specified protocol was 'strong' and should have been 'weak'.
- PRVE-10155: GSD resource is running and enabled on node "{0}".
-
Cause: GSD was found to be running and enabled on the indicated node.
- PRVE-10156: GSD resource is enabled on node "{0}".
-
Cause: GSD was found to be enabled on the indicated node.
- PRVE-10167: I/O Completion Ports (IOCP) device status did not match the required value on node "{0}". [Expected = "Available"; Found = "{1}"]
-
Cause: IOCP device status was not 'Available' on the indicated node. The IOCP device status must be 'Available' in order to list the candidate disks when creating an ASM disk group.
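On AIX the device status can be listed with lsdev; a hedged sketch:

```sh
lsdev -Cc iocp   # expected status 'Available', e.g.:
#   iocp0 Available  I/O Completion Ports
```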
- PRVE-10168: Failed to obtain the I/O Completion Ports (IOCP) device status using command "{0}" on node "{1}"
-
Cause: An attempt to obtain the status of IOCP device failed on the indicated node.
- PRVE-10169: North America region (nam) is not installed on node "{0}".
-
Cause: The command 'localeadm -q nam' reported that the North America region (nam) was not installed on the specified node.
- PRVE-10170: an error occurred when trying to determine if North America region (nam) is installed on node "{0}".
-
Cause: An error occurred while executing the command 'localeadm -q nam', and it could not be determined whether North America region (nam) was installed on the specified node.
- PRVE-10171: English locale is not installed on node "{0}".
-
Cause: The command 'pkg facet -H *locale.en*' reported that the English locale was not installed on the specified node.
- PRVE-10172: An error occurred when trying to determine if English locale is installed on node "{0}".
-
Cause: An error occurred while executing the command 'pkg facet -H *locale.en*'. Installation of the English locale on the node could not be verified.
- PRVE-10183: File system path "{0}" is mounted with 'nosuid' option on node "{1}".
-
Cause: The identified file system path was mounted with the 'nosuid' option on the indicated node. This mount option creates permission problems in the cluster.
- PRVE-10184: Could not find file system for the path "{0}" using command "{1}" on node "{2}"
-
Cause: An error occurred while determining the file system for the identified path on the indicated node.
- PRVE-10210: error writing to the output file "{0}" for verification type "{1}": "{2}"
-
Cause: An error was encountered while writing the indicated output file.
- PRVE-10211: An error occurred while writing the report.
-
Cause: An error was encountered while writing one or more output files.
- PRVE-10232: Systemd login manager parameter 'RemoveIPC' is enabled in the configuration file "{0}" on node "{1}". [Expected="no"; Found="{2}"]
-
Cause: The 'RemoveIPC' systemd login manager parameter was found to be enabled on the indicated node. Enabling this parameter causes termination of Automatic Storage Management (ASM) instances when the last oracle/grid user session logs out.
- PRVE-10233: Systemd login manager parameter 'RemoveIPC' entry does not exist or is commented out in the configuration file "{0}" on node "{1}". [Expected="no"]
-
Cause: The 'RemoveIPC' systemd login manager parameter entry was not found or was commented out in the identified configuration file on the indicated node. By default this parameter is enabled and it causes termination of Automatic Storage Management (ASM) instances when the last oracle/grid user session logs out.
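The login manager setting can be verified and applied as follows; a minimal sketch (logind.conf is the usual home of this parameter):

```sh
grep -i '^RemoveIPC' /etc/systemd/logind.conf
# expected: RemoveIPC=no ; after editing, restart the login manager:
systemctl restart systemd-logind
```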
- PRVE-10237: Existence of files "{1}" is not expected on node "{0}" before Clusterware installation or upgrade.
-
Cause: The indicated files were found on the specified node.
- PRVE-10238: Error occurred while running commands "{1}" on node "{0}" to check for ASM Filter Driver configuration
-
Cause: An attempt to check for ASM Filter Driver configuration by running the indicated commands failed.
- PRVE-10239: ASM Filter Driver "{1}" is not expected to be loaded on node "{0}" before Clusterware installation or upgrade.
-
Cause: An attempt to install or upgrade Clusterware on the indicated node was rejected because the indicated ASM Filter Driver was already loaded.
- PRVE-10243: Failed to mount "{0}" at location "{1}" with NFS mount options "{2}".
-
Cause: An attempt to mount the indicated file system at the indicated location with the indicated mount options failed because the 'insecure' NFS export option was not used. The 'insecure' option was required for Oracle Direct NFS to make connections using a non-privileged source port.
- PRVE-10248: The file "{0}" either does not exist or is not accessible on node "{1}".
-
Cause: A Cluster Verification Utility (CVU) operation could not complete, because the indicated file was not accessible on the node shown.
- PRVE-10253: The path "{0}" either does not exist or is not accessible on node "{1}".
-
Cause: The Cluster Verification Utility (CVU) determined that the indicated path was not accessible.
- PRVE-10254: The path "{0}" does not have read permission for the current user on node "{1}".
-
Cause: A check for access control attributes found that the indicated path did not have read permission for the current user on the indicated node.
- PRVE-10255: The path "{0}" does not have write permission for the current user on node "{1}".
-
Cause: A check for access control attributes found that the indicated path did not have write permission for the current user on the indicated node.
- PRVE-10256: The path "{0}" does not have execute permission for the current user on node "{1}".
-
Cause: A check for access control attributes found that the indicated path did not have execute permission for the current user on the indicated node.
- PRVE-10257: The path "{0}" permissions did not match the expected octal value on node "{1}". [Expected = "{2}" ; Found = "{3}"]
-
Cause: A check for access control attributes found that the permissions of the indicated path on the indicated node were different from the required permissions.
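For this and the related attribute checks that follow (PRVE-10258 through PRVE-10262), the octal permissions, owner, and group of a path can be read in one call; a minimal sketch using GNU stat (the path is illustrative):

```sh
stat -c '%a %U %G' /u01/app   # octal mode, owner, group
```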
- PRVE-10258: The path "{0}" permissions for owner did not match the expected octal value on node "{1}". [Expected = "{2}" ; Found = "{3}"]
-
Cause: A check for access control attributes found that the owner permissions of the indicated path on the indicated node were different from the required permissions.
- PRVE-10259: The path "{0}" permissions for group did not match the expected octal value on node "{1}". [Expected = "{2}" ; Found = "{3}"]
-
Cause: A check for access control attributes found that the group permissions of the indicated path on the indicated node were different from the required permissions.
- PRVE-10260: The path "{0}" permissions for others did not match the expected octal value on node "{1}". [Expected = "{2}" ; Found = "{3}"]
-
Cause: A check for access control attributes found that the others permissions of the indicated path on the indicated node were different from the required permissions.
- PRVE-10261: The path "{0}" owner did not match the expected value on node "{1}". [Expected = "{2}" ; Found = "{3}"]
-
Cause: A check for access control attributes found that the owner of the indicated path on the indicated node was different from the required owner.
- PRVE-10262: The path "{0}" group did not match the expected value on node "{1}". [Expected = "{2}" ; Found = "{3}"]
-
Cause: A check for access control attributes found that the group of the indicated path on the indicated node was different from the required group.
- PRVE-10266: Error occurred while running command "{1}" on node "{0}" to check for logical partition capacity entitlement.
-
Cause: An attempt to check for logical partition entitled capacity by running the indicated command failed. The accompanying messages provide detailed failure information.
- PRVE-10269: Logical partition entitled processing capacity is configured with a value less than expected on node "{0}". [Expected = "{2}" ; Found = "{1}"]
-
Cause: A check for capacity entitlement on the indicated node found that the entitled processing capacity is less than the expected value.