IFS Middleware Server Backup

Overview

The backup supports two scenarios: a clone operation and a backup operation. Both produce an archive that can be restored if something happens to the machine and the IFS Home needs to be restored.

Backup

The backup option is more limited than the clone operation and can only be restored on the same host using the same IFS Home location.

Clone

The clone operation produces an archive that contains the minimum of what is needed to recreate the IFS Home it was created from. In short, a clone can create a new IFS Home based on an existing one. It can serve as a backup of an existing IFS Home or be used to create a replica of it.

Perform Backup or Clone

Stop all Managed Servers, the Admin Server and the Node Managers. For details on how to stop all servers, see Administration scripts. The script <ifs_home>/instance/<instance>/bin/perform_backup takes zero or more of the following arguments:


-clone                           : Creates a clone. If this argument is left out, a backup is created.

-name <value>   (optional)       : The name to be used for the archive. This is purely cosmetic and, if used, will produce a zip called backup_<name>.zip
                                   or clone_<name>.zip depending on whether a backup or a clone is created. See the overwrite flag for information about name conflicts.
                                   Defaults to the instance name followed by a timestamp, e.g. clone_myInstance_201406111407 or backup_myInstance_201406111407.

-dest <value>   (optional)       : The destination to write the zip to. The path should exist and be writable. UNC paths are supported. Defaults to IFS Home.

-overwrite true|false (optional) : Whether an existing zip with the same name may be overwritten. The default value is false.
                                   If overwrite is false and a zip with the same name exists, the current date will be appended to the file name.
                                   If overwrite is true and a zip with the same name exists, it will be removed!

-nodoc          (optional)       : Excludes Documentation from the archive. Documentation is included by default.

Example:

>perform_backup -clone -name testing -dest \\safe_place\clones -overwrite false
Creates a clone named clone_testing.zip in \\safe_place\clones. If a clone with the same name already exists, the date will be appended to the file name.
 
>perform_backup -name testing -dest \\safe_place\backups -overwrite true
Creates a backup with the name backup_testing.zip in \\safe_place\backups. An existing file with the same name will be removed.

>perform_backup -clone
Creates a clone with default name in the default location. Will not overwrite any existing files.

>perform_backup
Creates a backup with default name in the default location. Will not overwrite any existing files.
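Since the default archive name is the instance name followed by a timestamp, unattended runs without -name do not collide. As a rough sketch, the default name is composed like this (the timestamp format is inferred from the examples above; the variable names are illustrative):

```shell
# Illustrative only: compose a default archive name from the instance name
# and a YYYYMMDDHHMM timestamp, as in the examples above.
instance="myInstance"
stamp=$(date +%Y%m%d%H%M)          # e.g. 201406111407
echo "backup_${instance}_${stamp}.zip"
```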

Including user-defined files in clone

The major difference between a clone and a backup is which files are included in the zip. While the backup contains all files in the IFS Home, the clone only contains enough to recreate it from scratch. If you wish to include files in the clone that aren't included by default, create a file called 'clone_include_list', place it under <ifs_home>/instance/<instance>, and specify the files to include.

Example: Content of <ifs_home>/instance/<instance>/clone_include_list:

instance/${ifs.j2ee.instance}/bin/my_custom_script

It is possible (as in the above example) to use properties. The files defined in the clone_include_list will be added to a restored clone at the end of the restoration process.
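As a rough illustration of what the property substitution does (the real resolution is performed by the restore process; the shell variable name here is illustrative):

```shell
# Illustrative only: resolve the ${ifs.j2ee.instance} placeholder against
# the instance name, the way a clone_include_list entry would be resolved.
instance="myInstance"
line='instance/${ifs.j2ee.instance}/bin/my_custom_script'
resolved=$(printf '%s\n' "$line" | sed "s/\${ifs.j2ee.instance}/$instance/")
echo "$resolved"
```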

Restore Backup

A backup can be restored by extracting the content of the zip in the same place and on the same host as the previous IFS Home. It cannot be used on another host or in another location. When the zip has been extracted, the services for the Node Managers need to be recreated (assuming they aren't already installed).

Go to <ifs_home>/instance/<instance>/bin and run:

  1. install_service_as and start the Service.
  2. install_service_http and start the Service.
  3. install_service_connectserver1 and start the Service.
  4. While still in the bin folder, run start_all_servers.
  5. If Documentation is included (any documentation*.zip or f1doc.zip in the <ifs_home>/ifsdoc folder): unzip all zip files in <ifs_home>/ifsdoc to the same location and remove the zip files when done.
  6. Run <ifs_home>/instance/<instance>/bin/install_service_solr and start the Service.
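The documentation step above can be sketched as a small shell function (the helper name and unzip flags are illustrative, not part of the product):

```shell
# Sketch: extract every documentation*.zip and f1doc.zip in the ifsdoc
# folder in place, then remove the zip. Helper name is illustrative.
extract_ifs_docs() {
  ifsdoc="$1"                        # e.g. "$IFS_HOME/ifsdoc"
  for z in "$ifsdoc"/documentation*.zip "$ifsdoc"/f1doc.zip; do
    [ -f "$z" ] || continue          # skip patterns that matched nothing
    unzip -o -q "$z" -d "$ifsdoc" && rm "$z"
  done
}
```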

The servers should start, and the backup restoration is then complete. See the Troubleshooting Guide for problems that might occur.

Restore Clone

There are two ways to restore a clone: an automatic restoration and a manual restoration. There is also a third, semi-automatic option, which is only recommended where the automatic or manual restoration is not sufficient. In order to restore a clone it must first be extracted. It can be extracted on any host and in any location.

Automatic

The clone can be restored on any host and in any location. Be aware that the same configuration will be used for the clone as for the original, including instance name, port configuration, database etc. Therefore it will connect to the database configured for the original IFS Home and will update Middleware Server related information. The restoration process will fail if done on the same machine as the original IFS Home unless that IFS Home is deleted (or at least completely halted) first!

A manual or semi-automatic restoration is recommended if the clone needs to be recreated on the same host. In <ifs_home>/instance/<instance>/bin there is a script called 'restore_clone' that takes zero or one argument:

-instance      (optional): The instance name to use instead of the old one.

The main reason for using the instance parameter is when you want to restore a clone on the same machine as the original but don't intend to run them side by side. Note: The two installations will use the same ports!

If the IFS Home location is not the same as the original (the host is irrelevant in this case) the Middleware installation will first be moved. This is not necessary if the location remains the same. This step is done automatically.

Example

>restore_clone 
Will move the installation if needed (in case the IFS Home location is not the same),
and create a new IFS Home with the same configuration as the original.

>restore_clone -instance myNewInstance
Will move the installation if needed (in case the IFS Home location is not the same),
and create a new IFS Home with the same configuration as the original but with a new instance name.

The automatic restore process is a fresh install in silent mode where all parameters are based on the original IFS Home. Any external files specified for the original installation, such as keytab files or certificates, must therefore still exist in their originally specified locations when restoring.
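A simple pre-flight check along these lines can catch missing external files before the silent install starts (the helper name and the example file paths are hypothetical; the actual paths come from the original installation's configuration):

```shell
# Hypothetical pre-flight check: verify that externally referenced files
# (keytabs, certificates, ...) exist before running restore_clone.
check_external_files() {
  missing=0
  for f in "$@"; do
    if [ ! -f "$f" ]; then
      echo "missing: $f" >&2    # report each absent file
      missing=1
    fi
  done
  return "$missing"             # non-zero if anything was missing
}
```

For example: check_external_files /etc/krb5.keytab /opt/certs/server.pem && restore_clone.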

Review the output to make sure that everything has gone well. The process may in some cases report a positive result even though some operations failed.

Manual

A clone can be restored by running the installer from the extracted IFS Home and can then be treated as a fresh installation. However, if the IFS Home has been extracted in a location that is different from the original IFS Home location (the host is irrelevant) the Middleware installation must first be moved. Go to <ifs_home>/instance/<instance>/bin and run 'move_clone'. The script takes no arguments. After the installation is successfully moved, run the installer. The manual restore process is the same as a fresh install, with the only exception that it is based on an existing IFS Home.

Semi-Automatic

The semi-automatic scenario is for "Yes, I want exactly that, but can you make it green?"-type scenarios. It is possible to make changes to the configuration before restoring the clone automatically by manually modifying the <instance>_configuration.xml in the <ifs_home>/instance/<instance> folder. Some changes are easier to accomplish than others, and this scenario requires some knowledge of the installer process. This scenario is not recommended and has limited support. When the changes have been made to the configuration file, see the 'Automatic' section on how to proceed.

Cluster

There is a zip file called cluster_node created in <ifs_home>/instance/<instance>. Copy this zip file to every node in the cluster. The archive contains an updated ifs.properties and an updated mws.properties. Copy these files to the following locations:

  ifs.properties: <ifs_home>/instance/<instance>/conf
  mws.properties: <ifs_home>/instance/<instance>/bin

The archive also contains a file called cluster_node.cmd/sh. Run this file from anywhere and supply the password when prompted. Restart the Node Manager first and then restart the Managed Server. Verify that the Managed Server has contact with the Admin Server and that clustering works as expected.
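The copy step can be sketched as follows (the helper name is illustrative; the target paths are the ones listed above):

```shell
# Sketch: place the two properties files from the extracted cluster_node
# archive into their documented locations. Helper name is illustrative.
install_cluster_files() {
  src="$1"   # folder where the cluster_node archive was extracted
  home="$2"  # <ifs_home>
  inst="$3"  # <instance>
  cp "$src/ifs.properties" "$home/instance/$inst/conf/ifs.properties"
  cp "$src/mws.properties" "$home/instance/$inst/bin/mws.properties"
}
```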

Backups and Clusters

There is no need to back up cluster nodes. If a node goes down it can be recreated from the cluster zip according to the cluster guide.

Troubleshooting Guide

Extracting the zip encountered some problem:
This is usually nothing to worry about; the problem is typically caused by files deep inside the Middleware Server installation. If the IFS Home location is changed, the installation will be moved anyway and those files will be restored. If the IFS Home location remains the same, the move_clone script can be run manually to be on the safe side.

Move operation is not working:
The new home location may already be registered from a previous move of an installation. If the home location has been used before, the move operation will fail. Change the IFS Home location, or open C:\Program Files\Oracle\Inventory\ContentsXML\inventory.xml (on Windows) and remove the entry associated with the IFS Home to be reused.

The restore process ended with errors:

  1. Make sure that there are no conflicts. Check the ports and that the service names aren't already taken.
  2. Remove the half-restored clone and restore it from scratch.

Unable to start Managed Server:

  1. Check the Managed Server logs.
  2. Go to <ifs_home>/wls_domain/<instance>/servers/ManagedServerX and remove the cache folder and retry. If this doesn't help, remove the ManagedServerX folder completely and retry.
  3. Go to <ifs_home>/wls_domain/<instance>/servers/ManagedServerX/data/nodemanager and open the file boot.properties. Replace the encrypted values for the username and password with clear-text values and save the file. Restart the Managed Server. The values in boot.properties are re-encrypted when the server attempts to start, so there is no cause for security concern.
  4. If nothing seems to help, run a reconfigure, remove the server that doesn't start, and create a new one to replace it.


Unable to start Managed Server on nodeX:

  1. Make sure that ifs.properties and mws.properties are overwritten with the new files. Also make sure that the cluster_node.cmd/sh has been executed.
  2. Restart the Node Manager and try to restart the managed server again.
  3. If nothing seems to help, remove the current IFS Home and the services from that node, run the cluster script on the master node, and recreate the IFS Home on the node according to the cluster documentation.