The backup supports two scenarios: a clone operation and a backup operation. Both produce archives that can be restored if something happens to the machine and the IFS Home needs to be restored.
The backup option is more limited than the clone operation: it can only be restored on the same host and in the same IFS Home location.
The clone operation produces an archive containing the minimum needed to recreate the IFS Home it was created from. In short, a clone can create a new IFS Home based on an existing one. It can serve as a backup of an existing IFS Home or be used to create a replica of it.
Stop all Managed Servers, the Admin Server and the Node Managers. For details about how to stop all servers, see Administration scripts. The script <ifs_home>/instance/<instance>/bin/perform_backup takes zero or more arguments:
-clone : Create a clone instead of a backup. If this argument is left out, a backup is created.
-name <value> (optional) : The name to use for the archive. This is purely cosmetic and produces a zip called backup_<name>.zip or clone_<name>.zip depending on whether a backup or a clone is created. See the -overwrite flag for information about name conflicts. Defaults to the instance name followed by a timestamp, e.g. clone_myInstance_201406111407 or backup_myInstance_201406111407.
-dest <value> (optional) : The destination to write the zip to. The path must exist and be writable. UNC paths are supported. Defaults to the IFS Home.
-overwrite true|false (optional) : Whether a zip with the same name may be overwritten. The default value is false. If overwrite is false and a zip with the same name exists, the current date is appended to the file name. If overwrite is true and a zip with the same name exists, it will be removed!
-nodoc (optional) : Exclude Documentation from the archive. Defaults to including Documentation.
Example:
>perform_backup -clone -name testing -dest \\safe_place\clones -overwrite false
Creates a clone named clone_testing.zip in \\safe_place\clones. If a clone with the same name already exists, the date is appended to the file name.

>perform_backup -name testing -dest \\safe_place\backups -overwrite true
Creates a backup named backup_testing.zip in \\safe_place\backups. An existing file with the same name will be removed.

>perform_backup -clone
Creates a clone with the default name in the default location. Will not overwrite any existing files.

>perform_backup
Creates a backup with the default name in the default location. Will not overwrite any existing files.
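The naming and overwrite rules above can be summarized in a short sketch. This is an illustration of the documented behavior only, not the actual perform_backup implementation; the function names are hypothetical.

```python
from datetime import datetime
from pathlib import Path

def resolve_archive_name(clone, name=None, instance="myInstance"):
    # Hypothetical sketch: derive the archive name per the flags above.
    prefix = "clone" if clone else "backup"
    if name:
        return f"{prefix}_{name}.zip"
    # Default: instance name followed by a timestamp, e.g. backup_myInstance_201406111407
    stamp = datetime.now().strftime("%Y%m%d%H%M")
    return f"{prefix}_{instance}_{stamp}.zip"

def resolve_conflict(dest, filename, overwrite):
    # With overwrite=false, an existing target gets the current date
    # appended to its name; with overwrite=true the old file is removed.
    target = Path(dest) / filename
    if target.exists() and not overwrite:
        stamp = datetime.now().strftime("%Y%m%d")
        return target.with_name(f"{target.stem}_{stamp}{target.suffix}")
    return target
```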
The major difference between a clone and a backup is which files are included in the zip. While the backup contains all files in the IFS Home, the clone contains only enough to recreate it from scratch. To include files in the clone that aren't included by default, create a file called 'clone_include_list' under <ifs_home>/instance/<instance> and list the files to include.
Example: Content of <ifs_home>/instance/<instance>/clone_include_list:
instance/${ifs.j2ee.instance}/bin/my_custom_script
Properties can be used, as in the example above. The files listed in clone_include_list are added to a restored clone at the end of the restoration process.
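The ${...} property references in clone_include_list entries behave like simple placeholder substitution. A minimal sketch of that expansion, assuming properties come from the installation configuration (the function name is hypothetical):

```python
import re

def expand_properties(path_line, props):
    # Replace ${property} placeholders with values from the installation
    # configuration; unknown properties are left untouched.
    return re.sub(r"\$\{([^}]+)\}",
                  lambda m: props.get(m.group(1), m.group(0)),
                  path_line)
```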
A backup can be restored by extracting the content of the zip in the same location and on the same host as the previous IFS Home. It cannot be used on another host or in another location. Once extracted, the services need to be recreated for the Node Managers (assuming they aren't already installed).
Go to <ifs_home>/instance/<instance>/bin and run:
install_service_as and start the Service
install_service_http and start the Service
install_service_connectserver1 and start the Service
While still in the bin folder run:
start_all_servers
If Documentation is included (any documentation*.zip or f1doc.zip in the <ifs_home>/ifsdoc folder):
Unzip all zip files in <ifs_home>/ifsdoc to the same location.
Remove the zip files when done.
Run <ifs_home>/instance/<instance>/bin/install_service_solr and start the Service.
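The Documentation step above (unzip every archive in place, then remove the zips) can be sketched as follows. This is an illustrative helper, not part of the product scripts:

```python
import zipfile
from pathlib import Path

def extract_doc_archives(ifsdoc_dir):
    # Unzip every archive in <ifs_home>/ifsdoc into the same folder,
    # then remove the zip files, as the restore steps describe.
    ifsdoc = Path(ifsdoc_dir)
    for archive in sorted(ifsdoc.glob("*.zip")):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(ifsdoc)
        archive.unlink()
```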
The servers should start and the backup restoration should be complete. See the Troubleshooting section for problems that might occur.
There are two ways to restore a clone: automatic restoration and manual restoration. A third option, semi-automatic, can also be used, but only in cases where the automatic or manual options aren't sufficient. Before a clone can be restored it must first be extracted. It can be extracted on any host and in any location.
The clone can be restored on any host and in any location. Be aware that the clone uses the same configuration as the original, including instance name, port configuration, database, etc. It will therefore connect to the database configured for the original IFS Home and update Middleware Server related information. The restoration process will fail if done on the same machine as the original IFS Home unless that IFS Home is deleted (or at least completely halted) first!
A manual or semi-automatic scenario is recommended if the clone needs to be recreated on the same host. In <ifs_home>/instance/<instance>/bin there is a script called 'restore_clone' that takes zero or one argument:
-instance (optional) : The instance name to use instead of the old one.
The main reason for using the instance parameter is when you want to restore a clone on the same machine as the original but don't intend to run them side by side. Note: The two installations will be using the same ports!
If the IFS Home location is not the same as the original (the host is irrelevant in this case) the Middleware installation will first be moved. This is not necessary if the location remains the same. This step is done automatically.
Example
>restore_clone
Moves the installation if needed (when the IFS Home location is not the same), and creates a new IFS Home with the same configuration as the original.

>restore_clone -instance myNewInstance
Moves the installation if needed (when the IFS Home location is not the same), and creates a new IFS Home with the same configuration as the original but with a new instance name.
The automatic restore process is a fresh install in silent mode where all parameters are based on the original IFS Home. Any external files specified in the original installation must therefore also exist when restoring: keytab files, certificates and similar must be present in their originally specified locations.
Review the output to make sure that everything has gone well. The process may in some cases report a positive result even though some operations failed.
A clone can be restored by running the installer from the extracted IFS Home and can then be treated as a fresh installation. However, if the IFS Home has been extracted in a location that is different from the original IFS Home location (the host is irrelevant) the Middleware installation must first be moved. Go to <ifs_home>/instance/<instance>/bin and run 'move_clone'. The script takes no arguments. After the installation is successfully moved, run the installer. The manual restore process is the same as a fresh install, with the only exception that it is based on an existing IFS Home.
The semi-automatic scenario is for "Yes, I want exactly that, but can you make it green?"-type cases. It is possible to make changes to the configuration before restoring the clone automatically by manually modifying the <instance>_configuration.xml in the <ifs_home>/instance/<instance> folder. Some changes are easier to accomplish than others, and this scenario requires some knowledge of the installer process. This scenario is not recommended and has limited support. When the changes have been made to the configuration file, see the 'Automatic' section on how to proceed.
There is a zip file called cluster_node created in <ifs_home>/instance/<instance>. Copy this zip file to every node in the cluster. The archive contains an updated ifs.properties and an updated mws.properties. Copy these files to the following locations:
ifs.properties: <ifs_home>/instance/<instance>/conf
mws.properties: <ifs_home>/instance/<instance>/bin
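Placing the two property files from the extracted archive can be sketched as below. This is an illustrative helper with a hypothetical name, assuming the archive has already been extracted:

```python
import shutil
from pathlib import Path

def place_cluster_node_files(extracted_dir, ifs_home, instance):
    # Copy the updated property files from the extracted cluster_node
    # archive into their documented locations under the instance folder.
    inst = Path(ifs_home) / "instance" / instance
    shutil.copy2(Path(extracted_dir) / "ifs.properties", inst / "conf" / "ifs.properties")
    shutil.copy2(Path(extracted_dir) / "mws.properties", inst / "bin" / "mws.properties")
```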
The archive also contains a file called cluster_node.cmd/sh. Run this file from any location and enter the password when prompted.
Restart the Node Manager first and then restart the Managed Server. Verify that the Managed Server has contact with the Admin Server and that clustering works as expected.
There is no need to back up cluster nodes. If a node goes down, it can be recreated from the cluster zip according to the cluster guide.
Extracting the zip encountered problems:
This is usually caused by files deep within the Middleware Server installation and is typically nothing to worry about. If the IFS Home location is changed, the installation will be moved anyway and those files will be restored. If the IFS Home location remains the same, the move_clone script can be executed manually to be on the safe side.
Move operation is not working:
The new home location may have been recorded when a previous installation was moved. If the home location has been used before, the move operation will fail. Either change the IFS Home location, or go to C:\Program Files\Oracle\Inventory\ContentsXML\inventory.xml (on Windows) and remove the entry associated with the IFS Home to be reused.
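Removing the stale entry can be done by hand in a text editor, or sketched programmatically as below. The element and attribute names (HOME_LIST, HOME, LOC) follow the common Oracle inventory layout but are an assumption here; verify them against your own inventory.xml before relying on this:

```python
import xml.etree.ElementTree as ET

def remove_home_entry(inventory_xml, home_location):
    # Drop the <HOME> element whose LOC attribute matches the IFS Home
    # location to be reused, then write the file back in place.
    tree = ET.parse(inventory_xml)
    home_list = tree.getroot().find("HOME_LIST")
    if home_list is not None:
        for home in list(home_list.findall("HOME")):
            if home.get("LOC") == home_location:
                home_list.remove(home)
    tree.write(inventory_xml)
```

Back up inventory.xml before editing it, since the file is shared by all Oracle installations on the host.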
The restore process ended with errors:
Unable to start Managed Server:
Unable to start Managed Server on nodeX: